Extended use of oral anticoagulants reduces VTE recurrence
AUSTIN, TEX. – Extended treatment with any of the novel oral anticoagulants, but with apixaban in particular, provides a net clinical benefit in patients at risk of recurrent venous thromboembolism, according to a review of three randomized trials.
Apixaban appears to provide the optimal net clinical benefit, with the lowest number needed to treat to avoid one venous thromboembolic or major bleeding event, Dr. Alpesh Amin reported at the annual meeting of the American College of Chest Physicians.
In 5,035 patients in three trials of extended treatment with novel oral anticoagulants (NOACs) for venous thromboembolism (VTE) – including the RE-SONATE trial, the EINSTEIN-EXT trial, and the AMPLIFY-EXT trial – the differences in event rates, compared with placebo, were –5.15% for dabigatran, –5.74% for rivaroxaban, –7.14% for 2.5 mg apixaban, and –7.0% for 5 mg apixaban, reported Dr. Amin of the University of California, Irvine.
The number needed to treat to avoid one VTE or major bleeding event was 21 for dabigatran, 20 for rivaroxaban, 14 for 2.5 mg apixaban, and 13 for 5 mg apixaban, Dr. Amin said.
“The good news is that the number needed to treat for all of [the oral anticoagulants] is actually less than 25,” he said.
As for costs, the savings from avoiding a recurrent VTE were $2,995 with dabigatran, $3,300 with rivaroxaban, and $4,100 with both the 2.5-mg and 5-mg doses of apixaban.
For major bleeding events, the corresponding differences in event rates, compared with placebo, were 0.29%, 0.67%, –0.20%, and –0.36%.
There was a net clinical benefit for all patients treated with the NOACs, but in those treated with 5 mg apixaban, the rates of improvement were highest at –7.44%, followed by –7.38% for 2.5 mg apixaban. The rates were –5.0% with rivaroxaban and –4.85% with dabigatran.
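For readers who want to check the arithmetic, the NNTs quoted above are essentially the reciprocals of these net-clinical-benefit rate differences. The sketch below uses the figures quoted in this article; the small mismatch for 5 mg apixaban (14 here vs the reported 13) reflects rounding in the source data.

```python
# NNT = 1 / absolute risk reduction for the composite endpoint
# (VTE or major bleeding). Inputs are the net-clinical-benefit rate
# differences vs placebo quoted above.
import math

arr_vs_placebo = {
    "dabigatran": 0.0485,
    "rivaroxaban": 0.0500,
    "apixaban 2.5 mg": 0.0738,
    "apixaban 5 mg": 0.0744,
}

for drug, arr in arr_vs_placebo.items():
    print(f"{drug}: NNT = {math.ceil(1 / arr)}")
# dabigatran: NNT = 21, rivaroxaban: NNT = 20,
# apixaban 2.5 mg: NNT = 14, apixaban 5 mg: NNT = 14 (reported as 13)
```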
“So we see a low number needed to treat, and a significant amount of cost avoidance by using the NOACs across the board,” he said, adding that apixaban may provide the best net clinical benefit for the lowest number needed to treat to avoid one VTE or major bleeding event, and is associated with the greatest medical cost avoidance.
“In terms of safety endpoints, dabigatran and rivaroxaban cost the system a little bit of money, whereas apixaban actually decreased the cost,” he said.
“How these results translate into real-world outcomes will require further evaluation, and as we get more numbers out there, we will actually be looking at the impact in the real world,” he said.
Dr. Amin reported serving as a paid consultant and/or member of a speakers bureau or advisory committee for Bristol-Myers Squibb and Pfizer.
Key clinical point: All of the NOACs provide a net clinical benefit for reducing VTE recurrence.
Major finding: The number needed to treat to avoid one VTE or major bleeding event was 21 for dabigatran, 20 for rivaroxaban, 14 for 2.5 mg apixaban, and 13 for 5 mg apixaban.
Data source: An analysis of data from three clinical trials, including a total of 5,035 patients.
Disclosures: Dr. Amin reported serving as a paid consultant and/or member of a speakers bureau or advisory committee for Bristol-Myers Squibb and Pfizer.
Shrink Rap News: The surprisingly high cost of Abilify
Recently, I gave a patient a prescription for Abilify. I wrote for 30 tablets of the lowest dose, 2 mg. While I knew it was expensive, I was shocked when the patient returned a few days later and told me it had cost $1,100 to fill the prescription; he had not yet met the deductible for his health insurance, so he had paid cash for the medication.
According to Medscape (and Twitter, too), Abilify grosses more money than any other pharmaceutical in the United States. In the 12-month period from July 2013 to June 2014, sales of Abilify totaled $7.2 billion. An atypical antipsychotic that is widely marketed to TV viewers as an augmenting agent for major depression, Abilify is the 14th most-prescribed medication in the United States. If you’re wondering, the most-prescribed medications are Synthroid, Crestor, and Nexium. Abilify has been available in this country since 2002, initially with an indication for schizophrenia. Since then, its indications have expanded to include bipolar disorder, irritability in autism, and augmentation for major depression.
Still, $1,100 for 30 tablets? I wondered if the high cost was attributable to where the patient went – a boutique independent pharmacy. I decided to make some calls to local pharmacists (see the table), and queried druggists at CVS, Walmart, and Lykos, an independent pharmacy in Towson, Md. I also checked with a Walmart in Vermont to see if the prices were the same in another part of the country, and they were. Let me share with you what I learned.
[Table: per-tablet Abilify prices at the pharmacies called]
For the three pharmacies I called, a single 2-mg tablet of Abilify cost between $30 and $33, so the cost was less than the $1,100 my patient paid. There is no discount for buying in bulk, and the price per pill stays virtually the same whether a patient buys 1 pill, 30 pills, or 90 pills. I checked with two pharmacies, and the price per tablet is the same for the 2-mg, 5-mg, 10-mg, and 15-mg dosages. The price rises to $38-$47 per pill for the 20-mg and 30-mg doses.
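A back-of-the-envelope check on these numbers, using the prices the pharmacies quoted (which will of course vary in practice); the pill-splitting arithmetic is discussed further below.

```python
# Rough cost arithmetic using the per-tablet prices quoted above.
low, high = 30, 33  # USD per 2-mg tablet (same price through 15 mg)

print(f"30 tablets: ${30 * low}-${30 * high}")  # $900-$990, vs the $1,100 paid

# Because a 5-mg tablet costs the same as a 2-mg one, splitting 5-mg
# tablets into 2.5-mg halves roughly halves the per-dose cost.
print(f"Per 2.5-mg half-tablet: ${low / 2:.2f}-${high / 2:.2f}")
```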
As physicians in a health care system where resources are limited, it is incumbent upon us to at least consider the cost of the tests and treatments we order, but we often have no way of knowing what these costs actually are. Was I missing something? Is everyone else aware that Abilify is this costly? I did a quick survey of a handful of psychiatrists by text message (please don’t count this as science), requesting a guess for the cost of a single 2-mg tablet of Abilify. The responses I received ranged from $7 to $20, and a lone respondent answered $40. For the most part, the cost of medications remains opaque to the prescriber.
If I had to do it again, I still would have prescribed Abilify to this particular patient. I would have suggested he buy only a few tablets to start, and I would have prescribed the 5-mg dose and recommended splitting the pills to halve the cost. In a December 2006 article in Current Psychiatry, “Pros and cons of pill splitting,” Dr. Rakesh Jain and Dr. Shailesh Jain note that it is safe to divide Abilify tablets. Filling only a few tablets seems like the prudent thing to do with such an expensive medication, at least until it is clear that it is tolerable to the patient, but as we know, filling less than a month’s supply often creates hurdles and increased copays when health insurance is paying for the prescription. And with requirements for preauthorization, I’m not certain if it’s even possible for a patient to take home just a few to try.
When I informed the psychiatrists I queried that Abilify costs $30-$33 per 2-mg dose, they expressed their surprise. One friend, however, put it most aptly with her reply of simply, “Good grief.”
Dr. Miller is a coauthor of “Shrink Rap: Three Psychiatrists Explain Their Work” (Baltimore: Johns Hopkins University Press, 2011).
Protein discovery paves way for patient-specific HSCs
A protein known as GPI-80 is integral to the self-renewal of hematopoietic stem cells (HSCs) during human development, investigators have reported in Cell Stem Cell.
The team says this discovery lays the groundwork for researchers to generate HSCs in the lab that better mirror HSCs in their natural environment.
This could lead to improved therapies for hematologic disorders by enabling the creation of patient-specific HSCs for transplantation.
In a 5-year study, Hanna Katri Annikki Mikkola, MD, PhD, of the University of California, Los Angeles, and her colleagues investigated a unique HSC surface protein called GPI-80.
They found that GPI-80 is produced by a subpopulation of human fetal hematopoietic stem/progenitor cells (HSPCs)—the only group of cells that could self-renew and differentiate into various blood cell types.
The investigators also found that this subpopulation—CD34+CD38lo/-CD90+GPI-80+ HSPCs—was the sole population able to permanently integrate into and thrive within the blood system of a recipient mouse.
Dr Mikkola and her colleagues further discovered that GPI-80 identifies human HSPCs during multiple phases of development and migration.
These include the early first trimester of fetal development, when newly generated HSCs can be found in the placenta, and the second trimester, when HSCs are actively replicating in the fetal liver and the fetal bone marrow.
“We found that whatever HSC niche we investigated, we could use GPI-80 as the best determinant to find the stem cell as it was being generated or colonized different hematopoietic tissues,” Dr Mikkola said.
“Moreover, loss of GPI-80 caused the stem cells to differentiate. This essentially tells us that GPI-80 must be present to make HSCs. We now have a very unique marker for investigating how human hematopoietic cells develop, migrate, and function.”
Dr Mikkola’s team is exploring different stages of human HSC development and pluripotent stem cell differentiation based on the GPI-80 marker and comparing how HSCs are being generated in vitro and in vivo.
The group says this paves the way for scientists to redirect pluripotent stem cells into patient-specific HSCs for transplantation into a patient without the need to find a suitable donor.
“Now that we can use GPI-80 as a marker to isolate the human hematopoietic stem cell at different stages of development, this can serve as a guide for identifying and overcoming the barriers to making human HSCs in vitro, which has never been done successfully,” Dr Mikkola said.
“We can now better understand the missing molecular elements that in vitro-derived cells don’t have, which is critical to fulfilling the functional and safety criteria for transplantation to patients.”
Early Cancer Detection Helps Underserved Women
Nearly 60,000 breast and cervical cancers were detected and diagnosed between 1991 and 2011 through the CDC’s National Breast and Cervical Cancer Early Detection Program (NBCCEDP).
The NBCCEDP is the only nationwide cancer screening program serving all 50 states, the District of Columbia, 5 U.S. territories, and 11 tribes or tribal organizations. In its first 20 years, the program served > 4.3 million women who might not otherwise have received preventive screenings. More than 10.7 million received mammograms and Pap tests.
More than 90% of the women in whom cancerous or precancerous lesions were detected received appropriate and timely follow-up care, according to a CDC report, published in an August 2014 supplement to Cancer. The supplement, National Breast and Cervical Cancer Early Detection Program: Two Decades of Service to Underserved Women, contains 13 new papers that evaluate aspects of the NBCCEDP, showing “consistent value” in the program, the CDC says, even beyond its original purpose of detecting cancers in underserved women.
This is the first time detailed information has been published about the program’s screening activities and other interventions. Partnerships with national organizations, community-based organizations, government agencies, tribes, health care systems, and professional organizations have played a “critical role” in achieving NBCCEDP goals, the CDC says.
DNA finding has implications for MPNs
A new study suggests the timing of DNA replication—including where the origin points are and in what order DNA segments are copied—varies from person to person.
The research also revealed the first genetic variants that orchestrate replication timing.
And researchers found evidence suggesting that differences in replication timing may explain why some people are more prone than others to developing myeloproliferative neoplasms (MPNs).
“Everyone’s cells have a plan for copying the genome,” said study author Steven McCarroll, PhD, of Harvard Medical School in Boston. “The idea that we don’t all have the same plan is surprising and interesting.”
Dr McCarroll and his colleagues described this research in Cell.
Replication timing and MPNs
The researchers noted that DNA replication is one of the most fundamental cellular processes, and any variation among people is likely to affect genetic inheritance, including individual disease risk as well as human evolution.
Replication timing is known to affect mutation rates. DNA segments that are copied too late or too early tend to have more errors.
The new study indicates that people with different timing programs therefore have different patterns of mutation risk across their genomes. For example, differences in replication timing could explain predisposition to MPNs.
Researchers previously showed that acquired mutations in JAK2 lead to MPNs. They also noticed that people with JAK2 mutations tend to have a distinctive set of inherited genetic variants nearby, but they weren’t sure how the inherited variants and the new mutations were connected.
Dr McCarroll’s team found that the inherited variants are associated with an “unusually early” replication origin point and proposed that JAK2 is more likely to develop mutations in people with that very early origin point.
“Replication timing may be a way that inherited variation contributes to the risk of later mutations and diseases that we usually think of as arising by chance,” Dr McCarroll said.
A new method of study
Dr McCarroll and his colleagues were able to make these discoveries, in large part, because they invented a new way to obtain DNA replication timing data. They turned to the 1000 Genomes Project, which maintains an online database of sequencing data collected from hundreds of people around the world.
Because much of the DNA in the 1000 Genomes Project had been extracted from actively dividing cells, the team hypothesized that information about replication timing lurked within, and they were right.
They counted the number of copies of individual genes in each genome. Because early replication origins had created more segment copies at the time the sample was taken than late replication origins had, the researchers were able to create a personalized replication timing map for each person.
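In outline, that inference can be sketched as follows. This is an illustrative sketch of the idea only, not the authors’ actual pipeline; the function name, window size, and normalization are assumptions made for the example.

```python
# Illustrative sketch: in a sample of actively dividing cells, loci that
# replicate early are present in more copies than loci that replicate
# late, so normalized read depth along the genome acts as a proxy
# replication-timing profile.
import numpy as np

def timing_profile(read_starts, genome_length, window=50_000):
    """Return log2 normalized read depth per window (higher = earlier)."""
    bins = np.arange(0, genome_length + window, window)
    depth, _ = np.histogram(read_starts, bins=bins)
    depth = depth.astype(float) / depth.mean()  # remove overall coverage
    return np.log2(depth + 1e-9)                # >0 early-ish, <0 late-ish

# Profiles computed this way for many individuals can then be compared
# window by window, and timing differences tested against nearby variants.
```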
“People had seen these patterns before but just dismissed them as artifacts of sequencing technology,” Dr McCarroll said. After conducting numerous tests to rule out that possibility, “we found that they reflect real biology.”
The researchers then compared each person’s copy number information with his or her genetic sequence data to see if they could match specific genetic variants to replication timing differences. From 161 samples, the team identified 16 variants. The variants were short, and most were common.
“I think this is the first time we can pinpoint genetic influences on replication timing in any organism,” said study author Amnon Koren, PhD, also of Harvard Medical School.
The variants were located near replication origin points, leading the researchers to wonder if they affect replication timing by altering where a person’s origin points are. The team also suspects the variants work by altering chromatin structure, exposing local sequences to replication machinery.
The group intends to find out. They also want to search for additional variants that control replication timing.
“These 16 variants are almost certainly just the tip of the iceberg,” Dr Koren said.
He and his colleagues believe that, as more variants come to light in future studies, researchers should be better able to manipulate replication timing in the lab and learn more about how it works and its biological significance.
“All you need to do to study replication timing is grow cells and sequence their DNA, which everyone is doing these days,” Dr Koren said. “[This new method] is much easier, faster, and cheaper, and I think it will transform the field because we can now do experiments in large scale.”
“We found that there is biological information in genome sequence data,” Dr McCarroll added. “But this was still an accidental biological experiment. Now imagine the results when we and others actually design experiments to study this phenomenon.”
Strategy could reduce TRALI after platelet transfusion
PHILADELPHIA—Researchers believe a simple screening strategy could reduce the risk of transfusion-related acute lung injury (TRALI) in patients receiving apheresis platelets (APs) by about 60%.
Studying TRALI cases reported to the American Red Cross, the investigators found evidence to support the idea that testing female AP donors who report prior pregnancy and deferring those with human leukocyte antigen (HLA) antibodies could greatly decrease the risk of TRALI.
Anne Eder, MD, of the American Red Cross in Rockville, Maryland, presented this research at the AABB Annual Meeting 2014 (abstract S82-040B).
Dr Eder and her colleagues assessed cases of TRALI and possible TRALI reported to the American Red Cross’s national hemovigilance program. The researchers compared the incidence of TRALI according to the type of blood component transfused as well as the sex of the donor.
TRALI rates per 10⁶ distributed units were calculated for APs and red blood cells (RBCs) from 2006 to 2013 and for male-donor-predominant plasma from 2008 to 2013.
The blood center distributed 6.6 million AP units (>70% from male donors, excluding platelet additive solution), 9.6 million plasma units (>95% from male donors), and 48.6 million RBC units (54% from male donors).
In all, there were 224 cases of TRALI; 175 occurred in patients who received a single type of blood component within 6 hours. There were 36 TRALI cases among plasma recipients, 92 among RBC recipients, and 41 among AP recipients.
The TRALI risk was about 3-fold greater for AP recipients than for RBC recipients or recipients of male-predominant plasma. The odds ratios (ORs) were 3.2, 1.0, and 0.8, respectively. The OR for all plasma recipients (including group AB female plasma) was 2.0.
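As a rough consistency check, per-unit rates can be computed directly from the counts above. This is a sketch only; the study’s odds ratios were estimated separately, so these simple rate ratios only approximate them.

```python
# Cases per 10^6 distributed units, from the counts quoted above, with
# RBCs as the reference. The rate ratios land near the quoted ORs for APs
# (3.2) and all plasma (2.0); the 0.8 figure refers to the male-donor-
# predominant plasma subset, which is not broken out here.
cases = {"AP": 41, "plasma (all)": 36, "RBC": 92}
units_millions = {"AP": 6.6, "plasma (all)": 9.6, "RBC": 48.6}

rbc_rate = cases["RBC"] / units_millions["RBC"]
for component, n in cases.items():
    rate = n / units_millions[component]  # per 10^6 units
    print(f"{component}: {rate:.1f} per 10^6 (ratio vs RBC: {rate / rbc_rate:.1f})")
```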
The rate of fatalities was higher for AP recipients than RBC recipients, at 0.6 per 10⁶ and 0.2 per 10⁶, respectively (P=0.04).
When the researchers analyzed TRALI cases according to donor, they found a nearly 6-fold predilection for female donors among AP recipients (OR=5.6) and a nearly 5-fold predilection for female donors in RBC recipients (OR=4.5).
The investigators also considered the 41 AP TRALI cases individually to assess how effective a screening program might have been for reducing the risk of TRALI.
In 12 cases, patients had received AP from a male donor. Of the 29 female donors, 26 had reported a prior pregnancy, and 2 had test results suggesting a prior pregnancy.
Of those 28 donors, 3 were negative for HLA antibodies, leaving 25 cases (61% of the 41 AP TRALI cases) attributable to donors with HLA antibodies.
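The arithmetic behind the roughly 60% figure follows directly from these counts:

```python
# Preventable-fraction arithmetic from the case counts quoted above.
total_cases = 41                                     # AP TRALI cases reviewed
male_donor_cases = 12
female_donor_cases = total_cases - male_donor_cases  # 29
with_prior_pregnancy = 26 + 2                        # reported + test evidence
hla_positive = with_prior_pregnancy - 3              # 25

print(f"{hla_positive}/{total_cases} = {hla_positive / total_cases:.0%}")
# 25/41 = 61%, the basis of the ~60% risk-reduction estimate
```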
Seventeen of the female donors had HLA class I and II antibodies, including 3 whose donation resulted in a fatality. One had HLA class I only, 2 had HLA class II only, 5 had HLA I or II and a specific human neutrophil antigen (HNA) antibody, and 1 had a specific HNA antibody only.
The researchers evaluated 7 cases in which donors had HLA class I or II antibodies and found that all 7 had signal-to-cutoff ratios (greater than 100) much higher than any cutoff discussed for screening donors.
“So we predict that a strategy to test female apheresis donors who report prior pregnancy and to defer those with HLA antibodies may reduce the risk of TRALI by about 60% and prevent cases from human neutrophil antibodies as well,” Dr Eder concluded.
Sickle cell trait linked to increased risk of CKD
Sickle cell trait may increase the risk of chronic kidney disease (CKD) and poor kidney function, according to a study published in JAMA.
Researchers evaluated nearly 16,000 African Americans and found that subjects with sickle cell trait had a greater risk of CKD and incident CKD than subjects who did not have the trait.
Trait carriers were also more likely to have albuminuria and a decrease in estimated glomerular filtration rate (eGFR), both characteristics of poor kidney function.
This study was released to coincide with its presentation at the American Society of Nephrology’s Kidney Week Annual Meeting.
Rakhi P. Naik, MD, of Johns Hopkins University in Baltimore, and her colleagues conducted this research to investigate the relationship between sickle cell trait and kidney impairment.
The team looked at data from 5 large, population-based studies. They evaluated 15,975 self-identified African Americans—1248 of whom had sickle cell trait and 14,727 who did not.
The researchers assessed CKD, defined as an eGFR of <60 mL/min/1.73 m² at baseline or follow-up, as well as incident CKD. They also assessed the rate of albuminuria, defined as a spot urine albumin:creatinine ratio of >30 mg/g or an albumin excretion rate of >30 mg/24 hours, and decline in eGFR, defined as a decrease of >3 mL/min/1.73 m² per year.
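Because these thresholds recur throughout the results, a minimal sketch of the outcome definitions in code may be clearer than prose; the function names and inputs below are hypothetical, not taken from the study.

```python
# Illustrative translation of the study's outcome definitions into code.
# Function and parameter names are hypothetical, not from the paper.
from typing import Optional

def has_ckd(egfr: float) -> bool:
    """CKD: eGFR < 60 mL/min/1.73 m^2 at baseline or follow-up."""
    return egfr < 60

def has_albuminuria(acr_mg_per_g: Optional[float] = None,
                    aer_mg_per_24h: Optional[float] = None) -> bool:
    """Albuminuria: spot urine ACR > 30 mg/g or AER > 30 mg/24 h."""
    return ((acr_mg_per_g is not None and acr_mg_per_g > 30)
            or (aer_mg_per_24h is not None and aer_mg_per_24h > 30))

def meets_egfr_decline(egfr_start: float, egfr_end: float, years: float) -> bool:
    """eGFR decline: loss of > 3 mL/min/1.73 m^2 per year."""
    return (egfr_start - egfr_end) / years > 3
```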
CKD and incident CKD were more common among sickle cell trait carriers than noncarriers. CKD was present in 19.2% (239/1247) of carriers and 13.5% (1994/14,722) of noncarriers, and incident CKD occurred in 20.7% (140/675) of carriers and 13.7% (1158/8481) of noncarriers.
Sickle cell trait was also associated with a faster decline in eGFR: 22.6% (150/665) of carriers met the definition of eGFR decline, compared with 19.0% (1569/8249) of noncarriers.
The trait was likewise associated with a higher incidence of albuminuria, which occurred in 31.8% (154/485) of carriers versus 19.6% (1168/5947) of noncarriers.
Overall, subjects with sickle cell trait had a greater risk of CKD (odds ratio [OR], 1.57), incident CKD (OR, 1.79), decline in eGFR (OR, 1.32), and albuminuria (OR, 1.86).
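For readers who want to verify the direction of these effects, a crude (unadjusted) odds ratio can be computed directly from the counts above; the published estimates appear to be model-adjusted, so the crude figure lands near, but not exactly on, the reported value. A sketch:

```python
def crude_odds_ratio(events_exposed: int, n_exposed: int,
                     events_unexposed: int, n_unexposed: int) -> float:
    """Crude odds ratio from 2x2 counts (no covariate adjustment)."""
    odds_exposed = events_exposed / (n_exposed - events_exposed)
    odds_unexposed = events_unexposed / (n_unexposed - events_unexposed)
    return odds_exposed / odds_unexposed

# CKD among trait carriers vs noncarriers, using the counts reported above:
print(round(crude_odds_ratio(239, 1247, 1994, 14722), 2))  # ~1.51 (crude)
# The published OR of 1.57 differs slightly, consistent with adjustment.
```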
The researchers said the associations found in this study may offer an additional genetic explanation for the increased risk of CKD observed among African Americans compared with other racial groups.
They added that the study also highlights the need for further research into the renal complications of sickle cell trait. Because screening for the trait is widely performed, accurate characterization of disease associations with sickle cell trait is needed to inform policy and treatment recommendations.
Optimizing the Primary Care Management of Chronic Pain Through Telecare
Study Overview
Objective. To evaluate the effectiveness of a collaborative telecare intervention on chronic pain management.
Design. Randomized clinical trial.
Settings and participants. Participants were recruited over a 2-year period from 5 primary care clinics within a single Veterans Affairs medical center. Patients aged 18 to 65 years were eligible if they had chronic (≥ 3 months) musculoskeletal pain of at least moderate intensity (Brief Pain Inventory [BPI] score ≥ 5). Patients were excluded if they had a pending disability claim; a diagnosis of bipolar disorder or schizophrenia; moderately severe cognitive impairment; active suicidal ideation; current illicit drug use; or a terminal illness, or if they received primary care outside of the VA. Participants were randomized to either the telephone-delivered collaborative care management intervention group or usual care. Usual care was defined as continuing to receive care from their primary care provider for management of chronic musculoskeletal pain.
Intervention. The telecare intervention comprised automated symptom monitoring (ASM) and optimized analgesic management through an algorithm-guided stepped care approach delivered by a nurse case manager. ASM was delivered either by an interactive voice-recorded telephone call (51%) or by internet (49%), set according to patient preference. Intervention calls occurred at 1 and 3 months. Additional contact with participants from the intervention group was generated in response to ASM trend reports.
Main outcome measures. The primary outcome was the BPI total score. The BPI scale ranges from 0 to 10, with higher scores indicating worsening pain. A 1-point change is considered clinically important. Secondary pain outcomes included BPI interference and severity, global pain improvement, treatment satisfaction, and use of opioids and other analgesics. Patients were interviewed at 1, 3, 6, and 12 months.
Main results. A total of 250 participants were enrolled, 124 assigned to the intervention group and 126 assigned to usual care. The mean (SD) baseline BPI scores were 5.31 (1.81) for the intervention group and 5.12 (1.80) for usual care. Compared with usual care, the intervention group had a 1.02-point lower BPI score at 12 months (95% confidence interval [CI], −1.58 to −0.47) (P < 0.001). Patients in the intervention group were nearly twice as likely to report at least a 30% improvement in their pain score by 12 months (51.7% vs. 27.1%; relative risk [RR], 1.9 [95% CI, 1.4 to 2.7]), with a number needed to treat of 4.1 (95% CI, 3.0 to 6.4) for a 30% improvement.
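The headline effect sizes can be reproduced from the reported response rates alone; the short sketch below is a quick arithmetic check, not the trial's analysis code.

```python
# Quick check of the reported RR and NNT from the 12-month response rates.
p_intervention = 0.517  # >=30% improvement in pain score, telecare group
p_usual_care = 0.271    # >=30% improvement in pain score, usual care

relative_risk = p_intervention / p_usual_care
absolute_risk_reduction = p_intervention - p_usual_care
number_needed_to_treat = 1 / absolute_risk_reduction

print(f"RR  = {relative_risk:.1f}")           # 1.9, as reported
print(f"NNT = {number_needed_to_treat:.1f}")  # 4.1, as reported
```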
Patients in the intervention group were more likely to rate as good to excellent the medication prescribed for their pain (73.9% vs 50.9%; RR, 1.5 [95% CI, 1.2 to 1.8]). Patients in the usual care group were more likely to experience worsening of pain by 6 months compared with the intervention group. A greater number of analgesics were prescribed to patients in the intervention group; however, opioid use between groups did not differ at baseline or at any point during the trial period. For the secondary outcomes, the intervention group reported greater improvement in depression compared with the usual care group, a difference that was statistically significant (P < 0.001). They also reported fewer days of disability, although this difference was not statistically significant (P = 0.34).
Conclusion. Telecare collaborative management was more effective in improving chronic pain outcomes than usual care. This was accomplished through the optimization of non-opioid analgesic therapy facilitated by a stepped care algorithm and automated symptom monitoring.
Commentary
Chronic pain affects up to 116 million American adults and is recognized as an emerging public health problem that costs the United States a half trillion dollars annually, with disability and hospitalization as the largest burdens [1]. The physical and psychological complexities of chronic pain require comprehensive individualized care from interdisciplinary teams who will facilitate prevention, treatment, and routine assessment in chronic pain sufferers [2]. However, enhancing pain management in primary care requires overcoming the high costs and considerable time needed to continually support patients in pain. Telecare offers a means by which doctors and nurses can provide primary care services to patients in need of comprehensive pain management, but the effectiveness of telecare-delivered interventions for patients with chronic pain is largely unknown.
This study had several strengths, including a distinct and well-defined intervention, population, comparator, and outcome. The inclusion criteria were broad enough to account for various age groups, and therefore various pain experiences, yet excluded patients with characteristics likely to confound pain outcomes, such as severe mental health disorders. Participants were randomized in blinded fashion to 1 of 2 clearly defined groups. The stepped algorithm used in the study, SCOPE [3], is a validated and reliable method for assessing chronic pain outcomes. The statistical analyses were appropriate and included analyses of variance to detect between-group differences for continuous variables. The rate of follow-up was excellent, with 95% of participants providing measurable outcome assessments at 12 months. The scientific background and rationale for this study were explicit and relevant to current advances in medicine.
The study is not without limitations, however. It is unclear whether the 2 trial groups were treated equally. Data received through ASM from the intervention group prompted physicians to adjust a patient’s medication regimen, essentially providing caregivers updates on a patient’s status. This occurred in addition to the 4 monthly interviews that both groups received per protocol. The study did not elucidate exactly what care was provided to the usual care group and, therefore, does not allow for the disaggregation of the relative effects of optimizing analgesics and continuous provider monitoring. It is difficult to distinguish whether the additional contact or the intervention itself was more effective in managing pain than usual care. Another limitation, noted by the authors, is the study’s use of a single VA medical center. Demographics reveal a skewed population, 83% male and 77% white, limiting the trial’s generalizability. Most clinical outcomes were considered, though cost-effectiveness of the intervention was not analyzed. As the VA is a cost-sensitive environment, it is important that interventions assessed are not more costly than usual care. Further cost analysis, beyond the health resource utilization reported in the study, would provide a more nuanced assessment of telecare’s feasibility as a replacement for usual primary care. Statistically, the study shows significant improvements in chronic pain in those who received the intervention via telecare; a cost analysis is therefore warranted.
Applications for Clinical Practice
This study illuminates the need for a more intensive pain management program that allows for continuous monitoring. Though the intervention was successfully delivered via telecare, further research is needed to assess whether other programs would be as effective when delivered through telecare and, more importantly, to investigate what characteristics of interventions make telecare successful. Telecare has the potential to improve outcomes, reduce costs, and reduce strains on understaffed facilities, though it is still unknown which conditions would gain from this innovation. This study suggests that chronic pain, a predominantly self-managed condition, would benefit from a more accessible management program [4]. This, however, may not be the case for other health issues that require continual testing and equipment usage, such as infectious diseases. Further studies should focus on populations that call for a patient-centered intervention delivered using a potentially low-cost tool, like the telephone or internet. Finally, a significant cost driver with chronic pain is disability, and though the change in disability days was not statistically significant in this trial, patients in the intervention group self-reported a decrease in disability days, whereas patients in the usual care group self-reported an increase. A clinical improvement in pain management has the potential to shave millions of dollars from U.S. health care spending; this hypothesis deserves further investigation.
—Sara Tierce-Hazard, BA, and Tina Sadarangani, MSN, ANP-BC, GNP-BC
LISTEN NOW: Emergency Medicine and Hospitalist Collaboration
The focus for emergency physicians, says Dr. Heinrich, is triage and disposition. Differing incentives for hospitalists and emergency physicians can cause stress between the groups, and dialogue is needed to defuse the tension, he notes. Dr. Epstein says he thinks that collaboration can be an effective tactic against becoming a “30-day readmission rule” statistic. Shared metrics, developed in partnership, can also improve patient care, he adds.
For more features, visit The Hospitalist's podcast archive.
Enhanced thyroid cancer guidelines expected in 2015
CORONADO, CALIF. – Expect significant enhancements to the updated thyroid cancer management guidelines from the American Thyroid Association, due to be released in early 2015.
The guidelines were last updated in 2009, and the goal of the new version is to “be evidence based and helpful,” guidelines task force chair Dr. Bryan R. Haugen said at the annual meeting of the American Thyroid Association. For example, the new guidelines will contain 101 recommendations, up from 80 in the 2009 version; 175 subrecommendations, up from 103; and 998 references, up from 437. “Still, 59 of the existing 80 recommendations are not substantially changed, showing a general stability in our field over the past 5 to 6 years,” he said.
One enhancement is a definition of the risk of structural disease recurrence in patients without structurally identifiable disease after initial therapy for thyroid cancer. Low risk is defined as intrathyroidal differentiated thyroid cancer with up to five nodal micrometastases less than 0.2 cm in size. Intermediate risk is defined as the presence of aggressive histology, minor extrathyroidal extension, vascular invasion, or more than five involved lymph nodes with metastases 0.2-3 cm in size. High risk is defined as the presence of gross extrathyroidal extension, incomplete tumor resection, distant metastases, or lymph node metastases greater than 3 cm in size.
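Summarized this way, the tiers behave like a simple decision rule. The sketch below captures only the criteria listed above; the full guideline weighs many more variables, and the parameter names are hypothetical.

```python
# Simplified sketch of the recurrence-risk tiers summarized above.
# Illustrative only; the actual ATA guideline uses many more factors.

def recurrence_risk(gross_extrathyroidal_extension: bool = False,
                    incomplete_resection: bool = False,
                    distant_metastases: bool = False,
                    largest_node_cm: float = 0.0,
                    aggressive_histology: bool = False,
                    minor_extrathyroidal_extension: bool = False,
                    vascular_invasion: bool = False,
                    involved_nodes: int = 0) -> str:
    if (gross_extrathyroidal_extension or incomplete_resection
            or distant_metastases or largest_node_cm > 3):
        return "high"
    if (aggressive_histology or minor_extrathyroidal_extension
            or vascular_invasion
            or (involved_nodes > 5 and largest_node_cm >= 0.2)):
        return "intermediate"
    return "low"  # e.g., intrathyroidal DTC, <=5 micrometastases < 0.2 cm

# Example: six involved nodes, largest 1 cm, no other risk features.
print(recurrence_risk(involved_nodes=6, largest_node_cm=1.0))  # intermediate
```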
The guidelines also include a table that defines a patient’s response to therapy as a dynamic risk assessment. “This best applies to the low- to intermediate-risk patients, although it definitely applies to high risk as well,” said Dr. Haugen, who heads the division of endocrinology, metabolism, and diabetes at the University of Colorado Health Sciences Center, Denver. “It’s [a] strong recommendation based on low-quality evidence to use this risk-based response to therapy. A lot of this data is generated from patients who’ve had a thyroidectomy and have received radioiodine. So we’re on a bit more shaky ground right now in a patient who’s had a thyroidectomy but no radioiodine, or a patient who’s had a lobectomy.”
Other changes include the concept that it’s not necessary to biopsy every nodule more than 1 cm in size. “We’re going to be guided by the sonographic pattern in who we biopsy and how we monitor them,” Dr. Haugen explained. “A new recommendation adds follow-up guidance for nodules that do not meet FNA [fine-needle aspiration] criteria. We’re also recommending use of the Bethesda Cytology Classification System for cytology.”
Changes in the initial management of thyroid cancer include a recommendation for cross-sectional imaging with contrast for higher-risk disease and the consideration of lobectomy for some patients with tumors 1-4 cm in size. “This is a controversial recommendation,” Dr. Haugen said. “We got some feedback from members asking if you do it, what’s the TSH target? Should we give them synthetic levothyroxine? We are revising the guidelines based on this feedback to help guide clinicians.”
The new guidelines also call for more detailed/standardized pathology reports, with inclusion of lymph node size, extranodal invasion, and the number of invaded vessels. “I’ve talked to a number of pathologists and clinicians who are very happy about this guidance,” he said. “We also need to look at tumor stage, recurrence risk, and response to therapy in our patients, and the use of selective radioiodine. There is some more information on considering lower administered activities, especially in the lower-risk patients.”
For the first time, the guidelines include a section on radioiodine treatment for refractory differentiated thyroid cancer, including tips on directed therapy, clinical trials, systemic therapy, and bone-specific therapy.
Dr. Haugen disclosed that he has received grants and research support from Veracyte and Genzyme.
On Twitter @dougbrunk
EXPERT ANALYSIS FROM THE ATA ANNUAL MEETING