Barrett’s esophagus length predicts disease progression
ORLANDO – Barrett’s esophagus length is a readily accessible endoscopic marker for disease progression, and it could aid in risk stratification and decision making about patient management, according to a review of records at a tertiary care center.
Of 301 patients diagnosed with Barrett’s esophagus who underwent radiofrequency ablation (RFA) between March 2006 and 2016, 106 met a standardized definition of Barrett’s esophagus and the remaining inclusion criteria, including nondysplastic Barrett’s esophagus at baseline and at least 1 year of follow-up from the time of initial diagnosis, and were included in the study.
Of those 106 patients, 53 progressed to high-grade dysplasia/esophageal adenocarcinoma (HGD/EAC). The overall annual risk of EAC and combined HGD/EAC for the entire cohort was 1.23%/year and 5.94%/year, respectively. Those who progressed had significantly longer Barrett’s esophagus length, compared with 53 nonprogressors (6.37 cm vs. 4.3 cm).
In fact, of all the characteristics assessed, including Barrett’s esophagus length, age, sex, race, mean body mass index, family history of esophageal cancer, proton pump inhibitor use, and total duration of follow-up, only Barrett’s esophagus length was a significant predictor of progression.
“For every 1-cm increase in length of BE [Barrett’s esophagus], the risk of progression to EAC increases by 16%,” Dr. Spataro said.
Although this work, which was awarded a “Presidential Poster” ribbon, is limited by its retrospective design, by the lack of standardized surveillance intervals and biopsy protocols, and by the possibility of elevated progression rates given the nature of the center (a referral center with ablative therapy options), the study included a “decent sample and follow-up” and has important implications for patient care, he noted. The incidence of EAC, he explained, has increased faster than that of any other malignancy in the Western world.
Despite therapeutic advances, the prognosis for patients with EAC remains poor; the annual risk of progression from Barrett’s esophagus to HGD is 0.38%, he added.
Currently, the most commonly used risk-stratification tool for determining surveillance intervals and management of patients with Barrett’s esophagus is the degree of dysplasia. Prior studies have evaluated Barrett’s esophagus length as a predictor of progression to HGD/EAC, but findings have been conflicting, he said.
The current findings suggest that until molecular biomarkers are identified and validated as adjunctive tools for risk stratification, Barrett’s esophagus length could be used to identify patients with nondysplastic Barrett’s esophagus at risk for disease progression.
This could facilitate more rational tailoring of endoscopic surveillance, explained lead author Christina Tofani, MD.
Currently, Barrett’s esophagus patients at the center who have dysplasia generally undergo ablation, while those without dysplasia generally undergo surveillance. Barrett’s esophagus length could be used to adjust surveillance intervals, or to lower the bar for ablation in some cases, she said.
The authors reported having no disclosures.
AT THE 13TH WORLD CONGRESS OF GASTROENTEROLOGY
Key clinical point: Barrett’s esophagus length is a readily accessible endoscopic predictor of progression to high-grade dysplasia or esophageal adenocarcinoma and could aid in risk stratification.
Major finding: Barrett’s esophagus length was found to be a significant independent predictor of progression to adenocarcinoma (odds ratio, 1.16).
Data source: A retrospective review of 106 cases.
Disclosures: The authors reported having no disclosures.
Scheduling patterns in hospital medicine
For years, the Society of Hospital Medicine has been asking hospital medicine programs about operational metrics in order to understand and catalog how they are functioning and evolving. After compensation, the scheduling patterns used by hospital medicine groups (HMGs) are the most reviewed item in the report.
When hospital medicine first started, 7 days working followed by 7 days off (7-on-7-off) quickly became the vogue. No one really knows how this happened, but most likely it was because hospital medicine most closely resembled emergency medicine, and scheduling like emergency medicine (that is, 14 shifts per month) seemed to make sense. That, along with the assumption that continuity of care was critical to inpatient care and would improve quality, most likely drove the popularity of the 7-on-7-off schedule.
In the most recent survey in 2016, HMGs were once again asked to comment on how they schedule. Groups were able to choose from five scheduling options:
1. Seven days on followed by 7 days off
2. Other fixed rotation block schedules (such as 5-on-5-off or 10-on-5-off)
3. Monday to Friday with rotating weekend coverage
4. Variable schedule
5. Other
Looking at HMG programs that serve only adult populations, a plurality (48%) follow a fixed rotating block schedule, either 7 days on followed by 7 days off or some other fixed pattern, while 31% of responding programs reported using a Monday to Friday schedule. Taken as a whole, these figures would suggest that the 7-on-7-off schedule is quickly losing popularity while the Monday to Friday schedule is increasingly being used. However, this broad generalization doesn’t really give you the full picture.
Upon analyzing the data further, we see some distinct differences based on program size. Small programs (fewer than 10 full-time equivalents [FTEs]) are much more likely to use a Monday to Friday schedule than any other model, whereas only a handful of large programs (more than 20 FTEs) schedule this way, opting instead for a 7-on-7-off schedule.
The last survey was done in 2014, and a lot has changed since then. Significantly more programs responded in 2016 than in 2014 (530 vs. 355), and the majority of this increase was made up of smaller programs (fewer than 10 FTEs). The number of programs with four or fewer FTEs more than quadrupled compared with the prior survey (37 programs in 2014 vs. 151 in 2016). Overall, programs with fewer than 10 FTEs constituted over 50% of the programs that responded in 2016, whereas they made up only a third in 2014. This is particularly significant because program size was the one variable that determined how a program might schedule; other factors, such as geographic region, academic status, or primary hospital GME status, did not show significant variance in how groups scheduled.
The second major change is that these same small programs (those with fewer than 10 FTEs) moved overwhelmingly to a Monday to Friday schedule. In 2014, only 3% of small programs scheduled using a Monday to Friday pattern, but in 2016 almost 50% of small programs reported scheduling this way. The change in the overall composition of respondents, with small programs now making up over 50% of reporting programs, together with this specific change in how small programs schedule, resulted in a noteworthy decrease in programs using a 7-on-7-off schedule (53.8% in 2014 vs. only 38.1% in 2016) and a corresponding increase in programs using a Monday to Friday schedule (4% in 2014 to 31% in 2016).
In distinct contrast to programs with fewer than 10 FTEs, a very similar number of programs with more than 20 FTEs reported in 2016 as in 2014; there was no increase in this subgroup. I’m not clear at this time whether this is because there is truly no increase in the number of large programs nationally, or whether another factor is causing larger programs to underreport. The large programs that did report data in 2016 continued to use a 7-on-7-off schedule or another fixed rotating block schedule more than 50% of the time; in fact, use of one of these two scheduling patterns increased slightly from 2014 to 2016 (from 52% to 58%). Programs that used neither of these patterns were most likely to use a variable schedule. A Monday to Friday schedule was almost never used in programs of this size and showed no significant change from 2014 to 2016.
This snapshot highlights the changing landscape of hospital medicine. Hospital medicine is penetrating more and more into smaller and smaller hospitals, and has even made it into critical access hospitals. As recently as 5-10 years ago, these hospitals were felt to be too small to support a hospital medicine program. This is likely one reason for the increase in programs with four or fewer FTEs. There has also been increasing discontent with the 7-on-7-off schedule, which many feel is contributing to burnout. Dr. Bob Wachter famously said during the closing plenary of the 2016 Society of Hospital Medicine Annual Meeting that the 7-on-7-off schedule was “a mistake.” Despite this brewing discontent, larger programs have not changed their scheduling patterns, likely because finding another scheduling pattern that is effective, supports high-quality care, and is sustainable for such a large group is challenging.
Many people will say that there are as many different types of hospital medicine programs as there are hospital medicine programs. This is true for scheduling as for other aspects of hospital medicine operations. As we continue to grow and evolve as an industry, scheduling patterns will continue to change and evolve as well. For now, two patterns are emerging – smaller programs are utilizing a Monday to Friday schedule and larger programs are utilizing a 7-on-7-off schedule. Only time will tell if these scheduling patterns persist or continue to evolve.
Dr. George is a board certified internal medicine physician and practicing hospitalist with over 15 years of experience in hospital medicine. She has been actively involved in the Society of Hospital Medicine and has participated in and chaired multiple committees and task forces. She is currently executive vice president and chief medical officer of Hospital Medicine at Schumacher Clinical Partners, a national provider of emergency medicine and hospital medicine services. She lives in the northwest suburbs of Chicago with her family.
Skills training improves psychosocial outcomes for young cancer patients
Compared with standard psychosocial care, a one-on-one skills-based intervention improved psychosocial outcomes in adolescents and young adults with cancer, according to results of a pilot randomized study presented at the Palliative and Supportive Care in Oncology Symposium.
The novel intervention was associated with improved patient resilience, cancer-specific quality of life, and hope, plus fewer cases of depression, said lead study author Abby R. Rosenberg, MD, director of palliative care and resilience research at Seattle Children’s Research Institute.
Brief, developmentally targeted psychosocial interventions are promising for this population of adolescents and young adults with cancer, Dr. Rosenberg said in a press conference at the symposium, which was cosponsored by AAHPM, ASCO, ASTRO, and MASCC.
Adolescents and young adults with cancer tend to have poor psychosocial outcomes, possibly because they have not yet developed skills that would help them manage hardships they encounter as a result of having cancer, according to Dr. Rosenberg.
She and her colleagues previously designed and tested the intervention, called Promoting Resilience in Stress Management (PRISM). The intervention is brief and focuses on helping patients develop skills in stress management, goal setting, positive reframing, and benefit finding.
The clinical evaluation of PRISM presented at the symposium included 100 English-speaking patients aged 12-25 who had new or recently recurrent cancer. They were randomized to the skills-based intervention or standard psychosocial care.
In the PRISM group, the adolescents and young adults participated in four in-person one-on-one training sessions lasting 30-60 minutes, plus a facilitated family meeting. Patients were surveyed at baseline and again at 6 months to measure the impact of the intervention.
A total of 36 patients in the PRISM arm and 38 in the usual-care arm completed the study. Most attrition was due to medical complications or death, the investigators said.
Results showed that, compared with standard psychosocial care, the skills-based intervention was associated with significant improvements in resilience (+2.3; 95% confidence interval, 0.7 to 4.0) and hope (+2.8; 95% CI, 0.5 to 5.1), along with nonsignificant trends toward better quality of life (+6.3; 95% CI, –0.8 to 13.5) and less distress (–1.6; 95% CI, –3.3 to 0.0).
Fewer cases of depression occurred in the PRISM group compared with the standard care group (two versus eight cases), Dr. Rosenberg added.
The psychosocial toll of cancer can be significant, especially in a vulnerable population such as adolescents and young adults, according to Andrew S. Epstein, MD, of Memorial Sloan Kettering Cancer Center, New York. “The intervention by Rosenberg and her coauthors represents an important beacon of hope for improving the cancer experience for this population,” Dr. Epstein said.
FROM PALLONC 2017
Key clinical point: A one-on-one skills-based intervention improved psychosocial outcomes, compared with standard psychosocial care, in adolescents and young adults with cancer.
Major finding: The skills-based intervention was associated with improvements in resilience (+2.3; 95% CI, 0.7 to 4.0), hope (+2.8; 95% CI, 0.5 to 5.1), quality of life (+6.3; 95% CI, –0.8 to 13.5), and distress (–1.6; 95% CI, –3.3 to 0.0).
Data source: A pilot study of 100 English-speaking cancer patients aged 12-25 who were randomly assigned to the skills-based intervention or standard psychosocial care.
Disclosures: The study was partly funded by the National Institutes of Health. The authors reported having no financial disclosures.
ACIP recommends third MMR dose, if outbreak risk
The Advisory Committee on Immunization Practices (ACIP) voted Oct. 25 to recommend a third dose of measles, mumps, and rubella (MMR) vaccine for individuals at increased risk for mumps during an outbreak.
The recommendation applies to individuals who already have been vaccinated with the usual two doses of MMR “who are identified by public health as at increased risk for mumps because of an outbreak,” according to draft text of the recommendation. This practice would “improve protection against mumps disease and related complications.”
Young adults are at highest risk.
Key evidence supporting the ACIP’s recommendation includes a recent study suggesting that a third dose of MMR is effective for mumps outbreak control (N Engl J Med. 2017 Sep 7. doi: 10.1056/NEJMoa1703309).
In that study, Cristina V. Cardemil, MD, of the CDC, and her colleagues looked at college students who received a third MMR dose during a mumps outbreak at the University of Iowa in Iowa City. Almost a quarter of students (4,783 of 20,496) enrolled in the 2015-2016 academic year received a third dose. Compared with students who received two doses of MMR, those who received three total doses had a 78% lower risk of mumps at 28 days after vaccination, the investigators reported.
“These findings suggest that the campaign to administer a 3rd dose of MMR vaccine improved mumps outbreak control and that waning immunity probably contributed to propagation of the outbreak,” Dr. Cardemil and her colleagues wrote.
The vote in favor of the third dose was unanimous among the 15 voting members of the ACIP. The committee’s recommendations must be approved by the CDC director before they are considered official.
AT AN ACIP MEETING
VIDEO: Burnout affects half of U.S. gastroenterologists
ORLANDO – Nearly half of U.S. gastroenterologists who responded to a recent survey had symptoms of burnout that seemed largely driven by work-life balance issues.
Burnout appeared to disproportionately affect younger gastroenterologists, those who spend more time on chores at home including caring for young children, physicians who were neutral toward or dissatisfied with a spouse or partner, and clinicians planning to soon leave their practice, Carol A. Burke, MD, said at the World Congress of Gastroenterology at ACG 2017.
Factors not linked with burnout included their type of practice, whether the gastroenterologists worked full or part time, their location, and their compensation, said Dr. Burke, director of the Center for Colon Polyp and Cancer Prevention at the Cleveland Clinic.
The life issues that appeared most strongly linked to burnout “speak to a problem for physicians to balance” their professional and personal lives, Dr. Burke said in a video interview. Several interventions exist that can potentially mitigate burnout, and the American College of Gastroenterology, which ran the survey, is taking steps to make information on these interventions available to members, noted Dr. Burke, the organization’s president.
Dr. Burke and her associates sent a 60-item survey to all 11,080 College members during 2014 and 2015 and received 1,021 replies, including 754 fully completed responses. Their prespecified definition of burnout was a high score for emotional exhaustion, for depersonalization, or both, on the Maslach Burnout Inventory. The results showed that 45% of respondents had a high score for emotional exhaustion, 21% scored high on depersonalization, and 49% overall met the burnout criteria set by the investigators. The Inventory answers also showed that 18% had a low sense of personal accomplishment.
A multivariate analysis showed that significant links with burnout were younger age, more time spent on domestic chores, having a neutral or dissatisfying relationship with a spouse or partner, and plans for imminent retirement from gastroenterology practice, Dr. Burke reported.
The main reasons cited for planning imminent retirement were reimbursement (32% of this subgroup), regulations (21%), recertification (16%), and electronic medical records (10%).
Strategies and resources aimed at better dealing with burnout were requested by 60% of all survey respondents, and the College is in the process of making these tools available, Dr. Burke said.
AT THE 13TH WORLD CONGRESS OF GASTROENTEROLOGY
Key clinical point: Nearly half of surveyed U.S. gastroenterologists met criteria for burnout, which appeared to be driven largely by work-life balance issues.
Major finding: Forty-nine percent of surveyed U.S. gastroenterologists showed a high level of emotional exhaustion, depersonalization, or both.
Data source: Survey results from 754 members of the American College of Gastroenterology.
Disclosures: The American College of Gastroenterology funded the survey. Dr. Burke had no relevant disclosures.
Citrate reactions seen in 7% of apheresis donations
SAN DIEGO – Citrate reactions occurred in about 7% of apheresis donations, based on data from Héma-Québec, Montreal, presented at the annual meeting of the American Association of Blood Banks.
Vasovagal reactions were seen in 2.5% of procedures, and reactions with loss of consciousness occurred in 0.1%, reported Pierre Robillard, MD, of McGill University and Héma-Québec.
The fairly high rates of adverse reactions speak to the importance of taking preventive measures in donors undergoing apheresis, he said. Hypocalcemia and other citrate-induced abnormalities can affect neuromuscular and cardiac function. Most reactions are mild dysesthesias, but tetany, seizures, and cardiac arrhythmias can occur. Prophylactic oral or intravenous calcium supplements can correct decreased ionized calcium levels and manage the symptoms of hypocalcemia, which are especially likely in procedures involving platelet collection.
Donor exposure to citrate can vary depending on the type and length of the specific apheresis procedure as well as the type of system used, he said. The risk for vasovagal reactions also varies with the type of procedure performed and the use of volume replacement.
Dr. Robillard and his colleagues examined the severity of all cases of donor complications reported to Héma-Québec, beginning in October 2015. A Trima Accel system by Terumo BCT was used for single and double red blood cell collection, single and double platelets, platelets plus plasma, platelets plus red blood cells, platelets plus red blood cells plus plasma, double platelets plus red blood cells, and double platelets plus plasma. Plasma for fractionation was collected with a PCS®2 Plasma Collection System by Haemonetics.
During the study period, 80,409 apheresis procedures were conducted, involving 14,742 donors. Within this cohort, 5,447 (6.8%) had citrate reactions; 2,006 (2.5%) had vasovagal reactions without loss of consciousness, and 77 (0.1%) had vasovagal reactions with loss of consciousness.
Nearly three quarters of the donors (74%) were male, and rates of citrate reactions were higher in men than in women (7% vs. 6%; P less than .001). There was a linear association between the level of citrate exposure and citrate reaction rates (P less than .001).
“Vasovagal reactions were four times higher for female donors than for males, with or without loss of consciousness, and this difference was statistically significant,” said Dr. Robillard. Vasovagal reactions were higher in first-time donors.
The rate of vasovagal reactions without loss of consciousness was 6.2% in women and 1.6% in men (P less than .001). Vasovagal reactions with loss of consciousness affected 0.22% of women and 0.06% of men (P less than .001).
The rates of citrate reactions were similar across all ages, but the rates of vasovagal reactions declined with age, from 6.1% among donors aged 18-22 years to 1% among those over age 70.
SAN DIEGO – , based on data presented from Héma-Québec, Montreal, presented at the annual meeting of the American Association of Blood Banks.
Vasovagal reactions were seen in 2.5% of procedures, and reactions with loss of consciousness occurred in 0.1%, reported Pierre Robillard, MD, of McGill University and Héma-Québec.
The fairly high rates of adverse reactions speak to the importance of taking preventive measures in donors undergoing apheresis, he said. Hypocalcemia and other citrate-induced abnormalities can affect neuromuscular and cardiac function. Most reactions are mild dysesthesias, but tetany, seizures, and cardiac arrhythmias can occur. Prophylactic oral or intravenous calcium supplements can correct decreased ionized calcium levels and manage the symptoms of hypocalcemia, which are especially likely in procedures involving platelet collection.
Donor exposure to citrate can vary depending on the type and length of the specific apheresis procedure as well as the type of system used, he said. The risk for vasovagal reactions also varies with the type of procedure performed and the use of volume replacement.
Dr. Robillard and his colleagues examined the severity of all cases of donor complications reported to Héma-Québec, beginning in October 2015. A Trima Accel system by Terumo BCT was used for single and double red blood cell collection, single and double platelets, platelets plus plasma, platelets plus red blood cells, platelets plus red blood cells plus plasma, double platelets plus red blood cells, and double platelets plus plasma. Plasma for fractionation was collected with a PCS®2 Plasma Collection System by Haemonetics.
During the study period, 80,409 apheresis procedures were conducted, involving 14,742 donors. Within this cohort, 5,447 (6.8%) had citrate reactions; 2,006 (2.5%) had vasovagal reactions without loss of consciousness, and 77 (0.1%) had vasovagal reactions with loss of consciousness.
Three quarters of the donors (74%) were male, and rates of citrate reactions were higher in men than in women (7% vs. 6%, P less than .001). There was a linear association between level of citrate exposure and citrate reactions rates (P less than .001).
“Vasovagal reactions were four times higher for female donors than for males, with or without loss of consciousness, and this difference was statistically significant,” said Dr. Robillard. Vasovagal reactions were higher in first-time donors.
The rate of vasovagal reactions without loss of consciousness was 6.2% in women and 1.6% in men, (P less than .001). Vasovagal reactions with loss of consciousness affected 0.22% of women and 0.06% of men (P less than .001).
The rates of citrate reactions were similar at all ages, but the rates of vasovagal reactions declined with age; the rates were 6.1% in patients aged 18-22 years and 1% among those over age 70.
AT AABB 2017
Key clinical point: Adverse reactions to apheresis donations can be significant; calcium supplements can reduce the risk of citrate reactions and volume replacement can reduce the risk of vasovagal reactions in donors.
Major finding: Citrate reactions accompanied 6.8% of donations; vasovagal reactions without loss of consciousness occurred in 2.5%, and vasovagal reactions with loss of consciousness in 0.1%.
Data source: A study at Héma-Québec, Montreal, of 80,409 apheresis procedures conducted in 14,742 donors.
Disclosures: Dr. Robillard had no disclosures.
State regulations for tattoo facilities increased blood donor pools
SAN DIEGO – Tattoos are rapidly moving into mainstream America, and as more states regulate tattoo facilities, persons with tattoos can be blood donors without compromising patient safety, Mary Townsend of Blood Systems Inc. reported at the annual meeting of the American Association of Blood Banks.
“Two big states – Arizona and California – were added to the list of approved states, and we had a gain of 2,216 donors in California during a 3-month period and a gain of 4,035 donors in Arizona over 4 months,” Ms. Townsend said.
Both the AABB and the Food and Drug Administration require a 12-month deferral of donors after they have received tattoos using nonsterile needles or reusable ink. The FDA’s current 2015 guidance also states that tattooed donors can give plasma as soon as the inked area has healed if they reside in a state that inspects and licenses tattoo facilities, and if a sterile needle and nonreusable ink were used.
Blood Systems monitors state regulations to see if they require tattoo establishments to be licensed and require the use of sterile needles and non-reusable ink. To be considered an approved state, the regulations have to be statewide, covering all jurisdictions.
In the study, Ms. Townsend and her colleagues compared the rates of donors who were deferred before and after Arizona and California were added to the list of approved states, to determine the potential gain in donors with changes in state tattoo licensing regulations.
They analyzed blood centers in California and Arizona before and after implementation of state tattoo regulations. Prospective donors in those states were screened with the question, “In the past 12 months have you had a tattoo?” and, if the answer was yes, asked whether the tattoo was applied by a state-regulated facility.
For California, they compared the 3 months before regulations were implemented (February to April of 2015) with the same 3 months after implementation (February to April of 2016). For Arizona, they compared a 4-month period before implementation (December 2015 to March 2016) with the corresponding period afterward (December 2016 to March 2017).
In both states, a higher proportion of presenting donors reported having gotten a tattoo within the previous 12 months during the postregulatory period. The increase in donors occurred immediately after each state was added to the Acceptable States List: accepted donors increased 13-fold in California and 3-fold in Arizona, and the absolute number of accepted donors with tattoos rose from 13 to 567 in California and from 151 to 1,496 in Arizona, representing annualized potential gains of 2,216 and 4,035 additional blood donations, respectively.
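The annualized figures follow from pro-rating each state’s observed gain to a full year; a minimal sketch of that arithmetic, assuming the simple linear extrapolation the article implies:

```python
# Pro-rate each state's gain in accepted tattooed donors to 12 months.
def annualized_gain(before: int, after: int, window_months: int) -> int:
    return (after - before) * 12 // window_months

print(annualized_gain(13, 567, 3))     # California: 2216
print(annualized_gain(151, 1496, 4))   # Arizona: 4035
```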
Donations from donors who had received a tattoo in a regulated state were reviewed for infectious disease markers, including HIV, hepatitis B virus, and hepatitis C virus; all tested negative.
“Roughly one in three people (in the United States) have a tattoo and, of those, about 70% have more than one tattoo. The bottom line is that 45 million Americans have at least one tattoo,” she said. As more states adopt regulations that meet the guidelines on tattoos and blood donation, the pool of eligible donors increases.
AT AABB 2017
Key clinical point: When states license tattoo facilities and require sterile needles and single-use ink, recently tattooed donors can be accepted without a 12-month deferral, expanding the donor pool without compromising patient safety.
Major finding: The absolute number of accepted donors with tattoos rose from 13 to 567 in California and from 151 to 1,496 in Arizona, which represented an annual potential gain of 2,216 and 4,035 additional blood donations.
Data source: An analysis of blood centers in California and Arizona before and after state tattoo regulations were implemented.
Disclosures: Ms. Townsend had no disclosures.
Stroke cognitive outcomes found worse in Mexican Americans
SAN DIEGO – A new analysis shows that Mexican Americans (MAs) have worse cognitive outcomes a year after having a stroke than do non-Hispanic whites (NHWs).
The findings, which show that MAs had a median score of 86 on the Modified Mini-Mental State Examination (3MSE; range, 0-100), compared with 92 for NHWs, come from a prospective study of MAs and NHWs in Corpus Christi, Texas, where both populations have long been established.
After controlling for all factors, the researchers found a difference of –6.73 (95% confidence interval, –9.57 to –3.88; P less than .001).
“The Mexican-American population is growing quickly and aging. The cost of stroke-related cognitive impairment is high for patient, family, and society. Efforts to combat stroke-related cognitive decline are critical,” said Lewis B. Morgenstern, MD, professor of neurology and epidemiology at the University of Michigan, Ann Arbor.
The study grew out of the Brain Attack Surveillance in Corpus Christi (BASIC) Project, which began in 1999 and is funded until 2019. It is the only ongoing stroke surveillance program that focuses on Mexican Americans, who comprise the largest segment of Hispanic Americans.
The researchers analyzed data encompassing all stroke patients in the BASIC Project from October 2014 through January 2016 (n = 227). They analyzed cognitive outcome data from 3 months, 6 months, and 12 months. MAs were younger on average than NHWs (median age 66 vs. 70; P = .018), and were more likely to have diabetes (54% vs. 36%; P less than .001). They were less likely to have atrial fibrillation (13% vs. 20%; P = .025).
At 12 months, MAs had a lower median 3MSE score of 86 (interquartile range, 73-93), compared with 92 in NHWs (IQR, 83-96; P less than .001). The gap persisted as the researchers adjusted for additional factors. Adjustment for age and sex yielded a difference of 6.88 (95% confidence interval, 4.15-9.60); additional adjustment for prestroke condition showed a difference of 7.04, adding insurance left it at 7.04, and adding diabetes and comorbidities pushed it to 7.11. Final adjustment for stroke severity (National Institutes of Health Stroke Scale) yielded a difference of 6.73 (P less than .001).
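For readers unfamiliar with this kind of stepwise covariate adjustment, the sketch below illustrates the general approach with ordinary least squares on synthetic data. Everything here is hypothetical (column names, data, and model specification), since the article does not describe the authors’ actual model:

```python
# Illustrative sequential covariate adjustment with OLS on synthetic data.
# The coefficient on `mexican_american` plays the role of the adjusted
# score difference reported at each step of the study's analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 227
df = pd.DataFrame({
    "mexican_american": rng.integers(0, 2, n),
    "age": rng.normal(68, 10, n),
    "male": rng.integers(0, 2, n),
    "diabetes": rng.integers(0, 2, n),
    "nihss": rng.poisson(5, n),
})
# Synthetic outcome with a built-in difference of about -7 points.
df["score_3mse"] = (92 - 7 * df["mexican_american"]
                    - 0.1 * (df["age"] - 68) - 0.3 * df["nihss"]
                    + rng.normal(0, 8, n))

base = "score_3mse ~ mexican_american + age + male"
for formula in [base, base + " + diabetes", base + " + diabetes + nihss"]:
    fit = smf.ols(formula, data=df).fit()
    print(formula, "->", round(fit.params["mexican_american"], 2))
```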
Asked if the results were surprising, Dr. Morgenstern replied: “I think it’s always surprising to see one population of U.S. citizens who have more disease or a worse outcome than another when it’s not explained by the many possible factors we considered.” He also called for additional studies of cognitive dysfunction in Hispanic communities in other forms of dementia, such as Alzheimer’s disease and vascular dementia. “There’s very little of that,” he said.
The National Institutes of Health funded the study. Dr. Morgenstern reported having no financial disclosures.
AT ANA 2017
Key clinical point: In an analysis, cognitive outcomes were worse in Mexican-American stroke survivors despite researchers’ controlling for many factors.
Major finding: At 12 months, Mexican Americans scored 6 points lower on the Modified Mini-Mental State Examination compared with non-Hispanic whites.
Data source: Prospective analysis of 227 stroke patients in Corpus Christi, Texas.
Disclosures: The National Institutes of Health funded the study. Dr. Morgenstern reported having no financial disclosures.
Patients prefer higher dose of levothyroxine despite lack of objective benefit
VICTORIA, B.C. – Patient perception plays a large role in subjective benefit of levothyroxine therapy for hypothyroidism, suggests a double-blind randomized controlled trial reported at the annual meeting of the American Thyroid Association.
Mood, cognition, and quality of life (QoL) did not differ whether patients’ levothyroxine dose was adjusted to achieve thyroid-stimulating hormone (TSH) levels in the low-normal, high-normal, or mildly elevated range. But despite this lack of objective benefit, the large majority of patients preferred levothyroxine doses that they perceived to be higher – whether they actually were or not.
The study was not restricted to groups that might have a better response to a higher levothyroxine dose, the presenter, Dr. Samuels, acknowledged. Two such groups are patients with more symptoms (although volunteering for the study suggested dissatisfaction with symptom control) and patients with low tri-iodothyronine (T3) levels (although about half of patients had low baseline levels).
“We encourage further research in older subjects, men, and subjects with specific symptoms, low T3 levels, or functional polymorphisms in thyroid-relevant genes,” Dr. Samuels said. “These are really difficult, expensive studies to do, and if we are going to have any hope of getting them funded and doing them, I think that we have to be much more targeted.”
One of the session co-chairs, Catherine A. Dinauer, MD, a pediatric endocrinologist and clinician at the Yale Pediatric Thyroid Center, New Haven, Conn., commented, “I think these are really interesting data because there’s this sense among patients that their dose really affects how they feel, and this is essentially turning that on its head. It’s not really clear, then, why are these patients still maybe not feeling well.”
“It will be interesting to see more data on this and ... more about this business of checking T3 levels. Do we need to supplement with T3? I think we really don’t know that, especially in kids, but even in adults,” she added.
The other session co-chair, Yaron Tomer, MD, chair of the department of medicine and the Anita and Jack Saltz Chair in Diabetes Research at the Montefiore Medical Center, Bronx, N.Y., commented, “I think this study confirmed what a lot of us feel, that there is a lot of placebo effect when you treat in different ways to optimize the TSH or give T3.”
Other data reported in the session provide a possible explanation for the lack of benefit of adjusting pharmacologic therapy, suggesting that the volumes of various brain structures change with perturbations of thyroid function, he noted. “There might be true changes in the brain that affect how the patients feel. So these patients may truly not feel well. It’s just that we can’t fix it by adjusting the TSH level to very narrow margins or by adding T3,” he said.
Study details
“It is well known that overt hypothyroidism interferes with mood and a number of cognitive functions. However, neurocognitive effects of variations in thyroid function within the reference range and in mild or subclinical hypothyroidism are less clear,” Dr. Samuels noted, giving some background to the research.
“Observational studies of this question have tended to be negative but have often included less sensitive global screening tests of cognition. There are very few small-scale interventional studies,” she continued. “In the absence of conclusive data, many patients with mild hypothyroidism are started on levothyroxine to treat nonspecific quality of life, mood, or cognitive symptoms, and many additional treated patients request increased levothyroxine doses due to persistence of these symptoms.”
Dr. Samuels and her coinvestigators enrolled 138 hypothyroid but otherwise healthy patients who had been on a stable dose of levothyroxine (Synthroid and others) alone for at least 3 months and had normal TSH levels.
The patients were randomized to three groups in which clinicians adjusted levothyroxine dose every 6 weeks in blinded fashion to achieve different target TSH levels: low-normal TSH (0.34-2.50 mU/L), high-normal TSH (2.51-5.60 mU/L), or mildly elevated TSH (5.60-12.0 mU/L). Patients completed a battery of assessments at baseline and again at 6 months.
Main results confirmed that TSH targets were generally achieved, with significantly different mean TSH levels in the low-normal group (1.34 mU/L), high-normal group (3.74 mU/L), and mildly elevated group (9.74 mU/L) (P less than .05).
In crude analyses, results differed significantly across groups for only two of the dozens of measures of mood, cognition, and QoL assessed: bodily pain (26%, 11%, and 34% of patients reporting a high pain level in the low-normal, high-normal, and mildly elevated TSH groups, respectively; P = .03) and working memory on the N-Back test (86%, 58%, and 75% with a 1-back result; P = .002). However, there were no significant differences for any of the measures when analyses were repeated with statistical correction for multiple comparisons.
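With dozens of outcomes tested, a crude P value of .03 is unsurprising by chance alone, which is why the corrected analysis was null. A minimal illustration of such a correction follows; the article does not name the method used, so Holm’s procedure is shown, and every P value other than the two reported is a placeholder:

```python
# Holm correction across ~40 hypothetical outcome measures. Only the two
# significant crude P values (.03 and .002) come from the article; the
# 0.5s are placeholders standing in for the null results.
from statsmodels.stats.multitest import multipletests

p_crude = [0.03, 0.002] + [0.5] * 38
reject, p_adj, _, _ = multipletests(p_crude, alpha=0.05, method="holm")
print(reject[:2])  # [False False]: neither finding survives correction
print(p_adj[:2])   # [1.0, 0.08]
```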
At the end of the study, patients were unable to say with any accuracy whether they were receiving a dose of levothyroxine that was higher than, lower than, or unchanged from their baseline dose (P = .55 for actual vs. perceived).
However, patients preferred what they perceived to be a higher dose (P less than .001 for preferred vs. perceived). In the group perceiving their end-of-study dose was higher, the majority of patients (68%) preferred that dose, and in the group perceiving their end-of-study dose was lower, most (96%) preferred their baseline dose.
The sample size may not have been adequate to detect very small effects of changes in levothyroxine dose, acknowledged Dr. Samuels, who disclosed that she had no relevant conflicts of interest. Additionally, patients were predominantly female and relatively young, and heterogeneous with respect to thyroid diagnosis and duration of levothyroxine therapy.
AT ATA 2017
Key clinical point: Mood, cognition, and quality of life did not differ objectively across levothyroxine doses targeting different TSH ranges, yet patients preferred doses they perceived to be higher.
Major finding: Mood, cognition, and QoL were similar across levothyroxine doses targeting various TSH levels, but patients preferred what they believed was a higher dose, even when it was not (P less than .001 for preferred vs. perceived).
Data source: A randomized trial of levothyroxine adjustment among 138 hypothyroid patients on a stable dose of the drug who had normal TSH levels.
Disclosures: Dr. Samuels disclosed that she had no relevant conflicts of interest.
Conjugate typhoid vaccine safe and effective in phase 2 trials
A new conjugate typhoid vaccine suitable for administration to infants and young children was efficacious, highly immunogenic, and well tolerated, compared with a control meningococcal vaccine, in a phase 2 study that tested the vaccine using a human typhoid infection model.
In a study that compared two formulations of typhoid vaccine to a control meningococcal vaccine, the new Vi-conjugate (Vi-TT) vaccine had an efficacy of 54.6% (95% confidence interval, 26.8-71.8) and a 100% seroconversion rate.
The study was not powered for a direct comparison of the efficacy of the Vi-TT with the efficacy of the Vi-polysaccharide (Vi-PS), the other vaccine used in the study. The Vi-PS vaccine had an efficacy of 52.0% (95% CI, 23.2-70.0), and 88.6% of the Vi-PS recipients had seroconversion.
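Efficacy in a challenge study is conventionally reported in the attack-rate form, VE = 1 – AR(vaccine)/AR(control). The sketch below shows the computation; the denominators match the group sizes reported further on (37 Vi-TT, 35 Vi-PS, 31 control), but the case counts are illustrative values chosen only to reproduce the reported point estimates, since the article does not state them:

```python
# Attack-rate form of vaccine efficacy: VE = 1 - AR_vaccine / AR_control.
# Case counts are illustrative; denominators are the reported group sizes.
def vaccine_efficacy(cases_vax, n_vax, cases_ctl, n_ctl):
    return 1.0 - (cases_vax / n_vax) / (cases_ctl / n_ctl)

print(f"Vi-TT: {vaccine_efficacy(13, 37, 24, 31):.1%}")  # 54.6%, as reported
print(f"Vi-PS: {vaccine_efficacy(13, 35, 24, 31):.1%}")  # 52.0%, as reported
```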
However, “clinical manifestations of typhoid fever seemed less severe among diagnosed participants following Vi-TT vaccination,” Celina Jin, MD, and her colleagues wrote (Lancet. 2017 Sep 28. doi: 10.1016/S0140-6736[17]32149-9). Fever, defined as an oral temperature of 38° C or higher, was seen in 6 of 37 (16%) Vi-TT recipients, 17 of 31 (55%) receiving control, and 11 of 35 (31%) receiving Vi-PS.
Geometric mean titers also were significantly higher in the Vi-TT group than in the Vi-PS group, with an adjusted geometric mean titer of 562.9 EU/mL for Vi-TT and 140.5 EU/mL for Vi-PS (P less than .0001).
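A geometric mean titer is the exponentiated mean of the log-transformed titers, the standard summary for immunogenicity data. A minimal sketch with made-up titers (the article reports only the resulting adjusted GMTs):

```python
import math

# GMT = exp(mean(log(titer))); equivalently the nth root of the product.
def geometric_mean(titers):
    return math.exp(sum(math.log(t) for t in titers) / len(titers))

print(geometric_mean([200, 400, 800, 1600]))  # ~565.7, near Vi-TT's GMT
```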
The study enrolled 112 healthy adult volunteers who were randomized 1:1:1 to receive Vi-PS, Vi-TT, or the control meningococcal vaccine. A total of 103 participants received one of the two study vaccines or the control vaccine, and that group was included in the per-protocol analysis.
After vaccination (recipients and investigators were masked as to which formulation participants received), study participants kept an online diary to report any vaccination-related symptoms for 7 days, and also had clinic visits scheduled at days 1, 3, 7, and 10.
Participants received one oral dose of wild-type Salmonella enterica serovar Typhi Quailes strain bacteria about 1 month after vaccination. The dose was 1-5 × 10⁴ colony-forming units, and was administered immediately following a 120-mL oral bolus of sodium bicarbonate (to neutralize stomach acid).
Participants then were seen daily in an outpatient clinic for 2 weeks. At each visit, investigators monitored vital signs, performed a general assessment, and drew blood to assess for typhoid bacteremia. Participants also kept an online diary for 21 days, reporting twice-daily self-measured temperatures as well. No antipyretics were allowed before typhoid diagnosis.
Participants who met the study’s criteria for typhoid diagnosis were treated with a 2-week course of ciprofloxacin or azithromycin; patients who did not become ill were treated 14 days after the oral typhoid challenge. None of the four serious adverse events reported during the study was deemed to be related to vaccination.
The study’s broad definition of typhoid infection was used to determine attack rates for the primary outcome measure. However, Dr. Jin and her colleagues also looked at a less stringent – and perhaps more clinically pertinent – definition of 12 hours of fever of 38° C or higher followed by S. Typhi bacteremia. Using those criteria, the Vi-TT vaccine prevented up to 87% of infections.
Salmonella Typhi is the world’s leading cause of enteric fever, said Dr. Jin, of the Oxford Vaccine Group at the University of Oxford (England). Up to 20.6 million people per year are affected, with children most commonly infected and low-resource populations in Asia and Africa hardest hit.
Both prescription and over-the-counter antibiotics are used worldwide to combat typhoid fever, and S. Typhi strains are becoming increasingly antibiotic resistant in South Asia and Africa, Dr. Jin and her coauthors said.
The typhoid vaccines that are currently licensed are either not suitable for administration to infants and young children, or are insufficiently immunogenic in younger populations.
The typhoid conjugate vaccine used in the study combines the Vi-polysaccharide capsule with a protein carrier, increasing host immunologic response and making the vaccine effective in infancy.
“This human challenge study provides further evidence to support the deployment of Vi-conjugate vaccines as a control measure to reduce the burden of typhoid fever, because those individuals living in endemic regions should not be made to wait another 60 years,” wrote Dr. Jin and her coauthors.
The study was funded by the Bill & Melinda Gates Foundation and the European Commission FP7 grant Advanced Immunization Technologies.
The Oxford Vaccine Group has developed a typhoid challenge model that provides an important bridge in clinical testing and affords the possibility of significant acceleration of the vaccine development process. Despite the controversy human challenge models sometimes engender, previous human typhoid challenge studies contributed to the development of the live attenuated typhoid vaccine Ty21a.
The conjugate vaccine tested by Dr. Jin and her colleagues is a much-needed weapon in the public health armamentarium of typhoid control. Treatment options are limited in regions of South Asia and Africa where endemic typhoid shows increasing antibiotic resistance.
This human challenge study provides the first evidence that the conjugate vaccine reduces the attack rate of typhoid fever; earlier use in India had already shown the vaccine to be safe and immunogenic, even in children as young as 6 months of age.
The stringent definition of typhoid fever attack used in this study may result in a finding of lower efficacy than would be seen in a field trial, and a National Institutes of Health–sponsored study of another conjugate vaccine found efficacy rates of 89% among Vietnamese preschoolers followed for nearly 4 years after vaccination. When the present study’s data were reanalyzed with use of the less stringent case definition of fever followed by typhoid bacteremia, a similar efficacy of 87.1% was seen for the conjugate vaccine. A larger sample size would be needed in a challenge study that included the less stringent definition as a coprimary endpoint, but results might better correlate with real-world field trials.
Phase 3 and 4 trials for the typhoid conjugate vaccine are forthcoming, but final results will not be tallied for many years. The typhoid challenge study reported by Dr. Jin and her colleagues bolsters hopes that the candidate vaccine will help with typhoid control where it’s most needed.
Nicholas A. Feasey, MD, is at the Liverpool (England) School of Tropical Medicine. Myron M. Levine, MD, is at the University of Maryland, Baltimore. Their comments were drawn from an editorial accompanying the study (Lancet. 2017 Sep 28. doi: 10.1016/S0140-6736[17]32407-8).
The Oxford Vaccine Group has developed a typhoid challenge model that provides an important bridge in clinical testing and affords the possibility of significant acceleration of the vaccine development process. Despite the controversy human challenge models sometimes engender, previous human typhoid challenge studies contributed to the development of the live attenuated typhoid vaccine Ty21a.
The conjugate vaccine tested by Dr. Jin and her colleagues is a much-needed weapon in the public health armamentarium of typhoid control. Treatment options are limited in regions of South Asia and Africa where endemic typhoid shows increasing antibiotic resistance.
This human challenge study provides the first evidence that the conjugate vaccine reduces the attack rate of typhoid fever, though its use in India has shown it to be safe and immunogenic, even in children as young as 6 months of age.
The stringent definition of typhoid fever attack used in this study may result in a finding of lower efficacy than would be seen in a field trial, and a National Institutes of Health–sponsored study of another conjugate vaccine found efficacy rates of 89% among Vietnamese preschoolers followed for nearly 4 years after vaccination. When the present study’s data were reanalyzed with use of the less stringent case definition of fever followed by typhoid bacteremia, a similar efficacy of 87.1% was seen for the conjugate vaccine. A larger sample size would be needed in a challenge study that included the less stringent definition as a coprimary endpoint, but results might better correlate with real-world field trials.
Phase 3 and 4 trials for the typhoid conjugate vaccine are forthcoming, but final results will not be tallied for many years. The typhoid challenge study reported by Dr. Jin and her colleagues bolsters hopes that the candidate vaccine will help with typhoid control where it’s most needed.
Nicholas A. Feasey, MD , is at the Liverpool (England) School of Tropical Medicine. Myron M. Levine, MD , is at the University of Maryland, Baltimore. Their comments were drawn from an editorial accompanying the study (Lancet. 2017 Sep 28. doi: 10.1016/S0140-6736[17]32407-8 ).
The Oxford Vaccine Group has developed a typhoid challenge model that provides an important bridge in clinical testing and affords the possibility of significant acceleration of the vaccine development process. Despite the controversy human challenge models sometimes engender, previous human typhoid challenge studies contributed to the development of the live attenuated typhoid vaccine Ty21a.
The conjugate vaccine tested by Dr. Jin and her colleagues is a much-needed weapon in the public health armamentarium of typhoid control. Treatment options are limited in regions of South Asia and Africa where endemic typhoid shows increasing antibiotic resistance.
This human challenge study provides the first evidence that the conjugate vaccine reduces the attack rate of typhoid fever, though its use in India has shown it to be safe and immunogenic, even in children as young as 6 months of age.
The stringent definition of typhoid fever attack used in this study may result in a finding of lower efficacy than would be seen in a field trial, and a National Institutes of Health–sponsored study of another conjugate vaccine found efficacy rates of 89% among Vietnamese preschoolers followed for nearly 4 years after vaccination. When the present study’s data were reanalyzed with use of the less stringent case definition of fever followed by typhoid bacteremia, a similar efficacy of 87.1% was seen for the conjugate vaccine. A larger sample size would be needed in a challenge study that included the less stringent definition as a coprimary endpoint, but results might better correlate with real-world field trials.
Phase 3 and 4 trials for the typhoid conjugate vaccine are forthcoming, but final results will not be tallied for many years. The typhoid challenge study reported by Dr. Jin and her colleagues bolsters hopes that the candidate vaccine will help with typhoid control where it’s most needed.
Nicholas A. Feasey, MD , is at the Liverpool (England) School of Tropical Medicine. Myron M. Levine, MD , is at the University of Maryland, Baltimore. Their comments were drawn from an editorial accompanying the study (Lancet. 2017 Sep 28. doi: 10.1016/S0140-6736[17]32407-8 ).
A new conjugate typhoid vaccine suitable for administration to infants and young children was efficacious, highly immunogenic, and well tolerated, compared with placebo, in a phase 2 study that tested the vaccine using a human typhoid infection model.
In a study that compared two formulations of typhoid vaccine to a control meningococcal vaccine, the new Vi-conjugate (Vi-TT) vaccine had an efficacy of 54.6% (95% confidence interval, 26.8-71.8) and a 100% seroconversion rate.
The study was not powered for a direct comparison of the efficacy of the Vi-TT with the efficacy of the Vi-polysaccharide (Vi-PS), the other vaccine used in the study. The Vi-PS vaccine had an efficacy of 52.0% (95% CI, 23.2-70.0), and 88.6% of the Vi-PS recipients had seroconversion.
However, “clinical manifestations of typhoid fever seemed less severe among diagnosed participants following Vi-TT vaccination,” Celina Jin, MD, and her colleagues wrote (Lancet. 2017 Sep 28: doi: 10.1016/S0140-6736[17]32149-9). Fever, defined as an oral temperature of 38° C or higher, was seen in 6 of 37 (16%) Vi-TT recipients, 17 of 31 (55%) receiving control, and 11 of 35 (31%) receiving Vi-PS.
Geometric mean titers also were significantly higher in the Vi-TT group than in the Vi-PS group, with an adjusted geometric mean titer of 562.9 EU/mL for Vi-TT and 140.5 EU/mL for Vi-PS (P less than .0001).
The study enrolled 112 healthy adult volunteers who were randomized 1:1:1 to receive Vi-PS, Vi-TT, or control meningococcal vaccine. A total of 103 of the participants eventually received one of the two study vaccines or the control vaccines, and that group was included in the per-protocol analysis.
After vaccination (recipients and investigators were masked as to which formulation participants received), study participants kept an online diary to report any vaccination-related symptoms for 7 days, and also had clinic visits scheduled at days 1, 3, 7, and 10.
Participants received one oral dose of wild-type Salmonella enterica serovar Typhi Quailes strain bacteria about 1 month after vaccination. The dose was 1-5x104 colony forming units, and was administered immediately following a 120-mL oral bolus of sodium bicarbonate (to neutralize stomach acid).
Participants then were seen daily in an outpatient clinic for 2 weeks. At each visit, investigators monitored vital signs, performed a general assessment, and drew blood to assess for typhoid bacteremia. Participants also kept an online diary for 21 days, reporting twice-daily self-measured temperatures as well. No antipyretics were allowed before typhoid diagnosis.
Participants who met the study’s criteria for typhoid diagnosis were treated with a 2-week course of ciprofloxacin or azithromycin; patients who did not become ill were treated 14 days after the oral typhoid challenge. None of the four serious adverse events reported during the study was deemed to be related to vaccination.
That broad definition of typhoid infection was used to determine attack rates for the study’s primary outcome measure. However, Dr. Jin and her colleagues also looked at a less stringent – and perhaps more clinically pertinent – definition of 12 hours of fever of 38° C or higher followed by S. Typhi bacteremia. Using those criteria, the Vi-TT vaccine prevented up to 87% of infections.
Salmonella Typhi is the world’s leading cause of enteric fever, said Dr. Jin, of the Oxford Vaccine Group at the University of Oxford (England). Up to 20.6 million people per year are affected, with children most commonly infected and low-resource populations in Asia and Africa hardest hit.
Both prescription and over-the-counter antibiotics are used worldwide to combat typhoid fever, and S. Typhi strains are becoming increasingly antibiotic resistant in South Asia and Africa, Dr. Jin and her coauthors said.
The typhoid vaccines that are currently licensed are either not suitable for administration to infants and young children or insufficiently immunogenic in younger populations.
The typhoid conjugate vaccine used in the study combines the Vi-polysaccharide capsule with a protein carrier, increasing host immunologic response and making the vaccine effective in infancy.
“This human challenge study provides further evidence to support the deployment of Vi-conjugate vaccines as a control measure to reduce the burden of typhoid fever, because those individuals living in endemic regions should not be made to wait another 60 years,” wrote Dr. Jin and her coauthors.
The study was funded by the Bill & Melinda Gates Foundation and the European Commission FP7 grant, Advanced Immunization Technologies.
FROM THE LANCET
Key clinical point: A conjugate typhoid vaccine significantly reduced typhoid fever rates in a human challenge model, with efficacy of up to 87% under a more stringent case definition.
Major finding: Efficacy was 54.6% for the Vi-conjugate vaccine, with 100% seroconversion.
Study details: Randomized, controlled phase 2b trial of 112 participants receiving one of two typhoid vaccines, or control meningococcal vaccine.
Disclosures: The study was funded by the Bill & Melinda Gates Foundation and the European Commission FP7 grant, Advanced Immunization Technologies.