Barrett’s esophagus length predicts disease progression

Barrett’s esophagus length is a readily accessible endoscopic marker for disease progression, and it could aid in risk stratification and patient management decisions, according to a review of records at a tertiary care center.

Of 301 patients diagnosed with Barrett’s esophagus who underwent radiofrequency ablation (RFA) between March 2006 and 2016, 106 met a standardized definition of Barrett’s esophagus and the remaining inclusion criteria, including nondysplastic Barrett’s esophagus at baseline and at least 1 year of follow-up from the time of initial diagnosis.

Of those 106 patients, 53 progressed to high-grade dysplasia/esophageal adenocarcinoma (HGD/EAC). The overall risks of EAC and of combined HGD/EAC for the entire cohort were 1.23%/year and 5.94%/year, respectively. Those who progressed had significantly longer Barrett’s esophagus segments than the 53 nonprogressors (6.37 cm vs. 4.3 cm).

Photo: Dr. Joseph Spataro and Dr. Christina Tofani (Sharon Worcester/Frontline Medical News)
After adjustment for sex and number of RFA treatments, length of Barrett’s esophagus segment was found to be a significant independent predictor of progression to adenocarcinoma (odds ratio, 1.16), Joseph Spataro, MD, and his colleagues at Thomas Jefferson University Hospital, Philadelphia, reported in a poster at the World Congress of Gastroenterology at ACG 2017.

In fact, of all characteristics assessed, including Barrett’s esophagus length, age, sex, race, mean body mass index, family history of esophageal cancer, proton pump inhibitor use, and total duration of follow-up, only the first was a significant predictor of progression.

“For every 1-cm increase in length of BE [Barrett’s esophagus], the risk of progression to EAC increases by 16%,” Dr. Spataro said.
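The quoted 16% figure comes from the odds ratio of 1.16 per centimeter, which compounds multiplicatively over larger length differences (as is standard for a logistic-regression coefficient). A minimal sketch of that arithmetic follows; the function name and the constant-per-cm assumption are illustrative, not taken from the poster.

```python
# Sketch: how a per-centimeter odds ratio compounds over a length difference.
# Assumes the reported OR of 1.16 applies multiplicatively for each
# additional 1 cm of Barrett's esophagus (a logistic-model assumption).

def odds_multiplier(delta_cm: float, or_per_cm: float = 1.16) -> float:
    """Odds multiplier for a segment longer by delta_cm centimeters."""
    return or_per_cm ** delta_cm

# One extra centimeter: 16% higher odds of progression.
print(round(odds_multiplier(1.0), 2))  # → 1.16

# The ~2-cm gap between progressors (6.37 cm) and nonprogressors (4.3 cm)
# corresponds to roughly 36% higher odds under this assumption.
print(round(odds_multiplier(6.37 - 4.3), 2))  # → 1.36
```

Note that an odds multiplier is not a risk multiplier; the two are close only when absolute event rates are low.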

This work, which was awarded a “Presidential Poster” ribbon, is limited by its retrospective design, by the lack of standardized surveillance intervals and biopsy protocols, and by the possibility that progression rates were elevated because the site is a referral center offering ablative therapy. Even so, the study included a “decent sample and follow-up” and has important implications for patient care, he noted, explaining that the incidence of EAC has increased faster than that of any other malignancy in the Western world.

Despite therapeutic advances, the prognosis for patients with EAC remains poor; the annual risk of progression from Barrett’s esophagus to HGD is 0.38%, he added.

Currently, the most commonly used risk-stratification tool for determining surveillance intervals and management of patients with Barrett’s esophagus is the degree of dysplasia. Prior studies have evaluated Barrett’s esophagus length as a predictor of progression to HGD/EAC, but findings have been conflicting, he said.

The current findings suggest that until molecular biomarkers are identified and validated as adjunctive tools for risk stratification, Barrett’s esophagus length could be used to identify patients with nondysplastic Barrett’s esophagus at risk for disease progression.

This could facilitate more rational tailoring of endoscopic surveillance, explained lead author Christina Tofani, MD.

Currently, Barrett’s esophagus patients at the center who have dysplasia generally undergo ablation, while those without dysplasia generally undergo surveillance. Barrett’s esophagus length could be used to adjust surveillance intervals, or to lower the bar for ablation in some cases, she said.

The authors reported having no disclosures.
Article Source

AT THE 13TH WORLD CONGRESS OF GASTROENTEROLOGY

Vitals

 

Key clinical point: Barrett’s esophagus length is a readily accessible endoscopic marker for disease progression.

Major finding: Barrett’s esophagus length was found to be a significant independent predictor of progression to adenocarcinoma (odds ratio, 1.16).

Data source: A retrospective review of 106 cases.

Disclosures: The authors reported having no disclosures.


Scheduling patterns in hospital medicine

Increasing discontent with 7-on-7-off schedule

 

For years, the Society of Hospital Medicine has asked hospital medicine programs about operational metrics in order to understand and catalog how they function and evolve. After compensation, the scheduling patterns that hospital medicine groups (HMGs) use are the most reviewed item in the report.

When hospital medicine first started, working 7 days followed by 7 days off (7-on-7-off) quickly became the vogue. No one really knows how this happened; most likely, hospital medicine most closely resembled emergency medicine, so a similar schedule (that is, 14 shifts per month) seemed to make sense. That, along with the assumption that continuity of care was critical to inpatient quality, likely drove the popularity of the 7-on-7-off schedule.

Photo: Dr. Rachel George
Each new survey allows us the opportunity to observe changes in scheduling patterns as hospital medicine matures and to see which scheduling patterns gain or lose popularity.

In the most recent survey in 2016, HMGs were once again asked to comment on how they schedule. Groups were able to choose from five scheduling options:

1. Seven days on followed by 7 days off

2. Other fixed rotation block schedules (such as 5-on 5-off; or 10-on 5-off)

3. Monday to Friday with rotating weekend coverage

4. Variable schedule

5. Other

Looking at HMG programs that serve only adult populations, a plurality (48%) follow a fixed rotating schedule, either 7 days on followed by 7 days off or some other fixed block, while 31% of responding programs reported using a Monday to Friday schedule. Taken as a whole, it would seem that the 7-on-7-off schedule was quickly losing popularity while the Monday to Friday schedule was increasingly being used. However, this broad generalization doesn’t give the full picture.

Upon analyzing the data further, we see distinct differences based on program size. Small programs (fewer than 10 full-time equivalents [FTEs]) are much more likely to use a Monday to Friday schedule than any other model, whereas only a handful of large programs (more than 20 FTEs) schedule this way, instead choosing a 7-on-7-off schedule.

The last survey was done in 2014, and a lot has changed since then. Significantly more programs responded in 2016 than in 2014 (530 vs. 355), and the majority of this increase was made up of smaller programs (fewer than 10 FTEs). The number of programs with four or fewer FTEs more than quadrupled from the prior survey (37 programs in 2014 vs. 151 in 2016). Overall, programs with fewer than 10 FTEs constituted over 50% of the programs that responded in 2016, whereas they made up only a third in 2014. This was particularly significant because program size was the one variable that determined how a program might schedule – other factors, such as geographic region, academic status, or primary hospital GME status, did not show significant variance in how groups scheduled.

The second major change is that these same small programs (those with fewer than 10 FTEs) moved overwhelmingly to a Monday to Friday schedule. In 2014, only 3% of small programs used a Monday to Friday pattern, but in 2016 almost 50% of small programs reported scheduling this way. This shift in the overall composition of respondents – small programs now make up over 50% of reporting programs – together with the change in how small programs schedule, resulted in a noteworthy decrease in programs using a 7-on-7-off schedule (53.8% in 2014 vs. only 38.1% in 2016) and a corresponding increase in programs using a Monday to Friday schedule (4% in 2014 to 31% in 2016).

In distinct contrast to programs with fewer than 10 FTEs, about the same number of programs with more than 20 FTEs reported in 2016 as in 2014 – there was no increase in this subgroup. I’m not clear at this time whether this is because there is truly no increase in the number of large programs nationally or because another factor is causing larger programs to underreport. The large programs that did report data in 2016 continue to use a 7-on-7-off schedule or another fixed rotating block schedule more than 50% of the time; in fact, use of one of these two patterns increased slightly from 2014 to 2016 (from 52% to 58%). Programs that did not use one of these patterns were most likely to use a variable schedule. A Monday to Friday schedule was almost never used in programs of this size and showed no significant change from 2014 to 2016.

This snapshot highlights the changing landscape in hospital medicine. Hospital medicine is penetrating smaller and smaller hospitals and has even made it into critical access hospitals; as recently as 5-10 years ago, these hospitals were thought too small to support a hospital medicine program. This is likely one reason for the increase in programs with four or fewer FTEs. There has also been increasing discontent with the 7-on-7-off schedule, which many feel is leading to burnout. Dr. Bob Wachter famously said during the closing plenary of the 2016 Society of Hospital Medicine Annual Meeting that the 7-on-7-off schedule was “a mistake.” Despite this brewing discontent, larger programs have not changed their scheduling patterns, likely because finding another pattern that is effective, supports high-quality care, and is sustainable for such a large group is challenging.

Many people will say that there are as many different types of hospital medicine programs as there are hospital medicine programs. This is true for scheduling as for other aspects of hospital medicine operations. As we continue to grow and evolve as an industry, scheduling patterns will continue to change and evolve as well. For now, two patterns are emerging – smaller programs are utilizing a Monday to Friday schedule and larger programs are utilizing a 7-on-7-off schedule. Only time will tell if these scheduling patterns persist or continue to evolve.
 

Dr. George is a board certified internal medicine physician and practicing hospitalist with over 15 years of experience in hospital medicine. She has been actively involved in the Society of Hospital Medicine and has participated in and chaired multiple committees and task forces. She is currently executive vice president and chief medical officer of Hospital Medicine at Schumacher Clinical Partners, a national provider of emergency medicine and hospital medicine services. She lives in the northwest suburbs of Chicago with her family.

Publications
Sections
Increasing discontent with 7-on-7-off schedule
Increasing discontent with 7-on-7-off schedule

 

For years, the Society of Hospital Medicine has been asking hospital medicine programs about operational metrics in order to understand and catalog how they are functioning and evolving. After compensation, the scheduling patterns that hospital medicine groups (HMGs) are using is the most reviewed item in the report.

When hospital medicine first started, 7 days working followed by 7 days off (7-on-7-off) quickly became vogue. No one really knows how this happened, but it was most likely due to the fact that hospital medicine most closely resembled emergency medicine and scheduling similar to emergency medicine seemed to make sense (that is, 14 shifts per month). That along with the assumption that continuity of care was critical in inpatient care and would improve quality most likely resulted in the popularity of the 7-on-7-off schedule.

Dr. Rachel George
Each new survey allows us the opportunity to observe changes in scheduling patterns as hospital medicine matures and to see which scheduling patterns gain or lose popularity.

In the most recent survey in 2016, HMGs were once again asked to comment on how they schedule. Groups were able to choose from five scheduling options:

1. Seven days on followed by 7 days off

2. Other fixed rotation block schedules (such as 5-on 5-off; or 10-on 5-off)

3. Monday to Friday with rotating weekend coverage

4. Variable schedule

5. Other

Looking at HMG programs that serve only adult populations, a majority of them (48%) follow a fixed rotating schedule either 7 days on followed by 7 days off, or some other fixed schedule, while 31% of programs that responded stated that they used a Monday to Friday schedule. Looking at the programs as a whole, it would seem that the 7-on-7-off schedule was quickly losing popularity while the Monday to Friday schedule was increasingly being used. However, this broad generalization doesn’t really give you the full picture.

Upon analyzing the data further, we see some distinct differences arise based on program size. Small programs (fewer than 10 full-time employees [FTEs]) are much more likely to schedule a Monday to Friday schedule than any other model, whereas only a handful of large programs (greater than 20 FTEs) schedule in this way, rather choosing to use a 7-on-7-off schedule.

The last survey was done in 2014 and a lot has changed since then. Significantly more programs responded in 2016, compared with 2014 (530 vs. 355) and the majority of this increase was made of up smaller programs (fewer than 10 FTEs). Programs with four or fewer FTEs, compared with the prior survey, increased by over 400% (37 programs in 2014 vs. 151 programs in 2016). Overall, programs with fewer than 10 FTEs constituted over 50% of the total programs that responded in 2016 (whereas they made up only a third in 2014). This was particularly significant since size of the program was the one variable that determined how a program might schedule – other factors like geographic region, academic status, or primary hospital GME status did not show significant variance in how groups scheduled.

The second major change that occurred is that these same small programs (those with fewer than 10 FTEs) moved overwhelmingly to a Monday to Friday schedule. In 2014, only 3% of small programs scheduled using a Monday to Friday pattern, but in 2016 almost 50% of small programs reported scheduling in this way. This change in the overall composition of programs, with small programs now making up over 50% of the programs that reported, and the specific change in how small programs schedule results in a noteworthy decrease of programs using a 7 days on followed by 7 days off (7-on-7-off) schedule (53.8% in 2014 and only 38.1% in 2016), and a corresponding increase in the number of programs that schedule using a Monday to Friday schedule (4% in 2014 to 31% in 2016).

In distinct contrast to programs with fewer than 10 FTEs, a very similar number of programs with greater than 20 FTEs reported in 2016 as in 2014 – there was no increase in this subgroup. I’m not clear at this time if this is because there is truly no increase in the number of large programs nationally, or if there is another factor causing larger programs to under-report. The large programs that did report data in 2016 continue to utilize a 7-on-7-off schedule or another fixed rotating block schedule more than 50% of the time. In fact, the utilization of one of these two scheduling patterns increased slightly from 2014 to 2016 (from 52% to 58%). Those that did not use one of the prior mentioned scheduling patterns were most likely to schedule with a variable schedule. A Monday to Friday schedule was almost never used in programs of this size and showed no significant change from 2014 to 2016.

This snapshot highlights the changing landscape in hospital medicine. Hospital medicine is penetrating more and more into smaller and smaller hospitals, and has even made it into critical access hospitals. As recently as 5-10 years ago, it was felt that these hospitals were too small to have a hospital medicine program. This is likely one of the reasons for the increase in programs with four or fewer FTEs. There has also been increasing discontent with the 7-on-7-off schedule, which many feel is leading to burnout. Dr. Bob Wachter famously said during the closing plenary of the 2016 Society of Hospital Medicine Annual Meeting that the 7-on-7-off schedule was “a mistake.” Despite this brewing discontent, larger programs have not changed their scheduling patterns, likely because finding a another scheduling pattern that is effective, supports high-quality care, and is sustainable for such a large group is challenging.

Many people will say that there are as many different types of hospital medicine programs as there are hospital medicine programs. This is true for scheduling as for other aspects of hospital medicine operations. As we continue to grow and evolve as an industry, scheduling patterns will continue to change and evolve as well. For now, two patterns are emerging – smaller programs are utilizing a Monday to Friday schedule and larger programs are utilizing a 7-on-7-off schedule. Only time will tell if these scheduling patterns persist or continue to evolve.
 

Dr. George is a board certified internal medicine physician and practicing hospitalist with over 15 years of experience in hospital medicine. She has been actively involved in the Society of Hospital Medicine and has participated in and chaired multiple committees and task forces. She is currently executive vice president and chief medical officer of Hospital Medicine at Schumacher Clinical Partners, a national provider of emergency medicine and hospital medicine services. She lives in the northwest suburbs of Chicago with her family.

 

For years, the Society of Hospital Medicine has been asking hospital medicine programs about operational metrics in order to understand and catalog how they are functioning and evolving. After compensation, the scheduling patterns that hospital medicine groups (HMGs) are using is the most reviewed item in the report.

When hospital medicine first started, 7 days working followed by 7 days off (7-on-7-off) quickly became vogue. No one really knows how this happened, but it was most likely due to the fact that hospital medicine most closely resembled emergency medicine and scheduling similar to emergency medicine seemed to make sense (that is, 14 shifts per month). That along with the assumption that continuity of care was critical in inpatient care and would improve quality most likely resulted in the popularity of the 7-on-7-off schedule.

Dr. Rachel George
Each new survey allows us the opportunity to observe changes in scheduling patterns as hospital medicine matures and to see which scheduling patterns gain or lose popularity.

In the most recent survey in 2016, HMGs were once again asked to comment on how they schedule. Groups were able to choose from five scheduling options:

1. Seven days on followed by 7 days off

2. Other fixed rotation block schedules (such as 5-on 5-off; or 10-on 5-off)

3. Monday to Friday with rotating weekend coverage

4. Variable schedule

5. Other

Looking at HMG programs that serve only adult populations, a majority of them (48%) follow a fixed rotating schedule either 7 days on followed by 7 days off, or some other fixed schedule, while 31% of programs that responded stated that they used a Monday to Friday schedule. Looking at the programs as a whole, it would seem that the 7-on-7-off schedule was quickly losing popularity while the Monday to Friday schedule was increasingly being used. However, this broad generalization doesn’t really give you the full picture.

Upon analyzing the data further, we see some distinct differences arise based on program size. Small programs (fewer than 10 full-time employees [FTEs]) are much more likely to schedule a Monday to Friday schedule than any other model, whereas only a handful of large programs (greater than 20 FTEs) schedule in this way, rather choosing to use a 7-on-7-off schedule.

The last survey was done in 2014 and a lot has changed since then. Significantly more programs responded in 2016, compared with 2014 (530 vs. 355) and the majority of this increase was made of up smaller programs (fewer than 10 FTEs). Programs with four or fewer FTEs, compared with the prior survey, increased by over 400% (37 programs in 2014 vs. 151 programs in 2016). Overall, programs with fewer than 10 FTEs constituted over 50% of the total programs that responded in 2016 (whereas they made up only a third in 2014). This was particularly significant since size of the program was the one variable that determined how a program might schedule – other factors like geographic region, academic status, or primary hospital GME status did not show significant variance in how groups scheduled.

The second major change that occurred is that these same small programs (those with fewer than 10 FTEs) moved overwhelmingly to a Monday to Friday schedule. In 2014, only 3% of small programs scheduled using a Monday to Friday pattern, but in 2016 almost 50% of small programs reported scheduling in this way. This change in the overall composition of programs, with small programs now making up over 50% of the programs that reported, and the specific change in how small programs schedule results in a noteworthy decrease of programs using a 7 days on followed by 7 days off (7-on-7-off) schedule (53.8% in 2014 and only 38.1% in 2016), and a corresponding increase in the number of programs that schedule using a Monday to Friday schedule (4% in 2014 to 31% in 2016).

In distinct contrast to programs with fewer than 10 FTEs, a very similar number of programs with greater than 20 FTEs reported in 2016 as in 2014 – there was no increase in this subgroup. I’m not clear at this time if this is because there is truly no increase in the number of large programs nationally, or if there is another factor causing larger programs to under-report. The large programs that did report data in 2016 continue to utilize a 7-on-7-off schedule or another fixed rotating block schedule more than 50% of the time. In fact, the utilization of one of these two scheduling patterns increased slightly from 2014 to 2016 (from 52% to 58%). Those that did not use one of the prior mentioned scheduling patterns were most likely to schedule with a variable schedule. A Monday to Friday schedule was almost never used in programs of this size and showed no significant change from 2014 to 2016.

This snapshot highlights the changing landscape in hospital medicine. Hospital medicine is penetrating more and more into smaller and smaller hospitals, and has even made it into critical access hospitals. As recently as 5-10 years ago, it was felt that these hospitals were too small to have a hospital medicine program. This is likely one of the reasons for the increase in programs with four or fewer FTEs. There has also been increasing discontent with the 7-on-7-off schedule, which many feel is leading to burnout. Dr. Bob Wachter famously said during the closing plenary of the 2016 Society of Hospital Medicine Annual Meeting that the 7-on-7-off schedule was “a mistake.” Despite this brewing discontent, larger programs have not changed their scheduling patterns, likely because finding a another scheduling pattern that is effective, supports high-quality care, and is sustainable for such a large group is challenging.

Many people will say that there are as many different types of hospital medicine programs as there are hospital medicine programs. This is true for scheduling as for other aspects of hospital medicine operations. As we continue to grow and evolve as an industry, scheduling patterns will continue to change and evolve as well. For now, two patterns are emerging – smaller programs are utilizing a Monday to Friday schedule and larger programs are utilizing a 7-on-7-off schedule. Only time will tell if these scheduling patterns persist or continue to evolve.
 

Dr. George is a board certified internal medicine physician and practicing hospitalist with over 15 years of experience in hospital medicine. She has been actively involved in the Society of Hospital Medicine and has participated in and chaired multiple committees and task forces. She is currently executive vice president and chief medical officer of Hospital Medicine at Schumacher Clinical Partners, a national provider of emergency medicine and hospital medicine services. She lives in the northwest suburbs of Chicago with her family.


Skills training improves psychosocial outcomes for young cancer patients

Compared with standard psychosocial care, a one-on-one skills-based intervention improved psychosocial outcomes in adolescents and young adults with cancer, according to results of a pilot randomized study presented at the Palliative and Supportive Care in Oncology Symposium.

FROM PALLONC 2017

Vitals

 

Key clinical point: A one-on-one skills-based intervention improved psychosocial outcomes, compared with standard psychosocial care, in adolescents and young adults with cancer.

Major finding: The skills-based intervention was associated with improvements in resilience (+2.3; 95% CI, 0.7-4.0), hope (+2.8; 95% CI, 0.5-5.1), quality of life (+6.3; 95% CI, –0.8 to 13.5), and distress (–1.6; 95% CI, –3.3 to 0.0).

Data source: A pilot study of 100 English-speaking cancer patients aged 12-25 who were randomly assigned to the skills-based intervention or standard psychosocial care.

Disclosures: The study was partly funded by the National Institutes of Health. The authors reported having no financial disclosures.


ACIP recommends third MMR dose, if outbreak risk


The Advisory Committee on Immunization Practices (ACIP) voted Oct. 25 to recommend a third dose of measles, mumps, and rubella (MMR) vaccine for individuals at risk for mumps during an outbreak.

The recommendation applies to individuals who already have been vaccinated with the usual two doses of MMR “who are identified by public health as at increased risk for mumps because of an outbreak,” according to draft text of the recommendation. This practice would “improve protection against mumps disease and related complications.”

Multiple mumps outbreaks have been reported since 2015, mostly in university settings, Mona Marin, MD, of the CDC, said in a presentation at the committee’s meeting.

Young adults are at highest risk, she said.

Key evidence supporting the ACIP’s recommendation includes a recent study suggesting that a third dose of MMR is effective for mumps outbreak control (N Engl J Med. 2017 Sep 7. doi: 10.1056/NEJMoa1703309).

In that study, Cristina V. Cardemil, MD, of the CDC, and her colleagues looked at college students who received a third MMR dose during a mumps outbreak at the University of Iowa in Iowa City. Almost a quarter of students (4,783 of 20,496) enrolled in the 2015-2016 academic year received a third dose. Compared with students who received two doses of MMR, those receiving three total doses had a 78% lower risk of mumps at 28 days after vaccination, the investigators reported.

“These findings suggest that the campaign to administer a 3rd dose of MMR vaccine improved mumps outbreak control and that waning immunity probably contributed to propagation of the outbreak,” Dr. Cardemil and her colleagues wrote.

The vote in favor of a third dose was unanimous among the 15 voting members of ACIP. The committee’s recommendations must be approved by the CDC director before they are considered official.

AT AN ACIP MEETING


VIDEO: Burnout affects half of U.S. gastroenterologists


ORLANDO – Nearly half of U.S. gastroenterologists who responded to a recent survey had symptoms of burnout that seemed largely driven by work-life balance issues.

Burnout appeared to disproportionately affect younger gastroenterologists; those who spend more time on chores at home, including caring for young children; physicians who were neutral toward or dissatisfied with a spouse or partner; and clinicians planning to soon leave their practice, Carol A. Burke, MD, said at the World Congress of Gastroenterology at ACG 2017.

Factors not linked with burnout included their type of practice, whether the gastroenterologists worked full or part time, their location, and their compensation, said Dr. Burke, director of the Center for Colon Polyp and Cancer Prevention at the Cleveland Clinic.

The life issues that appeared most strongly linked to burnout “speak to a problem for physicians to balance” their professional and personal lives, Dr. Burke said in a video interview. Several interventions exist that can potentially mitigate burnout, and the American College of Gastroenterology, which ran the survey, is taking steps to make information on these interventions available to members, noted Dr. Burke, the organization’s president.

Dr. Burke and her associates sent a 60-item survey to all 11,080 College members during 2014 and 2015 and received 1,021 replies, including 754 fully completed responses. Their prespecified definition of burnout was a high score for emotional exhaustion, for depersonalization, or for both on the Maslach Burnout Inventory. The results showed that 45% of respondents had a high score for emotional exhaustion, 21% scored high on depersonalization, and overall 49% met the burnout criteria set by the investigators. The Inventory answers also showed that 18% had a low sense of personal accomplishment.

A multivariate analysis showed that significant links with burnout were younger age, more time spent on domestic chores, having a neutral or dissatisfying relationship with a spouse or partner, and plans for imminent retirement from gastroenterology practice, Dr. Burke reported.

The main reasons cited for planning imminent retirement were reimbursement (32% of this subgroup), regulations (21%), recertification (16%), and electronic medical records (10%).

Strategies and resources aimed at better dealing with burnout were requested by 60% of all survey respondents, and the College is in the process of making these tools available, Dr. Burke said.

The video associated with this article is no longer available on this site. Please view all of our videos on the MDedge YouTube channel.
AT THE 13TH WORLD CONGRESS OF GASTROENTEROLOGY

Vitals

 

Key clinical point: Nearly half of U.S. gastroenterologists who responded to a recent survey reported symptoms of burnout.

Major finding: Forty-nine percent of surveyed U.S. gastroenterologists showed a high level of emotional exhaustion, depersonalization, or both.

Data source: Survey results from 754 members of the American College of Gastroenterology.

Disclosures: The American College of Gastroenterology funded the survey. Dr. Burke had no relevant disclosures.


Citrate reactions seen in 7% of apheresis donations


SAN DIEGO – The rate of citrate reactions was nearly 7% in more than 80,000 apheresis procedures involving nearly 15,000 donors, and the risk increased with the level of citrate exposure, according to data from Héma-Québec, Montreal, presented at the annual meeting of the American Association of Blood Banks.


AT AABB17

Vitals

 

Key clinical point: Adverse reactions to apheresis donations can be significant; calcium supplements can reduce the risk of citrate reactions and volume replacement can reduce the risk of vasovagal reactions in donors.

Major finding: Citrate reactions accompanied 6.8% of donations; vasovagal reactions without loss of consciousness occurred in 2.5% of donations, and loss of consciousness in 0.1%.

Data source: A study at Héma-Québec, Montreal, of 80,409 apheresis procedures conducted in 14,742 donors.

Disclosures: Dr. Robillard had no disclosures.


State regulations for tattoo facilities increased blood donor pools


– Tattoos are rapidly moving into mainstream America, and as more states regulate tattoo facilities, persons with tattoos can be blood donors without compromising patient safety, Mary Townsend of Blood Systems Inc. reported at the annual meeting of the American Association of Blood Banks.


AT AABB 2017

Vitals

 

Key clinical point: Statewide regulations for tattoo licenses in California and Arizona have increased the pool of blood donors in those states.

Major finding: The absolute number of accepted donors with tattoos rose from 13 to 567 in California and from 151 to 1,496 in Arizona, representing potential annual gains of 2,216 and 4,035 additional blood donations, respectively.

Data source: An analysis of blood centers in California and Arizona before and after state tattoo regulations were implemented.

Disclosures: Dr. Townsend has no disclosures.


Stroke cognitive outcomes found worse in Mexican Americans


– A new analysis shows that Mexican Americans (MAs) have worse cognitive outcomes a year after having a stroke than do non-Hispanic whites (NHWs).

The conclusions held up even after the researchers controlled for insurance status and a range of other factors, including comorbidities, age, stroke severity, and prestroke cognition. “None of those influenced the relationship,” said Lewis Morgenstern, MD, who presented the research during a poster session at the annual meeting of the American Neurological Association.

After controlling for all factors, the researchers found a difference of –6.73 (95% confidence interval, –9.57 to –3.88; P less than .001).

“The Mexican-American population is growing quickly and aging. The cost of stroke-related cognitive impairment is high for patient, family, and society. Efforts to combat stroke-related cognitive decline are critical,” said Dr. Morgenstern, professor of neurology and epidemiology at the University of Michigan, Ann Arbor.

The study grew out of the Brain Attack Surveillance in Corpus Christi (BASIC) Project, which began in 1999 and is funded until 2019. It is the only ongoing stroke surveillance program that focuses on Mexican Americans, who comprise the largest segment of Hispanic Americans.

The researchers analyzed data encompassing all stroke patients in the BASIC Project from October 2014 through January 2016 (n = 227). They analyzed cognitive outcome data from 3 months, 6 months, and 12 months. MAs were younger on average than NHWs (median age 66 vs. 70; P = .018), and were more likely to have diabetes (54% vs. 36%; P less than .001). They were less likely to have atrial fibrillation (13% vs. 20%; P = .025).

At 12 months, MAs had a lower median 3MSE score of 86 (interquartile range, 73-93), compared with 92 in NHWs (IQR, 83-96; P less than .001). As the researchers adjusted for additional factors, the discrepancy became larger. Adjustment for age and sex revealed a difference of 6.88 (95% confidence interval, 4.15-9.60). Additional adjustment for prestroke condition showed a difference of 7.04. Additional adjustment for insurance led to the same differential of 7.04. Adjustment for diabetes and comorbidities pushed the difference to 7.11. Adjustment for stroke severity (National Institutes of Health Stroke Scale) revealed a difference of 6.73 (P less than .001).

Asked if the results were surprising, Dr. Morgenstern replied: “I think it’s always surprising to see one population of U.S. citizens who have more disease or a worse outcome than another when it’s not explained by the many possible factors we considered.” He also called for additional studies of cognitive dysfunction in Hispanic communities in other forms of dementia, such as Alzheimer’s disease and vascular dementia. “There’s very little of that,” he said.

The National Institutes of Health funded the study. Dr. Morgenstern reported having no financial disclosures.

AT ANA 2017

Vitals

 

Key clinical point: In an analysis, cognitive outcomes were worse in Mexican-American stroke survivors despite researchers’ controlling for many factors.

Major finding: At 12 months, Mexican Americans scored 6 points lower on the Modified Mini-Mental State Examination compared with non-Hispanic whites.

Data source: Prospective analysis of 227 stroke patients in Corpus Christi, Texas.

Disclosures: The National Institutes of Health funded the study. Dr. Morgenstern reported having no financial disclosures.


Patients prefer higher dose of levothyroxine despite lack of objective benefit


– Patient perception plays a large role in subjective benefit of levothyroxine therapy for hypothyroidism, suggests a double-blind randomized controlled trial reported at the annual meeting of the American Thyroid Association.

Mood, cognition, and quality of life (QoL) did not differ whether patients’ levothyroxine dose was adjusted to achieve thyroid-stimulating hormone (TSH) levels in the low-normal, high-normal, or mildly elevated range. But despite this lack of objective benefit, the large majority of patients preferred levothyroxine doses that they perceived to be higher – whether they actually were or not.

Dr. Mary H. Samuels
“With these data, we believe that patients should be counseled that symptoms in these areas are not reliably related to levothyroxine doses or thyroid hormone levels,” commented first author Mary H. Samuels, MD, an endocrinologist at the Thyroid & Parathyroid Center, Oregon Health & Science University, Portland.

The study was not restricted to certain groups who might have a better response to higher levothyroxine dose, she acknowledged. Two such groups are patients with more symptoms (although volunteering for the study suggested dissatisfaction with symptom control) and patients with low tri-iodothyronine (T3) levels (although about half of patients had low baseline levels).

“We encourage further research in older subjects, men, and subjects with specific symptoms, low T3 levels, or functional polymorphisms in thyroid-relevant genes,” Dr. Samuels said. “These are really difficult, expensive studies to do, and if we are going to have any hope of getting them funded and doing them, I think that we have to be much more targeted.”

One of the session co-chairs, Catherine A. Dinauer, MD, a pediatric endocrinologist and clinician at the Yale Pediatric Thyroid Center, New Haven, Conn., commented, “I think these are really interesting data because there’s this sense among patients that their dose really affects how they feel, and this is essentially turning that on its head. It’s not really clear, then, why are these patients still maybe not feeling well.”

“It will be interesting to see more data on this and ... more about this business of checking T3 levels. Do we need to supplement with T3? I think we really don’t know that, especially in kids, but even in adults,” she added.

The other session co-chair, Yaron Tomer, MD, chair of the department of medicine and the Anita and Jack Saltz Chair in Diabetes Research at the Montefiore Medical Center, Bronx, N.Y., commented, “I think this study confirmed what a lot of us feel, that there is a lot of placebo effect when you treat in different ways to optimize the TSH or give T3.”

Other data reported in the session provide a possible explanation for the lack of benefit of adjusting pharmacologic therapy, suggesting that the volumes of various brain structures change with perturbations of thyroid function, he noted. “There might be true changes in the brain that affect how the patients feel. So these patients may truly not feel well. It’s just that we can’t fix it by adjusting the TSH level to very narrow margins or by adding T3,” he said.
 

Study details

“It is well known that overt hypothyroidism interferes with mood and a number of cognitive functions. However, neurocognitive effects of variations in thyroid function within the reference range and in mild or subclinical hypothyroidism are less clear,” Dr. Samuels noted, giving some background to the research.

AT ATA 2017

Vitals

 

Key clinical point: Patient preference for higher levothyroxine dose may be driven in part by a placebo effect.

Major finding: Mood, cognition, and QoL were similar across levothyroxine doses targeting various TSH levels, but patients preferred what they believed was a higher dose, even when it was not (P less than .001 for preferred vs. perceived).

Data source: A randomized trial of levothyroxine adjustment among 138 hypothyroid patients on a stable dose of the drug who had normal TSH levels.

Disclosures: Dr. Samuels disclosed that she had no relevant conflicts of interest.


Conjugate typhoid vaccine safe and effective in phase 2 trials

Human challenge models have a place in typhoid vaccine development

A new conjugate typhoid vaccine suitable for administration to infants and young children was efficacious, highly immunogenic, and well tolerated, compared with a control meningococcal vaccine, in a phase 2 study that tested the vaccine using a human typhoid infection model.

In a study that compared two formulations of typhoid vaccine to a control meningococcal vaccine, the new Vi-conjugate (Vi-TT) vaccine had an efficacy of 54.6% (95% confidence interval, 26.8-71.8) and a 100% seroconversion rate.

The study was not powered for a direct comparison of the efficacy of the Vi-TT with the efficacy of the Vi-polysaccharide (Vi-PS), the other vaccine used in the study. The Vi-PS vaccine had an efficacy of 52.0% (95% CI, 23.2-70.0), and 88.6% of the Vi-PS recipients had seroconversion.

However, “clinical manifestations of typhoid fever seemed less severe among diagnosed participants following Vi-TT vaccination,” Celina Jin, MD, and her colleagues wrote (Lancet. 2017 Sep 28. doi: 10.1016/S0140-6736[17]32149-9). Fever, defined as an oral temperature of 38° C or higher, was seen in 6 of 37 (16%) Vi-TT recipients, 17 of 31 (55%) receiving the control vaccine, and 11 of 35 (31%) receiving Vi-PS.

Geometric mean titers also were significantly higher in the Vi-TT group than in the Vi-PS group, with an adjusted geometric mean titer of 562.9 EU/mL for Vi-TT and 140.5 EU/mL for Vi-PS (P less than .0001).

The study enrolled 112 healthy adult volunteers who were randomized 1:1:1 to receive Vi-PS, Vi-TT, or the control meningococcal vaccine. A total of 103 participants eventually received one of the two study vaccines or the control vaccine, and that group was included in the per-protocol analysis.

After vaccination (recipients and investigators were masked as to which formulation participants received), study participants kept an online diary to report any vaccination-related symptoms for 7 days, and also had clinic visits scheduled at days 1, 3, 7, and 10.

Participants received one oral dose of wild-type Salmonella enterica serovar Typhi Quailes strain bacteria about 1 month after vaccination. The dose was 1-5 × 10⁴ colony-forming units, administered immediately after a 120-mL oral bolus of sodium bicarbonate (to neutralize stomach acid).

Participants then were seen daily in an outpatient clinic for 2 weeks. At each visit, investigators monitored vital signs, performed a general assessment, and drew blood to assess for typhoid bacteremia. Participants also kept an online diary for 21 days, reporting twice-daily self-measured temperatures as well. No antipyretics were allowed before typhoid diagnosis.

Participants who met the study’s criteria for typhoid diagnosis were treated with a 2-week course of ciprofloxacin or azithromycin; patients who did not become ill were treated 14 days after the oral typhoid challenge. None of the four serious adverse events reported during the study was deemed to be related to vaccination.

CDC/Armed Forces Institute of Pathology, Charles N. Farmer
Histopathology of a lymph node in a case of typhoid fever.
Typhoid was diagnosed if patients had a fever of 38° C for 12 hours or more, or if they had S. Typhi bacteremia more than 72 hours after the challenge was administered.

That broad definition of typhoid infection was used to determine attack rates for the study’s primary outcome measure. However, Dr. Jin and her colleagues also looked at a more stringent – and perhaps more clinically pertinent – definition: 12 hours of fever of 38° C or higher followed by S. Typhi bacteremia. Using those criteria, the Vi-TT vaccine prevented up to 87% of infections.

Salmonella Typhi is the world’s leading cause of enteric fever, said Dr. Jin, of the Oxford Vaccine Group at the University of Oxford (England). Up to 20.6 million people per year are affected, with children most commonly infected and low-resource populations in Asia and Africa hardest hit.

Both prescription and over-the-counter antibiotics are used worldwide to combat typhoid fever, and S. Typhi strains are becoming increasingly antibiotic resistant in South Asia and Africa, Dr. Jin and her coauthors said.

The typhoid vaccines that are currently licensed are either not suitable for administration to infants and young children, or are insufficiently immunogenic in younger populations.

The typhoid conjugate vaccine used in the study combines the Vi-polysaccharide capsule with a protein carrier, increasing host immunologic response and making the vaccine effective in infancy.

“This human challenge study provides further evidence to support the deployment of Vi-conjugate vaccines as a control measure to reduce the burden of typhoid fever, because those individuals living in endemic regions should not be made to wait another 60 years,” wrote Dr. Jin and her coauthors.

The study was funded by the Bill & Melinda Gates Foundation and the European Commission FP7 grant, Advanced Immunization Technologies.
 
 

 


 

The Oxford Vaccine Group has developed a typhoid challenge model that provides an important bridge in clinical testing and affords the possibility of significant acceleration of the vaccine development process. Despite the controversy human challenge models sometimes engender, previous human typhoid challenge studies contributed to the development of the live attenuated typhoid vaccine Ty21a.

The conjugate vaccine tested by Dr. Jin and her colleagues is a much-needed weapon in the public health armamentarium of typhoid control. Treatment options are limited in regions of South Asia and Africa where endemic typhoid shows increasing antibiotic resistance.

This human challenge study provides the first evidence that the conjugate vaccine reduces the attack rate of typhoid fever; its use in India had already shown it to be safe and immunogenic, even in children as young as 6 months of age.

The broad definition of typhoid fever attack used in this study may result in a finding of lower efficacy than would be seen in a field trial; a National Institutes of Health–sponsored study of another conjugate vaccine found efficacy rates of 89% among Vietnamese preschoolers followed for nearly 4 years after vaccination. When the present study’s data were reanalyzed with the more stringent case definition of fever followed by typhoid bacteremia, a similar efficacy of 87.1% was seen for the conjugate vaccine. A larger sample size would be needed in a challenge study that included this definition as a coprimary endpoint, but the results might better correlate with real-world field trials.

Phase 3 and 4 trials for the typhoid conjugate vaccine are forthcoming, but final results will not be tallied for many years. The typhoid challenge study reported by Dr. Jin and her colleagues bolsters hopes that the candidate vaccine will help with typhoid control where it’s most needed.
 

Nicholas A. Feasey, MD, is at the Liverpool (England) School of Tropical Medicine. Myron M. Levine, MD, is at the University of Maryland, Baltimore. Their comments were drawn from an editorial accompanying the study (Lancet. 2017 Sep 28. doi: 10.1016/S0140-6736[17]32407-8).



FROM THE LANCET

Vitals

 

Key clinical point: A conjugate typhoid vaccine significantly reduced typhoid fever rates under a stringent case definition.

Major finding: Efficacy was 54.6% for the Vi-conjugate vaccine, with 100% seroconversion.

Study details: Randomized, controlled phase 2b trial of 112 participants receiving one of two typhoid vaccines, or control meningococcal vaccine.

Disclosures: The study was funded by the Bill & Melinda Gates Foundation and the European Commission FP7 grant, Advanced Immunization Technologies.
