CDC: 3.4 million Americans have epilepsy

Epilepsy estimates available for the first time for every state show that the disorder is widespread, with at least 3.4 million people affected, according to the Centers for Disease Control and Prevention.

The CDC data also show that the number of people with epilepsy is increasing, probably as a result of population growth. The number of affected adults went from 2.3 million in 2010 to 3 million in 2015, and the number of children with epilepsy rose from 450,000 in 2007 to 470,000 in 2015, CDC investigators reported (MMWR. 2017 Aug 11;66[31]:821-5).

The state estimates show that Mississippi has the highest epilepsy rate in the country at 1,194 cases per 100,000 population in 2015, followed by West Virginia (1,174) and Louisiana (1,173). Utah has the lowest rate at 960 cases per 100,000 population, with North Dakota next at 963 per 100,000 and Alaska third at 970. (The CDC report provided the number of cases per state, so rates given here are Frontline calculations using population estimates from the Census Bureau.)

“Millions of Americans are impacted by epilepsy, and unfortunately, this study shows cases are on the rise,” CDC Director Brenda Fitzgerald said in a separate statement. “Proper diagnosis is key to finding an effective treatment – and at CDC we are committed to researching, testing, and sharing strategies that will improve the lives of people with epilepsy.”

The CDC investigators based their estimates for children under age 18 years on data from the 2011-2012 National Survey of Children’s Health; estimates for those age 18 and over are based on data from the 2015 National Health Interview Survey.

Revised thyroid Bethesda System resets malignant risks

Under the newly revised Bethesda System for Reporting Thyroid Cytopathology, slated for official release in October 2017, the six cytology-based diagnostic categories for thyroid lesions stay exactly the same as in the 10-year-old first edition, but some of the associated malignancy risks have changed.

Important changes include molecular testing to further assess malignancy risk in thyroid nodules and the introduction of lobectomy as a treatment option, “which really wasn’t an option 10 years ago,” when the first iteration of the Bethesda System was published (New York: Springer US, 2010), coauthor Edmund S. Cibas, MD, said at the World Congress on Thyroid Cancer.

Dr. Edmund S. Cibas (Mitchel L. Zoler/Frontline Medical News)
He singled out reframing the malignancy risks for some of the six cytology categories as a top message of the revision, and he attributed these changes to two main factors: routine molecular testing, and creation of a new diagnostic category, the “noninvasive follicular thyroid neoplasm with papillary-like nuclear features” (NIFTP).

An Endocrine Pathology Society working group created the NIFTP designation in 2016 to describe an encapsulated follicular variant of papillary thyroid carcinoma that is characterized by lack of invasion, a follicular growth pattern, and nuclear features of papillary thyroid carcinoma, with a very low risk of an adverse outcome (JAMA Oncol. 2016 Aug;2[8]:1023-9; Cancer Cytopathol. 2016 Sep;124[9]:616-20).

NIFTP is not an overt malignancy. The revised Bethesda System “limits malignancy to cases with features of classic malignant papillary thyroid carcinoma,” explained Dr. Cibas, professor of pathology at Harvard Medical School and director of cytopathology at Brigham and Women’s Hospital, both in Boston.

Because the Bethesda System categories link to specific management recommendations, the new edition orients patients toward more conservative management decisions, specifically lobectomies instead of total thyroidectomies, he said in an interview.

The second edition grew out of a symposium held at the International Cytology Congress meeting in Yokohama, Japan, in 2016 (Acta Cytol. 2016 Sep-Oct;60[5]:399-405).

The changes in risk of malignancy occurred primarily in the combined category of “atypia of undetermined significance” (AUS) or “follicular lesion of undetermined significance” (FLUS), in which the risk of malignancy jumped from 5%-15% in the Bethesda System first edition to 10%-30% in the revision. A smaller bump-up hit the category of “follicular neoplasm” or “suspicious for follicular neoplasm,” in which the risk of malignancy increased from 20%-30% in the first edition to 25%-40% in the revision. And in the “suspicious for malignancy” category, the risk of malignancy eased modestly, from 60%-75% in the first edition to 50%-75% in the revision.

Dr. Cibas highlighted further notable features of the AUS/FLUS category. The limit on laboratories reporting this category increased to 10% of total reports, up from 7% in the first edition. Management changed from the single option of a repeat fine-needle aspiration to a choice between repeat aspiration and molecular testing. Also, “the first edition was not clear that AUS and FLUS are synonyms. That will be a lot clearer” in the second edition, Dr. Cibas promised. The revision “will encourage labs that currently use [the terms] AUS and FLUS to mean two different things to make a choice between them.”

Another quirk of the AUS/FLUS category is that estimates of the risk of malignancy are based on what Dr. Cibas called “flawed” data from only the selected subset of AUS or FLUS patients who have their nodule resected. “The reality is that most of the nodules are not resected” from patients with AUS or FLUS, so conclusions about the risk of malignancy come from a subset with considerable selection bias.

The definition of the “follicular neoplasm” or “suspicious for follicular neoplasm” category now also includes “mild nuclear changes,” which can mean increased nuclear size, contour irregularity, or chromatin clearing. The “suspicious for malignancy” category received a modest tweak to its risk of malignancy. Plus, “some of these patients will now undergo lobectomy rather than total thyroidectomy, which has been usual management.”

The “suspicious for malignancy” and “malignant” categories had little change aside from wider use of lobectomy, now feasible for any patient except those with metastatic disease, Dr. Cibas said.


How to rule out secondary causes of osteoporosis

Everyone diagnosed with osteoporosis deserves a laboratory assessment to rule out unsuspected secondary causes, according to Sterling West, MD. And he’s got a doozy of a workup he recommends to primary care physicians as “incredibly cost effective.”

“With this workup you’ll identify 98% of abnormalities at a mean cost of $366 per diagnosis. That’s incredibly cost effective. You’re going to get a lot of information with actually not very much outlay at all,” he said at a conference on internal medicine sponsored by the University of Colorado.

Dr. Sterling West (Bruce Jancin/Frontline Medical News)
Applying this laboratory screening regimen to all patients diagnosed with osteoporosis is warranted because unsuspected secondary causes of the skeletal disease are so common. In various studies, laboratory screening has revealed a secondary cause in up to one third of postmenopausal women with osteoporosis, in up to half of men, and in 50%-80% of premenopausal osteoporosis patients, noted Dr. West, professor of medicine at the university.

The tests he advocates that primary care physicians order in all their patients with osteoporosis include a complete blood count, a complete metabolic panel, a 24-hour urine calcium/sodium/creatinine, a serum 25-hydroxyvitamin D level, and a serum phosphorus. In addition, men with osteoporosis should have their serum testosterone measured. A thyroid-stimulating hormone level should be obtained in patients who are taking thyroxine or if they look clinically hyperthyroid.

A measurement of parathyroid hormone is warranted as part of the screen in patients with an abnormal serum calcium. If the parathyroid hormone is normal, hyperparathyroidism can be ruled out.

Ordering a serum protein electrophoresis to check for multiple myeloma is appropriate in osteoporotic patients over age 50 years with an abnormal complete blood count.

This basic laboratory workup will identify patients with the relatively common secondary causes of low bone mineral density, which account for 98% of all cases. These causes include vitamin D deficiency, malabsorption, hypogonadism, hypercalciuria, and myeloma.

“Leave the other 2% to me,” the rheumatologist suggested.

Special laboratory tests Dr. West recommended that are best left to bone disease specialists include bone turnover markers, a serum tryptase/urine N-methylhistamine to screen for systemic mastocytosis, antitransglutaminase antibodies for celiac disease, a 24-hour urinary free cortisol and/or overnight dexamethasone suppression test to identify patients with Cushing syndrome, and bone biopsy.

Who should be referred to a bone specialist for a more extensive workup?

“If somebody is losing bone or fracturing and they’re on appropriate therapy and you believe they’re taking their medication, that’s for sure somebody that we should see. Also, a premenopausal woman with a high Z score who has had a fracture that’s atypical. And patients with stage 4 or 5 chronic kidney disease; those are some of the toughest cases and are best referred to a bone expert,” Dr. West said.

On the other hand, if an osteoporotic patient simply can’t tolerate guideline-recommended initial therapy with an oral bisphosphonate such as alendronate (Fosamax) or risedronate (Actonel), there’s no need to bring in a specialist. Simply switch the patient to denosumab (Prolia), a monoclonal antibody against receptor activator of nuclear factor kappa-B ligand, administered by subcutaneous injection once every 6 months. The cost is about $2,200 per year, but the drug is covered by Medicare Part B. Clinical trials have demonstrated that denosumab boosts bone mineral density by 6%-9%, with an absolute 5% reduction in fractures and a 40%-68% relative risk reduction, he noted.

Dr. West reported having no financial conflicts of interest regarding his presentation.


First trial of TAVR vs. SAVR in low-risk patients


Five-year hemodynamic results of the first randomized trial of transcatheter versus surgical aortic valve replacement in low-surgical-risk patients with severe aortic stenosis showed continued superior valve performance in the TAVR group, Lars Sondergaard, MD, reported at the annual congress of the European Association of Percutaneous Cardiovascular Interventions.

“The durability results are very encouraging. We can’t see that the TAVR patients are doing worse. So I think this is setting the scene to try to move forward in patients at low risk and also in younger patients,” declared Dr. Sondergaard, professor of cardiology at the University of Copenhagen.

He presented an update from the Nordic Aortic Valve Intervention (NOTION) trial, a prospective, multicenter, randomized, all-comers clinical trial in which 280 patients with symptomatic severe aortic stenosis at low surgical risk were assigned to surgical aortic valve replacement (SAVR) or to TAVR with the self-expanding CoreValve. Their mean age was 79 years, with an average Society of Thoracic Surgeons (STS) projected risk of mortality score of 3%. Eighty-two percent of participants had an STS score below 4%. Roughly 40% of TAVR patients got the first-generation CoreValve in the 26-mm size, 40% received the 29-mm version, and the rest got the 31-mm CoreValve.

Dr. Lars Sondergaard (Bruce Jancin/Frontline Medical News)
With 94% compliance with follow-up through 4 years post procedure, the primary clinical endpoint – a composite of all-cause mortality, MI, and stroke – had occurred in 29.1% of the TAVR group and was similar at 30.2% in the SAVR group. The all-cause mortality rate was 20% in the TAVR group, compared with 23% in the SAVR cohort, a nonsignificant difference.

Among patients in the lowest-surgical-risk and youngest subgroup – those aged 70-75 years with an STS risk score below 4% – the composite primary endpoint rate at 4 years was 15.6% with TAVR, compared with 27.2% with SAVR. However, only 62 NOTION participants fell into this category, so the between-group difference, while sizable, didn’t achieve statistical significance, according to Dr. Sondergaard.

There was a trade-off between the two valve replacement strategies with regard to procedural complications. The rate of new-onset atrial fibrillation was far higher in the SAVR group: 59.4% at 1 year and 60.2% at 4 years of follow-up, compared with 21.2% and 24.5% at 1 and 4 years, respectively, in the TAVR group.

On the other hand, 38% of the TAVR patients got a new pacemaker within the first year of follow-up, compared with only 2.4% in the SAVR group. At 4 years, 43.7% of the TAVR group had a pacemaker, versus 9% of the SAVR group.

Turning to the hemodynamic data, the cardiologist noted that the effective orifice area in the TAVR group went from 0.71 cm² at baseline to 1.66 cm² at 1 year and remained steady thereafter at 1.67 cm² through 5 years. The TAVR group’s mean gradient improved from 45.4 mm Hg at baseline to 8.6 mm Hg at 1 year and 7.9 mm Hg at 5 years. These outcomes were significantly better than in the SAVR group, where the effective orifice area went from 0.74 cm² at baseline to 1.32 cm² at 1 year and 1.24 cm² at 5 years, while the mean gradient fell from 44.9 mm Hg to 12.5 mm Hg at 1 year and 13.6 mm Hg at 5 years.

Moderate hemodynamic structural valve deterioration was significantly more common in the SAVR group: 20.7% at 5 years, compared with 2.9% in the TAVR patients. The opposite was true with regard to moderate paravalvular leak, which occurred in 20.9% of the TAVR group but only 1.5% of SAVR patients.

Late complications were rare following either procedure. There were no cases of valve thrombosis through 5 years. The incidence of endocarditis at 5 years was 4.3% in the TAVR patients and similar at 5.9% in the SAVR group.

Discussant Samer Mansour, MD, of the University of Montreal, remarked that the rate of new pacemaker implantation following TAVR seemed extraordinarily high.

“This was early days,” Dr. Sondergaard explained. “We had a lower threshold for putting in a pacemaker and we put the valves in a little deeper.”

About half of new pacemaker recipients didn’t use the device after the first year, he added. Also, neither getting a new pacemaker nor moderate paravalvular leak was associated with increased mortality in the TAVR group.

Dr. Mansour observed that subtle but real differences in mortality probably wouldn’t show up in a 280-patient trial. Dr. Sondergaard concurred.

“We designed the NOTION trial in 2008-2009. Knowing what we know now, we should have had a larger study, but at that time TAVR volume wasn’t that big and it wasn’t realistic as a Nordic trial to include 1,000 patients. This was the best we could do,” he said.

Follow-up in the NOTION study will continue out to 10 years.

The study is funded by Medtronic. Dr. Sondergaard reported serving as a consultant to and receiving research grant support from the company.

Vitals

Key clinical point: TAVR looks promising through 4-5 years of follow-up in low-surgical-risk patients in the NOTION trial.

Major finding: At 4 years of follow-up, the composite endpoint of all-cause mortality, MI, or stroke occurred in 29% of low-surgical-risk patients with severe aortic stenosis who were randomized to transcatheter aortic valve replacement (TAVR) and 30% of those who underwent surgical valve replacement.

Data source: NOTION, a prospective multicenter randomized trial in which 280 Nordic patients with symptomatic severe aortic stenosis at low surgical risk were assigned to surgical aortic valve replacement (SAVR) or to TAVR with the self-expanding CoreValve.

Disclosures: The study is funded by Medtronic. The presenter reported serving as a consultant to and receiving research grant support from the company.


The TEND (Tomorrow’s Expected Number of Discharges) Model Accurately Predicted the Number of Patients Who Were Discharged from the Hospital the Next Day


Hospitals typically allocate beds based on historical patient volumes. If funding decreases, hospitals will usually try to maximize resource utilization by allocating beds to attain occupancies close to 100% for significant periods of time. This will invariably cause days in which hospital occupancy exceeds capacity, at which time critical entry points (such as the emergency department and operating room) become blocked. This creates significant concerns about the quality of patient care.

Hospital administrators have very few options when hospital occupancy exceeds 100%. They can postpone admissions for “planned” cases, bring in additional staff to increase capacity, or implement additional measures to increase hospital discharges, such as expanding care resources in the community. All of these options are costly, disruptive, or cannot be put into action immediately. The need for them could be minimized if hospital administrators could make bed-management decisions informed by the likely number of discharges in the next 24 hours.

Predicting the number of people who will be discharged in the next day can be approached in several ways. One approach would be to calculate each patient’s expected length of stay and then use the variation around that estimate to calculate each day’s discharge probability. Several studies have attempted to model hospital length of stay using a broad assortment of methodologies, but a mechanism to accurately predict this outcome has been elusive1,2 (with Verburg et al.3 concluding in their study’s abstract that “…it is difficult to predict length of stay…”). A second approach would be to use survival analysis methods to generate each patient’s hazard of discharge over time, which could be directly converted to an expected daily risk of discharge. However, this approach is complicated by the concurrent need to include time-dependent covariates and consider the competing risk of death in hospital, which can complicate survival modeling.4,5 A third approach would be the implementation of a longitudinal analysis using marginal models to predict the daily probability of discharge,6 but this method quickly overwhelms computer resources when large datasets are present.

In this study, we decided to use nonparametric models to predict the daily number of hospital discharges. We first identified patient groups with distinct discharge patterns. We then calculated the conditional daily discharge probability of patients in each of these groups. Finally, these conditional daily discharge probabilities were then summed for each hospital day to generate the expected number of discharges in the next 24 hours. This paper details the methods we used to create our model and the accuracy of its predictions.
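To make the three stages concrete, the sketch below shows the final summation in Python. This is an illustration only, not the authors' implementation: the data layout and all names are assumptions, and the strata assignments (stage 1), per-stratum probability tables (stage 2), and correction factors (stage 3) are taken as already computed by the methods described below.

```python
from datetime import date

def expected_discharges(inpatients, strata_probs, corrections, as_of):
    """Sum each inpatient's adjusted discharge probability for the next 24 hours.

    inpatients   -- list of dicts with 'stratum', 'hospital_day', 'division'
    strata_probs -- {(stratum, hospital_day): conditional discharge probability}
    corrections  -- {(division, weekday): relative observed-vs-expected difference}
    """
    total = 0.0
    for p in inpatients:
        prob = strata_probs[(p["stratum"], p["hospital_day"])]       # stage 2
        prob *= 1.0 + corrections[(p["division"], as_of.weekday())]  # stage 3
        total += prob
    return total

# Worked example from the Methods: an obstetrics/gynecology patient on hospital
# day 3 in stratum 133 (stage 1), preliminary probability 0.1111, and a +0.05503
# correction for the division on Saturdays (weekday() == 5).
probs = {(133, 3): 0.1111}
corr = {("obstetrics/gynecology", 5): 0.05503}
pts = [{"stratum": 133, "hospital_day": 3, "division": "obstetrics/gynecology"}]
print(expected_discharges(pts, probs, corr, date(2015, 12, 19)))  # ~0.1172
```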

METHODS

Study Setting and Databases Used for Analysis

The study took place at The Ottawa Hospital, a 1000-bed teaching hospital with 3 campuses that is the primary referral center in our region. The study was approved by our local research ethics board.

The Patient Registry Database records the date and time of each patient’s admission (defined as the moment that the admission request is registered in the patient registration system) and discharge (defined as the time when the discharge from hospital is entered into the patient registration system) for all hospital encounters. Emergency department encounters were also identified in the Patient Registry Database, along with admission service, patient age and sex, and patient location throughout the admission. The Laboratory Database records all laboratory studies and results for all patients at the hospital.

Study Cohort

We used the Patient Registry Database to identify all people aged 1 year or more who were admitted to the hospital between January 1, 2013, and December 31, 2015. This time frame was selected to (i) ensure that data were complete and (ii) provide complete calendar years of data for both the derivation (patient-days in 2013-2014) and validation (2015) cohorts. Patients who were observed in the emergency room without admission to hospital were not included.

Study Outcome

The study outcome was the number of patients discharged from the hospital each day. For the analysis, the reference point for each day was 1 second past midnight; therefore, values for time-dependent covariates up to and including midnight were used to predict the number of discharges in the next 24 hours.
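As a small illustration of this definition, the pandas sketch below (hypothetical table and column names) counts discharges per calendar date; because the reference point is 1 second past midnight, a discharge at any time on a given date counts toward that date's total.

```python
import pandas as pd

# Hypothetical encounter table; only the discharge timestamp matters here.
enc = pd.DataFrame({"discharge_dt": pd.to_datetime([
    "2015-12-19 10:45", "2015-12-19 23:59", "2015-12-20 00:30"])})

# Truncate each timestamp to its calendar date and count discharges per date.
daily = enc["discharge_dt"].dt.normalize().value_counts().sort_index()
print(daily)  # 2015-12-19 -> 2, 2015-12-20 -> 1
```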

Study Covariates

Baseline (ie, time-independent) covariates included patient age and sex, admission service, hospital campus, whether or not the patient was admitted from the emergency department (all determined from the Patient Registry Database), and the Laboratory-based Acute Physiological Score (LAPS). The latter was calculated with the Laboratory Database using results for 14 tests (arterial pH, PaCO2, PaO2, anion gap, hematocrit, total white blood cell count, serum albumin, total bilirubin, creatinine, urea nitrogen, glucose, sodium, bicarbonate, and troponin I) measured in the 24-hour time frame preceding hospitalization; it was derived by Escobar and colleagues7 to measure severity of illness and was subsequently validated in our hospital.8 The independent association of each laboratory perturbation with risk of death in hospital is reflected by the number of points assigned to each lab value, with the total LAPS being the sum of these values. Time-dependent covariates included weekday in hospital and whether or not patients were in the intensive care unit.
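LAPS is a points-based score: each of the 14 laboratory results contributes points according to how abnormal it is, and the total is the sum of those points. The published point assignments are in Escobar et al.7 and are not reproduced here, so the bands in the sketch below are deliberate placeholders showing only the shape of the computation.

```python
# LAPS-style scoring sketch. The bands below are PLACEHOLDERS, not the
# published Escobar et al. point assignments.
PLACEHOLDER_BANDS = {
    # lab name: (lower bound of band, points), scanned from the highest band down
    "sodium_mmol_l":     [(160, 4), (150, 2), (0, 0)],
    "creatinine_umol_l": [(350, 5), (170, 3), (0, 0)],
}

def laps_like_score(labs):
    """Sum band points over labs drawn in the 24 hours preceding admission."""
    score = 0
    for name, value in labs.items():
        for lower, points in PLACEHOLDER_BANDS.get(name, []):
            if value >= lower:
                score += points
                break
    return score

print(laps_like_score({"sodium_mmol_l": 152, "creatinine_umol_l": 180}))  # 2+3=5
```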

Analysis

We used 3 stages to create a model to predict the daily expected number of discharges: we identified discharge risk strata containing patients with similar discharge patterns using data from patients in the derivation cohort (first stage); then, we generated the preliminary probability of discharge by determining the daily discharge probability in each discharge risk stratum (second stage); finally, we modified the probability from the second stage based on the weekday and admission service and summed these probabilities to create the expected number of discharges on a particular date (third stage).

The first stage identified discharge risk strata based on the covariates listed above. This was determined by using a survival tree approach9 with proportional hazard regression models to generate the “splits.” These models were offered all covariates listed in the Study Covariates section. Admission service was clustered within 4 departments (obstetrics/gynecology, psychiatry, surgery, and medicine) and day of week was “binarized” into weekday/weekend-holiday (because the use of categorical variables with large numbers of groups can “stunt” regression trees due to small numbers of patients—and, therefore, statistical power—in each subgroup). The proportional hazards model identified the covariate having the strongest association with time to discharge (based on the Wald X2 value divided by the degrees of freedom). This variable was then used to split the cohort into subgroups (with continuous covariates being categorized into quartiles). The proportional hazards model was then repeated in each subgroup (with the previous splitting variable[s] excluded from the model). This process continued until no variable was associated with time to discharge with a P value less than .0001. This survival-tree was then used to cluster all patients into distinct discharge risk strata.
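The split-selection step can be sketched as follows, assuming a pandas DataFrame with a length-of-stay column, a discharge indicator, and candidate covariates that are already categorical (continuous ones binned into quartiles). The sketch uses the lifelines package for the proportional hazards fits; the paper does not state which software was actually used.

```python
import pandas as pd
from lifelines import CoxPHFitter

def best_split_covariate(df, candidates, duration="los_days", event="discharged"):
    """Return the candidate covariate with the largest Wald chi-square per
    degree of freedom in a one-covariate Cox model for time to discharge."""
    best, best_stat = None, 0.0
    for col in candidates:
        # One-hot encode the categorical candidate; each dummy is one model term.
        sub = pd.get_dummies(df[[duration, event, col]], columns=[col],
                             drop_first=True, dtype=float)
        cph = CoxPHFitter().fit(sub, duration_col=duration, event_col=event)
        wald = (cph.summary["z"] ** 2).sum()  # Wald chi-square over dummy terms
        dof = len(cph.summary)                # degrees of freedom = no. of terms
        if dof and wald / dof > best_stat:
            best, best_stat = col, wald / dof
    return best
```

Each call to this function produces one split; repeating it within each resulting subgroup (dropping covariates already used) grows the tree until no covariate reaches P < .0001.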

In the second stage, we generated the preliminary probability of discharge for a specific date. This was calculated by assigning all patients in hospital to their discharge risk strata (Appendix A). We then measured the probability of discharge on each hospitalization day in all discharge risk strata using data from the previous 180 days (we only used the prior 180 days of data to account for temporal changes in hospital discharge patterns). For example, consider a 75-year-old patient on her third hospital day under obstetrics/gynecology on December 19, 2015 (a Saturday). This patient would be assigned to risk stratum #133 (Appendix A). We then measured the probability of discharge of all patients in this discharge risk stratum hospitalized in the previous 6 months (ie, between June 22, 2015, and December 18, 2015) on each hospital day. For risk stratum #133, the probability of discharge on hospital day 3 was 0.1111; therefore, our sample patient’s preliminary expected discharge probability was 0.1111.
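In code, the preliminary probability is simply the empirical discharge fraction within the stratum-day cell over the trailing 180 days. A minimal sketch, with hypothetical column names:

```python
import pandas as pd

def preliminary_discharge_prob(history, stratum, hosp_day, as_of):
    """Empirical P(discharge on day `hosp_day` | still in hospital) among
    same-stratum patients in the 180 days before `as_of`.

    history has one row per patient-day, with columns 'stratum',
    'hospital_day', 'date', and 'discharged' (0/1).
    """
    window = history[
        (history["stratum"] == stratum)
        & (history["hospital_day"] == hosp_day)
        & (history["date"] >= as_of - pd.Timedelta(days=180))
        & (history["date"] < as_of)
    ]
    return window["discharged"].mean()  # e.g., 0.1111 for stratum 133, day 3
```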

To attain stable daily discharge probability estimates, a minimum of 50 patients per discharge risk stratum-hospitalization day combination was required. If there were fewer than 50 patients for a particular hospitalization day in a particular discharge risk stratum, we grouped hospitalization days in that risk stratum together until the minimum of 50 patients was reached.
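One plausible reading of this pooling rule (the text does not specify the direction of grouping, so the forward walk below is an assumption) is to merge consecutive hospital days until each pooled group holds at least 50 patients:

```python
def pool_days(day_counts, minimum=50):
    """Merge consecutive hospital days until each group has >= `minimum`
    patients; returns (first_day, last_day) pairs. day_counts[0] is day 1."""
    groups, start, running = [], 1, 0
    for day, n in enumerate(day_counts, start=1):
        running += n
        if running >= minimum:
            groups.append((start, day))
            start, running = day + 1, 0
    if running:  # fold an undersized tail into the previous group
        if groups:
            groups[-1] = (groups[-1][0], len(day_counts))
        else:
            groups.append((start, len(day_counts)))
    return groups

print(pool_days([120, 80, 30, 15, 10, 40]))  # [(1, 1), (2, 2), (3, 6)]
```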

The third (and final) stage accounted for the lack of granularity introduced when we created the discharge risk strata in the first stage. As we mentioned above, admission service was clustered into 4 departments and the day of week was clustered into weekend/weekday. However, important variations in discharge probabilities could still exist within departments and between particular days of the week.10 Therefore, we created a correction factor to adjust the preliminary expected number of discharges based on the admission division and day of week. This correction factor was derived from data from the 180 days prior to the analysis date: within that window, the expected daily number of discharges was calculated (using the methods above), and the correction factor was the relative difference between the observed and expected number of discharges within each division-day of week grouping.

For example, to calculate the correction factor for our sample patient presented above (75-year-old patient on hospital day 3 under gynecology on Saturday, December 19, 2015), we measured the observed number of discharges from gynecology on Saturdays between June 22, 2015, and December 18, 2015 (n = 206) and the expected number of discharges (n = 195.255), resulting in a correction factor of (observed − expected)/expected = (206 − 195.255)/195.255 = 0.05503. Therefore, the final expected discharge probability for our sample patient was 0.1111 + 0.1111 × 0.05503 = 0.1172. The expected number of discharges on a particular date was the preliminary expected number of discharges on that date (generated in the second stage) multiplied by 1 plus the correction factor for the corresponding division-day of week group.
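
A few lines of arithmetic confirm the worked example (values taken from the text above):

```python
observed, expected = 206, 195.255
cf = (observed - expected) / expected   # correction factor: 0.05503...
prelim = 0.1111                         # stratum #133, hospital day 3
final = prelim * (1 + cf)               # equivalent to prelim + prelim * cf
print(round(cf, 5), round(final, 4))    # -> 0.05503 0.1172
```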

 

 

RESULTS

There were 192,859 admissions involving patients more than 1 year of age who spent at least part of their hospitalization between January 1, 2013, and December 31, 2015 (Table). Patients were middle-aged and slightly female predominant, with about half admitted from the emergency department. Approximately 80% of admissions were to surgical or medical services. More than 95% of admissions ended with a discharge from the hospital, with the remainder ending in death. Almost 30% of hospitalization days occurred on weekends or holidays. Hospitalizations in the derivation (2013-2014) and validation (2015) groups were essentially the same, except for a slight drop in median hospital length of stay (from 4 days to 3 days) between the 2 periods.

Patient and hospital covariates importantly influenced the daily conditional probability of discharge (Figure 1). Patients admitted to the obstetrics/gynecology department were notably more likely to be discharged from hospital, with no influence from the day of week. In contrast, the probability of discharge decreased notably on weekends in the other departments. Patients on the ward were much more likely to be discharged than those in the intensive care unit, with increasing age associated with a decreased discharge likelihood in the former but not the latter. Finally, discharge probabilities varied only slightly between campuses at our hospital, with discharge risk decreasing as severity of illness (as measured by LAPS) increased.


The TEND model contained 142 discharge risk strata (Appendix A). Weekend-holiday status had the strongest association with discharge probability (ie, it was the first splitting variable). The most complex discharge risk strata were defined by 6 covariates. The daily conditional probability of discharge during the first 2 weeks of hospitalization varied extensively between discharge risk strata (Figure 2). Overall, the conditional discharge probability increased from the first to the second day, remained relatively stable for several days, and then slowly decreased over time. However, this pattern and its day-to-day variability differed extensively between risk strata.

The observed daily number of discharges in the validation cohort varied extensively (median 139; interquartile range [IQR] 95-160; range 39-214). The TEND model accurately predicted the daily number of discharges, with the expected daily number strongly associated with the observed number (adjusted R² = 89.2%; P < 0.0001; Figure 3). Accuracy decreased but the association remained significant when we limited the analyses by hospital campus (General: R² = 46.3%; P < 0.0001; Civic: R² = 47.9%; P < 0.0001; Heart Institute: R² = 18.1%; P < 0.0001). The expected number of daily discharges was an unbiased estimator of the observed number (its parameter estimate in a linear regression model with the observed number of discharges as the outcome variable was 1.0005; 95% confidence interval, 0.9647-1.0363). The difference between the observed and expected daily number of discharges was small (median 1.6; IQR −6.8 to 9.4; range −37 to 63.4), as was the relative difference (median 1.4%; IQR −5.5% to 7.1%; range −40.9% to 43.4%). The expected number of discharges was within 20% of the observed number on 95.1% of days in 2015.
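
Summary statistics of this kind are straightforward to reproduce from paired daily counts. The sketch below uses plain NumPy and hypothetical array names; note that it computes a simple unadjusted R² and defines the relative difference against the observed count, which are assumptions rather than the authors' exact specification.

```python
import numpy as np

def validation_stats(observed: np.ndarray, expected: np.ndarray) -> dict:
    """Accuracy summaries for paired daily discharge counts (one pair per day)."""
    r = np.corrcoef(observed, expected)[0, 1]   # Pearson correlation
    diff = observed - expected                  # signed daily error
    rel = diff / observed                       # relative to the observed count
    return {
        "r_squared": float(r ** 2),             # unadjusted R^2
        "median_diff": float(np.median(diff)),
        "median_rel_diff": float(np.median(rel)),
        "pct_days_within_20pct": float(np.mean(np.abs(rel) <= 0.20)),
    }
```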

DISCUSSION

Knowing how many patients will soon be discharged from the hospital should greatly facilitate hospital planning. This study showed that the TEND model used simple patient and hospitalization covariates to accurately predict the number of patients who will be discharged from hospital in the next day.

We believe that this study has several notable findings. First, we think that using a nonparametric approach to predict the daily number of discharges importantly increased accuracy. This approach allowed us to generate expected likelihoods based on actual discharge probabilities at our hospital during the most recent 6 months of hospitalization-days, within patients having discharge patterns very similar to the patient in question (ie, discharge risk strata, Appendix A). This ensured that trends in hospitalization habits were accounted for without the need for a period variable in our model. In addition, the lack of fitted parameters in the model should make it easier to transplant to other hospitals. Second, we think that the accuracy of the predictions was remarkable given the relative “crudeness” of our predictors. Using relatively simple factors, the TEND model output accurate predictions for the number of daily discharges (Figure 3).


This study joins several others that have attempted the difficult task of predicting the number of hospital discharges using digitized data. Barnes et al.11 created a model using regression random forest methods in a single medical service within a hospital to predict the daily number of discharges with impressive accuracy (mean daily number of discharges observed 8.29, expected 8.51). Interestingly, the model in that study was more accurate at predicting discharge likelihood than physicians. Levin et al.12 derived a model using discrete time logistic regression to predict the likelihood of discharge from a pediatric intensive care unit, finding that physician orders (captured via electronic order entry) could be categorized and used to significantly increase the accuracy of discharge likelihood estimates. These studies demonstrate the potential of health-related data in hospital data warehouses to improve prediction. We believe that continued work in this field will result in the increased use of digital data to help hospital administrators manage patient beds more efficiently and effectively than the resource-intensive manual methods currently used.13,14

Several issues should be kept in mind when interpreting our findings. First, our analysis is limited to a single institution in Canada. It will be important to determine whether the TEND model methodology generalizes to other hospitals in different jurisdictions; an external validation, especially in multiple hospitals, will be important to show that the methodology works in other facilities. Hospitals could implement the TEND model if they are able to record daily values for each of the variables required to assign patients to a discharge risk stratum (Appendix A) and to calculate the daily probability of discharge within each stratum. Hospitals could also derive their own discharge risk strata to account for covariates that we did not include in our study but that could be influential, such as insurance status. These discharge risk estimates could be incorporated into the electronic medical record or hospital dashboards (as long as the data required to generate the estimates are available). Such interventions would permit the expected number of hospital discharges (and even the patient-level probability of discharge) to be calculated on a daily basis. Second, 2 potential biases could have influenced the identification of our discharge risk strata (Appendix A). In this process, we used survival tree methods to separate patient-days into clusters having progressively more homogeneous discharge patterns. Each split was determined by using a proportional hazards model that ignored the competing risk of death in hospital. In addition, the model expressed age and LAPS as continuous variables, whereas these covariates had to be categorized to create our risk strata groupings; the strength of a covariate’s association with an outcome decreases when a continuous variable is categorized.15 Both of these issues might have biased our final risk strata categorization (Appendix A). Third, we limited our model to simple covariates whose values can be determined relatively easily within most hospital administrative data systems. While this increases generalizability to other hospital information systems, we believe that the introduction of other covariates—such as daily vital signs, laboratory results, medications, or time from operations—could increase prediction accuracy. Finally, it is uncertain whether knowing the predicted number of discharges will improve the efficiency of bed management within the hospital. It seems logical that an accurate prediction of the number of beds that will become available in the next day should improve decisions regarding the number of patients who could be admitted electively. It remains to be seen, however, whether this truly happens.

In summary, we found that the TEND model used a handful of patient and hospitalization factors to accurately predict the expected number of discharges from hospital in the next day. Further work is required to implement this model in our institution’s data warehouse and then determine whether this prediction will improve the efficiency of bed management at our hospital.

 

 

Disclosure: CvW is supported by a University of Ottawa Department of Medicine Clinician Scientist Chair. The authors have no conflicts of interest.

 

References

1. Austin PC, Rothwell DM, Tu JV. A comparison of statistical modeling strategies for analyzing length of stay after CABG surgery. Health Serv Outcomes Res Methodol. 2002;3:107-133.

2. Moran JL, Solomon PJ. A review of statistical estimators for risk-adjusted length of stay: analysis of the Australian and New Zealand intensive care adult patient database, 2008-2009. BMC Med Res Methodol. 2012;12:68.

3. Verburg IWM, de Keizer NF, de Jonge E, Peek N. Comparison of regression methods for modeling intensive care length of stay. PLoS One. 2014;9:e109684.

4. Beyersmann J, Schumacher M. Time-dependent covariates in the proportional subdistribution hazards model for competing risks. Biostatistics. 2008;9:765-776.

5. Latouche A, Porcher R, Chevret S. A note on including time-dependent covariate in regression model for competing risks data. Biom J. 2005;47:807-814.

6. Fitzmaurice GM, Laird NM, Ware JH. Marginal models: generalized estimating equations. In: Applied Longitudinal Analysis. 2nd ed. John Wiley & Sons; 2011:353-394.

7. Escobar GJ, Greene JD, Scheirer P, Gardner MN, Draper D, Kipnis P. Risk-adjusting hospital inpatient mortality using automated inpatient, outpatient, and laboratory databases. Med Care. 2008;46:232-239.

8. van Walraven C, Escobar GJ, Greene JD, Forster AJ. The Kaiser Permanente inpatient risk adjustment methodology was valid in an external patient population. J Clin Epidemiol. 2010;63:798-803.

9. Bou-Hamad I, Larocque D, Ben-Ameur H. A review of survival trees. Stat Surv. 2011;5:44-71.

10. van Walraven C, Bell CM. Risk of death or readmission among people discharged from hospital on Fridays. CMAJ. 2002;166:1672-1673.

11. Barnes S, Hamrock E, Toerper M, Siddiqui S, Levin S. Real-time prediction of inpatient length of stay for discharge prioritization. J Am Med Inform Assoc. 2016;23:e2-e10.

12. Levin SR, Harley ET, Fackler JC, et al. Real-time forecasting of pediatric intensive care unit length of stay using computerized provider orders. Crit Care Med. 2012;40:3058-3064.

13. Resar R, Nolan K, Kaczynski D, Jensen K. Using real-time demand capacity management to improve hospitalwide patient flow. Jt Comm J Qual Patient Saf. 2011;37:217-227.

14. de Grood A, Blades K, Pendharkar SR. A review of discharge prediction processes in acute care hospitals. Healthc Policy. 2016;12:105-115.

15. van Walraven C, Hart RG. Leave ‘em alone - why continuous variables should be analyzed as such. Neuroepidemiology. 2008;30:138-139.



Opening the door to gene editing?

In early August, an international team of biologists reported injecting gene editing proteins into more than a hundred human embryos in Portland, Ore. The scale and success of such experimentation with human embryos are unprecedented in the United States. Given the highly experimental nature of fertility clinics in the United States and abroad, many suggest that these findings open the door to designer babies. A careful read of the report, however, indicates that the door is still quite closed, perhaps cracked open just a little.

The research team used a new method of cutting the genome, called CRISPR-Cas9. CRISPR relies on two key components, which the team combined in a test tube: a Cas9 protein that cuts DNA and a synthetic RNA that guides the protein to a specific 20-letter sequence in the human genome. In these experiments, the Cas9-RNA complex was designed to cut a pathogenic mutation in the MYBPC3 gene, which can cause hypertrophic cardiomyopathy. The research team could not obtain human zygotes with this mutation on both copies of the genome (a rare homozygous genotype). Such zygotes would have the most severe phenotype and be the most compelling test case for CRISPR. Instead, they focused on gene editing heterozygous human zygotes that have one normal maternal copy of the MYBPC3 gene and one pathogenic paternal copy. The heterozygous zygotes were produced by the research team via in vitro fertilization (IVF) or intracytoplasmic sperm injection (ICSI) using sperm donated by males carrying the pathogenic mutation (Nature. 2017 Aug 2. doi: 10.1038/nature23305).

When researchers injected the Cas9-RNA complex targeting the mutation into already fertilized zygotes, they found that 67% of the resulting embryos had two normal copies of the MYBPC3 gene. Without gene editing, approximately 50% of the embryos would have two normal copies, because the male sperm donor would produce equal numbers of sperm with normal and pathogenic genotypes. Thus, editing likely corrected only an additional 17% of embryos, ones that would otherwise have carried the pathogenic paternal mutation. Thirty-six percent of embryos had additional mutations from imprecise gene editing. Further, some of the gene edits and additional mutations were mosaic, meaning that the resulting embryo harbored many different genotypes.
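
To make the arithmetic behind these percentages explicit, here is a minimal worked sketch in Python; the 50% and 67% figures come from the report as summarized above, while the share of carrier embryos corrected is a derived illustration, not a number quoted in the paper:

    # Fractions of embryos with two normal MYBPC3 copies, with and without editing.
    baseline = 0.50   # Mendelian expectation: half of the donor's sperm carry the mutation
    observed = 0.67   # reported fraction of injected embryos with two normal copies

    corrected_share = observed - baseline   # share of all embryos corrected by editing
    carrier_share = 1.0 - baseline          # share that began with the paternal mutation

    print(f"{corrected_share:.0%} of all embryos corrected")                      # ~17%
    print(f"{corrected_share / carrier_share:.0%} of carrier embryos corrected")  # ~34%

In other words, roughly one in three embryos that carried the paternal mutation appears to have been corrected in this first set of injections.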

To overcome these challenges, the research team precisely controlled the timing of CRISPR injection to coincide with fertilization. With controlled timing, gene editing was restricted to only the paternal pathogenic mutation, resulting in 72% of all injected embryos having two normal copies of the gene in all cells without any mosaicism. Whole genome sequencing revealed no additional mutations above the detection limit of the assay. Finally, preimplantation development proceeded normally to the blastocyst stage, suggesting that the edited embryos have no functional deficits from the procedure.

A surprising finding was that new sequences could not be put into the embryo. The research team had coinjected a synthetic DNA template that differed from the normal maternal copy, but never saw this sequence incorporated into any embryo. Instead, the zygote used the maternal copy of the gene, with its normal sequence, as a template for repairing the CRISPR-induced cut in the paternal copy. The biology behind this repair process is poorly understood and has not been reported in other human cell types. These observations suggest that we cannot easily “write” our genome. Instead, our vocabulary is limited to what is already within either the maternal or paternal copy of the genome. In other words, designer babies are not around the corner. While preimplantation genetic diagnosis (PGD) remains the safest way to avoid passing on autosomal dominant mutations, these new findings could enable correction of such mutations within IVF embryos, giving IVF clinics a larger pool of embryos to work with.

Apart from these technical challenges, the National Academies has not given a green light to implant edited human embryos. Instead, the organization calls for several requirements to be met, including “broad societal consensus” on the need for this type of intervention. While it is not clear whether or how consensus could be achieved, it is clear that scientists, clinicians, and patients will need help from the rest of society for this research to have an impact clinically.

Dr. Krishanu Saha

Dr. Saha is assistant professor of biomedical engineering at the Wisconsin Institute for Discovery at the University of Wisconsin, Madison. His lab works on gene editing of human cells. He has patent filings through the Wisconsin Alumni Research Foundation on gene editing inventions.


Neurologist to endocrinologists: Listen to your patients’ feet


Listen to the feet of your patients, and don’t just focus on glucose control as a way to combat diabetic neuropathy, Eva L. Feldman, MD, PhD, advised at the annual scientific sessions of the American Diabetes Association (ADA).


“Diabetic neuropathy has some important clinical consequences in a patient’s life. There’s clearly impaired function, a lower quality of life, and increased mortality. There’s also a high association between DN and cardiovascular disease and a high risk of amputation,” said Dr. Feldman, coauthor of the ADA’s new clinical guidelines for diabetic neuropathy (Diabetes Care. 2017;40[1]:136-54).

And in patients with type 2 diabetes, “diabetic neuropathy is not just hyperglycemia, which is what we’ve focused all our efforts on up until the past 5 years. There is also a role for dyslipidemia and other metabolic impairments,” she said.

In a follow-up interview, Dr. Feldman elaborated on a common misconception about DN, simple tools for foot exams by endocrinologists, and how to know when it’s time for a referral to a neurologist or podiatrist.

Q: What do you think physicians/endocrinologists misunderstand about diabetic neuropathy?

A: Commonly, physicians think that if patients with diabetes do not complain of pain or numbness, they do not have DN. This simply isn’t correct. Over 80% of patients with DN have insensate feet – they simply do not have feeling in their feet. Physicians must examine a patient’s feet at least once a year to ensure the patient has not developed DN.

Q: What should endocrinologists understand about how diabetic neuropathy develops?

A: We know that excellent glucose control has a significant impact on DN in patients with type 1 diabetes. In patients with type 2 diabetes, we know that excellent glucose control plays a much less significant role. While it’s important, it must be coupled with control of other components of metabolic syndrome – elevated blood lipids, obesity, and hypertension.

Q: You spoke in your presentation about “very simple tools” that endocrinologists can use to test for neuropathy. What do you recommend?

A: Take a 128-Hz tuning fork and determine if the patient can feel vibration on the joint of the great toe for at least 10 seconds. Then take a 10-gram filament and a pin and determine if the patient can feel both of these instruments when they are applied to the joint of the great toe. Some physicians also take a 10-gram filament and apply it to the sole of the foot. I would not suggest using a pin on the sole of the foot.

Q: What else should they look for when they inspect feet? And how often would you recommend that endocrinologists do this per patient?

A: Inspection for callus formation, fissure formation, and fungal infections is important, and a foot exam should be done once yearly.

Q: What about testing whether patients can feel temperature?

A: Temperature sensation travels along the same class of nerve fibers that the pin tests, so temperature testing is not routinely done in an endocrinologist’s office.

Q: When should endocrinologists refer out for neuropathy?

A: Endocrinologists can treat DN by treating the diabetic condition and, in type 2 diabetes, the metabolic syndrome. Referral to a neurologist is indicated if there are atypical symptoms or signs, such as motor impairment that’s greater than sensory impairment, a significant asymmetry, or a very rapid onset. All patients with severe DN should be under the care of a podiatrist to prevent the development of nonhealing wounds and ulcers.

Q: What have you learned about how diabetic neuropathy affects the lives of patients?

A: DN definitely affects a patient’s quality of life, not only in terms of work productivity and the ability to perform activities of daily living but also in general enjoyment of life.

Q: How successful is treatment for diabetic neuropathy?

A: Treatment for pain can be very successful, and we have outlined a protocol in our recent ADA guidelines. For patients with uncontrollable pain, frequently a referral to a pain clinic is in order.

Dr. Feldman reports no relevant disclosures.


Psoriasis, psoriatic arthritis research makes headway at GRAPPA meeting


The agenda for this year’s Group for Research and Assessment of Psoriasis and Psoriatic Arthritis (GRAPPA) annual meeting included sessions on juvenile psoriatic arthritis (PsA), the microbiome in psoriasis and PsA, and setting up longitudinal cohort studies. There was also a workshop to define the clinical fit and feasibility of PsA outcome measures for clinical trials, as well as updates on several GRAPPA-associated projects.

The meeting, held July 13-15 in Amsterdam, opened with the annual trainee session. This year’s trainee session attracted more than 40 abstracts, of which 6 were selected for oral presentations on a variety of topics including outcome measure assessment, imaging, the microbiome in PsA, and T-cell subsets in synovial fluid.

The juvenile PsA (jPsA) session highlighted the similarities and differences between juvenile and adult PsA. For example, nearly half of children develop the arthritis before the skin symptoms, as opposed to adults, in whom the vast majority develop psoriasis first.

The jPsA session also emphasized the need for efforts to better define this disease entity and to study outcomes related to it. The current International League of Associations for Rheumatology criteria for juvenile idiopathic arthritis split jPsA across other subgroups (for example, enthesitis-related arthritis and undifferentiated arthritis).

Three investigators – Matt Stoll, MD, PhD, Devy Zisman, MD, and Elizabeth Mellins, MD – also presented studies of the epidemiology of jPsA.

Among the most intriguing sessions at the GRAPPA meeting was a series of three talks by Jose Scher, MD, Hok Bing Thio, MD, PhD, and Dirk Elewaut, PhD, that introduced the audience to the complexity of the microorganisms that call the human host “home” and their potential role in the development of inflammatory conditions, in particular psoriasis and PsA. These talks touched on the potential uses of the gut, skin, and oral mucosal microbiomes in predicting therapy response and modulating the immune system.

Dafna Gladman, MD, led a session on “how to set up a cohort.” In this session, speakers provided a road map for setting up a longitudinal cohort and discussed opportunities and challenges along the way. This session laid the foundation for the GRAPPA Research Collaborative Network (RCN) meeting that followed the annual meeting (held July 15-16). The RCN meeting aimed to develop a plan for a GRAPPA research network supporting collaborative research to identify biomarkers and outcomes in psoriasis and PsA. Speakers and panelists from academia and industry kicked off the meeting with a discussion of prior experiences in establishing international cohort studies. Subsequent sessions presented individual aspects of beginning a longitudinal cohort study for biomarkers, including methods for sample collection, regulatory processes, and data collected at potential sites; capturing and harmonizing clinical data for such studies; and policies for publication of RCN studies.

This year’s workshop focused on the GRAPPA-Outcome Measures in Rheumatology (OMERACT) working group’s plan to develop a Core Outcome Measure Set, a collection of outcome measurement instruments to be used in randomized controlled trials (RCTs) for PsA. During the plenary session, the team presented a process for the group to evaluate instruments, including assessment of match to the domain or concept of interest, feasibility, construct validity, and discrimination (including reliability and the ability to distinguish between two groups, such as responders and nonresponders). Breakout groups then each discussed one of the six tools under consideration and voted on its match to the domain of interest and its feasibility for RCTs.

In a skin session, ongoing efforts to standardize and simplify the measurement of skin psoriasis in both the clinic and RCTs were presented. While the Psoriasis Area and Severity Index (PASI) is the current standard for measuring psoriasis severity and response to therapy in RCTs, the PASI is challenging to use in clinical practice. Joseph Merola, MD, presented data supporting the product of the psoriasis Physician Global Assessment and body surface area (PGA × BSA), a simpler measure, as a potential substitute for the PASI. Updates from the International Dermatology Outcome Measures board included reporting of the psoriasis core domain set and a summary of the PsA symptoms working group’s efforts to identify patient-reported outcomes that detect and measure PsA among patients enrolled in psoriasis RCTs. April Armstrong, MD, discussed the National Psoriasis Foundation’s efforts to develop treat-to-target goals for psoriasis. Finally, Laura Coates, MBChB, PhD, presented a report on her efforts to examine a variety of psoriasis cut points for minimal disease activity and very low disease activity outcomes.
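
As a rough illustration of how such a composite behaves, here is a minimal sketch of the PGA × BSA product in Python; the 0-5 PGA scale, the validation bounds, and the example values are assumptions for illustration, not figures from the presentation:

    def pga_x_bsa(pga: int, bsa_percent: float) -> float:
        # Composite severity: static Physician Global Assessment (assumed 0-5 scale)
        # multiplied by affected body surface area, in percent (0-100).
        if not 0 <= pga <= 5:
            raise ValueError("PGA assumed to be on a 0-5 scale")
        if not 0.0 <= bsa_percent <= 100.0:
            raise ValueError("BSA must be a percentage between 0 and 100")
        return pga * bsa_percent

    # Example: moderate plaques (PGA 3) covering 10% of the body surface.
    print(pga_x_bsa(3, 10.0))  # 30.0

Because both inputs come from a routine skin examination, such a product can be computed at the point of care, which is the feasibility argument for it as a simpler alternative to the PASI.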
 

 

 

Dr. Ogdie is director of the Penn Psoriatic Arthritis Clinic at the University of Pennsylvania, Philadelphia, and is a member of the GRAPPA Steering Committee.


Future Hospitalist: Top 10 tips for carrying out a successful quality improvement project


Editor’s Note: This column is a quarterly feature written by members of the Physicians in Training Committee. It aims to encourage and educate students, residents, and early career hospitalists.

One of the biggest challenges early career hospitalists, residents, and medical students face in launching their first quality improvement (QI) project is knowing how and where to get started. QI can be highly rewarding, but it can also consume valuable time and resources with no guarantee of sustainable improvement. In this article, we outline 10 key factors to consider when starting a new project.

1. Frame your project so that it aligns with your hospital’s current goals

Choose a project with your hospital’s goals in mind. Securing resources such as health IT, financial, or staffing support will prove difficult unless you get buy-in from hospital leadership. If your project does not directly address hospital goals, frame the purpose to demonstrate that it still fits with leadership priorities. For example, though improving handoffs from daytime to nighttime providers may not be a specific goal, leadership should appreciate that this project is expected to improve patient safety.

2. Be SMART about goals

Many QI projects fail because the scope of the initial project is too large, unrealistic, or vague. Creating a clear and focused aim statement and keeping it “SMART” (Specific, Measurable, Achievable, Realistic, and Timely) will bring structure to the project.1 “We will reduce congestive heart failure readmissions on 5 medicine units at our hospital by 2.5% in 6 months” is an example of a SMART aim statement.

3. Involve the right people from the start

QI project disasters often start with the wrong team. Select members based on who is needed and not who is available. It is critical to include representatives or “champions” from each area that will be affected. People will buy into a new methodology much more quickly if they were engaged in its development or know that respected members in their area were involved.

4. Use a simple, systematic approach to guide improvement work

Various QI models exist and each offers a systematic approach for assessing and improving care services. The Model for Improvement developed by the Associates in Process Improvement2 is a simple and powerful framework for quality improvement that asks three questions: (1) What are we trying to accomplish? (2) How will we know a change is an improvement? (3) What changes can we make that will result in improvement? The model incorporates Plan-Do-Study-Act (PDSA) cycles to test changes on a small scale.

5. Good projects start with good background data

As with patient care, to improve a service’s “health status,” you must gather baseline information before prescribing any solutions. Anecdotal information helps, but to accurately assess baseline performance you need details and data. Data will determine the need for improvement as well as the scope of the project. Use QI tools such as process mapping or a fishbone diagram to identify potential causes of error.3

6. Choose interventions that are high impact, low effort

People will accept a change more easily if the change itself is easy, so consider the question: “Does this intervention add significant work?” The best interventions change a process without causing undue burden to the clinicians and staff involved.

7. If you can’t measure it, you can’t improve it

After implementation, collect enough data to know whether the changes made improved the process. Study outcome, process, and balancing measures. If possible, use data already being collected by your institution. While it is critical to have quantitative measures, qualitative data such as surveys and observations can also enrich understanding.

Example: Increasing early discharges on a medical unit (a minimal metric sketch follows the three measure definitions below).

Outcome measure – This is the desired outcome that the project aims to improve. This may be the percentage of discharges before noon (DBN) or the average discharge time.

Process measure – This is a measure of a specific change made to improve the outcome metric. For example, discharge orders may need to be placed earlier in the electronic medical record to improve DBN; the average discharge order time is thus a process measure.

Balance measure – This metric evaluates whether the intended outcome is leading to unintended consequences. For example, tracking the readmission rate is an important balance measure to assess whether improved DBN is associated with rushed discharges and possible unsafe transitions.
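
To make the three measure types concrete, here is a minimal sketch computing each from discharge records; the field names, sample values, and the noon cutoff are illustrative assumptions rather than any particular hospital’s schema:

    from datetime import time

    # Hypothetical discharge records for one unit (illustrative values only).
    discharges = [
        {"discharge": time(10, 45), "order": time(8, 30), "readmit_30d": False},
        {"discharge": time(14, 20), "order": time(11, 5), "readmit_30d": True},
        {"discharge": time(11, 10), "order": time(9, 15), "readmit_30d": False},
        {"discharge": time(16, 0),  "order": time(13, 40), "readmit_30d": False},
    ]

    noon = time(12, 0)
    n = len(discharges)

    # Outcome measure: percentage of discharges before noon (DBN).
    dbn_rate = sum(d["discharge"] < noon for d in discharges) / n

    # Process measure: mean discharge-order time (hours after midnight).
    mean_order_h = sum(d["order"].hour + d["order"].minute / 60 for d in discharges) / n

    # Balance measure: 30-day readmission rate.
    readmit_rate = sum(d["readmit_30d"] for d in discharges) / n

    print(f"DBN {dbn_rate:.0%}, mean order time {mean_order_h:.1f} h, readmissions {readmit_rate:.0%}")

In practice, these metrics would typically be computed over weeks of discharges and tracked on a run chart, so the team can see whether the process change moves the outcome without worsening the balance measure.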

 

 

8. Communicate project goals and progress

Progress and changes need to be communicated effectively and repeatedly – do not assume that team members are aware. Celebrate the achievement of intermediate goals and small successes to ensure engagement and commitment of the team. Feedback and reminders help develop the momentum that is crucial for any long-term project.

9. Manage resistance to change

“People responsible for planning and implementing change often forget that while the first task of change management is to understand the destination and how to get there, the first task of transition management is to convince people to leave home.” – William Bridges

Inertia is powerful. We may consider our continuous performance improvement initiative to be “the next big thing,” but others may not share this enthusiasm. We therefore need to build a compelling case for others to become engaged and accept major changes to workflow. Different strategies may be needed depending on your audience. Though for some, data and a rational analysis will be persuasive, for others an emotional argument will be the most motivating. Share personal anecdotes and use patient stories. In addition, let providers know “what’s in it for them.” Some may have a personal interest in your project or may need QI experience for career advancement; others might be motivated by the possibilities for scholarship arising from this work.

10. Make the work count twice

Consider QI as a scholarly initiative from the start to bring rigor to the project at all phases. Describe the project in an abstract or manuscript once improvements have been made. Publication is a great way to boost team morale and help make a business case for future improvement work. The Standards for Quality Improvement Reporting Excellence (SQUIRE) guidelines provide an excellent framework for designing and writing up an improvement project.4 The guidelines focus on why the project was started, what was done, what was found, and what the findings mean.

Driving change is challenging, and it is tempting to jump ahead to “fixing the problem.” But implementing a successful QI project requires intelligent direction, strategic planning, and skillful execution. It is our hope that following the above tips will help you develop the best possible ideas and approach implementation in a systematic way, ultimately leading to meaningful change.

Dr. Reyna is assistant professor in the division of hospital medicine and unit medical director at Mount Sinai Medical Center in New York City. She is a Certified Clinical Microsystems Coach. Dr. Burger is associate professor and associate program director, internal medicine residency, at Mount Sinai Beth Israel. He is on the faculty for the SGIM Annual Meeting Precourse on QI and is head of the high value care committee at the department of medicine at Mount Sinai Beth Israel. Dr. Cho is assistant professor and director of quality and safety in the division of hospital medicine at Mount Sinai. He is a senior fellow at the Lown Institute.

References

1. MacLeod L. Making SMART goals smarter. Physician Exec. 2012 Mar-Apr;38(2):68-70, 72.

2. Langley GJ, Moen RD, Nolan KM, Nolan TW, Norman CL, Provost LP. The Improvement Guide: A Practical Approach to Enhancing Organizational Performance. 2nd ed. San Francisco: Jossey-Bass Publishers; 2009.

3. Nelson EC, Batalden PB, Godfrey MM. Quality By Design: A Clinical Microsystems Approach. San Francisco, California: Jossey-Bass; 2007.

4. Ogrinc G, Davies L, Goodman D, et al. SQUIRE 2.0 (Standards for Quality Improvement Reporting Excellence): revised publication guidelines from a detailed consensus process. BMJ Qual Saf. 2015 Sep 14.

Publications
Sections

 

Editor’s Note: This column is a quarterly feature written by members of the Physicians in Training Committee. It aims to encourage and educate students, residents, and early career hospitalists.

One of the biggest challenges early career hospitalists, residents and medical students face in launching their first quality improvement (QI) project is knowing how and where to get started. QI can be highly rewarding, but it can also take valuable time and resources without guarantees of sustainable improvement. In this article, we outline 10 key factors to consider when starting a new project.
 

1. Frame your project so that it aligns with your hospital’s current goals

Choose a project with your hospital’s goals in mind. Securing resources such as health IT, financial, or staffing support will prove difficult unless you get buy-in from hospital leadership. If your project does not directly address hospital goals, frame the purpose to demonstrate that it still fits with leadership priorities. For example, though improving handoffs from daytime to nighttime providers may not be a specific goal, leadership should appreciate that this project is expected to improve patient safety.

2. Be SMART about goals

Many QI projects fail because the scope of the initial project is too large, unrealistic, or vague. Creating a clear and focused aim statement and keeping it “SMART” (Specific, Measurable, Achievable, Realistic, and Timely) will bring structure to the project.1 “We will reduce Congestive Heart Failure readmissions on 5 medicine units at our hospital by 2.5% in 6 months” is an example of a SMART aim statement.

Dr. Maria Reyna

3. Involve the right people from the start

QI project disasters often start with the wrong team. Select members based on who is needed and not who is available. It is critical to include representatives or “champions” from each area that will be affected. People will buy into a new methodology much more quickly if they were engaged in its development or know that respected members in their area were involved.

4. Use a simple, systematic approach to guide improvement work

Various QI models exist and each offers a systematic approach for assessing and improving care services. The Model for Improvement developed by the Associates in Process Improvement2 is a simple and powerful framework for quality improvement that asks three questions: (1) What are we trying to accomplish? (2) How will we know a change is an improvement? (3) What changes can we make that will result in improvement? The model incorporates Plan-Do-Study-Act (PDSA) cycles to test changes on a small scale.

5. Good projects start with good background data

Dr. Alfred Burger
As with patient care, to improve a service’s “health status,” you must gather baseline information before prescribing any solutions. Anecdotal information helps, but to accurately assess baseline performance you need details and data. Data will determine the need for improvement as well as the scope of the project. Use QI tools such as process mapping or a fishbone diagram to identify potential causes of error.3

6. Choose interventions that are high impact, low effort

People will more easily change if the change itself is easy. So consider the question “does this intervention add significant work?” The best interventions change a process without causing undue burden to the clinicians and staff involved.

7. If you can’t measure it, you can’t improve it

After implementation, collect enough data to know whether the changes made improved the process. Study outcome, process, and balancing measures. If possible, use data already being collected by your institution. While it is critical to have quantitative measures, qualitative data such as surveys and observations can also enrich understanding.

Example: Increasing early discharges in medical unit.

Outcome measure – This is the desired outcome that the project aims to improve. This may be the percentage of discharges before noon (DBN) or the average discharge time.

Process measure – This is a measure of a specific change made to improve the outcome metric. The discharge orders may need to be placed earlier in the electronic medical record to improve DBN. This average discharge order time is an example of a process measure.

Balance measure – This metric evaluates whether the intended outcome is leading to unintended consequences. For example, tracking the readmission rate is an important balance measure to assess whether improved DBN is associated with rushed discharges and possible unsafe transitions.


8. Communicate project goals and progress

Progress and changes need to be communicated effectively and repeatedly; do not assume that team members are aware of them. Celebrate intermediate goals and small successes to maintain the team’s engagement and commitment. Feedback and reminders help build the momentum that is crucial for any long-term project.

9. Manage resistance to change

“People responsible for planning and implementing change often forget that while the first task of change management is to understand the destination and how to get there, the first task of transition management is to convince people to leave home.” – William Bridges

Inertia is powerful. We may consider our continuous performance improvement initiative “the next big thing,” but others may not share this enthusiasm. We therefore need to build a compelling case for others to become engaged and accept major changes to their workflow. Different strategies may be needed depending on your audience: for some, data and rational analysis will be persuasive; for others, an emotional appeal will be more motivating. Share personal anecdotes and use patient stories. In addition, let providers know “what’s in it for them.” Some may have a personal interest in your project or may need QI experience for career advancement; others might be motivated by the possibilities for scholarship arising from the work.

10. Make the work count twice

Consider QI as a scholarly initiative from the start to bring rigor to the project at all phases. Describe the project in an abstract or manuscript once improvements have been made. Publication is a great way to boost team morale and help make a business case for future improvement work. The Standards for Quality Improvement Reporting Excellence (SQUIRE) guidelines provide an excellent framework for designing and writing up an improvement project.4 The guidelines focus on why the project was started, what was done, what was found, and what the findings mean.

Driving change is challenging, and it is tempting to jump ahead to “fixing the problem.” But implementing a successful QI project requires intelligent direction, strategic planning, and skillful execution. It is our hope that following the above tips will help you develop the best possible ideas and approach implementation in a systematic way, ultimately leading to meaningful change.
 

Dr. Reyna is assistant professor in the division of hospital medicine and unit medical director at Mount Sinai Medical Center in New York City. She is a Certified Clinical Microsystems Coach. Dr. Burger is associate professor and associate program director of the internal medicine residency at Mount Sinai Beth Israel. He is on the faculty for the SGIM Annual Meeting precourse on QI and heads the high value care committee in the department of medicine at Mount Sinai Beth Israel. Dr. Cho is assistant professor and director of quality and safety in the division of hospital medicine at Mount Sinai. He is a senior fellow at the Lown Institute.

References

1. MacLeod L. Making SMART goals smarter. Physician Exec. 2012 Mar-Apr;38(2):68-70, 72.

2. Langley GL, Moen R, Nolan KM, Nolan TW, Norman CL, Provost LP. The Improvement Guide: A Practical Approach to Enhancing Organizational Performance. 2nd ed. San Francisco: Jossey-Bass; 2009.

3. Nelson EC, Batalden PB, Godfrey MM. Quality by Design: A Clinical Microsystems Approach. San Francisco: Jossey-Bass; 2007.

4. Ogrinc G, Davies L, Goodman D, et al. SQUIRE 2.0 (Standards for Quality Improvement Reporting Excellence): revised publication guidelines from a detailed consensus process. BMJ Qual Saf. 2015 Sep 14.

 


AAD president sees the specialty as ‘a bright star on the dance floor’

Article Type
Changed
Mon, 01/14/2019 - 10:07

 

In a plenary session at the 2017 American Academy of Dermatology summer meeting, the AAD president offered an upbeat view of the profession, likening his role in leading the 19,000-member organization to that of a dancer and comparing the specialty itself to “a bright star on the dance floor.”

The specialty, however, is facing an uncertain future. “As the music changes, so must the dance,” Henry Lim, MD, told attendees. “And so it is with American medicine today. Successfully transitioning, adapting to those changes, is especially challenging for all of medicine, including for our specialty.”

Dr. Lim’s remarks came hours after President Donald Trump’s effort to dismantle the Affordable Care Act had failed. “We are in the middle of a health care system in deep turmoil and uncertainty – as you all saw from the vote this morning,” said Dr. Lim, whose 1-year term began in March.

Dermatology is assuming an ever-greater role as the U.S. population ages, he said. “The fastest-growing segment is people over 85, and last year Hallmark reported it sold 85,000 ‘Happy 100th Birthday’ cards.”

He cited the AAD’s Burden of Skin Disease Report, which found that nearly half of Americans over the age of 65 have at least one skin disease. That may not, however, translate into job security for dermatologists, he cautioned.

“A most concerning statistic from that report is that two in every three patients with skin disease are being treated by nondermatologists,” he said. Those practitioners include primary care physicians, pediatricians, hospitalists, nurse practitioners, and physician assistants. “We all know a major reason for it is access,” said Dr. Lim, who told a reporter before his speech that the academy has taken no position for or against the Affordable Care Act.

But, he added in his speech, “we have been continuing to meet with individual members of Congress, Health and Human Services leadership, and the FDA – tackling issues eroding our ability to care for patients.”

Dr. Lim, chair emeritus of the department of dermatology and senior vice president for academic affairs at Henry Ford Health System in Detroit, cited in-office compounding, step therapy, narrow networks, funding for medical research, and scope of practice as examples.

“Listening is the key to understanding,” he noted, and the academy is doing just that. He and the rest of the academy’s leadership have visited with a number of state societies to listen to their concerns. “It is clear to me that, while we have handled many issues well, there are areas where we as an academy can do better,” he said.

Dr. Lim cited the need to “enhance our efforts in advocacy and to improve our communication, including our social media presence.”

The academy itself is in strong shape, with more than 90% of practicing dermatologists as members, he said. That places the AAD among the top specialty societies and means that future growth will likely come from international outreach.

Dr. Lim called on members to join the effort by taking to the dance floor themselves and participating. “Ask not what dermatology can do for you, ask what you can do for dermatology,” he concluded. “With the leadership of our academy listening to all of you and working together with all of you, I’m confident that dermatology will continue to be a bright star on the dance floor.”

