Breast cancer prevention

According to the National Cancer Institute, 1 in 8 women will receive a diagnosis of breast cancer during their lifetime, making it the most common nonskin cancer in women. The disease carries significant physical and psychological morbidity and mortality, with an annual mortality rate of approximately 20-30 deaths per 100,000 women and a median age at death of 61 years. The most affected subpopulation is postmenopausal African American women.

Updated draft guidelines

In primary care, our focus has long been on detecting these cancers at the earliest possible stage using mammography, with the ultimate goal of decreasing mortality. Historically, less emphasis has been placed on preventing the disease from developing in the first place. As the priorities of our health care system continue to evolve, the United States Preventive Services Task Force (USPSTF) has reviewed the available evidence and updated, in draft form available for public comment, its recommendations on breast cancer prevention with tamoxifen and raloxifene, two selective estrogen-receptor modulators (SERMs).

These medications have been approved by the Food and Drug Administration to reduce the risk of developing breast cancer, specifically hormone receptor–positive disease. The USPSTF reviewed seven large randomized controlled trials of women without preexisting breast cancer. Typical dosing was tamoxifen 20 mg or raloxifene 60 mg daily for 5 years. Overall, these studies showed that SERMs decrease the incidence of invasive breast cancer in postmenopausal women by 30%-55%, equal to 7-9 fewer events per 1,000 women over 5 years. Tamoxifen may have similar benefits in premenopausal women. When the two medications were compared head-to-head, 25% more women receiving raloxifene than tamoxifen developed invasive breast cancer (5 events per 1,000 women). Across the studies, benefits were observed only in women with a higher calculated risk of developing breast cancer. Neither medication significantly reduced the risk of estrogen receptor–negative or noninvasive breast cancer. Both medications reduced the incidence of osteoporotic fractures.
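
To put the relative and absolute figures above side by side, the short sketch below converts a relative risk reduction into invasive cancers avoided per 1,000 women over 5 years. It is illustrative only: the 2% baseline 5-year risk is a hypothetical value chosen for the example, not a figure reported by the trials.

```python
# Illustrative arithmetic only. The 2% baseline 5-year risk is hypothetical;
# the pooled trials' own baseline risks are what yield the 7-9 fewer cases
# per 1,000 women cited above.

def cases_avoided_per_1000(baseline_5yr_risk: float, relative_reduction: float) -> float:
    """Invasive breast cancers avoided per 1,000 women over 5 years."""
    return baseline_5yr_risk * relative_reduction * 1000

for rrr in (0.30, 0.40, 0.55):
    avoided = cases_avoided_per_1000(0.02, rrr)
    print(f"{rrr:.0%} relative reduction -> {avoided:.0f} fewer cases per 1,000 women")
```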

Adverse effects of SERMs

Importantly, these medications are not without potential adverse effects. Studies have shown that SERMs increase the risk of venous thromboembolic events (VTEs), such as deep venous thrombosis or pulmonary embolism, by approximately 60%-90%, equivalent to 4-7 additional VTEs per 1,000 women over 5 years, with tamoxifen conferring a slightly higher rate than raloxifene. They are therefore not recommended for women with a prior VTE. Tamoxifen has also been implicated in a small potential increase in the incidence of endometrial cancer, specifically in women older than 50 years who have not had a hysterectomy.

Other potential harms, including ischemic stroke and cataract development, have a less clearly defined risk profile. Vasomotor side effects similar to postmenopausal hot flashes are relatively common with SERM use and, although not life-threatening, can seriously diminish quality of life. Physicians should weigh the potential benefits of these medications against their potential risks, taking into account a woman’s age, comorbidities, presence or absence of a uterus, and any additive risk for thromboembolic events. The risk of adverse effects can be minimized when patients are selected on an individualized basis, underscoring the importance of a discussion between patient and physician before preventive therapy is started.

Patient selection for SERM therapy

From these data, women with an increased baseline risk of developing breast cancer appear to benefit most from preventive SERM therapy. Tools exist to estimate that risk in certain populations. One such tool is the National Cancer Institute Breast Cancer Risk Assessment Tool, available online at www.cancer.gov/bcrisktool/. For women older than 35 years whose BRCA1 or BRCA2 mutation status is unknown, the calculator uses ethnicity, age at menarche, age at birth of first child, family history of breast cancer before age 50 years in first-degree female relatives, and history of prior breast mass or biopsy to estimate a projected 5-year incidence of disease. Women with a calculated 5-year breast cancer risk of 3% or greater who are not at increased risk for thromboembolic events appear to have the most favorable risk-benefit ratio for preventive SERM therapy.
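
For illustration, the sketch below encodes the selection heuristic just described: a calculated 5-year risk of 3% or greater and no history of, or elevated risk for, venous thromboembolism. It is a minimal sketch, not clinical software; the risk estimate is assumed to come from a validated calculator such as the NCI tool, and the function name and inputs are our own.

```python
# Minimal sketch of the patient-selection heuristic described above.
# The 5-year risk value is assumed to come from a validated calculator
# (for example, the NCI Breast Cancer Risk Assessment Tool); this code
# does not estimate risk itself and is not a clinical decision tool.

def favorable_serm_candidate(five_year_risk: float,
                             prior_vte: bool,
                             elevated_vte_risk: bool) -> bool:
    """True if the risk-benefit balance described above appears favorable."""
    return five_year_risk >= 0.03 and not prior_vte and not elevated_vte_risk

print(favorable_serm_candidate(0.035, prior_vte=False, elevated_vte_risk=False))  # True
print(favorable_serm_candidate(0.040, prior_vte=True, elevated_vte_risk=False))   # False
```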

The bottom line

Breast cancer is a common disease that can have devastating physical and emotional consequences, affecting an estimated 12.4% of women during their lifetime. The USPSTF has drafted an update, currently available for public comment, of its recommendations on the use of SERMs as preventive therapy in specific populations of women aged 40-70 years with a calculated 5-year risk of 3% or greater for developing breast cancer. The USPSTF gives a grade B recommendation that physicians engage in a dialogue with patients, helping them weigh the possible adverse drug effects against the potential reduction in breast cancer risk. The USPSTF concludes with moderate certainty that women at increased calculated risk can benefit from SERM therapy to reduce the incidence of invasive hormone receptor–positive breast cancer.

 

 

References

• U.S. Preventive Services Task Force. Medications for Risk Reduction of Primary Breast Cancer in Women: Draft Recommendation Statement. AHRQ Publication No. 13-05189-EF-2.

• Nelson HD, et al. Use of Medications to Reduce Risk for Primary Breast Cancer: A Systematic Review for the U.S. Preventive Services Task Force. Ann. Intern. Med. 2013;158:604-14.

Dr. Skolnik is associate director of the family medicine residency program at Abington (Pa.) Memorial Hospital and professor of family and community medicine at Temple University, Philadelphia. Dr. Madsen is currently a third-year resident and chief resident in the family medicine residency program at Abington Memorial Hospital.

Prevention and treatment of osteoporosis

The National Osteoporosis Foundation has released new 2013 guidelines for the prevention and treatment of osteoporosis in postmenopausal women and men over the age of 50 years.

Osteoporosis definition

Osteoporosis is defined by a bone mineral density (BMD) measurement (T score) of 2.5 or more standard deviations (SD) below the mean for a young adult reference population (T score of –2.5 or lower), or by the occurrence of a hip or vertebral fracture without preceding major trauma. Osteopenia is established by BMD testing showing a T score between 1.0 and 2.5 SD below the young adult reference mean (T score between –1.0 and –2.5).
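
As a quick illustration of these BMD cutoffs, the sketch below classifies a DXA T score using the definitions above. It is a simplified example: it considers the T score only and ignores the fragility-fracture criterion, which establishes osteoporosis regardless of BMD.

```python
# Simplified classification by DXA T score alone, per the definitions above.
# A low-trauma hip or vertebral fracture also establishes osteoporosis
# regardless of BMD, which this sketch deliberately ignores.

def classify_bmd(t_score: float) -> str:
    if t_score <= -2.5:
        return "osteoporosis"
    if t_score < -1.0:
        return "osteopenia (low bone mass)"
    return "normal"

for t in (-3.1, -1.8, -0.4):
    print(f"T score {t}: {classify_bmd(t)}")
```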

Assess patient’s risk for fracture

All postmenopausal women and men older than 50 years should be evaluated for osteoporosis risk to determine the need for BMD testing and/or vertebral imaging. In addition, all patients should be assessed for their risk of falling, since the majority of osteoporosis-related fractures result from a fall.

The WHO FRAX tool may be used to estimate both the 10-year probability of hip fracture and the 10-year probability of a major osteoporotic fracture (clinical vertebral, hip, forearm, or proximal humerus fracture). Fracture risk can be calculated with or without a BMD measurement, and the resulting 10-year probability can be used to determine the need for pharmacologic treatment.

Diagnosis

Bone mineral density testing

Dual-energy x-ray absorptiometry (DXA) imaging of the hip and spine can diagnose or confirm osteoporosis. Testing should be considered in:

• Women aged 65 years and older and men aged 70 years and older, regardless of clinical risk factors.

• Patients of either sex aged 50-69 years with clinical risk factors.

• Patients with a fracture after age 50 years.

• Patients with conditions (for example, rheumatoid arthritis) or taking medications (for example, glucocorticoids) associated with low bone mass or bone loss.

Vertebral imaging

A single vertebral fracture increases the risk of subsequent vertebral and hip fractures, is consistent with the diagnosis of osteoporosis, and is an indication for pharmacologic treatment regardless of BMD. New to these guidelines is a recommendation for proactive screening for vertebral fractures using lateral thoracic and lumbar spine x-rays or lateral vertebral fracture assessment (VFA). Indications for vertebral imaging are:

• Women aged 65 years and older and men aged 70 years and older if T score is –1.5 or below.

• Women aged 70 years and older and men aged 80 years and older.

• Postmenopausal women and men aged 50 years and older with a low trauma fracture.

• Postmenopausal women and men aged 50-69 years with height loss of 1.5 inches or more or ongoing long-term glucocorticoid treatment.

Markers of bone turnover

Biochemical markers of bone turnover are divided into two types:

• Resorption markers – serum C-telopeptide (CTx) and urinary N-telopeptide (NTx)

• Formation markers – serum bone-specific alkaline phosphatase (BSAP), osteocalcin (OC), and aminoterminal propeptide of type 1 procollagen (P1NP)

Markers should be collected as fasting morning specimens. They may be helpful in predicting fracture risk and, when repeated after 3-6 months of pharmacologic therapy, in gauging the extent of fracture risk reduction.

General recommendations

Vitamin D and calcium: A diet rich in vitamin D and calcium is an inexpensive way to help prevent loss of bone mineral density. Fruits, vegetables, and low-fat dairy are good dietary sources, and sunlight exposure supports vitamin D synthesis. If supplementation is required, men aged 50-70 years should consume 1,000 mg of calcium daily, and women older than 51 years should consume 1,200 mg daily. Both men and women over 50 years should take 800-1,000 IU of vitamin D daily.

Treat vitamin D deficiency: Supplementation should be adequate to achieve a serum level of 30 ng/mL (75 nmol/L).

Decreased alcohol use, smoking cessation, exercise, and fall prevention: Smoking cessation should be strongly advised. Moderate alcohol intake does not adversely affect bone and may be associated with lower fracture risk, but consuming more than three drinks daily may harm bone health and increases the risk of falling. Weight-bearing and muscle-strengthening exercise improves bone health and decreases the risk of falls. Home assessment for fall prevention in the elderly may decrease the risk of fracture.

Pharmacologic treatments

Treatment should be considered in postmenopausal women and men over 50 years of age with any of the following: a hip or vertebral fracture; a T score of –2.5 or lower at the femoral neck, total hip, or lumbar spine; or low bone mass (T score between –1.0 and –2.5) with a 10-year probability of hip fracture of 3% or greater or a 10-year probability of major osteoporosis-related fracture of 20% or greater. The antifracture benefits of these medications have been studied primarily in postmenopausal women with osteoporosis. Pharmacologic therapy should not be considered lifelong, and treatment decisions should be individualized. After 3-5 years of treatment, a comprehensive risk assessment should be performed.
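
To show how these thresholds combine, the sketch below encodes the treatment-initiation criteria just summarized. It is a simplified illustration under the stated assumptions, not clinical software; the T score is taken as the lowest of the femoral neck, total hip, and lumbar spine values, and the 10-year probabilities are assumed to come from the WHO FRAX calculator.

```python
# Simplified sketch of the treatment-initiation criteria summarized above.
# Inputs are assumed to come from DXA (lowest T score at the femoral neck,
# total hip, or lumbar spine) and the WHO FRAX calculator; illustrative only.

def consider_pharmacologic_treatment(hip_or_vertebral_fracture: bool,
                                     lowest_t_score: float,
                                     frax_hip_10yr: float,
                                     frax_major_10yr: float) -> bool:
    if hip_or_vertebral_fracture:
        return True
    if lowest_t_score <= -2.5:
        return True
    low_bone_mass = -2.5 < lowest_t_score < -1.0
    return low_bone_mass and (frax_hip_10yr >= 0.03 or frax_major_10yr >= 0.20)

# Example: osteopenia with an elevated FRAX major-fracture probability.
print(consider_pharmacologic_treatment(False, -1.8, 0.02, 0.22))  # True
```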

 

 

The bottom line

Identify risk factors for osteoporosis in postmenopausal women and men over the age of 50 years. Bone mineral density screening is an important part of fracture prevention, and vertebral imaging should now be considered part of osteoporosis screening. Pharmacologic treatment can be considered when a nontraumatic fracture has occurred, when the T score is –2.5 or lower, or for individuals with an elevated 10-year fracture risk based on the WHO FRAX model.

• Source: Clinician’s Guide to Prevention and Treatment of Osteoporosis. Washington, DC: National Osteoporosis Foundation; 2013.

Dr. Skolnik is associate director of the family medicine residency program at Abington (Pa.) Memorial Hospital and professor of family and community medicine at Temple University, Philadelphia. Dr. Charles is a second year resident in the Family Medicine Residency Program at Abington Memorial Hospital.

Measuring the true costs of an EHR

We routinely respond to readers who question our opinions on the value of electronic health records. Many have suggested that we have become so biased in favor of Health IT that we fail to acknowledge its shortcomings. With those we respectfully disagree. In several previous columns, we have discussed the implementation challenges, legal pitfalls, and productivity losses associated with EHRs (these are indisputable facts of life for so many practitioners). But to satisfy our harshest critics, in this column, we’ll try to count the true financial cost of an EHR and assess the impact of that cost on physicians, while balancing this with the very real promise of improved patient care (a very tall order for one column!).

Cause for doubt?

EHR vendors and Health IT evangelists often cite studies that point to the incredible financial benefits of purchasing and using an electronic record. We, too, have propagated this notion, but acknowledge that the data to support this have been quite meager. In addition to the financial incentive, the advent of the meaningful use program brought an acknowledgement of the very real costs and challenges associated with electronic documentation, and many people – both inside and outside health care – are starting to take notice.

A recent survey analysis by Adler-Milstein, et al., published in the March 2013 issue of Health Affairs, prospectively evaluated the costs associated with EHR implementation and usage in a pilot program known as the Massachusetts eHealth Collaborative. "With more than eighty ambulatory care practices in three diverse communities agreeing to adopt EHR systems simultaneously, the pilot offered a unique opportunity to study the long-term financial impact of adoption on a heterogeneous group of practices," according to the authors. The results challenge the conventional wisdom and certainly warrant close examination (Health Affairs 2013;32:1-9).

As a primary conclusion, the authors note that "current meaningful use incentives alone may not ensure that most practices, particularly smaller ones, achieve a positive return on investment from EHR adoption." To break this down further, their analysis shows that across practices of all sizes and specialties, only 41% would see a positive return on investment – even after factoring in the meaningful use incentive payments of $44,000 per provider. Productivity losses, software and equipment costs, and ongoing support and maintenance were among the financial burdens. In addition, 22% of practices reported that physicians were spending more time at work after implementation. Clearly, these results threaten to tarnish our erudite reasoning on the benefits of EHRs – and might even bring a sense of joy and vindication to our detractors! But the data analysis doesn’t end there.

The devil in the details

In spite of their overall conclusion that electronic records typically lead to a net loss in revenue, the authors discovered several scenarios wherein implementing an EHR actually might make financial sense. A few are particularly worth noting here. First, when factoring in the meaningful use incentive dollars, they predict that 56% of primary care practices would realize a positive 5-year return on investment. Larger practices would also see benefit, with 75% achieving true gains in revenue. The survey team went on to comment that successful practices found ways to use the electronic record to their financial advantage and reaped incredible returns, averaging more than $100,000 in additional revenue per physician over 5 years. This was apparently done through improved efficiency (equating to more patient visits per day), better charge capture, and elimination of ancillary services such as dictation and billing.

Also noteworthy was the observation that smaller practices do not fare well in the financial equation. This, presumably, is in part due to their inability to take advantage of the economies of scale. While the expense of most EHRs is tied directly to the number of providers using it, the amount of equipment and support required is not a linear correlation at all. A solo provider requires almost as much support staff as a group of two or three, and the additional providers greatly offset the productivity loss incurred when switching to an electronic system.

Finally, because the EHR incentive program did not begin reimbursing physicians until 2011, the authors made projections based on the expected payment of $44,000 per physician over 5 years. They did not, however, factor in the penalties for failing to adopt an EHR by 2014. That reduction in Medicare reimbursement of 1% per year is potentially significant and should be considered in any total cost-benefit analysis.
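
As a back-of-the-envelope illustration of how those figures interact, the sketch below tallies a hypothetical 5-year return for a small practice. Only the $44,000 incentive and the 1% annual Medicare reduction come from the discussion above; every other number is an invented placeholder, and the penalty is treated as a flat 1% of Medicare revenue each year, a deliberate simplification.

```python
# Hypothetical 5-year EHR return-on-investment tally. Only the $44,000
# incentive and the 1% annual Medicare reduction come from the column above;
# all other figures are invented placeholders, and the penalty is modeled as
# a flat 1% of annual Medicare revenue for each of the 5 years.

def five_year_net(n_physicians: int,
                  upfront_cost: float,            # software, hardware, training
                  annual_support: float,          # maintenance and support
                  annual_productivity_loss: float,
                  annual_medicare_revenue: float) -> float:
    incentives = 44_000 * n_physicians
    avoided_penalties = 5 * 0.01 * annual_medicare_revenue
    costs = upfront_cost + 5 * (annual_support + annual_productivity_loss)
    return incentives + avoided_penalties - costs

# Placeholder figures for a 3-physician practice; the negative result mirrors
# the survey's finding that many practices do not break even.
print(f"Projected 5-year net: ${five_year_net(3, 120_000, 15_000, 20_000, 600_000):,.0f}")
```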

We still believe!

In spite of the unforgiving data presented in this survey, we continue to feel positive about the future of connected medicine and see reason to be encouraged by the success of the practices that fully "embraced" their EHR. We also are unwilling to accept that the benefits of EHRs are only financial; there are intangible rewards that cannot be appreciated on a ledger sheet. Even the authors of the survey acknowledge there are advantages that "may accrue to other stakeholders – such as patients – which could be significant." We’ve enumerated these in previous columns, but a few worth highlighting are improved access to patient information, better care coordination, and point-of-care decision support. None of these advantages can be realized if they are not implemented well, but if done right, there is no question that in the future the value of electronic health records will be measured in better outcomes, not lower costs.

 

 

Dr. Skolnik is associate director of the family medicine residency program at Abington (Pa.) Memorial Hospital and professor of family and community medicine at Temple University, Philadelphia. He is editor in chief of Redi-Reference, a software company that creates medical handheld references. Dr. Notte practices family medicine and health care informatics at Abington Memorial. They are partners in EHR Practice Consultants. Contact them at [email protected].

Author and Disclosure Information

Publications
Topics
Legacy Keywords
electronic health records, IT, EHRs, patient care, medical records
Sections
Author and Disclosure Information

Author and Disclosure Information

We routinely respond to readers who question our opinions on the value of electronic health records. Many have suggested that we have become so biased in favor of Health IT that we fail to acknowledge its shortcomings. With those we respectfully disagree. In several previous columns, we have discussed the implementation challenges, legal pitfalls, and productivity losses associated with EHRs (these are indisputable facts of life for so many practitioners). But to satisfy our harshest critics, in this column, we’ll try to count the true financial cost of an EHR and assess the impact of that cost on physicians, while balancing this with the very real promise of improved patient care (a very tall order for one column!).

Cause for doubt?

EHR vendors and Health IT evangelists often cite studies that point to the incredible financial benefits of purchasing and using an electronic record. We, too, have propagated this notion, but acknowledge that the data to support this have been quite meager. In addition to the financial incentive, the advent of the meaningful use program brought an acknowledgement of the very real costs and challenges associated with electronic documentation, and many people – both inside and outside health care – are starting to take notice.

A recent survey analysis by Adler-Milstein, et al., published in the March 2013 issue of Health Affairs, prospectively evaluated the costs associated with EHR implementation and usage in a pilot program known as the Massachusetts eHealth Collaborative. "With more than eighty ambulatory care practices in three diverse communities agreeing to adopt EHR systems simultaneously, the pilot offered a unique opportunity to study the long-term financial impact of adoption on a heterogeneous group of practices," according to the authors. The results challenge the conventional wisdom and certainly warrant close examination (Health Affairs 2013;32:1-9).

As a primary conclusion, the authors note that "current meaningful use incentives alone may not ensure that most practices, particularly smaller ones, achieve a positive return on investment from EHR adoption." To break this down further, their analysis shows that across practices of all sizes and specialty, only 41% would see a positive return on investment – even after factoring in the meaningful use incentive payments of $44,000/provider. Productivity losses, software and equipment costs, and ongoing support and maintenance factored among the financial burdens. In addition, 22% of practices reported that physicians were spending more time at work after implementation. Clearly, these results threaten to tarnish our erudite reasoning on the benefits of EHRs – and might even bring a sense of joy and vindication to our detractors! But the data analysis doesn’t end there.

The devil in the details

In spite of their overall conclusion that electronic records typically lead to a net loss in revenue, the authors discovered several scenarios wherein implementing an EHR actually might make financial sense. A few are particularly worth noting here. First, when factoring in the meaningful use incentive dollars, they predict that 56% of primary care practices would realize a positive 5-year return on investment. Larger practices would also see benefit, with 75% achieving true gains in revenue. The survey team went on to comment that successful practices found ways to use the electronic record to their financial advantage and reaped incredible returns, averaging more than $100,000 in additional revenue per physician over 5 years. This was apparently done through improved efficiency (equating to more patient visits per day), better charge capture, and elimination of ancillary services such as dictation and billing.

Also noteworthy was the observation that smaller practices do not fare well in the financial equation. This, presumably, is in part due to their inability to take advantage of the economies of scale. While the expense of most EHRs is tied directly to the number of providers using it, the amount of equipment and support required is not a linear correlation at all. A solo provider requires almost as much support staff as a group of two or three, and the additional providers greatly offset the productivity loss incurred when switching to an electronic system.

Finally, because the EHR incentive program did not begin reimbursing physicians until 2011, the authors made projections based on the expected payment of $44,000/doctor over 5 years. They did not, however, factor in the penalties involved in not adopting an EHR by 2014. This reduced Medicare reimbursement of 1% per year is potentially significant and should be considered in a total cost/benefit analysis.

We still believe!

In spite of the unforgiving data presented in this survey, we continue to feel positive about the future of connected medicine and see reason to be encouraged by the success of the practices that fully "embraced" their EHR. We also are unwilling to accept that the benefits of EHRs are only financial; there are intangible rewards that cannot be appreciated on a ledger sheet. Even the authors of the survey acknowledge there are advantages that "may accrue to other stakeholders – such as patients – which could be significant." We’ve enumerated these in previous columns, but a few worth highlighting are improved access to patient information, better care coordination, and point-of-care decision support. None of these advantages can be realized if they are not implemented well, but if done right, there is no question that in the future the value of electronic health records will be measured in better outcomes, not lower costs.

 

 

Dr. Skolnik is associate director of the family medicine residency program at Abington (Pa.) Memorial Hospital and professor of family and community medicine at Temple University, Philadelphia. He is editor in chief of Redi-Reference, a software company that creates medical handheld references. Dr. Notte practices family medicine and health care informatics at Abington Memorial. They are partners in EHR Practice Consultants. Contact them at [email protected].

We routinely respond to readers who question our opinions on the value of electronic health records. Many have suggested that we have become so biased in favor of Health IT that we fail to acknowledge its shortcomings. With those we respectfully disagree. In several previous columns, we have discussed the implementation challenges, legal pitfalls, and productivity losses associated with EHRs (these are indisputable facts of life for so many practitioners). But to satisfy our harshest critics, in this column, we’ll try to count the true financial cost of an EHR and assess the impact of that cost on physicians, while balancing this with the very real promise of improved patient care (a very tall order for one column!).

Cause for doubt?

EHR vendors and Health IT evangelists often cite studies that point to the incredible financial benefits of purchasing and using an electronic record. We, too, have propagated this notion, but acknowledge that the data to support this have been quite meager. In addition to the financial incentive, the advent of the meaningful use program brought an acknowledgement of the very real costs and challenges associated with electronic documentation, and many people – both inside and outside health care – are starting to take notice.

A recent survey analysis by Adler-Milstein, et al., published in the March 2013 issue of Health Affairs, prospectively evaluated the costs associated with EHR implementation and usage in a pilot program known as the Massachusetts eHealth Collaborative. "With more than eighty ambulatory care practices in three diverse communities agreeing to adopt EHR systems simultaneously, the pilot offered a unique opportunity to study the long-term financial impact of adoption on a heterogeneous group of practices," according to the authors. The results challenge the conventional wisdom and certainly warrant close examination (Health Affairs 2013;32:1-9).

As a primary conclusion, the authors note that "current meaningful use incentives alone may not ensure that most practices, particularly smaller ones, achieve a positive return on investment from EHR adoption." To break this down further, their analysis shows that across practices of all sizes and specialty, only 41% would see a positive return on investment – even after factoring in the meaningful use incentive payments of $44,000/provider. Productivity losses, software and equipment costs, and ongoing support and maintenance factored among the financial burdens. In addition, 22% of practices reported that physicians were spending more time at work after implementation. Clearly, these results threaten to tarnish our erudite reasoning on the benefits of EHRs – and might even bring a sense of joy and vindication to our detractors! But the data analysis doesn’t end there.

The devil in the details

In spite of their overall conclusion that electronic records typically lead to a net loss in revenue, the authors discovered several scenarios wherein implementing an EHR actually might make financial sense. A few are particularly worth noting here. First, when factoring in the meaningful use incentive dollars, they predict that 56% of primary care practices would realize a positive 5-year return on investment. Larger practices would also see benefit, with 75% achieving true gains in revenue. The survey team went on to comment that successful practices found ways to use the electronic record to their financial advantage and reaped incredible returns, averaging more than $100,000 in additional revenue per physician over 5 years. This was apparently done through improved efficiency (equating to more patient visits per day), better charge capture, and elimination of ancillary services such as dictation and billing.

Also noteworthy was the observation that smaller practices do not fare well in the financial equation. This, presumably, is in part due to their inability to take advantage of the economies of scale. While the expense of most EHRs is tied directly to the number of providers using it, the amount of equipment and support required is not a linear correlation at all. A solo provider requires almost as much support staff as a group of two or three, and the additional providers greatly offset the productivity loss incurred when switching to an electronic system.

Finally, because the EHR incentive program did not begin reimbursing physicians until 2011, the authors made projections based on the expected payment of $44,000/doctor over 5 years. They did not, however, factor in the penalties involved in not adopting an EHR by 2014. This reduced Medicare reimbursement of 1% per year is potentially significant and should be considered in a total cost/benefit analysis.

We still believe!

In spite of the unforgiving data presented in this survey, we continue to feel positive about the future of connected medicine and see reason to be encouraged by the success of the practices that fully "embraced" their EHR. We also are unwilling to accept that the benefits of EHRs are only financial; there are intangible rewards that cannot be appreciated on a ledger sheet. Even the authors of the survey acknowledge there are advantages that "may accrue to other stakeholders – such as patients – which could be significant." We’ve enumerated these in previous columns, but a few worth highlighting are improved access to patient information, better care coordination, and point-of-care decision support. None of these advantages can be realized if they are not implemented well, but if done right, there is no question that in the future the value of electronic health records will be measured in better outcomes, not lower costs.

Dr. Skolnik is associate director of the family medicine residency program at Abington (Pa.) Memorial Hospital and professor of family and community medicine at Temple University, Philadelphia. He is editor in chief of Redi-Reference, a software company that creates medical handheld references. Dr. Notte practices family medicine and health care informatics at Abington Memorial. They are partners in EHR Practice Consultants. Contact them at [email protected].

CME and EHR integration improves clinical outcomes

Article Type
Changed
Thu, 03/28/2019 - 16:07
Display Headline
CME and EHR integration improves clinical outcomes

‘Big Data’ is coming. Are you ready?

If there is one phrase you are likely to hear repeated this year, it will be Big Data. IBM has estimated that we create 2.5 quintillion bytes of data every day, which means that 90% of the world’s data have been created in the last 2 years alone. These data come from everything around us – weather sensors, Facebook postings, financial transactions, and, of course, electronic health records.

Big Data analytics can be useful across every human endeavor, and health care is no exception. Today’s steady adoption of electronic health record (EHR) systems by the majority of medical practices in the United States is accelerating this trend. Information from these EHR systems allows for analysis of larger data sets than was possible as recently as 3 years ago. Perhaps equally important, we can now look at a single large set of related data from a broad population of clinicians and compare those data with separate smaller sets from individual physicians or practices. This allows us to use those data for a range of purposes, including monitoring prescription trends, carrying out quality assurance activities, managing populations for both chronic disease and preventive health, and measuring outcomes to assess treatment effectiveness.

Making learning more meaningful

The availability of these data creates an opportunity both to increase the effectiveness of continuing medical education (CME) and to measure that increase through its impact on both physician behavior and patient outcomes. Let’s take a look at three distinctly different ways that the integration of EHR data analytics and CME can deliver benefits to clinicians and their patients through improved measurement of health care educational outcomes.

Closing the gap between evidence and practice has always been one of the goals of CME, but the challenge is for clinicians to translate knowledge into clinical strategy and action in a timely manner. Many of us struggle to keep up with the rapidly expanding amount of medical knowledge (some estimates suggest the world’s knowledge doubles every 3 years). Chances are good that the latest evidence-based recommendations have changed since we last learned about them.

Dr. Jonathan Bertman

An earlier EHR Report ("Clinical decision support," February 2013, p. 51) pointed out some of the limitations of currently available clinical decision support systems. Clearly, there is room for improvement when it comes to reminding clinicians about the importance of evidence-based recommendations during therapeutic and diagnostic interventions.

CME providers are naturally well positioned to offer the latest medical information at the point of care. An intelligent EHR system that identifies correlative patterns revealed through data analytics can present evidence-based recommendations to the clinician in an active, contextual manner based on the needs of an individual patient.

Learning done your way

The ability to customize individual learning plans that address identifiable gaps in care is another important benefit we can derive from Big Data analytics in EHR systems. Physicians in every specialty have specific educational needs and limited time to meet them, making it imperative that CME programming be as targeted as possible. Likewise, varying patient needs should be reflected in the education of primary care physicians.

With analytics of EHR data, CME providers can start to tailor their programs to address the needs of a clinician’s specific population of patients and patterns of practice. They can apply best-practice metrics to performance to identify specific gaps in care and then develop specific learning interventions to close those gaps.
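
As a concrete (and deliberately simplified) illustration of what that might look like, the short sketch below compares a clinician’s quality-metric rates, as computed from EHR data, against a benchmark and flags large gaps as candidate topics for targeted CME. The metric names, rates, and threshold are hypothetical.

# Illustrative sketch of gap-in-care analysis for targeting CME.
# Metric names, rates, and the 10-point margin are hypothetical.

benchmark = {"a1c_tested_last_6mo": 0.85, "statin_if_indicated": 0.80}

def find_care_gaps(clinician_rates, benchmark, margin=0.10):
    """Return metrics where the clinician trails the benchmark by more than `margin`."""
    return {metric: (rate, benchmark[metric])
            for metric, rate in clinician_rates.items()
            if metric in benchmark and benchmark[metric] - rate > margin}

# Rates computed from one clinician's EHR data (hypothetical)
clinician = {"a1c_tested_last_6mo": 0.62, "statin_if_indicated": 0.78}
print(find_care_gaps(clinician, benchmark))
# {'a1c_tested_last_6mo': (0.62, 0.85)} -> candidate topic for a targeted CME module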

The ultimate measure of success – patient outcomes

The ultimate test of EHR data analytics is the ability to measure the impact of CME offerings on patient health outcomes and to feed those results back into educational content development and research. Drawing a direct line from learning to physician behavior to patient outcome won’t always be possible, but the data in EHR systems hold the promise of making such correlations with accuracy and timeliness.

The feedback mechanism is perhaps equally important in the development of new learning activities and content that take into account real-world results. A simple 1 to 5 score on an evaluation form doesn’t tell a medical educator anything about what happens after a clinician leaves the lecture hall. With a better understanding of the real-world impact of learning, CME developers can continuously improve their content to better serve the needs of their audiences and patients.

Data privacy concerns

Big Data has the power to transform every facet of our lives. How we use the data from EHRs – for clinical or commercial purposes – is up to us as a community. One way to ensure protection of data privacy rights would be to apply the same principle of informed consent that we use today for decisions related to the course of action we take with a patient. This means clinicians would have the right to know if and how information about their practice and/or patients could be used by another company, as well as details about how these data are identified/de-identified. Clinicians also should have the right to never be automatically opted into a process that uses any practice or patient data. Likewise, the right to opt out should always be one click away.
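
To make the point concrete, here is a minimal sketch, under assumed field names, of what honoring those preferences might look like before a practice’s data ever enter an analytics extract; it illustrates the principle only and does not describe any vendor’s implementation.

# Minimal sketch of an opt-in/opt-out check. Field names are hypothetical.

def include_in_extract(practice):
    """Include a practice only if it explicitly opted in and never opted out."""
    return practice.get("opted_in") is True and not practice.get("opted_out", False)

practices = [
    {"id": "A", "opted_in": True, "opted_out": False},
    {"id": "B", "opted_in": True, "opted_out": True},   # one-click opt-out honored
    {"id": "C"},                                        # never opted in -> excluded
]
print([p["id"] for p in practices if include_in_extract(p)])   # ['A']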

These data privacy rights will help ensure that analysis of the data held inside hundreds of thousands of EHR systems nationwide will harm neither clinicians nor their patients but rather improve, through continuous learning, the practice of medicine.

Dr. Skolnik is an associate director of the family medicine residency program at Abington (Pa.) Memorial Hospital. Dr. Bertman is a family physician in Rhode Island and the founder of Amazing Charts.

Diagnosis of DVT

Article Type
Changed
Fri, 01/18/2019 - 12:32
Display Headline
Diagnosis of DVT

Deep vein thrombosis (DVT) is a common condition that affects approximately 1 in 1,000 people per year. The consequences of misdiagnosis are important and can affect both quality and length of life. Only a small percentage of patients evaluated for DVT in fact have a DVT, and the risks associated with anticoagulation treatment – primarily major and minor hemorrhage – are significant, making a reliable, consistent approach to diagnosis essential. The American College of Chest Physicians recently issued guidelines for the diagnosis and management of DVT in which it emphasized that the diagnostic process should be based on a clinical assessment of the pretest probability of DVT to determine which test should be ordered.

The first step in assessing a patient for DVT is to risk-stratify for the likelihood of DVT using signs, symptoms, and risk factors. This assessment can be done using validated, structured tools such as the Wells score, which characterizes patients as having a low, moderate, or high probability of DVT. Low, moderate, and high likelihood of disease on the Wells score reflects a prevalence of DVT of 5%, 17%, and 53%, respectively. Further testing with D-dimer assays, imaging studies, or a combination of these studies is based on this risk stratification.
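
For illustration, the short sketch below maps a Wells score to these three categories and the prevalences quoted above. The score cutoffs (less than 1 for low, 1-2 for moderate, 3 or more for high) are the commonly used three-level thresholds and are our addition, not figures quoted from the guideline.

# Sketch of three-level Wells risk stratification for DVT. The prevalence
# figures are those cited above; the score cutoffs are a stated assumption.

PREVALENCE = {"low": 0.05, "moderate": 0.17, "high": 0.53}

def wells_category(score):
    if score < 1:
        return "low"
    if score <= 2:
        return "moderate"
    return "high"

for score in (0, 2, 4):
    category = wells_category(score)
    print(f"Wells score {score}: {category} pretest probability "
          f"(~{PREVALENCE[category]:.0%} prevalence of DVT)")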

Dr. Neil Skolnik and Dr. William Vaughan

Patients with a low pretest probability for DVT

Recommended testing for patients stratified as having a low pretest probability of a first lower extremity DVT includes initial testing with a D-dimer assay or compression ultrasound (CUS) of the proximal veins. Initial testing with a moderately or highly sensitive D-dimer rather than proximal CUS is preferred.

In cases where either the D-dimer or CUS is negative, no further testing is needed. The advantage of initial testing with a D-dimer assay is that the vast majority of patients assessed for DVT will need no further testing performed after a simple initial blood test, since most low-risk patients will have negative D-dimer assays. A CUS of the proximal veins (vs. whole-leg US) should be completed if the initial D-dimer is positive, because a D-dimer test is a sensitive but not specific test for venous thromboembolic disease. If initial testing with CUS is positive, treatment for DVT should be started.

Patients with a moderate pretest probability for DVT

Recommended testing for patients stratified as having a moderate pretest probability of a first lower extremity DVT includes initial testing with a highly sensitive D-dimer, proximal CUS, or whole-leg US. Initial testing with a highly sensitive D-dimer rather than ultrasound is preferred.

No further testing is needed if the initial highly sensitive D-dimer is negative. If the initial highly sensitive D-dimer is positive, either a proximal CUS or whole-leg US should be completed. In the case of a negative initial proximal CUS, a repeat proximal CUS in 1 week or testing with a highly sensitive D-dimer assay is required. If the follow-up proximal CUS is negative, or if the initial proximal CUS and subsequent D-dimer are both negative, no further testing is needed.

No further testing is needed if an initial whole-leg US is negative. Treatment for DVT should be initiated if initial proximal CUS is positive. If an isolated distal DVT is found on whole-leg US, serial testing should be completed to rule out proximal extension.

Patients with a high pretest probability for DVT

It is important to understand that D-dimer testing should not be used alone to rule out DVT in patients with a high pretest probability. Recommended testing for patients stratified as having a high pretest probability of a first lower extremity DVT includes initial testing with proximal CUS or whole-leg US. If either ultrasound is positive on the initial study, treatment for DVT should be started.

An initial negative proximal CUS should be followed by a highly sensitive D-dimer or whole-leg US, or by repeating the proximal CUS in 1 week. If a single proximal CUS is negative but the D-dimer is positive, a follow-up whole-leg US or repeat proximal CUS is needed in 1 week. No further testing is needed if serial proximal CUS, a single proximal CUS plus a highly sensitive D-dimer, or a whole-leg US is negative.

Patients without risk stratification

Recommended testing for patients with a suspected first lower extremity DVT who have not been risk stratified includes initial testing with proximal CUS or whole-leg US. If the initial proximal ultrasound is negative, follow-up testing should be completed with a moderately or highly sensitive D-dimer, whole-leg US, or repeat proximal CUS in 1 week; follow-up testing with D-dimer is preferred. If a single proximal ultrasound is negative and the D-dimer is positive, then a whole-leg US or a repeat proximal CUS in 1 week should be completed. No further testing is needed in patients with negative serial proximal CUS, a negative D-dimer following a negative initial proximal CUS, or a negative whole-leg US. Treatment for DVT is recommended if proximal ultrasound is positive. Serial testing should be performed to rule out proximal extension if an isolated distal DVT is found on whole-leg US.
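
Because the branching above can be hard to follow in prose, here is a condensed sketch of the initial-test logic only; the follow-up branching (repeat ultrasound in 1 week, serial testing for isolated distal DVT, and so on) is deliberately omitted, and the wording of the returned strings is ours.

# Condensed sketch of the initial-test selection described above. Follow-up
# branching (repeat CUS in 1 week, serial testing of distal DVT, etc.) is omitted.

def initial_workup(pretest_probability, d_dimer_negative=None):
    if pretest_probability == "low":
        if d_dimer_negative is None:
            return "Start with a moderately or highly sensitive D-dimer"
        return ("DVT excluded; no further testing" if d_dimer_negative
                else "Proceed to proximal compression ultrasound (CUS)")
    if pretest_probability == "moderate":
        if d_dimer_negative is None:
            return "Start with a highly sensitive D-dimer"
        return ("DVT excluded; no further testing" if d_dimer_negative
                else "Proceed to proximal CUS or whole-leg US")
    # High pretest probability or no stratification: do not rely on D-dimer alone
    return "Start with proximal CUS or whole-leg US"

print(initial_workup("low"))
print(initial_workup("moderate", d_dimer_negative=True))
print(initial_workup("high"))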

Neither CT venography nor MRI is recommended in patients with suspected first lower extremity DVT. However, in patients with suspected first lower extremity DVT in whom US is impractical or nondiagnostic, CT scan venography, MR venography, or MR direct thrombus imaging may be used.

The bottom line

The American College of Chest Physicians guideline on the diagnosis of venous thromboembolic disease recommends an approach that begins with a clinical assessment that risk stratifies patients. For patients at low or moderate risk of DVT or PE, the initial preferred test is a highly sensitive D-dimer. If the D-dimer is negative, no further workup is needed. If the D-dimer is positive, then further testing to rule in VTE is recommended.

Reference: Diagnosis of DVT: Antithrombotic Therapy and Prevention of Thrombosis, 9th ed: American College of Chest Physicians Evidence-Based Clinical Practice Guidelines. Chest 2012;141:e351S-e418S.

Dr. Skolnik is an associate director of the family medicine residency program at Abington (Pa.) Memorial Hospital. Dr. Vaughn is a third-year resident in the family medicine residency program at Abington Memorial Hospital.

Concussion in sport

Article Type
Changed
Fri, 01/18/2019 - 12:30
Display Headline
Concussion in sport

Concussion has become an increasing concern over the past several years because of evidence that it can have long-term deleterious effects on the human brain. In November 2012, the American Medical Society for Sports Medicine (AMSSM) published its position statement on concussion in sport. The National Athletic Trainers’ Association and the American College of Sports Medicine have endorsed the guideline.


Diagnosis

The AMSSM defines a concussion as "a traumatically induced transient disturbance of brain function ... caused by a complex pathophysiological process." The signs and symptoms of concussion are varied and usually overlap with other vague symptoms of illness. By far the most common symptom is headache. Other common symptoms are nausea, dizziness, photophobia, balance problems, and feeling mentally foggy. In 80%-90% of cases of concussion in athletes, symptoms resolve within 7 days, although dizziness, migraine headaches, and cognitive symptoms such as fogginess and mental slowing all suggest a longer recovery period.

Several factors increase a patient’s risk for sustaining a concussion. The strongest predictor is a previous concussion, which increases a person’s risk of another concussion to 2-5.8 times the average risk. Females sustain more concussions, experience and report more symptoms, and have longer recovery times than do males who play the same sport. Younger athletes are more susceptible to prolonged recovery and catastrophic injury, likely because of differences between developing and mature brains.

Mood disorders do not increase an athlete’s risk for concussion, but depression and anxiety may be increased as a consequence of concussion. Preexisting learning disorders may increase recovery times but do not increase the risk for concussion. Preexisting migraines may increase the concussion risk, but the evidence is mixed on whether they lengthen the recovery period. Developing migraine symptoms after a concussion can lengthen the recovery period.

Treatment

The primary treatment for concussion is physical and mental rest. Therapies should be tailored to specific symptoms, although little evidence exists to support medications for symptom relief. Acetaminophen offers the lowest risk profile for headache. Nonsteroidal anti-inflammatory drugs and aspirin are usually avoided because of a theoretical risk of bleeding. There is no evidence to support the use of stimulants or sleep aids in the management of postconcussive symptoms, nor should antidepressants be started in the first 6-12 weeks after a concussion. Changes in mood may occur; if symptoms persist for more than 6-12 weeks, further workup for an underlying mood disorder should be considered. Balance disturbance that persists may benefit from vestibular therapy.

There are no current standardized return-to-school recommendations. Each student should be evaluated individually, and a customized return-to-school plan should be devised. Students whose symptoms are exacerbated by cognitive stress may need accommodations such as a decreased workload, extended testing time, or half or full days off. Athletes should be able to tolerate full school days without symptom recurrence before being released to a return-to-play protocol.

Each athlete’s return to play should be individualized. The return-to-play program should be started only when the athlete’s symptoms have resolved. The protocol should be gradual and incremental, with slow increases in physical activity, then sport-specific activities, then sport-specific physical contact. For example, a wrestler might, once symptom free, be told to start with light aerobic training and then, if remaining symptom free, proceed to weight training, then movements on the mat, then light training with other wrestlers, then practice matches, and finally full competition. Advancing from one level to the next should happen only if the athlete remains asymptomatic; with any recurring symptoms, the athlete should return to the preceding step. The progression may take as few as 4-5 days or as long as a couple of months. The final return-to-play determination should come from a "licensed health care provider trained in the evaluation and management of concussion," according to the statement.
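
The stepwise logic lends itself to a simple illustration. The sketch below encodes only the rule described above (advance a stage while asymptomatic, drop back a stage when symptoms recur); the stage labels are illustrative, not an official protocol.

# Sketch of a graded return-to-play progression. Stage labels are illustrative;
# the encoded rule is the one in the statement: advance only while asymptomatic,
# and return to the preceding step if symptoms recur.

STAGES = ["light aerobic training", "weight training", "sport-specific drills",
          "limited-contact practice", "full practice", "full competition"]

def next_stage(current, symptom_free):
    """Move forward one stage if symptom free, otherwise back one stage."""
    if symptom_free:
        return min(current + 1, len(STAGES) - 1)
    return max(current - 1, 0)

stage = 0
for symptom_free in (True, True, False, True):   # hypothetical daily reports
    stage = next_stage(stage, symptom_free)
    print(STAGES[stage])
# weight training, sport-specific drills, weight training, sport-specific drills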

Computerized neuropsychological testing has been shown to have moderate sensitivity in detecting postconcussive deficits. These deficits can persist even after symptom resolution, although it is unknown whether baseline and postinjury testing alter the short- and long-term risks associated with concussion. There is no consensus on specific recommendations for the use of computerized neuropsychological testing other than that it can be used as part of a "comprehensive concussion management strategy" and should not be used alone.

Most concussions do not need advanced neuroimaging. CT and MRI should be reserved for those patients whose symptoms worsen and/or whose ability to process information declines.

There are no current guidelines for disqualification of an athlete from sport. Some experts recommend considering disqualification if any of the following are present: "structural abnormality on neuroimaging, multiple lifetime concussions, persistent diminished academic or workplace performance, persistent postconcussive symptoms, prolonged recovery courses, and perceived reduced threshold of sustaining recurrent concussions."

The bottom line

Concussion is an increasingly diagnosed injury that has short-term consequences and can have long-term complications. The current literature supports individualized treatment of patients with concussion, and it supports the idea that no athlete should be returned to play the same day or while symptomatic.

Reference:

– American Medical Society for Sports Medicine position statement: concussion in sport, Br. J. Sports Med. 2013;47:15-26.

Dr. Skolnik is an associate director of the family medicine residency program at Abington (Pa.) Memorial Hospital. Dr. Chrusch is an assistant director in the family medicine residency program at the hospital.

Author and Disclosure Information

Publications
Legacy Keywords
Concussion, brain, the American Medical Society for Sports Medicine, AMSSM, The National Trainer’s Athletic Association, the American College of Sports Medicine
Sections
Author and Disclosure Information

Author and Disclosure Information

Concussion has become an increasing concern over the past several years because of evidence that it can have long-term deleterious effects on the human brain. In November 2012, the American Medical Society for Sports Medicine (AMSSM) published its position statement on concussion in sport. The National Trainer’s Athletic Association and the American College of Sports Medicine have endorsed the guideline.

©Vladimir Mucibabic/Fotolia.com

Diagnosis

The AMSSM defines a concussion as "a traumatically induced transient disturbance of brain function ... caused by a complex pathophysiological process." The signs and symptoms of concussion are varied and usually overlap with other vague symptoms of illness. By far the most common symptom is headache. Other common symptoms are nausea, dizziness, photophobia, balance problems, and feeling mentally foggy. In 80%-90% of cases of concussion in athletes, symptoms resolve within 7 days, although dizziness, migraine headaches, and cognitive symptoms such as fogginess and mental slowing all suggest a longer recovery period.

Several factors increase a patient’s risk for sustaining a concussion. The most common predictor is a previous concussion, which increases a person’s risk of having a concussion by 2-5.8 times the average risk. Females sustain more concussions, experience/report more symptoms, and have longer recovery times than do males who play the same sport. Younger athletes are more susceptible to prolonged recovery and catastrophic injury likely due to differences between developing and mature brains.

Mood disorders do not increase an athlete’s risk for concussion, but depression and anxiety may be increased as a consequence of concussion. Preexisting learning disorders may increase recovery times, but do not increase the risk for concussion. Preexisting migraines may increase the concussion risk, but the evidence is mixed whether this increases the recovery period. Developing migraine symptoms after a concussion can lengthen the recovery period.

Treatment

The primary treatment for concussion is physical and mental rest. Therapies should be tailored to specific symptoms although little evidence exists in support of medications to alleviate symptoms. Acetaminophen offers the lowest risk profile for headache. Nonsteroidal anti-inflammatory drugs as well as aspirin are usually avoided because of a theoretical risk of bleeding. There is no evidence to support the use of stimulants or sleep aids in the management of postconcussive symptoms nor should antidepressants be started in the first 6-12 weeks after a concussion. Changes in mood may occur. If symptoms persist for more than 6-12 weeks, then further workup for an underlying mood disorder should be considered. Balance disturbance that persists may benefit from vestibular therapy.

There are no current standardized return-to-school recommendations. Each student should be evaluated individually and a custom return-to-school plan should be devised. Students with symptoms that are exacerbated with cognitive stress may need limitations such as a decreased work load, extended testing time, half days off or full days off. Athletes should be able to tolerate full school days without symptom recurrence prior to being released for a return-to-play protocol.

Each athlete’s return to play should be individualized. The return-to-play program should be started only when the athlete’s symptoms are resolved. The return-to-play protocol should be gradual and incremental with slow increases in physical activity, then sport-specific activities, then sport-specific physical contact. For example, a wrestler might, once symptom free, be told to start with light aerobic training, then if remaining symptom free, proceed to weight training, then movements on the mat, then light training with other wrestlers, then practice matches, and finally full competition. Going from one level to the next in the return-to-play protocol should only happen if the athlete remains asymptomatic. With any reoccurring symptoms, the athlete should be returned to the preceding step. The return-to-play progression may be as few as 4-5 days and as long as a couple of months. The final return-to-play determination should come from a "licensed health care provider trained in the evaluation and management of concussion," according to the statement.

Computerized neuropsychiatric testing has been shown to have a moderate sensitivity in detecting postconcussive deficits. These deficits have been shown to persist even past symptom resolution although it is unknown if baseline and postinjury testing alter the short- and long-term risk associated with concussion. There is no consensus on specific recommendations for use of computerized, neuropsychiatric testing other than that it can be used as part of a "comprehensive concussion management strategy" and should not be used alone.

Most concussions do not need advanced neuroimaging. CT and MRI should be reserved for those patients whose symptoms worsen and/or whose ability to process information declines.

There are no current guidelines for disqualification of an athlete from sport. Some experts recommend considering disqualification if any of the following are present: "structural abnormality on neuroimaging, multiple lifetime concussions, persistent diminished academic or workplace performance, persistent postconcussive symptoms, prolonged recovery courses, and perceived reduced threshold of sustaining recurrent concussions."

 

 

The bottom line

Concussion is an increasingly diagnosed injury that has short-term consequences and can have long-term complications. The current literature supports individualized treatment of patients with concussion, and it supports the idea that no athlete should be returned to play the same day or while symptomatic.

Reference:

– American Medical Society for Sports Medicine position statement: concussion in sport, Br. J. Sports Med. 2013;47:15-26.

Dr. Skolnik is an associate director of the family medicine residency program at Abington (Pa.) Memorial Hospital. Dr. Chrusch is an assistant director in the family medicine residency program at the hospital.

Concussion has become an increasing concern over the past several years because of evidence that it can have long-term deleterious effects on the human brain. In November 2012, the American Medical Society for Sports Medicine (AMSSM) published its position statement on concussion in sport. The National Trainer’s Athletic Association and the American College of Sports Medicine have endorsed the guideline.

©Vladimir Mucibabic/Fotolia.com

Diagnosis

The AMSSM defines a concussion as "a traumatically induced transient disturbance of brain function ... caused by a complex pathophysiological process." The signs and symptoms of concussion are varied and usually overlap with other vague symptoms of illness. By far the most common symptom is headache. Other common symptoms are nausea, dizziness, photophobia, balance problems, and feeling mentally foggy. In 80%-90% of cases of concussion in athletes, symptoms resolve within 7 days, although dizziness, migraine headaches, and cognitive symptoms such as fogginess and mental slowing all suggest a longer recovery period.

Several factors increase a patient’s risk for sustaining a concussion. The most common predictor is a previous concussion, which increases a person’s risk of having a concussion by 2-5.8 times the average risk. Females sustain more concussions, experience/report more symptoms, and have longer recovery times than do males who play the same sport. Younger athletes are more susceptible to prolonged recovery and catastrophic injury likely due to differences between developing and mature brains.

Mood disorders do not increase an athlete’s risk for concussion, but depression and anxiety may be increased as a consequence of concussion. Preexisting learning disorders may increase recovery times, but do not increase the risk for concussion. Preexisting migraines may increase the concussion risk, but the evidence is mixed whether this increases the recovery period. Developing migraine symptoms after a concussion can lengthen the recovery period.

Treatment

The primary treatment for concussion is physical and mental rest. Therapies should be tailored to specific symptoms although little evidence exists in support of medications to alleviate symptoms. Acetaminophen offers the lowest risk profile for headache. Nonsteroidal anti-inflammatory drugs as well as aspirin are usually avoided because of a theoretical risk of bleeding. There is no evidence to support the use of stimulants or sleep aids in the management of postconcussive symptoms nor should antidepressants be started in the first 6-12 weeks after a concussion. Changes in mood may occur. If symptoms persist for more than 6-12 weeks, then further workup for an underlying mood disorder should be considered. Balance disturbance that persists may benefit from vestibular therapy.

There are no current standardized return-to-school recommendations. Each student should be evaluated individually and a custom return-to-school plan should be devised. Students with symptoms that are exacerbated with cognitive stress may need limitations such as a decreased work load, extended testing time, half days off or full days off. Athletes should be able to tolerate full school days without symptom recurrence prior to being released for a return-to-play protocol.

Each athlete’s return to play should be individualized. The return-to-play program should be started only when the athlete’s symptoms are resolved. The return-to-play protocol should be gradual and incremental with slow increases in physical activity, then sport-specific activities, then sport-specific physical contact. For example, a wrestler might, once symptom free, be told to start with light aerobic training, then if remaining symptom free, proceed to weight training, then movements on the mat, then light training with other wrestlers, then practice matches, and finally full competition. Going from one level to the next in the return-to-play protocol should only happen if the athlete remains asymptomatic. With any reoccurring symptoms, the athlete should be returned to the preceding step. The return-to-play progression may be as few as 4-5 days and as long as a couple of months. The final return-to-play determination should come from a "licensed health care provider trained in the evaluation and management of concussion," according to the statement.

Computerized neuropsychiatric testing has been shown to have a moderate sensitivity in detecting postconcussive deficits. These deficits have been shown to persist even past symptom resolution although it is unknown if baseline and postinjury testing alter the short- and long-term risk associated with concussion. There is no consensus on specific recommendations for use of computerized, neuropsychiatric testing other than that it can be used as part of a "comprehensive concussion management strategy" and should not be used alone.

Most concussions do not need advanced neuroimaging. CT and MRI should be reserved for those patients whose symptoms worsen and/or whose ability to process information declines.

There are no current guidelines for disqualification of an athlete from sport. Some experts recommend considering disqualification if any of the following are present: "structural abnormality on neuroimaging, multiple lifetime concussions, persistent diminished academic or workplace performance, persistent postconcussive symptoms, prolonged recovery courses, and perceived reduced threshold of sustaining recurrent concussions."

 

 

The bottom line

Concussion is an increasingly diagnosed injury that has short-term consequences and can have long-term complications. The current literature supports individualized treatment of patients with concussion, and it supports the idea that no athlete should be returned to play the same day or while symptomatic.

Reference:

– American Medical Society for Sports Medicine position statement: concussion in sport, Br. J. Sports Med. 2013;47:15-26.

Dr. Skolnik is an associate director of the family medicine residency program at Abington (Pa.) Memorial Hospital. Dr. Chrusch is an assistant director in the family medicine residency program at the hospital.


Clinical decision support in search of a smarter EHR

Article Type
Changed
Thu, 03/28/2019 - 16:10
Display Headline
Clinical decision support in search of a smarter EHR

We have written routinely about the positive impact of implementing an electronic health record, citing potential improvements in areas such as charge capture, data sharing, and population management. In an attempt to be balanced, we’ve also discussed the financial implications and the risks of decreased productivity and provider frustration, among others. One area that we have not focused on – but which has been attracting increasing attention – is the advantages and limitations of clinical decision support systems (CDSSs).

CDSSs are tools that add evidence-based clinical intelligence to patient care, assisting the provider as he or she treats patients and makes decisions about their management. A simple example would be an alert reminding a physician to provide an immunization to age-appropriate patients while seeing them in the office. Some EHRs ship with this capability built in and ready for deployment "right out of the box," while others completely lack real decision support. Most commonly, however, an EHR will have the capability to provide support but rely heavily on end-user customization prior to implementation. The question that many are beginning to ask is how using a clinical decision support system will ultimately affect patient outcomes.
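
As a concrete illustration of the immunization-reminder example above, the sketch below expresses such an alert as a simple rule. The age threshold, vaccine name, and record fields are hypothetical assumptions chosen for the example; no particular EHR’s logic or interface is implied.

```python
from datetime import date

# Hypothetical decision-support rule: automatically prompt for an
# age-appropriate immunization at an office visit. The 65-year cutoff,
# the pneumococcal example, and the record fields are illustrative
# assumptions, not clinical guidance or a real EHR interface.

def immunization_alerts(patient, visit_date):
    age_years = (visit_date - patient["birth_date"]).days // 365  # approximate age
    alerts = []
    if age_years >= 65 and "pneumococcal" not in patient["immunizations"]:
        # "Active" support: the reminder fires at the visit rather than
        # waiting for the provider to go looking for it.
        alerts.append("Reminder: patient may be due for pneumococcal vaccination.")
    return alerts


patient = {"birth_date": date(1945, 6, 1), "immunizations": {"influenza"}}
print(immunization_alerts(patient, visit_date=date(2013, 5, 1)))
```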

The promise and liability of clinical intelligence

There is no question that the medical community has accepted the concept of guideline-based workflows and the importance of evidence-based medicine at the point of care. More recently, though, several studies have begun to look at how CDSS tools that are packaged into EHRs have affected care delivery. Surprisingly, the results are inconsistent; while many studies have demonstrated the benefits of decision support, others have not shown impressive changes in patient outcomes.

Findings from a review of 100 studies comparing the outcomes of care provided with and without a CDSS showed that 64% of the studies demonstrated improvements in practitioner performance when a CDSS was used. While the specific systems varied in type and purpose, improvements in performance were "associated with CDSSs that automatically prompted users," compared with those "requiring users to activate the system" (JAMA 2005;293:1223-38).

Similar results were found in a multidisciplinary randomized trial in which investigators analyzed data from 21 centers and demonstrated that "computerized decision support increased concordance with guideline-recommended therapeutic decisions" for numerous treatment options and "reduced cases of both overtreatment and undertreatment" (BMJ 2009;338:b1440 [doi:10.1136/bmj.b1440]).

But not all of the studies have been so optimistic. Findings from a more recent study suggested little benefit to having a CDSS in place. Using survey data collected from over 250,000 ambulatory patient visits (sourced from the National Ambulatory Medical Care Survey), the investigators found that only 1 of 20 quality indicators proved better in the group of patients treated using EHRs with a CDSS in place, compared with those treated without decision support. The investigators offered little explanation for these unexpected results, but they did cite some limitations in their methods and theorized that the value of current support systems may be minimal in the absence of standardization and better quality control (Arch. Intern. Med. 2011;171:897-903).

Searching for help

To meet certification for meaningful use, electronic records are required to have some minimal CDSS functionality available from day 1. But in our experience with most products, the depth and breadth of this built-in support are sorely lacking. For some practitioners who simply view the EHR as a more complicated way of documenting progress notes and telephone calls, this might not seem like a big deal. After all, the world of paper offered no clinical intelligence to speak of. But for others hoping to realize the true promise of health information technology, high-quality decision support may be essential.

It is again important to point out that the usefulness of a clinical decision support system is typically limited by the EHR itself, so it’s critical to start investigating CDSS capability when first selecting an EHR. We would encourage everyone to request a demonstration of what – if any – decision support is present in the EHRs they are considering, and to ask plenty of questions about how the information is accessed and kept current. Does the product have a standard toolset based on outdated practice suggestions, or is it updated as new guidelines are published and research is released? Is the information customizable to meet the needs of the implementation, or is it a "one-size-fits-all" solution? Finally, is the information passive or active? In other words, does the provider need to go searching for the support, or is the software smart enough to offer it when appropriate in the form of an "alert" or "pop-up"?

 

 

A tale of art and science

When chess champion Garry Kasparov defeated IBM’s Deep Blue supercomputer back in 1996, people around the globe shared in a warm feeling of vindication. More than a simple win, Kasparov’s victory proved that humans still had the advantage over machines. In the same way, it is possible to find the data questioning the value of CDSSs oddly reassuring. But the irony of history reminds us not to get comfortable in our assertions; just 1 year later, after extensive enhancements, Deep Blue returned to defeat Kasparov in a devastating rematch. We suggest viewing this irony as instructive: if one accepts – as we do unequivocally – the value of evidence-based medicine, one must also accept that the right decision support delivered in a timely fashion will ultimately lead to better care and improved clinical outcomes.

Dr. Skolnik is associate director of the family medicine residency program at Abington (Pa.) Memorial Hospital and professor of family and community medicine at Temple University, Philadelphia. He is also editor in chief of Redi-Reference, a software company that creates medical handheld references. Dr. Notte practices family medicine and health care informatics for Abington Memorial Hospital. They are partners in EHR Practice Consultants, helping practices move to EHR systems. Contact them at [email protected].


Guideline on diagnosis and treatment of non-neurogenic overactive bladder

Article Type
Changed
Fri, 01/18/2019 - 12:25
Display Headline
Guideline on diagnosis and treatment of non-neurogenic overactive bladder

Non-neurogenic overactive bladder (OAB) is a common problem characterized by urgency, frequency, and/or nocturia, with or without incontinence. The symptoms are not associated with urinary tract infection but do have an adverse effect on an individual’s quality of life (QOL). Severity of symptoms often increases with age. OAB has a prevalence of 7%-27% in men and 9%-43% in women. Because OAB is predominantly a diagnosis of exclusion, it is imperative that other diagnoses, including UTI, polydipsia, diabetes insipidus, and bladder pain syndrome, be ruled out.

Neil Skolnik and Phuong Tien

Diagnosis: Initial evaluation includes a detailed history and physical exam with urinalysis. Documentation includes duration of symptoms, impact on QOL, amount of fluid intake and output, and comorbid conditions that may affect bladder function such as neurologic diseases, uncontrolled diabetes mellitus, and prior pelvic surgeries or radiation.

Physical examination includes abdominal exam; rectal/genitourinary exam with assessment of perineal sensation/rectal sphincter tone to rule out such pelvic floor abnormalities as uterine prolapse or constipation; prostate exam in men; and assessment for atrophic vaginitis. Urine culture is needed if urinalysis shows evidence of infection. Hematuria requires urologic work-up.

A postvoid residual (PVR) is not necessary for patients with uncomplicated OAB symptoms who are receiving first-line behavioral therapy or antimuscarinic medications. PVR can be considered in patients with complicated OAB, such as those with a history of or risk factors for urinary retention, incontinence, prostatic surgery, or neurologic disorders. PVR can be measured with bladder ultrasound or, if ultrasound is unavailable, with postvoid urethral catheterization. Caution should be exercised if antimuscarinic therapy is used with a PVR greater than 150-250 cc. Bladder diaries kept for 3-7 days can be useful in defining the degree of symptoms, establishing a baseline, and evaluating the effect of treatment. Urodynamics, cystoscopy, and diagnostic renal and bladder ultrasound should not be used in the initial work-up, but they may be considered in the setting of complicated OAB or OAB not responsive to treatment.

Treatment: Since OAB affects QOL, treatment options should weigh the benefits and adverse events. Most treatments will improve symptoms but do not entirely eliminate them. Some patients and caregivers may choose not to be treated.

First-line treatment is behavioral therapy, which includes fluid management, bladder training, delayed voiding, prompted voiding, and pelvic floor muscle exercises to control symptoms of urge incontinence. Randomized trials have shown behavioral therapy to be as effective as or more effective than antimuscarinic medications in reducing symptoms, without the risk of adverse effects.

Second-line treatment includes oral or transdermal antimuscarinics, which can often be combined with behavioral therapies. Antimuscarinics include darifenacin, fesoterodine, oxybutynin, solifenacin, tolterodine, and trospium, all of which have similar efficacy. In general, patients with more severe symptoms have a greater degree of response to medication, regardless of which agent is chosen. Adverse effects of antimuscarinic medication include dry mouth (20%-40%), blurred vision, impaired cognitive function, constipation, and urinary retention.

While efficacy is similar for all the antimuscarinic medications, the rates of side effects differ. The rate of dry mouth with oxybutynin (61%) was statistically significantly higher than that with tolterodine (24%). The rate of constipation with darifenacin (17%) was higher than that with many of the other agents (7%-9%), and the rate of constipation with oxybutynin (12%) was significantly greater than that with tolterodine (5%). The guidelines panel concluded that oxybutynin causes more dry mouth and constipation than tolterodine. Extended-release formulations may be preferable when available, since they have lower rates of dry mouth. Transdermal oxybutynin is also available as a patch or gel.

If a patient does not tolerate or has an inadequate response to one antimuscarinic agent, another drug in the class may be tried. Antimuscarinics should not be used in patients with narrow-angle glaucoma and should be used cautiously in patients with impaired gastric emptying or a history of urinary retention. They are contraindicated in those taking oral potassium supplements, as absorption may be impaired in the setting of delayed gastric emptying. Caution should be taken in patients being treated with other medicines that have anticholinergic effects, including tricyclic antidepressants and drugs for parkinsonism or Alzheimer’s dementia. Antimuscarinics should also be used with caution in the frail elderly, in whom commonly reported side effects, as well as less commonly reported side effects such as memory difficulty, are likely to be more frequent and severe.
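
The cautions in the preceding paragraph amount to a pre-prescribing checklist of the sort a decision-support rule might surface. The sketch below is a hypothetical illustration: the field names are assumptions, the 250-cc cutoff is taken from the PVR range quoted earlier in this summary, and it is not a validated clinical tool.

```python
# Hypothetical pre-prescribing checklist for an antimuscarinic agent,
# paraphrasing the cautions summarized above. Field names and the PVR
# threshold are illustrative assumptions, not a validated clinical tool.

def antimuscarinic_cautions(pt):
    flags = []
    if pt.get("narrow_angle_glaucoma"):
        flags.append("Do not use: narrow-angle glaucoma.")
    if pt.get("impaired_gastric_emptying") or pt.get("urinary_retention_history"):
        flags.append("Use cautiously: impaired gastric emptying or prior urinary retention.")
    if pt.get("oral_potassium_supplement"):
        flags.append("Contraindicated with oral potassium supplements.")
    if pt.get("other_anticholinergics"):
        flags.append("Caution: additive anticholinergic burden (e.g., tricyclics, "
                     "drugs for parkinsonism or Alzheimer's dementia).")
    if pt.get("frail_elderly"):
        flags.append("Caution in the frail elderly: side effects may be more frequent and severe.")
    if pt.get("postvoid_residual_cc", 0) > 250:
        flags.append("Elevated postvoid residual (guideline cites caution above 150-250 cc).")
    return flags


print(antimuscarinic_cautions({"frail_elderly": True, "postvoid_residual_cc": 300}))
```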

If patients are refractory to behavioral and antimuscarinic therapy and would like their symptoms addressed further, referral to a specialist may be indicated. Treatment options that may be provided by the specialist at that point include sacral neuromodulation, peripheral tibial nerve stimulation, and intradetrusor onabotulinumtoxinA injection. Surgical interventions such as augmentation cystoplasty or urinary diversion are used only in the treatment of neurogenic OAB and are therefore rare.

 

 

The bottom line: Non-neurogenic OAB usually presents with urinary urgency, frequency, and nocturia, with or without incontinence. Initial work-up includes a detailed history, physical exam, and urinalysis. Treatment options are based on weighing benefit against risk, and some patients and their caregivers may opt for no treatment. Behavioral therapy should be offered to all patients, since it has essentially no adverse effects. Antimuscarinics are the only FDA-approved medications, and all oral antimuscarinics are equally effective, though they vary in side effects. Close follow-up is crucial to assess side effects and efficacy.

Reference

Diagnosis and Treatment of Overactive Bladder (Non-Neurogenic) in Adults: American Urological Association/Society of Urodynamics, Female Pelvic Medicine & Urogenital Reconstruction Guideline (J. Urol. 2012 Dec;188:2455-63 [doi:10.1016/j.juro.2012.09.079]).

Dr. Skolnik is an associate director of the family medicine residency program at Abington (Pa.) Memorial Hospital. Dr. Tien is a third-year resident in the Family Medicine Residency Program at Abington Memorial Hospital, and a new mom to her wonderful son, Andrew.


EHRs, Medicine, and Humanism, Part II

Article Type
Changed
Thu, 03/28/2019 - 16:11
Display Headline
EHRs, Medicine, and Humanism, Part II

"We cannot get to where we need to go by remaining where we are."
–Adapted from Max de Pree
Leadership Is an Art

In our last column, we discussed an article published in JAMA that featured a crayon drawing given to a doctor by the 7-year-old girl who had drawn it (JAMA 2012;307:2497-8). The drawing showed the girl sitting on the exam table, with her sister and mother in nearby chairs, while the doctor sat hunched over a computer with his back to the patient and her family. The message of the drawing was clear: the way we are viewed by our patients is changing. What is equally remarkable, though, when you view the picture from the girl’s perspective, is that there was nothing sad about the drawing. The colors were vivid, and all the figures in the room were smiling. Why would there be anything sad about this encounter? This is the world the 7-year-old knows; it is her reality, a world in which attention is regularly divided and electronic devices are how information is stored and through which communication occurs. This fact is difficult for those of us who are a bit older to integrate and understand, but it is simply an ordinary part of life, like milk in a jar or plastic lids, for those young enough to know no other world. Nonetheless, the concern remains that we need to be careful that the patient’s needs do not become buried underneath the clicks and hums of the machine.

There are many physicians who are sad about the demise of the paper chart. We hear from those people daily. If we acknowledge the complexity of our needs, then we see that the old paper-based chart system, while easier to use than an electronic chart, simply does not allow us to record information in a form that is retrievable for the evolved purposes for which we are now keeping records. Population management is not just a buzzword; it is the direction toward which our care of patients is evolving if we are to truly make an impact on improving their health. So EHRs are a necessary component of this evolution. Our challenge, as physicians who are now beginning to care for populations as well as individual patients, is how to balance and integrate the immediate needs that occur in the exam room – the need to provide the proper diagnosis and treatment, to record data, and to truly listen to the patient, to make sure that the patient feels heard. A colleague of ours who has thought a lot about electronic records, Dr. Keith Sweigard, believes that the EHR will eventually be a tool that facilitates medical humanism. To use his words:

"Technology will paradoxically foster humanism in medicine. As we implement [EHRs] with standardized templates, care pathways, and order sets, patients will more likely receive the same work-up and evidence based interventions from any care provider. In that scenario, what will become the distinguishing factor that a patient selects one physician over another? Access will certainly be a factor, but ongoing relationships will depend on connecting with the patient on a humanistic level – warmth, sensitivity, compassion, and empathy. In other words, the dictum of patients choosing their physician based on access, affability and then ability – in that order – will be more important than ever!"

The literature supports the idea that how well a doctor communicates influences patients’ satisfaction, sense of well-being, overall health, and risk of malpractice suits, and may even influence health care costs. When we are ill, we yearn for two things – to be well, and for someone to understand our suffering. Science and technology improve our chances of being well, but they do not address our need to be understood. The doctor is in a unique position to provide for both aspects of what the ill person needs: to help alleviate suffering and to understand the patient’s unique human position in the world, as all suffering is unique. In order to fulfill this role, there has to be ongoing reinforcement of the “centrality of relationships” in medical care (Ann. Intern. Med. 2008;149:720-4).

We agree with Dr. Sweigard’s assessment that, as the protocols and decision support become easier to use and as the quality tools that EHRs provide become more sophisticated, what will distinguish us from one another, and what payers will increasingly support, is our attention to the patient and his or her needs as a person. That attention to the person will be measured through patient satisfaction, and that quality measure will be reimbursed. It will not be difficult to figure out which medication to use next for a person’s hypertension or elevated glucose; the decision support will be there, integrated and easy to use. Our smile, and perhaps our attentiveness to the small tear welling in the corner of a patient’s eye, will again distinguish us and allow us to connect as human beings. In a future column on electronic health records and humanism, we will discuss strategies to help us use the electronic record to accomplish these goals.

 

 

Dr. Skolnik is associate director of the family medicine residency program at Abington (Pa.) Memorial Hospital and professor of family and community medicine at Temple University, Philadelphia. He is also editor in chief of Redi-Reference, a software company that creates medical handheld references. Dr. Notte practices family medicine and health care informatics for Abington Memorial Hospital. They are partners in EHR Practice Consultants, helping practices move to EHR systems. Contact them at [email protected].

Author and Disclosure Information

Publications
Topics
Sections
Author and Disclosure Information

Author and Disclosure Information

"We cannot get to where we need to go by remaining where we are."
–Adopted from Max de Pree
Leadership Is an Art

In our last column, we discussed an article published in JAMA that showed a crayon drawing that was given a doctor by the 7-year-old girl who had drawn the picture (JAMA 2012;307:2497-8). The drawing showed the girl sitting on the exam table, with her sister and mother in nearby chairs, while the doctor was sitting hunched over a computer with his back to the patient and her family. The message of the drawing was clear, that the way we are viewed by our patients is changing. What is equally remarkable though, when you view the picture from the girl’s perspective, is that there was nothing sad about the drawing. The colors where vivid and all the figures in the room were smiling. Why would there be anything sad about this encounter? This is the world that the 7-year-old knows, it’s her reality, a world in which attention is regularly divided, and electronic devices are how information is stored and through which communication occurs. This fact is difficult to integrate and understand for those of us who are a bit older but is simply an ordinary part of life, like milk in a jar or plastic lids for those young enough to know no other world. Nonetheless, the concern remains that we need to be careful that the patient’s needs do not become buried underneath the clicks and hums of the machine.

There are many physicians who are sad about the demise of the paper chart. We hear from those people daily. If we acknowledge the complexity of our needs, then we see that the old paper-based chart system, while easier to use than an electronic chart, simply does not allow us to record information in a form that is retrievable for the evolved purposes for which we are now keeping records. Population management in not just a buzz word, it is the area toward which our care of patients is evolving if we are to truly make an impact on improving their health. So EHRs are a necessary component of this evolution. Our challenge, as physicians who are now beginning to care for populations as well as individual patients, is how to balance and integrate the immediate needs that occur in the exam room – the need to provide the proper diagnosis and treatment, to record data, and to truly listen to the patient. To make sure that the patient feels heard. A colleague of ours who has thought a lot about electronic records, Dr. Keith Sweigard, feels that the EHR will eventually be a tool that will facilitate medical humanism. To use his words:

"Technology will paradoxically foster humanism in medicine. As we implement [EHRs] with standardized templates, care pathways, and order sets, patients will more likely receive the same work-up and evidence based interventions from any care provider. In that scenario, what will become the distinguishing factor that a patient selects one physician over another? Access will certainly be a factor, but ongoing relationships will depend on connecting with the patient on a humanistic level – warmth, sensitivity, compassion, and empathy. In other words, the dictum of patients choosing their physician based on access, affability and then ability – in that order – will be more important than ever!"

The literature supports that how well a doctor communicates influences patients’ satisfaction, sense of well-being, overall health, malpractice suits, and may even influence health care costs. When we are ill, we yearn for two things – to be well, and for someone to understand our suffering. Science and technology improves our chances of being well, but it does not address our need to be understood. The doctor is in a unique position to provide for both aspects of what the ill person needs: to help alleviate their suffering and to understand their unique human position in the world, as all suffering is unique. In order to fulfill this role, there has to be ongoing reinforcement of the “centrality of relationships” in medical care (Ann. Intern. Med. 2008;149:720-4).

We agree with Dr. Sweigard’s assessment that, as the protocols and decision support become easier to use and as the quality tools that EHRs will provide become more sophisticated, what will distinguish us from one another and what payers will increasingly support, is our attention to the patient and his or her needs as a person. That attention to the person will be measured through patient satisfaction, and that quality measure will be reimbursed. It will not be difficult to figure out what medication to use next for this person’s hypertension or elevated glucose. The decision support will be there, integrated and easy to use, and our smile and perhaps our attentiveness to the small tear welling in the corner of a patient’s eye, will again distinguish us and allow us to connect as human beings. In a future column on electronic health records and humanism, we will discuss strategies to help us to use the electronic record to accomplish these goals.

 

 

Dr. Skolnik is associate director of the family medicine residency program at Abington (Pa.) Memorial Hospital and professor of family and community medicine at Temple University, Philadelphia. He is also editor in chief of Redi-Reference, a software company that creates medical handheld references. Dr. Notte practices family medicine and health care informatics for Abington Memorial Hospital. They are partners in EHR Practice Consultants, helping practices move to EHR systems. Contact them at [email protected].

"We cannot get to where we need to go by remaining where we are."
–Adopted from Max de Pree
Leadership Is an Art

In our last column, we discussed an article published in JAMA that showed a crayon drawing that was given a doctor by the 7-year-old girl who had drawn the picture (JAMA 2012;307:2497-8). The drawing showed the girl sitting on the exam table, with her sister and mother in nearby chairs, while the doctor was sitting hunched over a computer with his back to the patient and her family. The message of the drawing was clear, that the way we are viewed by our patients is changing. What is equally remarkable though, when you view the picture from the girl’s perspective, is that there was nothing sad about the drawing. The colors where vivid and all the figures in the room were smiling. Why would there be anything sad about this encounter? This is the world that the 7-year-old knows, it’s her reality, a world in which attention is regularly divided, and electronic devices are how information is stored and through which communication occurs. This fact is difficult to integrate and understand for those of us who are a bit older but is simply an ordinary part of life, like milk in a jar or plastic lids for those young enough to know no other world. Nonetheless, the concern remains that we need to be careful that the patient’s needs do not become buried underneath the clicks and hums of the machine.

There are many physicians who are sad about the demise of the paper chart. We hear from those people daily. If we acknowledge the complexity of our needs, then we see that the old paper-based chart system, while easier to use than an electronic chart, simply does not allow us to record information in a form that is retrievable for the evolved purposes for which we are now keeping records. Population management in not just a buzz word, it is the area toward which our care of patients is evolving if we are to truly make an impact on improving their health. So EHRs are a necessary component of this evolution. Our challenge, as physicians who are now beginning to care for populations as well as individual patients, is how to balance and integrate the immediate needs that occur in the exam room – the need to provide the proper diagnosis and treatment, to record data, and to truly listen to the patient. To make sure that the patient feels heard. A colleague of ours who has thought a lot about electronic records, Dr. Keith Sweigard, feels that the EHR will eventually be a tool that will facilitate medical humanism. To use his words:

"Technology will paradoxically foster humanism in medicine. As we implement [EHRs] with standardized templates, care pathways, and order sets, patients will more likely receive the same work-up and evidence based interventions from any care provider. In that scenario, what will become the distinguishing factor that a patient selects one physician over another? Access will certainly be a factor, but ongoing relationships will depend on connecting with the patient on a humanistic level – warmth, sensitivity, compassion, and empathy. In other words, the dictum of patients choosing their physician based on access, affability and then ability – in that order – will be more important than ever!"

The literature supports that how well a doctor communicates influences patients’ satisfaction, sense of well-being, overall health, malpractice suits, and may even influence health care costs. When we are ill, we yearn for two things – to be well, and for someone to understand our suffering. Science and technology improves our chances of being well, but it does not address our need to be understood. The doctor is in a unique position to provide for both aspects of what the ill person needs: to help alleviate their suffering and to understand their unique human position in the world, as all suffering is unique. In order to fulfill this role, there has to be ongoing reinforcement of the “centrality of relationships” in medical care (Ann. Intern. Med. 2008;149:720-4).

We agree with Dr. Sweigard's assessment that, as protocols and decision support become easier to use and as the quality tools that EHRs provide become more sophisticated, what will distinguish us from one another – and what payers will increasingly support – is our attention to the patient and his or her needs as a person. That attention will be measured through patient satisfaction, and that quality measure will be reimbursed. It will not be difficult to figure out which medication to use next for a patient's hypertension or elevated glucose; the decision support will be there, integrated and easy to use. Our smile, and perhaps our attentiveness to the small tear welling in the corner of a patient's eye, will again distinguish us and allow us to connect as human beings. In a future column on electronic health records and humanism, we will discuss strategies for using the electronic record to accomplish these goals.

Dr. Skolnik is associate director of the family medicine residency program at Abington (Pa.) Memorial Hospital and professor of family and community medicine at Temple University, Philadelphia. He is also editor in chief of Redi-Reference, a software company that creates medical handheld references. Dr. Notte practices family medicine and health care informatics for Abington Memorial Hospital. They are partners in EHR Practice Consultants, helping practices move to EHR systems. Contact them at [email protected].

Nonalcoholic Fatty Liver Disease

The American Association for the Study of Liver Diseases, the American College of Gastroenterology, and the American Gastroenterological Association together released new guidelines in the spring of 2012 for the diagnosis and management of nonalcoholic fatty liver disease (NAFLD).

NAFLD is defined as histologic or imaging evidence of hepatic steatosis without a secondary cause of hepatic fat accumulation, such as significant alcohol consumption. It is further categorized as nonalcoholic fatty liver (NAFL), in which hepatic steatosis is present without hepatocellular injury, and nonalcoholic steatohepatitis (NASH), in which hepatic steatosis is accompanied by inflammation and hepatocyte injury, with or without fibrosis.

The reported prevalence of NAFLD varies among studies, but available data suggest a prevalence of approximately 20% in the general population, with a prevalence of NASH of approximately 3%-5%.

Obesity is an important risk factor for NAFLD, with studies showing that up to 90% of patients undergoing bariatric surgery have NAFLD, and 70% of patients with type 2 diabetes have NAFLD. Other risk factors include increasing age, male gender, hypothyroidism, polycystic ovary syndrome, and sleep apnea.

In general, patients with NAFL with steatosis and no inflammation have slow progression of their liver disease, and their main health risks are associated with the metabolic syndrome and obesity that led to the NAFLD. Patients with NASH can have a more severe course, with histological progression to cirrhosis.

Screening

NAFLD screening is not recommended for either average-risk or high-risk patients, due to insufficient evidence of benefit. Screening family members of patients with NAFLD also is not recommended. When patients have abnormal liver enzymes or are symptomatic, evaluation should be conducted for suspected NAFLD.

Evaluation

Patients should be evaluated for other causes of hepatic steatosis, including excessive alcohol use, hepatitis C, medications, and other liver diseases. It is not unusual for patients with NAFLD to have an elevated ferritin level; if one is found, genetic testing for hemochromatosis should be performed, and if the hemochromatosis gene is present, liver biopsy should be considered.

Liver biopsy is the only reliable method to fully define steatohepatitis and fibrosis, but its use should be reserved for those patients most likely to benefit from the information obtained. Biopsy should be considered in patients with NAFLD who are at increased risk of having steatohepatitis and advanced fibrosis, including those with the metabolic syndrome, elevated NAFLD fibrosis scores, and elevated liver function tests.

Liver biopsy is not recommended for patients who are asymptomatic and have normal liver enzymes but in whom hepatic steatosis is discovered incidentally on imaging. Biopsy should be considered, however, when competing etiologies for hepatic steatosis or a coexisting chronic liver disease cannot be excluded without it.

Management

Weight loss reduces hepatic steatosis. A loss of at least 3%-5% of body weight is needed to improve steatosis, and a greater loss (up to 10%) may be needed to improve necroinflammation. Weight loss can be achieved through caloric restriction, increased physical activity, or both. There is not enough evidence to support bariatric surgery as a specific treatment for NASH, but it is not contraindicated in otherwise-eligible obese patients who have NAFLD or NASH.

Vitamin E (800 IU/day) can improve liver histology in patients with NASH and should be considered first-line therapy in nondiabetic patients; there are not enough data to support its use in diabetic patients. The potential benefit of vitamin E must be weighed against published evidence suggesting a possible increase in all-cause mortality with high-dose vitamin E and an increased risk of prostate cancer in men.

Metformin is not recommended as a treatment for liver disease in patients with NASH, because it has no significant effect on liver histology. Ursodeoxycholic acid also is not recommended. There is limited evidence to support omega-3 fatty acids as a specific treatment for NAFLD, but they can be used as first-line therapy for hypertriglyceridemia in patients with NAFLD. Pioglitazone may help, but its long-term safety and efficacy in patients with NASH have not been established.

Statins can be used to treat dyslipidemia in patients with NAFLD and NASH without increased concern for liver toxicity. Although some early studies suggest that statins may benefit the liver disease itself, the evidence is far from conclusive and does not justify prescribing statins specifically to treat NAFLD or NASH.

Finally, heavy alcohol use, defined as more than 14 drinks per week in men or 7 drinks per week in women, should be avoided in patients with NAFLD.

The Bottom Line

Routine screening for NAFLD is not recommended. Screen for excessive alcohol consumption. Weight loss is first-line treatment for all patients, and vitamin E is first-line pharmacotherapy for nondiabetic patients with NASH. Metformin and ursodeoxycholic acid are not recommended, and there is not enough evidence to support pioglitazone. Statins can be used to treat dyslipidemia in patients with NAFLD and NASH but should not be used primarily as NASH therapy. In children, weight loss and lifestyle change are first-line therapies; perform a liver biopsy in children when the diagnosis is unclear and before starting pharmacologic NASH therapy.

Reference

• Chalasani N, Younossi Z, Lavine J, et al. The diagnosis and management of non-alcoholic fatty liver disease: practice guideline by the American Association for the Study of Liver Diseases, American College of Gastroenterology, and the American Gastroenterological Association. Hepatology 2012;55(6):2005-23.

Dr. Charles is a resident in the family medicine residency program at Abington (Pa.) Memorial Hospital. Dr. Skolnik is an associate director of the family medicine residency program at Abington Memorial Hospital.
