Make the Diagnosis - March 2015

Diagnosis: Bleomycin-induced flagellate hyperpigmentation

Bleomycin is an antineoplastic agent. Most reported side effects involve the lung and skin because these organs have lower concentrations of bleomycin hydrolase, the enzyme that detoxifies the drug. Other dermatologic side effects of bleomycin include Raynaud's phenomenon, hyperkeratosis, nail bed changes, and peeling of the skin on the palmar and plantar surfaces.

Flagellate erythema has been described in association with bleomycin treatment, and it has also been reported with peplomycin (a bleomycin derivative), docetaxel, dermatomyositis, adult-onset Still's disease, and shiitake mushroom dermatitis. The rash may appear after administration of bleomycin by any route and has been shown to be dose independent. Onset occurs anywhere from 1 day to several months after exposure to the offending agent. Patients typically present with a history of itching followed by the appearance of red linear streaks, most commonly on the trunk. Over time, the erythema resolves into postinflammatory hyperpigmentation.

The exact cause is unknown, but scratching is thought to cause local vasodilation, allowing bleomycin to accumulate in the skin. The diagnosis is made from the characteristic appearance of the rash on physical examination together with a history of chemotherapy; a skin biopsy may be performed. In most cases, the rash resolves spontaneously within 6-8 months. Severe rashes may warrant discontinuation of bleomycin. Using antihistamines and topical and oral corticosteroids in conjunction with bleomycin may reduce the incidence of flagellate erythema/hyperpigmentation.

Case

This case and photo were submitted by Dr. Damon McClain, a dermatologist in Camp Lejeune, N.C., and by Parteek Singla. A 26-year-old man presented with asymptomatic hyperpigmented streaks on his back. He was receiving chemotherapy for testicular cancer. He received dexamethasone with his first chemotherapy treatment and had no cutaneous eruption at that time. He was not given a steroid with his second dose of chemotherapy, and the lesions began an hour after that dose. Initially, the lesions were red and pruritic; they then turned brown. The patient had no nailfold changes on examination. He had no other medical problems and reported that he had not eaten any unusual foods.

New approach to gene therapy for hemophilia

Image: Irish setter, a breed of dog that can develop hemophilia B

Gene therapy that produces a potent clotting factor holds promise for treating hemophilia B, researchers have reported in Blood.

The group administered adeno-associated viral-8 (AAV-8) vectors encoding the clotting factor FIX-Padua to mice and dogs with hemophilia B.

The treatment appeared to be safe and effective, did not prompt the formation of inhibitory antibodies, and even eradicated pre-existing inhibitors in one of the dogs.

“Our findings may provide a new approach to gene therapy for hemophilia and perhaps other genetic diseases that have similar complications from inhibiting antibodies,” said study author Valder R. Arruda, MD, PhD, of The Children’s Hospital of Philadelphia in Pennsylvania.

For years, researchers have investigated gene therapy strategies that deliver genes encoding clotting factor to patients. But this approach has been hindered by the body’s immune response against the viral vectors.

Those responses, which defeated initial benefits seen in experimental human gene therapy, were dose-dependent. So Dr Arruda and his colleagues decided to test gene therapy that used lower doses of AAV-8 vector to produce FIX-Padua.

Dr Arruda was part of a team that discovered FIX-Padua in a young Italian man with thrombophilia. A mutation in the factor IX gene produced the clotting factor, which was named after the patient’s city of residence.

FIX-Padua is hyperfunctional, clotting blood 8 to 12 times more strongly than wild-type factor IX. Therefore, in the current study, the researchers needed to strike a balance—to relieve severe hemophilia in dogs by using a dose strong enough to allow clotting but not enough to cause thrombosis or stimulate immune reactions.

“Our ultimate goal is to translate this approach to humans by adapting this variant protein found in one patient to benefit other patients with the opposite disease,” Dr Arruda said.

He and his colleagues tested the safety of an AAV-8 vector encoding canine FIX-Padua (AAV-cFIX-Padua) in 3 dogs, all with naturally occurring types of hemophilia B.

Two of the dogs had never been exposed to clotting factor and had never developed antibodies. Injections of AAV-cFIX-Padua changed their hemophilia from severe to mild. They had no bleeding episodes for up to 2 years and did not develop inhibitory antibodies or thrombosis.

The third dog, Wiley, already had inhibitory antibodies before receiving AAV-cFIX-Padua. Like his peers, Wiley responded to the treatment, and that response persisted for 3 years. AAV-cFIX-Padua also eradicated his inhibitors, which marks the first time this occurred in an animal model with pre-existing antibodies.

Another set of experiments in mice suggested the gene therapy is safe and effective. Mice that received AAV encoding human FIX-Padua (AAV-hFIX-Padua) did not develop antibodies.

And the researchers found that AAV-hFIX-Padua was comparable to wild-type human FIX with regard to long-term expression and toxicity.

Dr Arruda noted that larger studies are needed in dogs with pre-existing inhibitors to confirm these encouraging early results.

In the meantime, at least one clinical trial is making use of FIX-Padua in adult patients with hemophilia B—at the University of North Carolina at Chapel Hill, under Paul Monahan, MD. Leaders of a separate trial being planned at Spark Therapeutics in Philadelphia, under Katherine A. High, MD, are contemplating using FIX-Padua as well.

Colorectal Cancer: Screening and Surveillance Recommendations

From the Boston University School of Medicine, Boston, MA.

 

Abstract

  • Objective: To review recommendations for colorectal cancer (CRC) screening.
  • Methods: Review of the literature.
  • Results: In the United States, CRC is the third most commonly diagnosed cancer and the third leading cause of cancer death. CRC screening can reduce mortality through the detection of early-stage disease and the detection and removal of adenomatous polyps. There are several modalities for CRC screening, with current technology falling into 2 general categories: stool tests, which include tests for occult blood or exfoliated DNA; and structural exams, which include flexible sigmoidoscopy, colonoscopy, double-contrast barium enema, and CT colonography. The preferred CRC prevention test for average-risk individuals is colonoscopy starting at age 50 with subsequent examinations every 10 years. Patients unwilling to undergo screening colonoscopy may be offered flexible sigmoidoscopy, CT colonography, or fecal immunohistochemical test. Surveillance examinations should occur based on polyp findings on index colonoscopy. There is no recommendation to continue screening after age 75, though physicians can make a determination based on a patient’s health and risk/benefit profile. Current guidelines recommend against offering screening to patients over age 85.
  • Conclusion: Increasing access to and utilization of CRC screening tests is likely to lead to improvements in mortality reduction, as only about half of people aged 50 or older report having received CRC testing consistent with current guidelines.

In the United States, colorectal cancer (CRC) is the third most commonly diagnosed cancer and the third leading cause of cancer death in both men and women [1]. In 2014, an estimated 136,830 people were diagnosed with CRC and about 50,310 people died of the disease [2]. Colorectal cancer usually develops slowly over a period of 10 to 15 years. The tumor typically begins as a noncancerous polyp, classically an adenomatous polyp or adenoma, though fewer than 10% of adenomas will progress to cancer [3]. Adenomas are common; an estimated one-third to one-half of all individuals will eventually develop 1 or more adenomas [4,5]. In the United States, the lifetime risk of being diagnosed with CRC is approximately 5% for both men and women [6]. Incidence rates for CRC increase with age, with an incidence rate more than 15 times higher in adults aged 50 years and older compared with those aged 20 to 49 years [7].

Certain demographic subgroups have been shown to be at higher risk. Overall, CRC incidence and mortality rates are about 35% to 40% higher in men than in women. The reasons for this are not completely understood but likely reflect complex interactions between gender-related differences in exposure to hormones and risk factors [8]. CRC incidence and mortality rates are highest in African-American men and women; incidence rates are 20% higher and mortality rates are about 45% higher than those in whites. Prior to 1989, incidence rates were predominantly higher in white men than in African-American men and were similar for women of both races. Since that time, although incidence rates have declined overall [9], they have been higher for African Americans than for whites among both men and women. This crossover likely reflects a combination of greater access to and utilization of recommended screening tests among whites (resulting in detection and removal of precancerous polyps), as well as racial differences in trends for CRC risk factors [10].

CRC screening can reduce mortality through the detection of early-stage disease and the detection and removal of adenomatous polyps [11]. Increasing access to and utilization of CRC screening tests is likely to lead to improvements in mortality reduction, as only about half of people aged 50 or older report having received CRC testing consistent with current guidelines [1].

Case Study

Initial Presentation

A 55-year-old white male presents for a routine visit and asks about colon cancer screening. His father was diagnosed with colon cancer at the age of 78. Overall, he feels well and does not have any particular complaints. His bowel habits are normal and he denies melena and hematochezia. His past medical history is significant for diabetes, hypertension, and obesity. He was a previous smoker and has a few alcoholic drinks on the weekends. His physical exam is unremarkable. Results of recent blood work are normal and there is no evidence of anemia.

  • What are this patient’s risk factors for developing colon cancer?

Risk Factors for CRC

There are numerous factors that are thought to influence risk for CRC. Nonmodifiable risk factors include a personal or family history of CRC or adenomatous polyps and a personal history of chronic inflammatory bowel disease. Modifiable risk factors that have been associated with an increased risk of CRC in epidemiologic studies include physical inactivity, obesity, high consumption of red or processed meats, smoking, and moderate-to-heavy alcohol consumption. In fact, a prospective study estimated that up to 23% of colorectal cancers could potentially be avoided by adhering to multiple healthy lifestyle recommendations, including maintaining a healthy weight, being physically active at least 30 minutes per day, eating a healthy diet, and avoiding smoking and excessive alcohol consumption [12].

People with a first-degree relative (parent, sibling, or offspring) who has had CRC have 2 to 3 times the risk of developing the disease compared with individuals with no family history; if the relative was diagnosed at a young age or if there is more than 1 affected relative, risk increases to 3 to 6 times that of the general population [13,14]. About 5% of patients with CRC have a well-defined genetic syndrome that causes the disease [15]. The most common of these is Lynch syndrome (also known as hereditary nonpolyposis colorectal cancer or HNPCC), which accounts for 2% to 4% of all CRC cases [16]. Although individuals with Lynch syndrome are predisposed to numerous types of cancer, risk of CRC is highest. A recent study of CRC in 147 Lynch syndrome families in the United States found lifetime risk of CRC to be 66% in men and 43% in women, with a median age at diagnosis of 42 years and 47 years, respectively [17]. Familial adenomatous polyposis (FAP) is the second most common predisposing genetic syndrome; for these individuals, the lifetime risk of CRC approaches 100% without intervention (eg, colectomy) [16].

People who have inflammatory bowel disease of the colon (both ulcerative colitis and Crohn’s disease) have an increased risk of developing CRC that correlates with the extent and the duration of the inflammation [18]. It is estimated that 18% of patients with a 30-year history of ulcerative colitis will develop CRC [19]. In addition, several studies have found an association between diabetes and increased risk of CRC [20,21]. Though adult-onset type 2 diabetes (the most common type) and CRC share similar risk factors, including physical inactivity and obesity, a positive association between diabetes and CRC has been found even after accounting for physical activity, body mass index, and waist circumference [22].

Being overweight or obese is also associated with a higher risk of CRC, with stronger associations more consistently observed in men than in women. Obesity increases the risk of CRC independent of physical activity. Abdominal obesity (measured by waist circumference) may be a more important risk factor for colon cancer than overall obesity in both men and women [23–25]. Diet and lifestyle strongly influence CRC risk; however, research on the role of specific dietary elements on CRC risk is still accumulating. Several studies, including one by the American Cancer Society, have found that high consumption of red and/or processed meat increases the risk of both colon and rectal cancer [23,26,27]. Further analyses indicate that the association between CRC and red meat may be related to the cooking process, because a higher risk of CRC is observed particularly among those individuals who consume meat that has been cooked at a high temperature for a long period of time [28]. In contrast to findings from earlier research, more recent large, prospective studies do not indicate a major relationship between CRC and vegetable, fruit, or fiber consumption [28,29]. However, some studies suggest that people with very low fruit and vegetable intake are at above-average risk for CRC [30,31]. Consumption of milk and calcium may decrease the risk of developing CRC [28,29,32].

In November 2009, the International Agency for Research on Cancer reported that there is now sufficient evidence to conclude that tobacco smoking causes CRC [33]. Colorectal cancer has been linked to even moderate alcohol use. Individuals who have a lifetime average of 2 to 4 alcoholic drinks per day have a 23% higher risk of CRC than those who consume less than 1 drink per day [34].

Protective Factors

One of the most consistently reported relationships between colon cancer risk and behavior is the protective effect of physical activity [35]. Based on these findings, as well as the numerous other health benefits of regular physical activity, the American Cancer Society recommends engaging in at least moderate activity for 30 minutes or more on 5 or more days per week.

Accumulating research suggests that aspirin-like drugs, postmenopausal hormones, and calcium supplements may help prevent CRC. Extensive evidence suggests that long-term, regular use of aspirin and other nonsteroidal anti-inflammatory drugs (NSAIDs) is associated with lower risk of CRC. The American Cancer Society does not currently recommend use of these drugs as chemoprevention because of the potential side effects of gastrointestinal bleeding from aspirin and other traditional NSAIDs and heart attacks from selective cyclooxygenase-2 (COX-2) inhibitors. However, people who are already taking NSAIDs for chronic arthritis or aspirin for heart disease prevention may have a lower risk of CRC as a positive side effect [36,37].

There is substantial evidence that women who use postmenopausal hormones have lower rates of CRC than those who do not. A decreased risk of CRC is especially evident in women who use hormones long-term, although the risk returns to that of nonusers within 3 years of cessation. Despite its positive effect on CRC risk, the use of postmenopausal hormones increases the risk of breast and other cancers as well as cardiovascular disease, and therefore it is not recommended for the prevention of CRC. At present, the American Cancer Society does not recommend any medications or supplements to prevent CRC because of uncertainties about their effectiveness, appropriate dosing, and potential toxicity [38–40].

Case Continued

The physician tells the patient that there are several environmental factors that may predispose him to developing CRC. He recommends that the patient follow a healthy lifestyle, including eating 5 servings of fruits and vegetables daily, minimizing consumption of red meats, exercising for 30 minutes at least 5 days per week, drinking only moderate amounts of alcohol, and continuing to take his aspirin in the setting of his diabetes. He also asks the patient if he would be interested in talking about weight loss and working together to make a plan.

The patient is appreciative of this information and wants to know which CRC screening test the physician recommends.

  • What screening test should be recommended?

Screening Options

There are several modalities for CRC screening, with current technology falling into 2 general categories: stool tests, which include tests for occult blood or exfoliated DNA; and structural exams, which include flexible sigmoidoscopy, colonoscopy, double-contrast barium enema (DCBE), and computed tomographic (CT) colonography. Stool tests are best suited for the detection of CRC, although they also will deliver positive findings for some advanced adenomas, while the structural exams can achieve both detection and prevention of CRC through identification and removal of adenomatous polyps [41]. These tests may be used alone or in combination to improve sensitivity or, in some instances, to ensure a complete examination of the colon if the initial test cannot be completed.

In principle, all adults should have access to the full range of options for CRC screening, and the availability of lower-cost, less invasive options in most practice settings is a public health advantage [11]. However, the availability of multiple testing options can overwhelm the primary care provider and presents challenges for practices in trying to support an office policy that can manage a broad range of testing choices, their follow-up requirements, and shared decision making related to the options. Shared decision making around CRC screening options is both demanding and time consuming and is complicated by the different characteristics of the tests and the test-specific requirements for individuals undergoing screening [42].

Recommended Tests

The joint guideline on screening for CRC from the American Cancer Society, the US Multi-Society Task Force on Colorectal Cancer, and the American College of Radiology (the MSTF guideline) [11] is of the strong opinion that tests designed to detect early cancer and prevent cancer through the detection and removal of adenomatous polyps (the structural exams) should be encouraged if resources are available and patients are willing to undergo an invasive test [11]. In clinical settings in which economic issues preclude primary screening with colonoscopy, or for patients who decline invasive tests, clinicians may offer stool-based testing. However, providers and patients should understand that these tests are less likely to prevent cancer compared with the invasive tests, they must be repeated at regular intervals to be effective (ie, programmatic sensitivity), and if the test is abnormal, a colonoscopy will be needed to follow up. Therefore, if patients are not willing to have repeated testing or pursue colonoscopy if the test is abnormal, these programs will not be effective and should not be recommended [11].

At this time, colonoscopy every 10 years, beginning at age 50, is the American College of Gastroenterology-preferred CRC screening strategy [43]. When patients are unwilling to undergo colonoscopy for screening purposes, they should be offered flexible sigmoidoscopy every 5-10 years, CT colonography every 5 years, or a fecal immunochemical test (FIT) [43] (Table 1). The US Preventive Services Task Force (USPSTF) recommends screening for colorectal cancer using fecal occult blood testing, sigmoidoscopy, or colonoscopy in adults, beginning at age 50 years and continuing until age 75 years [44].
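
For readers who want this decision logic in one place, the following is a minimal sketch, in Python, of the screening options just described. The option names and the function are illustrative summaries of the guideline text above, not part of any published tool, and actual decisions also depend on patient preference, resources, and clinical judgment.

```python
# Minimal sketch of the ACG screening options for average-risk adults
# described above. Names and phrasing are illustrative assumptions.

SCREENING_OPTIONS = {
    "colonoscopy": "every 10 years, beginning at age 50 (preferred)",
    "flexible sigmoidoscopy": "every 5-10 years",
    "CT colonography": "every 5 years",
    "FIT": "annually (cancer detection rather than prevention)",
}


def screening_options(willing_to_undergo_colonoscopy: bool) -> dict:
    """Return the preferred option, or the alternatives for patients
    who decline colonoscopy, per the guideline text above."""
    if willing_to_undergo_colonoscopy:
        return {"colonoscopy": SCREENING_OPTIONS["colonoscopy"]}
    return {k: v for k, v in SCREENING_OPTIONS.items() if k != "colonoscopy"}


if __name__ == "__main__":
    for test, interval in screening_options(False).items():
        print(f"{test}: {interval}")
```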

Stool-Based Testing

Stool blood tests are conventionally known as fecal occult blood tests (FOBT) because they are designed to detect the presence of occult blood in stool. FOBT falls into 2 primary categories based on the detected analyte: guaiac-based and FIT. Blood in the stool is a nonspecific finding but may originate from CRC or larger (> 1 to 2 cm) polyps. Because small adenomatous polyps do not tend to bleed and bleeding from cancers or large polyps may be intermittent or undetectable in a single sample of stool, the proper use of stool blood tests requires annual testing that consists of collecting specimens (2 or 3, depending on the product) from consecutive bowel movements [45–47].

Guaiac-based FOBT

Guaiac-based FOBT (gFOBT) is the most common stool blood test for CRC screening and the only CRC screening test for which there is evidence of efficacy from randomized controlled trials [11]. The usual gFOBT protocol consists of collecting 2 samples from each of 3 consecutive bowel movements at home. Prior to testing with a sensitive guaiac-based test, individuals usually will be instructed to avoid aspirin and other NSAIDs, vitamin C, red meat, poultry, fish, and some raw vegetables because of diet-test interactions that can increase the risk of both false-positive and false-negative (specifically, vitamin C) results [48]. Collection of all 3 samples is important because test sensitivity improves with each additional stool sample [41]. Three large randomized controlled trials with gFOBT have demonstrated that screened patients have cancers detected at an early and more curable stage than unscreened patients. Over time (8 to 13 years), each of the trials demonstrated significant reductions in CRC mortality of 15% to 33% [49–51]. However, the reported sensitivity of a single gFOBT varies considerably [52].

FIT

FIT has several technological advantages when compared with gFOBT. FIT detects human globin, the protein that together with heme constitutes human hemoglobin. Thus, FIT is more specific for human blood than guaiac-based tests, which rely on detection of peroxidase activity and also react to the peroxidase that is present in dietary constituents such as rare red meat, cruciferous vegetables, and some fruits [53]. Furthermore, unlike gFOBT, FIT is not subject to false-negative results in the presence of high-dose vitamin C supplements, which block the peroxidase reaction. In addition, because globin is degraded by digestive enzymes in the upper gastrointestinal tract, FIT is more specific for lower gastrointestinal bleeding, improving its specificity for CRC. Finally, the sample collection process for some variants of FIT is less demanding than that for gFOBT, requiring fewer samples or less direct handling of stool, which may increase FIT’s appeal. Although FIT has superior performance characteristics when compared with older guaiac-based Hemoccult II cards [54–56], the spectrum of benefits, limitations, and harms is similar to that of a gFOBT with high sensitivity [41]. As for adherence, the first 2 randomized controlled trials comparing FIT with guaiac-based testing found gains in adherence of 10% and 12% with FIT [57,58]. Therefore, FIT is preferred over Hemoccult Sensa and is the preferred annual cancer detection test when colonoscopy is not an option [43]. The American College of Gastroenterology supports the joint guideline recommendation [11] that older guaiac-based fecal occult blood testing be abandoned as a method for CRC screening.

sDNA

Fecal DNA (sDNA) testing uses knowledge of molecular genomics and provides the basis of a newer method of CRC screening that tests stool for the presence of known DNA alterations in the adenoma-carcinoma sequence of colorectal carcinogenesis [11]. Three different types of fecal DNA testing kits have been evaluated. The sensitivity for cancer of each version was superior to that of traditional guaiac-based occult blood testing, but sensitivities ranged from 52% to 87%, with specificities ranging from 82% to 95%. Based on the accumulation of evidence since the last update of the joint guideline, the joint guideline panel concluded that there are now sufficient data to include sDNA as an acceptable option for CRC screening [11].

As for overall recommendations for stool-based testing, the ACG supports the joint guideline recommendation that older guaiac-based fecal occult blood testing be abandoned as a method for CRC screening. Because of the more extensive data (compared with Hemoccult Sensa) and the high cost of fecal DNA testing, the American College of Gastroenterology recommends FIT as the preferred cancer detection test when colonoscopy is not an option [43].

Invasive Tests Other than Colonoscopy

The use of flexible sigmoidoscopy for CRC screening is supported by high-quality case-control and cohort studies [46]. The chief advantage of flexible sigmoidoscopy is that it can be performed with a simple preparation (2 enemas), without sedation, and by a variety of practitioners in diverse settings. The main limitation of the procedure is that it does not examine the entire colon but only the rectum, sigmoid, and descending colon. The effectiveness of a flexible sigmoidoscopy program is based on the assumption that if an adenoma is detected during the procedure, the patient would be referred for colonoscopy to examine the entire colon.

DCBE is an imaging modality that can evaluate the entire colon in almost all cases and can detect most cancers and the majority of significant polyps. However, its lower sensitivity for significant adenomas compared with colonoscopy may result in less favorable outcomes with respect to CRC morbidity and mortality. DCBE is no longer recommended as an alternative CRC prevention test because its use has declined dramatically and its effectiveness for polyp detection is lower than that of CT colonography [43].

CT Colonography

CT colonography every 5 years is endorsed as an alternative to colonoscopy every 10 years because of its performance in the American College of Radiology Imaging Network (ACRIN) Trial 6664 (also known as the National CT Colonography Trial) [59]. The principal performance feature that justifies inclusion of CT colonography as a viable alternative for patients who decline colonoscopy is its sensitivity of 90% for polyps ≥ 1 cm in size in that multicenter US trial [59]. Notably, 25% of the radiologists tested for entry into the trial performed poorly and were excluded from participation, so lower sensitivity might be expected in actual clinical practice. CT colonography probably has a lower risk of perforation than colonoscopy in most settings, but for several reasons it is not considered the equivalent of colonoscopy as a screening strategy. First, the evidence to support an effect of endoscopic screening on prevention of incident CRC and mortality is overwhelming compared with that for CT colonography. Second, the inability of CT colonography to adequately detect polyps 5 mm and smaller, which constitute 80% of colorectal neoplasms and whose natural history is still not understood, necessitates performance of the test at 5-year rather than 10-year intervals [43]. Finally, false-positives are common: the specificity for polyps ≥ 1 cm in size was only 86% in the National CT Colonography Trial, with a positive predictive value of 23% [59]. The American College of Gastroenterology recommends that asymptomatic patients be informed of the possible radiation risk associated with one or repeated CT colonography studies, though the exact risk is unclear [60,61].
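
To see how a test that is 90% sensitive and 86% specific can yield a positive predictive value (PPV) of only 23%, consider a rough illustrative calculation (ours, not from the trial report): if p is the prevalence of polyps ≥ 1 cm among screenees, then PPV = (0.90 × p) / (0.90 × p + 0.14 × (1 − p)). A PPV of 23% corresponds to p of roughly 4%, a plausible prevalence in a screening population. Because the target lesion is uncommon, false-positives from the 14% of unaffected patients outnumber true-positives by roughly 3 to 1.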

The value of extracolonic findings detected by CT colonography is mixed: incidental findings carry substantial costs, but occasionally important extracolonic findings are detected, such as asymptomatic cancers and large abdominal aortic aneurysms. Finally, the ACG is also concerned about the potential impact of CT colonography on adherence with follow-up colonoscopy and thus on polypectomy rates. If CT colonography substantially improves adherence, it should improve polypectomy rates and thereby reduce CRC, even if only large polyps are detected and referred for colonoscopy. On the other hand, if CT colonography largely displaces patients who would otherwise be willing to undergo colonoscopy, then polypectomy rates will fall substantially, which could significantly increase CRC incidence [62]. Thus, for multiple reasons and pending additional study, CT colonography should be offered to patients who decline colonoscopy. It should be noted that CT colonography should be offered only for the purposes of CRC screening and should not be used for the diagnostic workup of symptoms (eg, in a patient with active bleeding or inflammatory bowel disease).

  • When should screening begin?

The American College of Gastroenterology continues to recommend that screening begin at age 50 years in average-risk persons (ie, those without a family history of colorectal neoplasia), except for African Americans, in whom it should begin at age 45 years [43]. The USPSTF does not currently provide specific recommendations based on race or ethnicity, but certain other subgroups of the average-risk population might warrant initiation of screening at an earlier or later age, depending on their risk. For example, the risk of incident CRC has been described to be greater in men than in women [63]. In reviewing the literature, the writing committee also identified heavy cigarette smoking and obesity as factors linked to an increased risk of CRC and to the development of CRC at an earlier age.

For patients with a family history of CRC or adenomatous polyps, the 2008 MSTF guideline recommends initiation of screening at age 40 [11]. The American College of Gastroenterology recommendations for screening in patients with a family history are shown in Table 1. From a practical perspective, many clinicians have found that patients are often not aware of whether their first-degree relatives had advanced adenomas vs. small tubular adenomas, or whether their family members had non-neoplastic vs. neoplastic polyps. Given these difficulties, the American College of Gastroenterology now recommends that adenomas be counted as equal to a family history of cancer only when there is a clear history, a medical report, or other evidence indicating that family members had advanced adenomas (an adenoma ≥ 1 cm in size, or with high-grade dysplasia, or with villous elements) [43]. Continuation of the old recommendation to screen first-degree relatives of patients with only small tubular adenomas could result in most of the population being screened at age 40, with doubtful benefit.

  • What are screening considerations in patients with genetic syndromes?

Patients with features of an inherited CRC syndrome should be advised to pursue genetic counseling with a licensed genetic counselor and, if appropriate, genetic testing. Individuals with FAP should undergo adenomatous polyposis coli (APC) mutation testing and, if negative, MYH mutation testing. Patients with FAP, or at risk of FAP based upon family history, should undergo annual colonoscopy until colectomy is deemed by both physician and patient to be the best treatment [64]. Patients with a retained rectum after total colectomy and ileorectal anastomosis, an ileal pouch after total proctocolectomy and ileal pouch-anal anastomosis, or a stoma after total proctocolectomy and end ileostomy should undergo endoscopic assessment approximately every 6 to 12 months after surgery, depending on the polyp burden seen. Individuals with oligopolyposis (< 100 colorectal polyps) should be sent for genetic counseling, consideration of APC and MYH mutation testing, and individualized colonoscopy surveillance depending on the size, number, and pathology of the polyps seen. Upper endoscopic surveillance is recommended in individuals with FAP, but there are no established guidelines for endoscopic surveillance in MAP (MYH-associated polyposis) [43].

Patients who meet the Bethesda criteria for HNPCC [65] can be screened by 2 different mechanisms. One is a DNA-based test for microsatellite instability of either the patient’s or a family member’s tumor. The other is immunohistochemical staining of the tumor for loss of mismatch repair protein expression (eg, MLH1, MSH2, MSH6). In patients in whom deleterious mutations are found, the affected individual should undergo colonoscopy every 2 years beginning at age 20 to 25 years until age 40 years, then annually thereafter [43]. If genetic testing is negative (ie, no deleterious mutation is found) but the patient is still felt on clinical grounds to have Lynch syndrome, the patient should still undergo the same surveillance.
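
As a concrete illustration of this schedule, a mutation carrier identified at age 25 would undergo colonoscopy at ages 25, 27, 29, and so on through age 39, and then yearly beginning at age 40.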

Case Continued

The physician recommends colonoscopy as the screening modality as it is the most efficient and accurate way of finding precancerous lesions and the most effective way of preventing CRC by removing precancerous lesions. He also explains that because the patient’s father developed CRC after the age of 60, this does not place the patient in a higher risk category and he can follow screening recommendations for “average-risk” individuals.

Screening

The patient undergoes colonoscopy. Two 5-mm adenomas in the transverse colon are detected and removed.

  • When should he have a repeat colonoscopy?

Surveillance Intervals

New data have recently emerged on the risk of interval cancer after colonoscopy. The overall rate of interval cancer is estimated to be 1.1–2.7 per 1000 person-years of follow-up. Several factors may account for why patients develop interval cancers: (1) important lesions may be missed at baseline colonoscopy, (2) adenomas may be incompletely removed at the time of baseline colonoscopy, and (3) interval CRC may be biologically different or more aggressive than prevalent CRC. To minimize the risk of interval cancer development, it is important to perform a high-quality baseline screening colonoscopy, as this is associated with a lower risk of interval cancer [66]. A high-quality colonoscopy entails completion of the procedure to the cecum (with photodocumentation of the appendiceal orifice and ileocecal valve), careful inspection of the mucosal folds, adequate bowel cleanliness, and a withdrawal time > 6 minutes.
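
As a back-of-the-envelope illustration of these rates, a cohort of 1000 patients followed for 5 years after colonoscopy (5000 person-years) would be expected to develop between about 5 and 14 interval cancers (5000 × 1.1/1000 = 5.5; 5000 × 2.7/1000 = 13.5).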

The MSTF guidelines for surveillance after screening and polypectomy were published in 2006 [67], with an update in 2012 [66]. Their recommendations on surveillance colonoscopy are based on the premise that the initial colonoscopy is of high quality; they are summarized in Table 2 and discussed below.

Baseline Colonoscopy Findings

No Polyps

Several prospective observational studies in different populations have shown that the risk of advanced adenomas within 5 years after negative findings on colonoscopy is low (1.3%–2.4%) relative to the rate on initial screening examination (4%–10%) [68–73]. In these studies, interval cancers were rare within 5 years. A sigmoidoscopy randomized controlled trial performed in the United Kingdom demonstrated a reduction in CRC incidence and mortality at 10 years in patients who received one-time sigmoidoscopy compared with controls—a benefit limited to the distal colon [46]. This is the first randomized study to show the effectiveness of endoscopic screening, an effect that appears to have at least a 10-year duration [74]. Thus, in patients who have a baseline colonoscopic evaluation without any adenomas or polyps and are average-risk individuals, the recommendation for the next examination is in 10 years [66].

Distal Hyperplastic Polyps < 10 mm

There is considerable evidence that patients with only rectal or sigmoid hyperplastic polyps (HPs) represent a low-risk cohort. Studies have focused on whether such a finding in the distal colon is a marker of risk for advanced neoplasia elsewhere, and most show no such relationship [67]. Prior and current evidence suggests that distal HPs <10 mm are benign lesions without neoplastic potential. If the most advanced lesions at baseline colonoscopy are distal HPs <10 mm, the interval for colonoscopic follow-up should be 10 years [66].

1-2 Tubular Adenomas < 10 mm

Prior evidence suggested that patients with low-risk adenomas (<10 mm, no villous histology or high-grade dysplasia) had a lower risk of developing advanced adenomas during follow-up compared with patients with high-risk adenomas (≥10 mm, villous histology, or high-grade dysplasia). In 2006, the consensus of the task force was that an interval of 5 years would be acceptable in this low-risk group [75]. Data published since 2006 endorse the assessment that patients with 1–2 tubular adenomas <10 mm with low-grade dysplasia represent a low-risk group. Three new studies suggest that this group may have only a small, nonsignificant increase in risk of advanced neoplasia within 5 years compared with individuals with no baseline neoplasia. The evidence now supports a surveillance interval of longer than 5 years for most patients, which can be extended to 10 years based on the quality of the preparation and colonoscopy [66].

3–10 Tubular Adenomas

Two independent meta-analyses in 2006 found that patients with 3 or more adenomas at baseline had an increased relative risk for adenomas during surveillance, ranging from 1.7 to 4.8 [47,75]. New information from the VA study and the National Cancer Institute Pooling Project also supports these prior findings. Patients with 3 or more adenomas have a level of risk for advanced neoplasia similar to that of other patients with advanced neoplasia (adenoma >10 mm, adenoma with high-grade dysplasia); thus, repeat examination should be performed in 3 years [66,68,76].

> 10 Adenomas

Only a small proportion of patients undergoing screening colonoscopy will have >10 adenomas. The 2006 guidelines for colonoscopy surveillance after polypectomy noted that such patients should be considered for evaluation of hereditary CRC syndromes [67]. Early follow-up surveillance colonoscopy is based on clinical judgment because there is little evidence to support a firm recommendation. At present, the recommendation is to consider follow-up in less than 3 years after a baseline colonoscopy [66].

1 or More Tubular Adenomas ≥ 10mm

The 2006 MSTF guideline reviewed data related to adenoma size, demonstrating that most studies showed a 2- to 5-fold increased risk of advanced neoplasia during follow-up if the baseline examination had one or more adenomas ≥ 10 mm [67]. Newer data show that patients with one or more adenomas ≥ 10 mm have an increased risk of advanced neoplasia during surveillance compared with those with no neoplasia or small (< 10 mm) adenomas [68,76]. Thus, the recommendation remains that repeat examination should be performed in 3 years [66]. If there is question about complete removal of an adenoma (ie, piecemeal resection), early follow-up colonoscopy is warranted [66].

1 or More Villous Adenomas

The 2006 MSTF guideline considers adenomas with villous histology to be high risk [67]. The NCI Pooling Project analyzed polyp histology as a risk factor for development of interval advanced neoplasia. Compared with patients with tubular adenomas, those with baseline adenomas showing villous or tubulovillous (TVA) histology had an increased risk of advanced neoplasia during follow-up (16.8% vs 9.7%; adjusted OR, 1.28; 95% CI, 1.07–1.52) [76]. Patients with one or more adenomas with villous histology were also found to have an increased risk of advanced neoplasia during surveillance compared with those with no neoplasia or small (<10 mm) tubular adenomas. Thus, the recommendation remains that repeat examination should be performed in 3 years [66].

Adenoma with High-Grade Dysplasia (HGD)

The 2006 MSTF guideline concluded that the presence of HGD in an adenoma was associated with both villous histology and larger size, which are both risk factors for advanced neoplasia during surveillance [67]. In a univariate analysis from the NCI Pooling Project, HGD was strongly associated with risk of advanced neoplasia during surveillance (OR, 1.77; 95% CI, 1.41–2.22) [76]. Thus, the recommendation remains that repeat examination should be performed in 3 years [66].

Serrated Lesions

A total of 20% to 30% of CRCs arise through a molecular pathway characterized by hypermethylation of genes, known as the CpG Island Methylator Phenotype (CIMP) [77]. The precursors are believed to be serrated polyps. Tumors in this pathway have a high frequency of BRAF mutation, and up to 50% are microsatellite unstable. CIMP-positive tumors are overrepresented in interval cancers, particularly in the proximal colon. The principal precursor of hypermethylated cancers is probably the sessile serrated polyp (synonymous with sessile serrated adenoma). These polyps are difficult to detect at endoscopy: they may be the same color as the surrounding colonic mucosa, have indistinct edges, are nearly always flat or sessile, and may have a layer of adherent mucus that obscures the vascular pattern.

Recent studies show that proximal colon location or size ≥ 10 mm may be markers of risk for synchronous advanced adenomas elsewhere in the colon [78,79]. Surveillance after colonoscopy was evaluated in one study, which found that the coexistence of serrated polyps and high-risk adenomas (HRA; ie, size ≥ 10 mm, villous histology, or presence of HGD) is associated with a higher risk of advanced neoplasia at surveillance [78]. This study also found that if small proximal serrated polyps are the only finding at baseline, the risk of adenomas during surveillance is similar to that of patients with low-risk adenomas (LRA; ie, 1–2 small adenomas).

The current evidence suggests that size (≥10 mm), histology (a sessile serrated polyp is a more significant lesion than an HP; a sessile serrated polyp with cytological dysplasia is more advanced than one without dysplasia), and location (proximal to the sigmoid colon) are factors that might be associated with higher risk of CRC. A sessile serrated polyp ≥ 10 mm and a sessile serrated polyp with cytological dysplasia should be managed like an HRA, with repeat colonoscopy in 3 years. Serrated polyps <10 mm in size without cytological dysplasia may carry lower risk and can be managed like LRA, with repeat colonoscopy in 5 years [66].
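
Taken together, the interval recommendations reviewed in this section reduce to a simple lookup on the most advanced baseline finding. The following minimal Python sketch summarizes them; the category names are illustrative paraphrases of the guideline text above, not a published tool, and the mapping presupposes the high-quality, complete baseline examination described earlier.

```python
# Minimal sketch of the MSTF post-polypectomy surveillance intervals
# summarized in this section (Table 2). Category names are illustrative;
# the mapping assumes a high-quality, complete baseline colonoscopy.

SURVEILLANCE_INTERVALS = {
    "no polyps": "10 years",
    "distal hyperplastic polyps <10 mm": "10 years",
    "1-2 tubular adenomas <10 mm": "5-10 years",
    "3-10 tubular adenomas": "3 years",
    ">10 adenomas": "<3 years; consider hereditary CRC syndrome evaluation",
    "adenoma >=10 mm": "3 years",
    "villous adenoma": "3 years",
    "adenoma with high-grade dysplasia": "3 years",
    "sessile serrated polyp >=10 mm or with dysplasia": "3 years",
    "sessile serrated polyp <10 mm, no dysplasia": "5 years",
}


def next_colonoscopy(most_advanced_finding: str) -> str:
    """Map the most advanced baseline finding to the recommended
    surveillance interval."""
    return SURVEILLANCE_INTERVALS[most_advanced_finding]


if __name__ == "__main__":
    # The case patient: two 5-mm tubular adenomas, a low-risk finding.
    print(next_colonoscopy("1-2 tubular adenomas <10 mm"))
```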

Follow-up After Surveillance

In a 2009 study, 564 participants underwent 2 surveillance colonoscopies after an index procedure, and 10.3% had high-risk findings at the third study examination. If the second examination showed high-risk findings, then results from the first examination added no significant information about the probability of high-risk findings on the third examination (18.2% for high-risk findings on the first examination vs. 20.0% for low-risk findings on the first examination; P = 0.78). If the second examination showed no adenomas, then the results from the first examination added significant information about the probability of high-risk findings on the third examination (12.3% if the first examination had high-risk findings vs. 4.9% if the first examination had low-risk findings; P = 0.015) [80]. Thus, information from 2 previous colonoscopies appears to be helpful in defining the risk of neoplasia for individual patients, and future guidelines might consider accounting for the results of 2 examinations to tailor surveillance intervals.

  • When should screening / surveillance be stopped?

There is considerable new evidence that the risks of colonoscopy increase with advancing age [81,82]. Neither surveillance nor screening colonoscopy should be performed when the risk of the preparation, sedation, or procedure outweighs the potential benefit. For patients aged 75–85 years, the USPSTF recommends against routine screening but argues for individualization based on comorbidities and findings on any prior colonoscopy. The USPSTF recommends against continued screening after age 85 years because risk could exceed potential benefit [44].

In terms of surveillance after prior adenomas, patients aged 75-85 years may still benefit from surveillance because those with prior HRA are at higher risk for developing advanced neoplasia compared with average-risk screenees. However, the decision to continue surveillance in this population should be individualized and based on an assessment of benefit and risk in the context of the person’s estimated life expectancy [66]. More importantly, an individual’s most important and impactful screening colonoscopy is his or her first one; therefore, from a public health standpoint, great effort should be taken to increase the number of people in a population who undergo screening rather than simply targeting those who need surveillance for prior polyps. This is especially true in settings with limited resources.

Case Conclusion

The physician discusses the findings from the colonoscopy (2 small adenomas) with the patient and recommends a repeat colonoscopy in 5 to 10 years.

Summary

Colorectal cancer is one of the leading causes of cancer-related death in the United States. Since the advent of colonoscopy and the implementation of CRC screening efforts, rates of CRC have begun to decline. Several environmental factors have been associated with the development of CRC, including obesity, dietary intake, physical inactivity, and smoking. At present, multiple tools are available for CRC prevention, but the most accurate and effective method is currently colonoscopy. Stool-based tests such as FIT should be offered when a patient declines colonoscopy. Average-risk individuals who choose colonoscopy should be screened starting at age 50, with subsequent examinations every 10 years. Surveillance examinations should occur based on polyp findings on the index colonoscopy. There is no recommendation to continue screening after age 75, though physicians can make this determination based on a patient's health and risk/benefit profile. Current guidelines recommend against offering any screening to patients over age 85. Despite these recommendations, almost half of the eligible screening population has yet to undergo appropriate CRC screening. Future work should include public health efforts to improve access to, and the appeal of, widespread CRC screening regardless of modality. While colonoscopy is considered the most effective screening test, the best test is still the one the patient gets.

 

Corresponding author: Audrey H. Calderwood, MD, MS, 85 E. Concord St., Rm. 7724, Boston, MA 02118, [email protected].

Financial disclosures: None.

References

1. American Cancer Society. Colorectal cancer facts & figures 2014–2016. Atlanta: American Cancer Society; 2014.

2. Ries L, Melbert D, Krapcho M, et al. SEER cancer statistics review, 1975–2011. Bethesda, MD: National Cancer Institute; 2014.

3. Levine JS, Ahnen DJ. Clinical practice. Adenomatous polyps of the colon. N Engl J Med 2006;355:2551–7.

4. Bond JH. Polyp guideline: diagnosis, treatment, and surveillance for patients with colorectal polyps. Practice Parameters Committee of the American College of Gastroenterology. Am J Gastroenterol 2000;95:3053–63.

5. Schatzkin A, Freedman LS, Dawsey SM, Lanza E. Interpreting precursor studies: what polyp trials tell us about large-bowel cancer. J Natl Cancer Inst 1994;86:1053–7.

6. DevCan: Probability of developing or dying of cancer software, version 6.5.0; Statistical Research and Applications Branch, National Cancer Institute, 2005. http://srab.cancer.gov/devcan [computer program].

7. Surveillance, Epidemiology, and End Results (SEER) Program (www.seer.cancer.gov), National Cancer Institute, DCCPS, Surveillance Research Program, Cancer Statistics Branch, released April 2010, based on the November 2009 submission.

8. Murphy G, Devesa SS, Cross AJ, et al. Sex disparities in colorectal cancer incidence by anatomic subsite, race and age. Int J Cancer 2011;128:1668–75.

9. Edwards BK, Ward E, Kohler BA, et al. Annual report to the nation on the status of cancer, 1975–2006, featuring colorectal cancer trends and impact of interventions (risk factors, screening, and treatment) to reduce future rates. Cancer 2010;116:544–73.

10. Irby K, Anderson WF, Henson DE, Devesa SS. Emerging and widening colorectal carcinoma disparities between Blacks and Whites in the United States (1975–2002). Cancer Epidemiol Biomarkers Prev 2006;15:792–7.

11. Levin B, Lieberman DA, McFarland B, et al. Screening and surveillance for the early detection of colorectal cancer and adenomatous polyps, 2008: a joint guideline from the American Cancer Society, the US Multi-Society Task Force on Colorectal Cancer, and the American College of Radiology. CA Cancer J Clin 2008;58:130–60.

12. Kirkegaard H, Johnsen NF, Christensen J, et al. Association of adherence to lifestyle recommendations and risk of colorectal cancer: a prospective Danish cohort study. BMJ 2010;341:c5504.

13. Butterworth AS, Higgins JP, Pharoah P. Relative and absolute risk of colorectal cancer for individuals with a family history: a meta-analysis. Eur J Cancer 2006;42:216–27.

14. Johns LE, Houlston RS. A systematic review and meta-analysis of familial colorectal cancer risk. Am J Gastroenterol 2001;96:2992–3003.

15. Lynch HT, de la Chapelle A. Hereditary colorectal cancer. N Engl J Med 2003;348:919–32.

16. Jasperson KW, Tuohy TM, Neklason DW, Burt RW. Hereditary and familial colon cancer. Gastroenterology 2010;138:2044–58.

17. Stoffel E, Mukherjee B, Raymond VM, et al. Calculation of risk of colorectal and endometrial cancer among patients with Lynch syndrome. Gastroenterology 2009;137:1621–7.

18. Bernstein CN, Blanchard JF, Kliewer E, Wajda A. Cancer risk in patients with inflammatory bowel disease: a population-based study. Cancer 2001;91:854–62.

19. Eaden JA, Abrams KR, Mayberry JF. The risk of colorectal cancer in ulcerative colitis: a meta-analysis. Gut 2001;48:526–35.

20. Larsson SC, Orsini N, Wolk A. Diabetes mellitus and risk of colorectal cancer: a meta-analysis. J Natl Cancer Inst 2005;97:1679–87.

21. Campbell PT, Deka A, Jacobs EJ, et al. Prospective study reveals associations between colorectal cancer and type 2 diabetes mellitus or insulin use in men. Gastroenterology 2010;139:1138–46.

22. Larsson SC, Giovannucci E, Wolk A. Diabetes and colorectal cancer incidence in the cohort of Swedish men. Diabetes Care 2005;28:1805–7.

23. Huxley RR, Ansary-Moghaddam A, Clifton P, et al. The impact of dietary and lifestyle risk factors on risk of colorectal cancer: a quantitative overview of the epidemiological evidence. Int J Cancer 2009;125:171–80.

24. Larsson SC, Wolk A. Obesity and colon and rectal cancer risk: a meta-analysis of prospective studies. Am J Clin Nutr 2007;86:556–65.

25. Wang Y, Jacobs EJ, Patel AV, et al. A prospective study of waist circumference and body mass index in relation to colorectal cancer incidence. Cancer Causes Control 2008;19:783–92.

26. Chao A, Thun MJ, Connell CJ, et al. Meat consumption and risk of colorectal cancer. JAMA 2005;293:172–82.

27. Cross AJ, Ferrucci LM, Risch A, et al. A large prospective study of meat consumption and colorectal cancer risk: an investigation of potential mechanisms underlying this association. Cancer Res 2010;70:2406–14.

28. Chan AT, Giovannucci EL. Primary prevention of colorectal cancer. Gastroenterology 2010;138:2029–43.

29. Food, nutrition, physical activity, and the prevention of cancer: a global perspective. Washington DC: World Cancer Research Fund/American Institute for Cancer Research; 2007.

30. McCullough ML, Robertson AS, Chao A, et al. A prospective study of whole grains, fruits, vegetables and colon cancer risk. Cancer Causes Control 2003;14:959–70.

31. Terry P, Giovannucci E, Michels KB, et al. Fruit, vegetables, dietary fiber, and risk of colorectal cancer. J Natl Cancer Inst 2001;93:525–33.

32. Cho E, Smith-Warner SA, Spiegelman D, et al. Dairy foods, calcium, and colorectal cancer: a pooled analysis of 10 cohort studies. J Natl Cancer Inst 2004;96:1015–22.

33. Secretan B, Straif K, Baan R, et al. A review of human carcinogens–Part E: tobacco, areca nut, alcohol, coal smoke, and salted fish. Lancet Oncol 2009;10:1033–4.

34. Ferrari P, Jenab M, Norat T, et al. Lifetime and baseline alcohol intake and risk of colon and rectal cancers in the European prospective investigation into cancer and nutrition (EPIC). Int J Cancer 2007;121:2065–72.

35. Samad AK, Taylor RS, Marshall T, Chapman MA. A meta-analysis of the association of physical activity with reduced risk of colorectal cancer. Colorectal Dis 2005;7:204–13.

36. Flossmann E, Rothwell PM. Effect of aspirin on long-term risk of colorectal cancer: consistent evidence from randomised and observational studies. Lancet 2007;369:1603–13.

37. Rothwell PM, Wilson M, Elwin CE, et al. Long-term effect of aspirin on colorectal cancer incidence and mortality: 20-year follow-up of five randomised trials. Lancet 2010;376:1741–50.

38. Hildebrand JS, Jacobs EJ, Campbell PT, et al. Colorectal cancer incidence and postmenopausal hormone use by type, recency, and duration in cancer prevention study II. Cancer Epidemiol Biomarkers Prev 2009;18:2835–41.

39. Heiss G, Wallace R, Anderson GL, et al. Health risks and benefits 3 years after stopping randomized treatment with estrogen and progestin. JAMA 2008;299:1036–45.

40. Rossouw JE, Anderson GL, Prentice RL, et al. Risks and benefits of estrogen plus progestin in healthy postmenopausal women: principal results From the Women’s Health Initiative randomized controlled trial. JAMA 2002;288:321–33.

41. Lieberman DA, Weiss DG. One-time screening for colorectal cancer with combined fecal occult-blood testing and examination of the distal colon. N Engl J Med 2001;345:555–60.

42. Lafata JE, Divine G, Moon C, Williams LK. Patient-physician colorectal cancer screening discussions and screening use. Am J Prev Med 2006;31:202–9.

43. Rex DK, Johnson DA, Anderson JC, et al. American College of Gastroenterology guidelines for colorectal cancer screening 2008. Am J Gastroenterol 2009;104:739–50.

44. U.S. Preventive Services Task Force. Screening for colorectal cancer: U.S. Preventive Services Task Force recommendation statement. Ann Intern Med 2008;149:627–37.

45. Smith RA, von Eschenbach AC, Wender R, et al. American Cancer Society guidelines for the early detection of cancer: update of early detection guidelines for prostate, colorectal, and endometrial cancers. Also: update 2001—testing for early lung cancer detection. CA Cancer J Clin 2001;51:38–75.

46. Winawer S, Fletcher R, Rex D, et al. Colorectal cancer screening and surveillance: clinical guidelines and rationale—update based on new evidence. Gastroenterology 2003;124:544–60.

47. Rex DK, Kahi CJ, Levin B, et al. Guidelines for colonoscopy surveillance after cancer resection: a consensus update by the American Cancer Society and the US Multi-Society Task Force on Colorectal Cancer. Gastroenterology 2006;130:1865–71.

48. Ransohoff DF, Lang CA. Screening for colorectal cancer with the fecal occult blood test: a background paper. American College of Physicians. Ann Intern Med 1997;126:811–22.

49. Hardcastle JD, Chamberlain JO, Robinson MH, et al. Randomised controlled trial of faecal-occult blood screening for colorectal cancer. Lancet 1996;348:1472–7.

50. Kronborg O, Fenger C, Olsen J, et al. Randomised study of screening for colorectal cancer with faecal-occult blood test. Lancet 1996;348:1467–71.

51. Wilson JMG, Junger G. Principles and practice of screening for disease. Geneva: World Health Organization; 1968.

52. Allison JE, Tekawa IS, Ransom LJ, Adrain AL. A comparison of fecal occult-blood tests for colorectal-cancer screening. N Engl J Med 1996;334:155–9.

53. Caligiore P, Macrae FA, St John DJ, et al. Peroxidase levels in food: relevance to colorectal cancer screening. Am J Clin Nutr 1982;35:1487–9.

54. Nakajima M, Saito H, Soma Y, et al. Prevention of advanced colorectal cancer by screening using the immunochemical faecal occult blood test: a case-control study. Br J Cancer 2003;89:23–8.

55. Lee KJ, Inoue M, Otani T, et al. Colorectal cancer screening using fecal occult blood test and subsequent risk of colorectal cancer: a prospective cohort study in Japan. Cancer Detect Prev 2007;31:3–11.

56. Zappa M, Castiglione G, Grazzini G, et al. Effect of faecal occult blood testing on colorectal mortality: results of a population-based case-control study in the district of Florence, Italy. Int J Cancer 1997;73:208–10.

57. van Rossum LG, van Rijn AF, Laheij RJ, et al. Random comparison of guaiac and immunochemical fecal occult blood tests for colorectal cancer in a screening population. Gastroenterology 2008;135:82–90.

58. Hol L, van Leerdam ME, van Ballegooijen M, et al. Attendance to screening for colorectal cancer in the Netherlands; randomized controlled trial comparing two different forms of faecal occult blood tests and sigmoidoscopy. Gastroenterology 2008;134:A87.

59. Johnson CD, Chen MH, Toledano AY, et al. Accuracy of CT colonography for detection of large adenomas and cancers. N Engl J Med 2008;359:1207–17.

60. Brenner DJ, Georgsson MA. Mass screening with CT colonography: should the radiation exposure be of concern? Gastroenterology 2005;129:328–37.

61. Brenner DJ, Hall EJ. Computed tomography—an increasing source of radiation exposure. N Engl J Med 2007;357:2277–84.

62. Hur C, Chung DC, Schoen RE, et al. The management of small polyps found by virtual colonoscopy: results of a decision analysis. Clin Gastroenterol Hepatol 2007;5:237–44.

63. Chu KC, Tarone RE, Chow WH, et al. Temporal patterns in colorectal cancer incidence, survival, and mortality from 1950 through 1990. J Natl Cancer Inst 1994;86:997–1006.

64. Vasen HF, Moslein G, Alonso A, et al. Guidelines for the clinical management of familial adenomatous polyposis (FAP). Gut 2008;57:704–13.

65. Umar A, Boland CR, Terdiman JP, et al. Revised Bethesda guidelines for hereditary nonpolyposis colorectal cancer (Lynch syndrome) and microsatellite instability. J Natl Cancer Inst 2004;96:261–8.

66. Lieberman DA, Rex DK, Winawer SJ, et al. Guidelines for colonoscopy surveillance after screening and polypectomy: a consensus update by the US Multi-Society Task Force on Colorectal Cancer. Gastroenterology 2012;143:844–57.

67. Winawer SJ, Zauber AG, Fletcher RH, et al. Guidelines for colonoscopy surveillance after polypectomy: a consensus update by the US Multi-Society Task Force on colorectal cancer and the American Cancer Society. Gastroenterology 2006;130:1872–85.

68. Lieberman DA, Weiss DG, Harford WV, et al. Five year colon surveillance after screening colonoscopy. Gastroenterology 2007;133:1077–85.

69. Imperiale TF, Glowinski EA, Lin-Cooper C, et al. Five-year risk of colorectal neoplasia after negative screening colonoscopy. N Engl J Med 2008;359:1218–24.

70. Leung WK, Lau JYW, Suen BY, et al. Repeat screening colonoscopy 5 years after normal baseline screening colonoscopy in average-risk Chinese: a prospective study. Am J Gastroenterol 2009;104:2028–34.

71. Brenner H, Haug U, Arndt V, et al. Low risk of colorectal cancer and advanced adenomas more than 10 years after negative colonoscopy. Gastroenterology 2010;138:870–6.

72. Miller H, Mukherjee R, Tian J, et al. Colonoscopy surveillance after polypectomy may be extended beyond five years. J Clin Gastroenterol 2010;44:e162–e166.

73. Chung SJ, Kim YS, Yang SY, et al. Five-year risk for advanced colorectal neoplasia after initial colonoscopy according to the baseline risk stratification: a prospective study in 2452 asymptomatic Koreans. Gut 2011;60:1537–43.

74. Atkin WS, Edwards R, Kralj-Hans I, et al. Once-only flexible sigmoidoscopy screening in prevention of colorectal cancer: a multicentre randomised controlled trial. Lancet 2010;375:1624–33.

75. Saini SD, Kim HM, Schoenfeld P. Incidence of advanced adenomas at surveillance colonoscopy in patients with a personal history of colon adenomas: a meta-analysis and systematic review. Gastrointest Endosc 2006;64:614–26.

76. Martinez ME, Baron JA, Lieberman DA, et al. A pooled analysis of advanced colorectal neoplasia diagnoses following colonoscopic polypectomy. Gastroenterology 2009;136:832–41.

77. Leggett B, Whitehall V. Role of the serrated pathway in colorectal cancer pathogenesis. Gastroenterology 2010;138:2088–100.

78. Schreiner MA, Weiss DG, Lieberman DA. Proximal and large nonneoplastic serrated polyps: association with synchronous neoplasia at screening colonoscopy and with interval neoplasia at follow-up colonoscopy. Gastroenterology 2010;139:1497–502.

79. Hiraoka S, Kato J, Fujiki S, et al. The presence of large serrated polyps increases risk for colorectal cancer. Gastroenterology 2010;139:1503–10.

80. Robertson DJ, Burke CA, Welch HG, et al. Using the results of a baseline and a surveillance colonoscopy to predict recurrent adenomas with high-risk characteristics. Ann Intern Med 2009;151:103–9.

81. Warren JL, Klabunde CN, Mariotto AB, et al. Adverse events after outpatient colonoscopy in the Medicare population. Ann Intern Med 2009;150:849–57.

82. Ko CW, Riffle S, Michaels L, et al. Serious complications within 30 days of screening and surveillance colonoscopy: a multicenter study. Clin Gastroenterol Hepatol 2010;8:166–73.

Issue
Journal of Clinical Outcomes Management - March 2015, VOL. 22, NO. 3
Publications
Topics
Sections

From the Boston University School of Medicine, Boston, MA.

 

Abstract

  • Objective: To review recommendations for colorectal cancer (CRC) screening.
  • Methods: Review of the literature.
  • Results: In the United States, CRC is the third most commonly diagnosed cancer and the third leading cause of cancer death. CRC screening can reduce mortality through the detection of early-stage disease and the detection and removal of adenomatous polyps. There are several modalities for CRC screening, with current technology falling into 2 general categories: stool tests, which include tests for occult blood or exfoliated DNA; and structural exams, which include flexible sigmoidoscopy, colonoscopy, double-contrast barium enema, and CT colonography. The preferred CRC prevention test for average-risk individuals is colonoscopy starting at age 50 with subsequent examinations every 10 years. Patients unwilling to undergo screening colonoscopy may be offered flexible sigmoidoscopy, CT colonography, or fecal immunohistochemical test. Surveillance examinations should occur based on polyp findings on index colonoscopy. There is no recommendation to continue screening after age 75, though physicians can make a determination based on a patient’s health and risk/benefit profile. Current guidelines recommend against offering screening to patients over age 85.
  • Conclusion: Increasing access to and utilization of CRC screening tests is likely to lead to improvements in mortality reduction, as only about half of people aged 50 or older report having received CRC testing consistent with current guidelines.

In the United States, colorectal cancer (CRC) is the third most commonly diagnosed cancer and the third leading cause of cancer death in both men and women [1]. In 2014, an estimated 136,830 people were diagnosed with CRC and about 50,310 people died of the disease [2]. Colorectal cancer usually develops slowly over a period of 10 to 15 years. The tumor typically begins as a noncancerous polyp, classically an adenomatous polyp or adenoma, though fewer than 10% of adenomas will progress to cancer [3]. Adenomas are common; an estimated one-third to one-half of all individuals will eventually develop 1 or more adenomas [4,5]. In the United States, the lifetime risk of being diagnosed with CRC is approximately 5% for both men and women [6]. Incidence rates for CRC increase with age, with an incidence rate more than 15 times higher in adults aged 50 years and older compared with those aged 20 to 49 years [7].

Certain demographic subgroups have been shown to be at higher risk. Overall, CRC incidence and mortality rates are about 35% to 40% higher in men than in women. The reasons for this are not completely understood but likely reflect complex interactions between gender-related differences in exposure to hormones and risk factors [8]. CRC incidence and mortality rates are highest in African-American men and women; incidence rates are 20% higher and mortality rates are about 45% higher than those in whites. Prior to 1989, incidence rates were predominantly higher in white men than in African American men and were similar for women of both races. Since that time, although incidence rates have declined as a whole [9], incidence rates have been higher for African Americans than whites in both men and women This crossover likely reflects a combination of greater access to and utilization of recommended screening tests among whites (resulting in detection and removal of precancerous polyps), as well as racial differences in trends for CRC risk factors [10].

CRC screening can reduce mortality through the detection of early-stage disease and the detection and removal of ademomatous polyps [11]. Increasing access to and utilization of CRC screening tests is likely to lead to improvements in mortality reduction, as only about half of people aged 50 or older report having received CRC testing consistent with current guidelines [1].

Case Study

Initial Presentation

A 55-year-old white male presents for a routine visit and asks about colon cancer screening. His father was diagnosed with colon cancer at the age of 78. Overall, he feels well and does not have any particular complaints. His bowel habits are normal and he denies melena and hematochezia. His past medical history is significant for diabetes, hypertension, and obesity. He was a previous smoker and has a few alcoholic drinks on the weekends. His physical exam is unremarkable. Results of recent blood work are normal and there is no evidence of anemia.

  • What are this patient’s risk factors for developing colon cancer?

Risk Factors for CRC

There are numerous factors that are thought to influence risk for CRC. Nonmodifiable risk factors include a personal or family history of CRC or adenomatous polyps, and a personal history of chronic inflammatory bowel disease. Modifiable risk factors that have been associated with an increased risk of CRC in epidemiologic studies include physical inactivity, obesity, high consumption of red or processed meats, smoking, and moderate-to-heavy alcohol consumption. In fact, a prospective study showed that up to 23% of colorectal cancers were considered to be potentially avoidable by adhering to multiple healthy lifestyle recommendations including maintaining a healthy weight, being physically active at least 30 minutes per day, eating a healthy diet, and avoiding smoking and drinking excessive amounts of alcohol [12].

People with a first-degree relative (parent, sibling, or offspring) who has had CRC have 2 to 3 times the risk of developing the disease compared with individuals with no family history; if the relative was diagnosed at a young age or if there is more than 1 affected relative, risk increases to 3 to 6 times that of the general population [13,14]. About 5% of patients with CRC have a well-defined genetic syndrome that causes the disease [15]. The most common of these is Lynch syndrome (also known as hereditary nonpolyposis colorectal cancer or HNPCC), which accounts for 2% to 4% of all CRC cases [16]. Although individuals with Lynch syndrome are predisposed to numerous types of cancer, risk of CRC is highest. A recent study of CRC in 147 Lynch syndrome families in the United States found lifetime risk of CRC to be 66% in men and 43% in women, with a median age at diagnosis of 42 years and 47 years, respectively [17]. Familial adenomatous polyposis (FAP) is the second most common predisposing genetic syndrome; for these individuals, the lifetime risk of CRC approaches 100% without intervention (eg, colectomy) [16].

People who have inflammatory bowel disease of the colon (both ulcerative colitis and Crohn’s disease) have an increased risk of developing CRC that correlates with the extent and the duration of the inflammation [18]. It is estimated that 18% of patients with a 30-year history of ulcerative colitis will develop CRC [19]. In addition, several studies have found an association between diabetes and increased risk of CRC [20,21]. Though adult-onset type 2 diabetes (the most common type) and CRC share similar risk factors, including physical inactivity and obesity, a positive association between diabetes and CRC has been found even after accounting for physical activity, body mass index, and waist circumference [22].

Being overweight or obese is also associated with a higher risk of CRC, with stronger associations more consistently observed in men than in women. Obesity increases the risk of CRC independent of physical activity. Abdominal obesity (measured by waist circumference) may be a more important risk factor for colon cancer than overall obesity in both men and women [23–25]. Diet and lifestyle strongly influence CRC risk; however, research on the role of specific dietary elements on CRC risk is still accumulating. Several studies, including one by the American Cancer Society, have found that high consumption of red and/or processed meat increases the risk of both colon and rectal cancer [23,26,27]. Further analyses indicate that the association between CRC and red meat may be related to the cooking process, because a higher risk of CRC is observed particularly among those individuals who consume meat that has been cooked at a high temperature for a long period of time [28]. In contrast to findings from earlier research, more recent large, prospective studies do not indicate a major relationship between CRC and vegetable, fruit, or fiber consumption [28,29]. However, some studies suggest that people with very low fruit and vegetable intake are at above-average risk for CRC [30,31]. Consumption of milk and calcium may decrease the risk of developing CRC [28,29,32].

In November 2009, the International Agency for Research on Cancer reported that there is now sufficient evidence to conclude that tobacco smoking causes CRC [33]. Colorectal cancer has been linked to even moderate alcohol use. Individuals who have a lifetime average of 2 to 4 alcoholic drinks per day have a 23% higher risk of CRC than those who consume less than 1 drink per day [34].

Protective Factors

One of the most consistently reported relationships between colon cancer risk and behavior is the protective effect of physical activity [35]. Based on these findings, as well as the numerous other health benefits of regular physical activity, the American Cancer Society recommends engaging in at least moderate activity for 30 minutes or more on 5 or more days per week.

Accumulating research suggests that aspirin-like drugs, postmenopausal hormones, and calcium supplements may help prevent CRC. Extensive evidence suggests that long-term, regular use of aspirin and other nonsteroidal anti-inflammatory drugs (NSAIDs) is asso-ciated with lower risk of CRC. The American Cancer Society does not currently recommend use of these drugs as chemoprevention because of the potential side effects of gastrointestinal bleeding from aspirin and other traditional NSAIDs and heart attacks from selective cyclooxygenase-2 (COX-2) inhibitors. However, people who are already taking NSAIDs for chronic arthritis or aspirin for heart disease prevention may have a lower risk of CRC as a positive side effect [36,37].

There is substantial evidence that women who use postmenopausal hormones have lower rates of CRC than those who do not. A decreased risk of CRC is especially evident in women who use hormones long-term, although the risk returns to that of nonusers within 3 years of cessation. Despite its positive effect on CRC risk, the use of postmenopausal hormones increases the risk of breast and other cancers as well as cardiovascular disease, and therefore it is not recommended for the prevention of CRC. At present, the American Cancer Society does not recommend any medications or supplements to prevent CRC because of uncertainties about their effectiveness, appropriate dosing, and potential toxicity [38–40].

Case Continued

The physician tells the patient that there are several environmental factors that may predispose him to developing CRC. He recommends that the patient follow a healthy lifestyle, including eating 5 servings of fruits and vegetables daily, minimizing consumption of red meats, exercising for 30 minutes at least 5 days per week, drinking only moderate amounts of alcohol, and continuing to take his aspirin in the setting of his diabetes. He also asks the patient if he would be interested in talking about weight loss and working together to make a plan.

The patient is appreciative of this information and wants to know what CRC creening test the physician recommends.

  • What screening test should be recommended?

Screening Options

There are several modalities for CRC screening, with current technology falling into 2 general categories: stool tests, which include tests for occult blood or exfoliated DNA; and structural exams, which include flexible sigmoidoscopy, colonoscopy, double-contrast barium enema (DCBE), and computed tomographic (CT) colonography. Stool tests are best suited for the detection of CRC, although they also will deliver positive findings for some advanced adenomas, while the structural exams can achieve both detection and prevention of CRC through identification and removal of adenomatous polyps [41]. These tests may be used alone or in combination to improve sensitivity or, in some instances, to ensure a complete examination of the colon if the initial test cannot be completed.

In principle, all adults should have access to the full range of options for CRC screening, and the availability of lower-cost, less invasive options in most practice settings is a public health advantage [11]. However, the availability of multiple testing options can overwhelm the primary care provider and presents challenges for practices in trying to support an office policy that can manage a broad range of testing choices, their follow-up requirements, and shared decision making related to the options. Shared decision making around CRC screening options is both demanding and time consuming and is complicated by the different characteristics of the tests and the test-specific requirements for individuals undergoing screening [42].

Recommended Tests

The joint guideline on screening for CRC from the American Cancer Society, the US Multi-Society Task Force on Colorectal Cancer, and the American College of Radiology (the MSTF guideline) [11] is of the strong opinion that tests designed to detect early cancer and prevent cancer through the detection and removal of adenomatous polyps (the structural exams) should be encouraged if resources are available and patients are willing to undergo an invasive test [11]. In clinical settings in which economic issues preclude primary screening with colonoscopy, or for patients who decline invasive tests, clinicians may offer stool- based testing. However, providers and patients should understand that these tests are less likely to prevent cancer compared with the invasive tests, they must be repeated at regular intervals to be effective (ie, programmatic sensitivity), and if the test is abnormal, a colonoscopy will be needed to follow up. Therefore, if patients are not willing to have repeated testing or pursue colonoscopy if the test is abnormal, these programs will not be effective and should not be recommended [11].

At this time, colonoscopy every 10 years, beginning at age 50, is the American College of Gastroenterology-preferred CRC screening strategy [43]. In cases when patients are unwilling to undergo colonoscopy for screening purposes, patients should be offered flexible sigmoidoscopy every 5-10 years, a computed tomography (CT) colonography every 5 years, or fecal immunohistochemical test (FIT) [43] (Table 1). The US Preventive Services Task Force (USPSTF) recommends screening for colorectal cancer using fecal occult blood testing, sigmoidoscopy, or colonoscopy in adults, beginning at age 50 years and continuing until age 75 years [44].

Stool-Based Testing

Stool blood tests are conventionally known as fecal occult blood tests (FOBT) because they are designed to detect the presence of occult blood in stool. FOBT falls into 2 primary categories based on the detected analyte: guaiac-based and FIT. Blood in the stool is a nonspecific finding but may originate from CRC or larger (> 1 to 2 cm) polyps. Because small adenomatous polyps do not tend to bleed and bleeding from cancers or large polyps may be intermittent or undetectable in a single sample of stool, the proper use of stool blood tests requires annual testing that consists of collecting specimens (2 or 3, depending on the product) from consecutive bowel movements [45–47].

Guaiac-based FOBT

Guaiac-based FOBT (gFOBT) is the most common stool blood test for CRC screening and the only CRC screening test for which there is evidence of efficacy from randomized controlled trials [11]. The usual gFOBT protocol consists of collecting 2 samples from each of 3 consecutive bowel movements at home. Prior to testing with a sensitive guaiac-based test, individuals usually will be instructed to avoid aspirin and other NSAIDs, vitamin C, red meat, poultry, fish, and some raw vegetables because of diet-test interactions that can increase the risk of both false-positive and false-negative (specifically, vitamin C) results [48]. Collection of all 3 samples is important because test sensitivity improves with each additional stool sample [41]. Three large randomized controlled trials with gFOBT have demonstrated that screened patients have cancers detected at an early and more curable stage than unscreened patients. Over time (8 to 13 years), each of the trials demonstrated significant reductions in CRC mortality of 15% to 33% [49–51]. However, the reported sensitivity of a single gFOBT varies considerably [52].

FIT

FIT has several technological advantages when compared with gFOBT. FIT detects human globin, a protein that along with heme constitutes human hemoglobin. Thus, FIT is more specific for human blood than guaiac-based tests, which rely on detection of peroxidase in human blood and also react to the peroxidase that is present in dietary constituents such as rare red meat, cruciferous vegetables, and some fruits [53]. Furthermore, unlike gFOBT, FIT is not subject to false-negative results in the presence of high-dose vitamin C supplements, which block the peroxidase reaction. In addition, because globin is degraded by digestive enzymes in the upper gastrointestinal tract, FIT is also more specific for lower gastrointestinal bleeding, thus improving the specificity for CRC. Finally, the sample collection process for patients for some variants of FIT are less demanding than gFOBT, requiring fewer samples or less direct handling of stool, which may increase FIT’s appeal. Although FIT has superior performance characteristics when compared with older guaiac-based Hemoccult II cards [54–56], the spectrum of benefits, limitations, and harms is similar to a gFOBT with high sensitivity [41]. As for adherence with FIT, there were 10% and 12% gains in adherence with FIT in the first 2 randomized controlled trials comparing FIT with guaiac-based testing [57,58]. Therefore, FIT is preferred over Hemoccult Sensa and is the preferred annual cancer detection test when colonoscopy is not an option [43]. The American College of Gastroenterology supports the joint guideline recommendation [11] that older guaiac-based fecal occult blood testing be abandoned as a method for CRC screening.

sDNA

Fecal DNA testing uses knowledge of molecular genomics and provides the basis of a new method of CRC screening that tests stool for the presence of known DNA alterations in the adenoma-carcinoma sequence of colorectal carcinogenesis [11]. Three different types of fecal DNA testing kits have been evaluated. The sensitivity for cancer in each version was superior to traditional guaiac-based occult blood testing, but the sensitivities ranged from 52%–87%, with the specificities ranging from 82%–95%. Based on the accumulation of evidence since the last update of joint guideline, the joint guideline panel concluded that there now are sufficient data to include sDNA as an acceptable option for CRC screening [11].

As for overall recommendations for stool-based testing, the ACG supports the joint guideline recommendation that older guaiac-based fecal occult blood testing be abandoned as a method for CRC screening. Because of more extensive data (compared with Hemoccult Sensa), and the high cost of fecal DNA testing, the American College of Gastroenterology recommends FIT as the preferred cancer detection test in cases where colonoscopy is not an option [43].

Invasive Tests Other than Colonoscopy

The use of flexible sigmoidoscopy for CRC screening is supported by high-quality case-control and cohort studies [46]. The chief advantage of flexible sigmoidoscopy is that it can be performed with a simple preparation (2 enemas), without sedation, and by a variety of practitioners in diverse settings. The main limitation of the procedure is that it does not examine the entire colon but only the rectum, sigmoid, and descending colon. The effectiveness of a flexible sigmoidoscopy program is based on the assumption that if an adenoma is detected during the procedure, the patient would be referred for colonoscopy to examine the entire colon.

DCBE is an imaging modality which can evaluate the entire colon in almost all cases and can detect most cancers and the majority of significant polyps. However, the lower sensitivity for significant adenomas when compared with colonoscopy may result in less favorable outcomes regarding CRC morbidity and mortality. Double-contrast barium enema is no longer recommended as an alternative CRC prevention test because its use has declined dramatically and also as its effectiveness for polyp detection is less than CT colonography [43].

CT Colonography

CT colonography every 5 years is endorsed as an alternative to colonoscopy every 10 years because of its recent performance in the American College of Imaging Network Trial 6664 (also known as the National CT Colonography Trial) [59]. The principle performance feature that justifies inclusion of CT colonography as a viable alternative in patients who decline colonoscopy is that the sensitivity for polyps ≥ 1 cm in size was 90% in the most recent multicenter US trial [59]. In this study, 25% of radiologists who were tested for entry into the trial but performed poorly were excluded from participation, and thus lower sensitivity might be expected in actual clinical practice. CT colonography probably has a lower risk of perforation than colonoscopy in most settings, but for several reasons it is not considered the equivalent of colonoscopy as a screening strategy. First, the evidence to support an effect of endoscopic screening on prevention of incident CRC and mortality is overwhelming compared with that for CT colonography. Second, the inability of CT colonography to adequately detect polyps 5 mm and smaller, which constitutes 80% of colorectal neoplasms, and whose natural history is still not understood, necessitates performance of the test at 5-year rather than 10-year intervals [43]. Finally, false-positives are common, and the specificity for polyps ≥ 1 cm in size was only 86% in the National CT Colonography Trial, with a positive predictive value of 23% [59]. The American College of Gastroenterology recommends that asymptomatic patients be informed of the possibility of radiation risk associated with one or repeated CT colonography studies, though the exact risk associated with radiation is unclear [60,61].

The value of extracolonic findings detected by CT colonography is mixed, with substantial costs associated with incidental findings, but occasional important extracolonic findings are detected, such as asymptomatic cancers and large abdominal aortic aneurysms. As a final point, the ACG is also concerned about the potential impact of CT colonography on adherence with follow-up colonoscopy and thus on polypectomy rates. Thus, if CT colonography substantially improves adherence, it should improve polypectomy rates and thereby reduce CRC, even if only large polyps are detected and referred for colonoscopy. On the other hand, if CT colonography largely displaces patients who would otherwise be willing to undergo colonoscopy, then polypectomy rates will fall substantially, which could significantly increase the CRC incidence [62]. Thus, for multiple reasons and pending additional study, CT colonography should be offered to patients who decline colonoscopy. It should be noted that CT colonography should only be offered for the purposes of CRC screening and should not be used for diagnostic workup of symptoms (eg, patient with active bleeding or inflammatory bowel disease).

  • When should screening begin?

The American College of Gastroenterology continues to recommend that screening begin at age 50 years in average-risk persons (ie, those without a family history of colorectal neoplasia), except for African Americans, in whom it should begin at age 45 years [43]. The USPSTF does not currently provide specific recommendations based on race or ethnicity, but certain other subgroups of the average-risk population might warrant initiation of screening at an earlier or later age, depending on their risk. For example, the incident risk of CRC has been described to be greater in men than women [63]. In reviewing the literature, the writing committee also identified heavy cigarette smoking and obesity as linked to an increased risk of CRC and to the development of CRC at an earlier age.

For patients with a family history of CRC or adenomatous polyps, the 2008 MSTF guideline recommends initiation of screening at age 40 [11]. The American College of Gastroenterology recommendations for screening in patients with a family history are shown in Table 1. From a practical perspective, many clinicians have found that patients are often not aware of whether their first-degree relatives had advanced adenomas vs. small tubular adenomas, or whether their family members had non-neoplastic vs. neoplastic polyps. Given these difficulties, the American College of Gastroenterology now recommends that adenomas only be counted as equal to a family history of cancer when there is a clear history, or medical report containing evidence, or other evidence to indicate that family members had advanced adenomas (an adenoma ≥ 1 cm in size, or with high-grade dysplasia, or with villous elements) [43]. Continuation of the old recommendation to screen first-degree relatives of patients with only small tubular adenomas could result in most of the population being screened at age 40, with doubtful benefit.

  • What are screening considerations in patients with genetic syndromes?

Patients with features of an inherited CRC syndrome should be advised to pursue genetic counseling with a licensed genetic counselor and, if appropriate, genetic testing. Individuals with FAP should undergo adenomatous polyposis coli (APC) mutation testing and, if negative, MYH mutation testing. Patients with FAP or at risk of FAP based upon family history should undergo annual colonoscopy until colectomy is deemed by both physician and patient as the best treatment [64]. Patients with a retained rectum after total colectomy and ileorectal anastomosis, ileal pouch, after total proctocolectomy and ileal pouch anal anastomosis, or stoma after total proctocolectomy and end ileostomy, should undergo endoscopic assessment approximately every 6 to 12 months after surgery, depending on the polyp burden seen. Individuals with oligopolyposis (< 100 colorectal polyps) should be sent for genetic counseling, consideration of APC and MYH mutation testing, and individualized colonoscopy surveillance depending on the size, number, and pathology of polyps seen. Upper endoscopic surveillance is recommended in individuals with FAP, but there are no established guidelines for endoscopic surveillance in MAP (MYH-associated polyposis) [43].

Patients who meet the Bethesda criteria for HNPCC [65] can be screened by 2 different mechanisms. One is a DNA-based test for microsatellite instability of either the patient’s or a family member’s tumor. The other mechanism is to assess by immunohistochemical staining for evidence of mismatch repair proteins (eg, MLH1, MSH2, MSH6). In those patients in whom deleterious mutations are found, the affected individual should undergo colonoscopy every 2 years beginning at age 20 to 25 years until age 40 years, then annually thereafter [43]. If genetic testing is negative (ie, no deleterious mutation is found), but the patient is still felt to clinically have Lynch syndrome, then they should still be surveyed in the same way.

Case Continued

The physician recommends colonoscopy as the screening modality as it is the most efficient and accurate way of finding precancerous lesions and the most effective way of preventing CRC by removing precancerous lesions. He also explains that because the patient’s father developed CRC after the age of 60, this does not place the patient in a higher risk category and he can follow screening recommendations for “average-risk” individuals.

Screening

The patient undergoes colonoscopy. Two 5-mm adenomas in the transverse colon are detected and removed.

  • When should he have a repeat colonoscopy?

Surveillance Intervals

New data have recently emerged on the risk of interval cancer after colonoscopy. The overall rate of interval cancer is estimated to be 1.1–2.7 per 1000 person-years of follow-up. There are several reasons that may account for why patients develop interval cancers: (1) important lesions may be missed at baseline colonoscopy, (2) adenomas may be incompletely removed at the time of baseline colonoscopy, and (3) interval CRC may be biologically different or more aggressive than prevalent CRC. In order to minimize the risk of interval cancer development, it is important to perform a high-quality baseline screening colonoscopy examination as this is associated with lowering the risk of interval cancer [66]. A high-quality colonoscopy entails completion of the procedure to the cecum (with photodocumentation of the appendiceal orifice and ileocecal valve) with careful inspection of folds including adequate bowel cleanliness and a withdrawal time > 6 minutes.

The MSTF guidelines for surveillance after screening and polypectomy were published in 2006 [67], with an update in 2012 [66]. Their recommendations on surveillance colonoscopy are based on the predication that the initial colonoscopy is high quality and are summarized in Table 2 and discussed below.

Baseline Colonoscopy Findings

No Polyps

Several prospective observational studies in different populations have shown that the risk of advanced adenomas within 5 years after negative findings on colonoscopy is low (1.3%–2.4%) relative to the rate on initial screening examination (4%–10%) [68–73]. In these studies, interval cancers were rare within 5 years. A sigmoidoscopy randomized controlled trial performed in the United Kingdom demonstrated a reduction in CRC incidence and mortality at 10 years in patients who received one-time sigmoidoscopy compared with controls—a benefit limited to the distal colon [46]. This is the first randomized study to show the effectiveness of endoscopic screening, an effect that appears to have at least a 10-year duration [74]. Thus, in patients who have a baseline colonoscopic evaluation without any adenomas or polyps and are average-risk individuals, the recommendation for the next examination is in 10 years [66].

Distal Hyperplastic Polyps < 10 mm

There is considerable evidence that patients with only rectal or sigmoid hyperplastic polyps (HPs) appear to represent a low-risk cohort. Studies have focused on whether the finding in the distal colon was a marker of risk for advanced neoplasia elsewhere and most studies show no such relationship [67]. Prior and current evidence suggests that distal HPs <10 mm are benign without neoplastic potential. If the most advanced lesions at baseline colonoscopy are distal HPs <10 mm, the interval for colonoscopic follow-up should be 10 years [66].

1-2 Tubular Adenomas < 10 mm

Prior evidence suggested that patients with low-risk adenomas (<10 mm, no villous histology or high-grade dysplasia) had a lower risk of developing advanced adenomas during follow-up compared with patients with high risk adenomas (≥ 10mm, villous histology or high -grade dysplasia). At that time in 2006, consensus on the task force was that an interval of 5 years would be acceptable in this low-risk group [75]. Data published since 2006 endorse the assessment that patients with 1–2 tubular adenomas with low-grade dysplasia <10 mm represent a low-risk group. Three new studies suggest that this group may have only a small, nonsignificant increase in risk of advanced neoplasia within 5 years compared with individuals with no baseline neoplasia. The evidence now supports a surveillance interval of longer than 5 years for most patients and can be extended to 10 years based on the quality of the preparation and colonoscopy [66].

3–10 Tubular Adenomas

Two independent meta-analyses in 2006 found that patients with 3 or more adenomas at baseline had an increased RR for adenomas during surveillance, ranging from 1.7 to 4.8 [47,75]. New information from the VA study and the National Cancer Institute Pooling Project also support these prior findings. Patients with 3 or more adenomas have a level of risk for advanced neoplasia similar to other patients with advanced neoplasia (adenoma >10 mm, adenoma with high grade dysplasia) and thus, repeat examination should be performed in 3 years [66,68,76].

> 10 Adenomas

Only a small proportion of patients undergoing screening colonoscopy will have >10 adenomas. The 2006 guidelines for colonoscopy surveillance after polypectomy noted that such patients should be considered for evaluation of hereditary CRC syndromes [67]. Early follow-up surveillance colonoscopy is based on clinical judgment because there is little evidence to support a firm recommendation. At present, the recommendation is to consider follow-up in less than 3 years after a baseline colonoscopy [66].

1 or More Tubular Adenomas ≥ 10mm

The 2006 MSTF guideline reviewed data related to adenoma size, demonstrating that most studies showed a 2- to 5-fold increased risk of advanced neoplasia during follow-up if the baseline examination had one or more adenomas ≥ 10 mm [67]. Newer, additional data shows that patients with one or more adenomas ≥ 10 mm have an increased risk of advanced neoplasia during surveillance compared with those with no neoplasia or small (< 10 mm) adenomas [68,76]. Thus, the recommendations remains that repeat examination should be performed in 3 years [66]. If there is question about complete removal of an adenoma (ie, piecemeal resection), early follow-up colonoscopy is warranted [66].

1 or More Villous Adenomas

The 2006 MSTF guideline considers adenomas with villous histology to be high risk [67]. The NCI Pooling Project analyzed polyp histology as a risk factor for development of interval advanced neoplasia. Compared with patients with tubular adenomas, those with baseline polyp(s) showing adenomas with villous or tubulovillous histology (TVA) had increased risk of advanced neoplasia during follow-up (16.8% vs 9.7%; adjusted OR, 1.28; 95% CI, 1.07–1.52) [76]. Patients with one or more adenomas with villous histology were also found to have an increased risk of advanced neoplasia during surveillance compared with those with no neoplasia or small (<10 mm) tubular adenomas. Thus, the recommendation remains that repeat examination should be performed in 3 years [66].

Adenoma with High-Grade Dysplasia (HGD)

The 2006 MSTF guideline concluded that the presence of HGD in an adenoma was associated with both villous histology and larger size, which are both risk factors for advanced neoplasia during surveillance [67]. In a univariate analysis from the NCI Pooling Project, HGD was strongly associated with risk of advanced neoplasia during surveillance (OR, 1.77; 95% CI, 1.41–2.22) [76]. Thus, the recommendation remains that repeat examination should be performed in 3 years [66].

Serrated Lesions

A total of 20% to 30% of CRCs arise through a molecular pathway characterized by hypermethylation of genes, known as CgG Island Methylator Phenotype (CIMP) [77]. Precursors are believed to be serrated polyps. Tumors in this pathway have a high frequency of BRAF mutation, and up to 50% are microsatellite unstable. CIMP-positive tumors are overrepresented in interval cancers, particularly in the proximal colon. The principal precursor of hypermethylated cancers is probably the sessile serrated polyp (synonymous with sessile serrated adenoma). These polyps are difficult to detect at endoscopy. They may be the same color as surrounding colonic mucosa, have indiscrete edges, are nearly always flat or sessile, and may have a layer of adherent mucus and obscure the vascular pattern.


From the Boston University School of Medicine, Boston, MA.

 

Abstract

  • Objective: To review recommendations for colorectal cancer (CRC) screening.
  • Methods: Review of the literature.
  • Results: In the United States, CRC is the third most commonly diagnosed cancer and the third leading cause of cancer death. CRC screening can reduce mortality through the detection of early-stage disease and the detection and removal of adenomatous polyps. There are several modalities for CRC screening, with current technology falling into 2 general categories: stool tests, which include tests for occult blood or exfoliated DNA; and structural exams, which include flexible sigmoidoscopy, colonoscopy, double-contrast barium enema, and CT colonography. The preferred CRC prevention test for average-risk individuals is colonoscopy starting at age 50 with subsequent examinations every 10 years. Patients unwilling to undergo screening colonoscopy may be offered flexible sigmoidoscopy, CT colonography, or the fecal immunochemical test (FIT). Surveillance examinations should occur based on polyp findings on the index colonoscopy. There is no recommendation to continue screening after age 75, though physicians can make a determination based on a patient's health and risk/benefit profile. Current guidelines recommend against offering screening to patients over age 85.
  • Conclusion: Increasing access to and utilization of CRC screening tests is likely to lead to improvements in mortality reduction, as only about half of people aged 50 or older report having received CRC testing consistent with current guidelines.

In the United States, colorectal cancer (CRC) is the third most commonly diagnosed cancer and the third leading cause of cancer death in both men and women [1]. In 2014, an estimated 136,830 people were diagnosed with CRC and about 50,310 people died of the disease [2]. Colorectal cancer usually develops slowly over a period of 10 to 15 years. The tumor typically begins as a noncancerous polyp, classically an adenomatous polyp or adenoma, though fewer than 10% of adenomas will progress to cancer [3]. Adenomas are common; an estimated one-third to one-half of all individuals will eventually develop 1 or more adenomas [4,5]. In the United States, the lifetime risk of being diagnosed with CRC is approximately 5% for both men and women [6]. Incidence rates for CRC increase with age, with an incidence rate more than 15 times higher in adults aged 50 years and older compared with those aged 20 to 49 years [7].

Certain demographic subgroups have been shown to be at higher risk. Overall, CRC incidence and mortality rates are about 35% to 40% higher in men than in women. The reasons for this are not completely understood but likely reflect complex interactions between gender-related differences in exposure to hormones and risk factors [8]. CRC incidence and mortality rates are highest in African American men and women; incidence rates are 20% higher and mortality rates are about 45% higher than those in whites. Prior to 1989, incidence rates were predominantly higher in white men than in African American men and were similar for women of both races. Since that time, although incidence rates have declined as a whole [9], incidence rates have been higher for African Americans than for whites in both men and women. This crossover likely reflects a combination of greater access to and utilization of recommended screening tests among whites (resulting in detection and removal of precancerous polyps), as well as racial differences in trends for CRC risk factors [10].

CRC screening can reduce mortality through the detection of early-stage disease and the detection and removal of adenomatous polyps [11]. Increasing access to and utilization of CRC screening tests is likely to lead to improvements in mortality reduction, as only about half of people aged 50 or older report having received CRC testing consistent with current guidelines [1].

Case Study

Initial Presentation

A 55-year-old white male presents for a routine visit and asks about colon cancer screening. His father was diagnosed with colon cancer at the age of 78. Overall, he feels well and does not have any particular complaints. His bowel habits are normal and he denies melena and hematochezia. His past medical history is significant for diabetes, hypertension, and obesity. He is a former smoker and has a few alcoholic drinks on weekends. His physical exam is unremarkable. Results of recent blood work are normal and there is no evidence of anemia.

  • What are this patient’s risk factors for developing colon cancer?

Risk Factors for CRC

There are numerous factors that are thought to influence risk for CRC. Nonmodifiable risk factors include a personal or family history of CRC or adenomatous polyps, and a personal history of chronic inflammatory bowel disease. Modifiable risk factors that have been associated with an increased risk of CRC in epidemiologic studies include physical inactivity, obesity, high consumption of red or processed meats, smoking, and moderate-to-heavy alcohol consumption. In fact, a prospective study showed that up to 23% of colorectal cancers were considered to be potentially avoidable by adhering to multiple healthy lifestyle recommendations including maintaining a healthy weight, being physically active at least 30 minutes per day, eating a healthy diet, and avoiding smoking and drinking excessive amounts of alcohol [12].

People with a first-degree relative (parent, sibling, or offspring) who has had CRC have 2 to 3 times the risk of developing the disease compared with individuals with no family history; if the relative was diagnosed at a young age or if there is more than 1 affected relative, risk increases to 3 to 6 times that of the general population [13,14]. About 5% of patients with CRC have a well-defined genetic syndrome that causes the disease [15]. The most common of these is Lynch syndrome (also known as hereditary nonpolyposis colorectal cancer or HNPCC), which accounts for 2% to 4% of all CRC cases [16]. Although individuals with Lynch syndrome are predisposed to numerous types of cancer, risk of CRC is highest. A recent study of CRC in 147 Lynch syndrome families in the United States found lifetime risk of CRC to be 66% in men and 43% in women, with a median age at diagnosis of 42 years and 47 years, respectively [17]. Familial adenomatous polyposis (FAP) is the second most common predisposing genetic syndrome; for these individuals, the lifetime risk of CRC approaches 100% without intervention (eg, colectomy) [16].
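As a back-of-the-envelope illustration of what these relative risks imply in absolute terms, the sketch below scales the roughly 5% average lifetime risk by the reported multipliers. This treats relative risk as if it applied directly to lifetime risk, which is only an approximation.

```python
# Rough absolute lifetime CRC risk implied by family-history relative risks.
# Simplifying assumption: the relative risk scales the ~5% average lifetime
# risk [6] directly, which is approximate at best.
BASELINE_LIFETIME_RISK = 0.05

scenarios = {
    "one first-degree relative with CRC": (2, 3),                 # 2-3x [13,14]
    "relative diagnosed young, or >1 affected relative": (3, 6),  # 3-6x [13,14]
}
for label, (lo, hi) in scenarios.items():
    print(f"{label}: ~{BASELINE_LIFETIME_RISK * lo:.0%} to "
          f"{BASELINE_LIFETIME_RISK * hi:.0%} lifetime risk")
```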

People who have inflammatory bowel disease of the colon (both ulcerative colitis and Crohn’s disease) have an increased risk of developing CRC that correlates with the extent and the duration of the inflammation [18]. It is estimated that 18% of patients with a 30-year history of ulcerative colitis will develop CRC [19]. In addition, several studies have found an association between diabetes and increased risk of CRC [20,21]. Though adult-onset type 2 diabetes (the most common type) and CRC share similar risk factors, including physical inactivity and obesity, a positive association between diabetes and CRC has been found even after accounting for physical activity, body mass index, and waist circumference [22].

Being overweight or obese is also associated with a higher risk of CRC, with stronger associations more consistently observed in men than in women. Obesity increases the risk of CRC independent of physical activity. Abdominal obesity (measured by waist circumference) may be a more important risk factor for colon cancer than overall obesity in both men and women [23–25]. Diet and lifestyle strongly influence CRC risk; however, research on the role of specific dietary elements on CRC risk is still accumulating. Several studies, including one by the American Cancer Society, have found that high consumption of red and/or processed meat increases the risk of both colon and rectal cancer [23,26,27]. Further analyses indicate that the association between CRC and red meat may be related to the cooking process, because a higher risk of CRC is observed particularly among those individuals who consume meat that has been cooked at a high temperature for a long period of time [28]. In contrast to findings from earlier research, more recent large, prospective studies do not indicate a major relationship between CRC and vegetable, fruit, or fiber consumption [28,29]. However, some studies suggest that people with very low fruit and vegetable intake are at above-average risk for CRC [30,31]. Consumption of milk and calcium may decrease the risk of developing CRC [28,29,32].

In November 2009, the International Agency for Research on Cancer reported that there is now sufficient evidence to conclude that tobacco smoking causes CRC [33]. Colorectal cancer has been linked to even moderate alcohol use. Individuals who have a lifetime average of 2 to 4 alcoholic drinks per day have a 23% higher risk of CRC than those who consume less than 1 drink per day [34].

Protective Factors

One of the most consistently reported relationships between colon cancer risk and behavior is the protective effect of physical activity [35]. Based on these findings, as well as the numerous other health benefits of regular physical activity, the American Cancer Society recommends engaging in at least moderate activity for 30 minutes or more on 5 or more days per week.

Accumulating research suggests that aspirin-like drugs, postmenopausal hormones, and calcium supplements may help prevent CRC. Extensive evidence suggests that long-term, regular use of aspirin and other nonsteroidal anti-inflammatory drugs (NSAIDs) is associated with lower risk of CRC. The American Cancer Society does not currently recommend use of these drugs as chemoprevention because of the potential side effects of gastrointestinal bleeding from aspirin and other traditional NSAIDs and heart attacks from selective cyclooxygenase-2 (COX-2) inhibitors. However, people who are already taking NSAIDs for chronic arthritis or aspirin for heart disease prevention may have a lower risk of CRC as a positive side effect [36,37].

There is substantial evidence that women who use postmenopausal hormones have lower rates of CRC than those who do not. A decreased risk of CRC is especially evident in women who use hormones long-term, although the risk returns to that of nonusers within 3 years of cessation. Despite its positive effect on CRC risk, the use of postmenopausal hormones increases the risk of breast and other cancers as well as cardiovascular disease, and therefore it is not recommended for the prevention of CRC. At present, the American Cancer Society does not recommend any medications or supplements to prevent CRC because of uncertainties about their effectiveness, appropriate dosing, and potential toxicity [38–40].

Case Continued

The physician tells the patient that there are several environmental factors that may predispose him to developing CRC. He recommends that the patient follow a healthy lifestyle, including eating 5 servings of fruits and vegetables daily, minimizing consumption of red meats, exercising for 30 minutes at least 5 days per week, drinking only moderate amounts of alcohol, and continuing to take his aspirin in the setting of his diabetes. He also asks the patient if he would be interested in talking about weight loss and working together to make a plan.

The patient is appreciative of this information and wants to know what CRC screening test the physician recommends.

  • What screening test should be recommended?

Screening Options

There are several modalities for CRC screening, with current technology falling into 2 general categories: stool tests, which include tests for occult blood or exfoliated DNA; and structural exams, which include flexible sigmoidoscopy, colonoscopy, double-contrast barium enema (DCBE), and computed tomographic (CT) colonography. Stool tests are best suited for the detection of CRC, although they also will deliver positive findings for some advanced adenomas, while the structural exams can achieve both detection and prevention of CRC through identification and removal of adenomatous polyps [41]. These tests may be used alone or in combination to improve sensitivity or, in some instances, to ensure a complete examination of the colon if the initial test cannot be completed.

In principle, all adults should have access to the full range of options for CRC screening, and the availability of lower-cost, less invasive options in most practice settings is a public health advantage [11]. However, the availability of multiple testing options can overwhelm the primary care provider and presents challenges for practices in trying to support an office policy that can manage a broad range of testing choices, their follow-up requirements, and shared decision making related to the options. Shared decision making around CRC screening options is both demanding and time consuming and is complicated by the different characteristics of the tests and the test-specific requirements for individuals undergoing screening [42].

Recommended Tests

The joint guideline on screening for CRC from the American Cancer Society, the US Multi-Society Task Force on Colorectal Cancer, and the American College of Radiology (the MSTF guideline) [11] is of the strong opinion that tests designed to detect early cancer and prevent cancer through the detection and removal of adenomatous polyps (the structural exams) should be encouraged if resources are available and patients are willing to undergo an invasive test [11]. In clinical settings in which economic issues preclude primary screening with colonoscopy, or for patients who decline invasive tests, clinicians may offer stool-based testing. However, providers and patients should understand that these tests are less likely to prevent cancer compared with the invasive tests, they must be repeated at regular intervals to be effective (ie, programmatic sensitivity), and if the test is abnormal, a colonoscopy will be needed to follow up. Therefore, if patients are not willing to have repeated testing or pursue colonoscopy if the test is abnormal, these programs will not be effective and should not be recommended [11].
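The idea of programmatic sensitivity can be made concrete with a short calculation. The sketch below is illustrative only: it assumes each annual round of a stool test detects a prevalent lesion independently with some per-round sensitivity, which overstates reality (bleeding is intermittent and rounds are correlated), and the per-round values shown are hypothetical.

```python
# Cumulative ("programmatic") sensitivity of an annually repeated stool test,
# under the simplifying assumption that test rounds are independent.
def programmatic_sensitivity(per_round_sensitivity: float, rounds: int) -> float:
    return 1 - (1 - per_round_sensitivity) ** rounds

for s in (0.3, 0.5):  # hypothetical per-round sensitivities
    cumulative = [round(programmatic_sensitivity(s, n), 2) for n in (1, 2, 3)]
    print(f"per-round {s:.0%}: after 1, 2, 3 rounds -> {cumulative}")
# per-round 30%: [0.3, 0.51, 0.66]; per-round 50%: [0.5, 0.75, 0.88]
```

This is why a stool-based program is only effective when patients actually repeat the test at the recommended interval.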

At this time, colonoscopy every 10 years, beginning at age 50, is the American College of Gastroenterology-preferred CRC screening strategy [43]. When patients are unwilling to undergo colonoscopy for screening purposes, they should be offered flexible sigmoidoscopy every 5-10 years, CT colonography every 5 years, or the fecal immunochemical test (FIT) [43] (Table 1). The US Preventive Services Task Force (USPSTF) recommends screening for colorectal cancer using fecal occult blood testing, sigmoidoscopy, or colonoscopy in adults, beginning at age 50 years and continuing until age 75 years [44].

Stool-Based Testing

Stool blood tests are conventionally known as fecal occult blood tests (FOBT) because they are designed to detect the presence of occult blood in stool. FOBT falls into 2 primary categories based on the detected analyte: guaiac-based and FIT. Blood in the stool is a nonspecific finding but may originate from CRC or larger (> 1 to 2 cm) polyps. Because small adenomatous polyps do not tend to bleed and bleeding from cancers or large polyps may be intermittent or undetectable in a single sample of stool, the proper use of stool blood tests requires annual testing that consists of collecting specimens (2 or 3, depending on the product) from consecutive bowel movements [45–47].

Guaiac-based FOBT

Guaiac-based FOBT (gFOBT) is the most common stool blood test for CRC screening and the only CRC screening test for which there is evidence of efficacy from randomized controlled trials [11]. The usual gFOBT protocol consists of collecting 2 samples from each of 3 consecutive bowel movements at home. Prior to testing with a sensitive guaiac-based test, individuals usually will be instructed to avoid aspirin and other NSAIDs, vitamin C, red meat, poultry, fish, and some raw vegetables because of diet-test interactions that can increase the risk of both false-positive and false-negative (specifically, vitamin C) results [48]. Collection of all 3 samples is important because test sensitivity improves with each additional stool sample [41]. Three large randomized controlled trials with gFOBT have demonstrated that screened patients have cancers detected at an early and more curable stage than unscreened patients. Over time (8 to 13 years), each of the trials demonstrated significant reductions in CRC mortality of 15% to 33% [49–51]. However, the reported sensitivity of a single gFOBT varies considerably [52].

FIT

FIT has several technological advantages when compared with gFOBT. FIT detects human globin, a protein that along with heme constitutes human hemoglobin. Thus, FIT is more specific for human blood than guaiac-based tests, which rely on detection of peroxidase in human blood and also react to the peroxidase that is present in dietary constituents such as rare red meat, cruciferous vegetables, and some fruits [53]. Furthermore, unlike gFOBT, FIT is not subject to false-negative results in the presence of high-dose vitamin C supplements, which block the peroxidase reaction. In addition, because globin is degraded by digestive enzymes in the upper gastrointestinal tract, FIT is also more specific for lower gastrointestinal bleeding, thus improving the specificity for CRC. Finally, the sample collection process for some variants of FIT is less demanding than for gFOBT, requiring fewer samples or less direct handling of stool, which may increase FIT's appeal. Although FIT has superior performance characteristics when compared with older guaiac-based Hemoccult II cards [54–56], the spectrum of benefits, limitations, and harms is similar to that of a gFOBT with high sensitivity [41]. With respect to adherence, the first 2 randomized controlled trials comparing FIT with guaiac-based testing showed gains in adherence of 10% and 12% with FIT [57,58]. Therefore, FIT is preferred over Hemoccult Sensa and is the preferred annual cancer detection test when colonoscopy is not an option [43]. The American College of Gastroenterology supports the joint guideline recommendation [11] that older guaiac-based fecal occult blood testing be abandoned as a method for CRC screening.

sDNA

Fecal DNA testing uses knowledge of molecular genomics and provides the basis of a newer method of CRC screening that tests stool for the presence of known DNA alterations in the adenoma-carcinoma sequence of colorectal carcinogenesis [11]. Three different types of fecal DNA testing kits have been evaluated. The sensitivity for cancer in each version was superior to that of traditional guaiac-based occult blood testing, but sensitivities ranged from 52% to 87%, with specificities ranging from 82% to 95%. Based on the accumulation of evidence since the last update of the joint guideline, the joint guideline panel concluded that there now are sufficient data to include sDNA as an acceptable option for CRC screening [11].

As for overall recommendations for stool-based testing, given the more extensive supporting data (compared with Hemoccult Sensa) and the high cost of fecal DNA testing, the American College of Gastroenterology recommends FIT as the preferred cancer detection test in cases where colonoscopy is not an option [43].

Invasive Tests Other than Colonoscopy

The use of flexible sigmoidoscopy for CRC screening is supported by high-quality case-control and cohort studies [46]. The chief advantage of flexible sigmoidoscopy is that it can be performed with a simple preparation (2 enemas), without sedation, and by a variety of practitioners in diverse settings. The main limitation of the procedure is that it does not examine the entire colon but only the rectum, sigmoid, and descending colon. The effectiveness of a flexible sigmoidoscopy program is based on the assumption that if an adenoma is detected during the procedure, the patient would be referred for colonoscopy to examine the entire colon.

DCBE is an imaging modality that can evaluate the entire colon in almost all cases and can detect most cancers and the majority of significant polyps. However, its lower sensitivity for significant adenomas compared with colonoscopy may result in less favorable outcomes regarding CRC morbidity and mortality. Double-contrast barium enema is no longer recommended as an alternative CRC prevention test because its use has declined dramatically and because its effectiveness for polyp detection is lower than that of CT colonography [43].

CT Colonography

CT colonography every 5 years is endorsed as an alternative to colonoscopy every 10 years because of its performance in the American College of Radiology Imaging Network (ACRIN) trial 6664 (also known as the National CT Colonography Trial) [59]. The principal performance feature that justifies inclusion of CT colonography as a viable alternative for patients who decline colonoscopy is its 90% sensitivity for polyps ≥ 1 cm in size in that multicenter US trial [59]. Notably, 25% of the radiologists tested for entry into the trial performed poorly and were excluded from participation, so lower sensitivity might be expected in actual clinical practice. CT colonography probably has a lower risk of perforation than colonoscopy in most settings, but for several reasons it is not considered the equivalent of colonoscopy as a screening strategy. First, the evidence supporting an effect of endoscopic screening on prevention of incident CRC and mortality is overwhelming compared with that for CT colonography. Second, CT colonography cannot adequately detect polyps 5 mm and smaller, which constitute 80% of colorectal neoplasms and whose natural history is still not understood; this necessitates performance of the test at 5-year rather than 10-year intervals [43]. Finally, false-positives are common: the specificity for polyps ≥ 1 cm in size was only 86% in the National CT Colonography Trial, with a positive predictive value of 23% [59]. The American College of Gastroenterology recommends that asymptomatic patients be informed of the possible radiation risk associated with single or repeated CT colonography studies, though the exact risk associated with this radiation is unclear [60,61].
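The relationship among the reported sensitivity, specificity, and positive predictive value follows from Bayes' rule. The sketch below is illustrative only: the ~5% prevalence of polyps ≥ 1 cm is an assumed round figure for a screening population, not a number reported by the trial, chosen because it reproduces a PPV close to the reported 23%.

```python
# Illustrative PPV calculation for CT colonography via Bayes' rule.
sensitivity = 0.90   # reported for polyps >= 1 cm [59]
specificity = 0.86   # reported for polyps >= 1 cm [59]
prevalence = 0.05    # ASSUMED prevalence of polyps >= 1 cm among screenees

true_positives = sensitivity * prevalence
false_positives = (1 - specificity) * (1 - prevalence)
ppv = true_positives / (true_positives + false_positives)
print(f"PPV ~ {ppv:.0%}")  # ~25%, in line with the reported 23%
```

The low PPV despite reasonable specificity reflects the low prevalence of large polyps: most positive results in a screening population are false positives.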

The value of extracolonic findings detected by CT colonography is mixed: incidental findings carry substantial workup costs, but occasionally important extracolonic findings are detected, such as asymptomatic cancers and large abdominal aortic aneurysms. As a final point, the ACG is also concerned about the potential impact of CT colonography on adherence with follow-up colonoscopy and thus on polypectomy rates. If CT colonography substantially improves adherence, it should increase polypectomy rates and thereby reduce CRC, even if only large polyps are detected and referred for colonoscopy. On the other hand, if CT colonography largely displaces patients who would otherwise be willing to undergo colonoscopy, polypectomy rates will fall substantially, which could significantly increase CRC incidence [62]. Thus, for multiple reasons and pending additional study, CT colonography should be offered to patients who decline colonoscopy. It should be noted that CT colonography should be offered only for the purposes of CRC screening and should not be used for the diagnostic workup of symptoms (eg, in a patient with active bleeding or inflammatory bowel disease).

  • When should screening begin?

The American College of Gastroenterology continues to recommend that screening begin at age 50 years in average-risk persons (ie, those without a family history of colorectal neoplasia), except for African Americans, in whom it should begin at age 45 years [43]. The USPSTF does not currently provide specific recommendations based on race or ethnicity, but certain other subgroups of the average-risk population might warrant initiation of screening at an earlier or later age, depending on their risk. For example, the incidence of CRC has been described as greater in men than in women [63]. In reviewing the literature, the writing committee also identified heavy cigarette smoking and obesity as linked to an increased risk of CRC and to the development of CRC at an earlier age.

For patients with a family history of CRC or adenomatous polyps, the 2008 MSTF guideline recommends initiation of screening at age 40 [11]. The American College of Gastroenterology recommendations for screening in patients with a family history are shown in Table 1. From a practical perspective, many clinicians have found that patients are often not aware of whether their first-degree relatives had advanced adenomas vs. small tubular adenomas, or whether their family members had non-neoplastic vs. neoplastic polyps. Given these difficulties, the American College of Gastroenterology now recommends that adenomas be counted as equal to a family history of cancer only when there is a clear history, a medical report, or other evidence indicating that family members had advanced adenomas (an adenoma ≥ 1 cm in size, or with high-grade dysplasia, or with villous elements) [43]. Continuing the old recommendation to screen first-degree relatives of patients with only small tubular adenomas could result in most of the population being screened at age 40, with doubtful benefit.
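As a compact restatement of the initiation ages discussed above, the sketch below encodes the simple cases (average risk at 50, African Americans at 45, a qualifying family history at 40). It is illustrative only: the function name and boolean inputs are hypothetical simplifications, and the full family-history rules in Table 1 are more detailed than two flags.

```python
# Minimal sketch of screening initiation ages as summarized in the text [11,43].
# The boolean inputs are hypothetical simplifications of the guideline criteria.
def screening_start_age(african_american: bool,
                        family_history_of_crc_or_advanced_adenoma: bool) -> int:
    if family_history_of_crc_or_advanced_adenoma:
        return 40  # 2008 MSTF recommendation for a qualifying family history [11]
    if african_american:
        return 45  # ACG recommendation [43]
    return 50      # average risk

print(screening_start_age(african_american=False,
                          family_history_of_crc_or_advanced_adenoma=False))  # 50
```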

  • What are screening considerations in patients with genetic syndromes?

Patients with features of an inherited CRC syndrome should be advised to pursue genetic counseling with a licensed genetic counselor and, if appropriate, genetic testing. Individuals with FAP should undergo adenomatous polyposis coli (APC) mutation testing and, if negative, MYH mutation testing. Patients with FAP, or at risk of FAP based upon family history, should undergo annual colonoscopy until colectomy is deemed by both physician and patient to be the best treatment [64]. Patients with a retained rectum (after total colectomy and ileorectal anastomosis), an ileal pouch (after total proctocolectomy and ileal pouch-anal anastomosis), or a stoma (after total proctocolectomy and end ileostomy) should undergo endoscopic assessment approximately every 6 to 12 months after surgery, depending on the polyp burden seen. Individuals with oligopolyposis (< 100 colorectal polyps) should be sent for genetic counseling, consideration of APC and MYH mutation testing, and individualized colonoscopy surveillance depending on the size, number, and pathology of polyps seen. Upper endoscopic surveillance is recommended in individuals with FAP, but there are no established guidelines for endoscopic surveillance in MAP (MYH-associated polyposis) [43].

Patients who meet the Bethesda criteria for HNPCC [65] can be screened by 2 different mechanisms. One is a DNA-based test for microsatellite instability of either the patient's or a family member's tumor. The other is immunohistochemical staining of tumor tissue for loss of mismatch repair protein expression (eg, MLH1, MSH2, MSH6). In patients in whom a deleterious mutation is found, colonoscopy should be performed every 2 years beginning at age 20 to 25 years until age 40 years, then annually thereafter [43]. If genetic testing is negative (ie, no deleterious mutation is found) but the patient is still felt clinically to have Lynch syndrome, the same surveillance should be followed.
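Written out explicitly, the mutation-carrier schedule above produces a concrete list of examination ages. The sketch below assumes a start age of 25 and an arbitrary stopping age of 75 for illustration; both would in practice be individualized.

```python
# Colonoscopy ages for a Lynch syndrome mutation carrier per the schedule in
# the text [43]: every 2 years from the start age until 40, then annually.
def lynch_surveillance_ages(start_age: int = 25, stop_age: int = 75) -> list[int]:
    biennial = list(range(start_age, 40, 2))   # every 2 years until age 40
    annual = list(range(40, stop_age + 1))     # annually thereafter
    return biennial + annual

print(lynch_surveillance_ages()[:10])  # [25, 27, 29, 31, 33, 35, 37, 39, 40, 41]
```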

Case Continued

The physician recommends colonoscopy as the screening modality as it is the most efficient and accurate way of finding precancerous lesions and the most effective way of preventing CRC by removing precancerous lesions. He also explains that because the patient’s father developed CRC after the age of 60, this does not place the patient in a higher risk category and he can follow screening recommendations for “average-risk” individuals.

Screening

The patient undergoes colonoscopy. Two 5-mm adenomas in the transverse colon are detected and removed.

  • When should he have a repeat colonoscopy?

Surveillance Intervals

New data have recently emerged on the risk of interval cancer after colonoscopy. The overall rate of interval cancer is estimated to be 1.1–2.7 per 1000 person-years of follow-up. Several factors may account for why patients develop interval cancers: (1) important lesions may be missed at baseline colonoscopy, (2) adenomas may be incompletely removed at the time of baseline colonoscopy, and (3) interval CRC may be biologically different or more aggressive than prevalent CRC. To minimize the risk of interval cancer, it is important to perform a high-quality baseline screening colonoscopy, which is associated with a lower risk of interval cancer [66]. A high-quality colonoscopy entails completion of the procedure to the cecum (with photodocumentation of the appendiceal orifice and ileocecal valve), adequate bowel cleanliness, careful inspection of the folds, and a withdrawal time > 6 minutes.

The MSTF guidelines for surveillance after screening and polypectomy were published in 2006 [67], with an update in 2012 [66]. Their recommendations on surveillance colonoscopy are predicated on the initial colonoscopy being of high quality; they are summarized in Table 2 and discussed below.

Baseline Colonoscopy Findings

No Polyps

Several prospective observational studies in different populations have shown that the risk of advanced adenomas within 5 years after negative findings on colonoscopy is low (1.3%–2.4%) relative to the rate on initial screening examination (4%–10%) [68–73]. In these studies, interval cancers were rare within 5 years. A sigmoidoscopy randomized controlled trial performed in the United Kingdom demonstrated a reduction in CRC incidence and mortality at 10 years in patients who received one-time sigmoidoscopy compared with controls—a benefit limited to the distal colon [46]. This is the first randomized study to show the effectiveness of endoscopic screening, an effect that appears to have at least a 10-year duration [74]. Thus, in patients who have a baseline colonoscopic evaluation without any adenomas or polyps and are average-risk individuals, the recommendation for the next examination is in 10 years [66].

Distal Hyperplastic Polyps < 10 mm

There is considerable evidence that patients with only rectal or sigmoid hyperplastic polyps (HPs) appear to represent a low-risk cohort. Studies have focused on whether the finding in the distal colon was a marker of risk for advanced neoplasia elsewhere and most studies show no such relationship [67]. Prior and current evidence suggests that distal HPs <10 mm are benign without neoplastic potential. If the most advanced lesions at baseline colonoscopy are distal HPs <10 mm, the interval for colonoscopic follow-up should be 10 years [66].

1-2 Tubular Adenomas < 10 mm

Prior evidence suggested that patients with low-risk adenomas (<10 mm, no villous histology or high-grade dysplasia) had a lower risk of developing advanced adenomas during follow-up compared with patients with high-risk adenomas (≥ 10 mm, villous histology, or high-grade dysplasia). In 2006, the consensus of the task force was that an interval of 5 years would be acceptable in this low-risk group [75]. Data published since 2006 support the assessment that patients with 1–2 tubular adenomas <10 mm with low-grade dysplasia represent a low-risk group. Three new studies suggest that this group may have only a small, nonsignificant increase in risk of advanced neoplasia within 5 years compared with individuals with no baseline neoplasia. The evidence now supports a surveillance interval of longer than 5 years for most such patients, which can be extended to 10 years based on the quality of the preparation and colonoscopy [66].

3–10 Tubular Adenomas

Two independent meta-analyses in 2006 found that patients with 3 or more adenomas at baseline had an increased relative risk of adenomas during surveillance, ranging from 1.7 to 4.8 [47,75]. New information from the VA study and the National Cancer Institute (NCI) Pooling Project also supports these prior findings. Patients with 3 or more adenomas have a level of risk for advanced neoplasia similar to that of other patients with advanced neoplasia (adenoma > 10 mm, adenoma with high-grade dysplasia); thus, repeat examination should be performed in 3 years [66,68,76].

> 10 Adenomas

Only a small proportion of patients undergoing screening colonoscopy will have >10 adenomas. The 2006 guidelines for colonoscopy surveillance after polypectomy noted that such patients should be considered for evaluation of hereditary CRC syndromes [67]. Early follow-up surveillance colonoscopy is based on clinical judgment because there is little evidence to support a firm recommendation. At present, the recommendation is to consider follow-up in less than 3 years after a baseline colonoscopy [66].

1 or More Tubular Adenomas ≥ 10 mm

The 2006 MSTF guideline reviewed data related to adenoma size, demonstrating that most studies showed a 2- to 5-fold increased risk of advanced neoplasia during follow-up if the baseline examination found one or more adenomas ≥ 10 mm [67]. Newer data show that patients with one or more adenomas ≥ 10 mm have an increased risk of advanced neoplasia during surveillance compared with those with no neoplasia or small (< 10 mm) adenomas [68,76]. Thus, the recommendation remains that repeat examination should be performed in 3 years [66]. If there is a question about complete removal of an adenoma (ie, piecemeal resection), early follow-up colonoscopy is warranted [66].

1 or More Villous Adenomas

The 2006 MSTF guideline considers adenomas with villous histology to be high risk [67]. The NCI Pooling Project analyzed polyp histology as a risk factor for development of interval advanced neoplasia. Compared with patients with tubular adenomas, those whose baseline polyps showed villous or tubulovillous histology had an increased risk of advanced neoplasia during follow-up (16.8% vs 9.7%; adjusted OR, 1.28; 95% CI, 1.07–1.52) [76]. Patients with one or more adenomas with villous histology were also found to have an increased risk of advanced neoplasia during surveillance compared with those with no neoplasia or small (< 10 mm) tubular adenomas. Thus, the recommendation remains that repeat examination should be performed in 3 years [66].

Adenoma with High-Grade Dysplasia (HGD)

The 2006 MSTF guideline concluded that the presence of HGD in an adenoma was associated with both villous histology and larger size, which are both risk factors for advanced neoplasia during surveillance [67]. In a univariate analysis from the NCI Pooling Project, HGD was strongly associated with risk of advanced neoplasia during surveillance (OR, 1.77; 95% CI, 1.41–2.22) [76]. Thus, the recommendation remains that repeat examination should be performed in 3 years [66].

Serrated Lesions

A total of 20% to 30% of CRCs arise through a molecular pathway characterized by hypermethylation of genes, known as the CpG Island Methylator Phenotype (CIMP) [77]. Precursors are believed to be serrated polyps. Tumors in this pathway have a high frequency of BRAF mutation, and up to 50% are microsatellite unstable. CIMP-positive tumors are overrepresented among interval cancers, particularly in the proximal colon. The principal precursor of hypermethylated cancers is probably the sessile serrated polyp (synonymous with sessile serrated adenoma). These polyps are difficult to detect at endoscopy: they may be the same color as the surrounding colonic mucosa, have indistinct edges, are nearly always flat or sessile, and may have a layer of adherent mucus that obscures the vascular pattern.

Recent studies show that proximal colon location or size ≥ 10 mm may be markers of risk for synchronous advanced adenomas elsewhere in the colon [78,79]. Surveillance after colonoscopy was evaluated in one study, which found that the coexistence of serrated polyps and high-risk adenomas (HRA; ie, size ≥ 10 mm, villous histology, or presence of HGD) is associated with a higher risk of advanced neoplasia at surveillance [78]. This study also found that if small proximal serrated polyps are the only finding at baseline, the risk of adenomas during surveillance is similar to that of patients with low-risk adenomas (LRA; ie, 1–2 small adenomas).

The current evidence suggests that size (≥ 10 mm), histology (a sessile serrated polyp is a more significant lesion than an HP; a sessile serrated polyp with cytological dysplasia is more advanced than one without dysplasia), and location (proximal to the sigmoid colon) are risk factors that might be associated with a higher risk of CRC. A sessile serrated polyp ≥ 10 mm and a sessile serrated polyp with cytological dysplasia should be managed like an HRA, with repeat colonoscopy in 3 years. Serrated polyps that are < 10 mm in size and do not have cytological dysplasia may carry lower risk and can be managed like an LRA, with repeat colonoscopy in 5 years [66].
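Taken together, the interval recommendations reviewed above (summarized in Table 2) amount to a mapping from the most advanced finding at baseline colonoscopy to a follow-up interval. The sketch below encodes that mapping as described in the text; the finding labels are hypothetical simplifications, and real decisions also depend on examination quality and on completeness of resection.

```python
# Surveillance intervals (years) keyed to the most advanced baseline finding,
# per the MSTF 2012 recommendations as summarized in the text [66].
SURVEILLANCE_INTERVAL_YEARS = {
    "no polyps": 10,
    "distal hyperplastic polyps <10 mm": 10,
    "1-2 tubular adenomas <10 mm": 5,   # may extend toward 10 with a high-quality exam
    "3-10 tubular adenomas": 3,
    ">10 adenomas": 3,                  # consider <3 years and genetic evaluation
    "adenoma >=10 mm": 3,
    "villous adenoma": 3,
    "adenoma with high-grade dysplasia": 3,
    "sessile serrated polyp >=10 mm or with dysplasia": 3,
    "serrated polyp <10 mm without dysplasia": 5,
}

print(SURVEILLANCE_INTERVAL_YEARS["3-10 tubular adenomas"])  # 3
```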

Follow-up After Surveillance

In a 2009 study, 564 participants underwent 2 surveillance colonoscopies after an index procedure, and 10.3% had high-risk findings at the third study examination. If the second examination showed high-risk findings, results from the first examination added no significant information about the probability of high-risk findings on the third examination (18.2% for high-risk findings on the first examination vs. 20.0% for low-risk findings on the first examination; P = 0.78). If the second examination showed no adenomas, results from the first examination added significant information about the probability of high-risk findings on the third examination (12.3% if the first examination had high-risk findings vs. 4.9% if the first examination had low-risk findings; P = 0.015) [80]. Thus, information from 2 previous colonoscopies appears to be helpful in defining the risk of neoplasia for individual patients; in the future, guidelines might consider accounting for the results of 2 examinations to tailor surveillance intervals.
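The study's conditional rates can be laid out as a simple lookup keyed on the findings of the first two examinations; the sketch below merely tabulates the percentages reported above, with hypothetical string labels.

```python
# Observed probability of high-risk findings on the third examination,
# keyed by (first exam, second exam) findings; rates as reported in [80].
P_HIGH_RISK_ON_THIRD_EXAM = {
    ("high-risk", "high-risk"): 0.182,    # first exam adds little here (P = 0.78)
    ("low-risk", "high-risk"): 0.200,
    ("high-risk", "no adenomas"): 0.123,  # first exam matters here (P = 0.015)
    ("low-risk", "no adenomas"): 0.049,
}

print(P_HIGH_RISK_ON_THIRD_EXAM[("high-risk", "no adenomas")])  # 0.123
```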

  • When should screening / surveillance be stopped?

There is considerable new evidence that the risks of colonoscopy increase with advancing age [81,82]. Neither surveillance nor screening colonoscopy should be performed when the risk of the preparation, sedation, or procedure outweighs the potential benefit. For patients aged 75–85 years, the USPSTF recommends against routine screening but argues for individualization based on comorbidities and findings on any prior colonoscopy. The USPSTF recommends against continued screening after age 85 years because risk could exceed potential benefit [44].

In terms of surveillance of prior adenomas, the 75–85-year age group may still benefit from surveillance because patients with prior HRA are at higher risk for developing advanced neoplasia compared with average-risk screenees. However, the decision to continue surveillance in this population should be individualized and based on an assessment of benefit and risk in the context of the person's estimated life expectancy [66]. More importantly, an individual's most important and impactful screening colonoscopy is his or her first one; therefore, from a public health standpoint, great effort should be taken to increase the number of people in a population who undergo screening rather than simply targeting those who need surveillance for prior polyps. This is especially true in settings with limited resources.
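The age-based guidance above can likewise be sketched as a simple decision aid. The function below follows the USPSTF cutoffs as described in the text; the boundary handling and the "individualize" label are simplifications standing in for the clinical judgment the text emphasizes.

```python
# Sketch of the USPSTF age-based screening guidance described in the text [44].
def screening_guidance(age: int) -> str:
    if age < 50:
        return "no routine average-risk screening yet"
    if age <= 75:
        return "routine screening recommended"
    if age <= 85:
        return "no routine screening; individualize by comorbidity and prior findings"
    return "screening not recommended"

print(screening_guidance(78))  # individualize (75-85 band)
```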

Case Conclusion

The physician discusses the findings from the colonoscopy (2 small adenomas) with the patient and recommends a repeat colonoscopy in 5 to 10 years.

Summary

Colorectal cancer is one of the leading causes of cancer-related death in the United States. Since the advent of colonoscopy and the implementation of CRC screening efforts, the rates of CRC have started to decline. Several environmental factors have been associated with the development of CRC, including obesity, dietary intake, physical inactivity, and smoking. At present, there are multiple tools available for CRC prevention, but the most accurate and effective method is currently colonoscopy. Stool-based tests such as FIT should be offered when a patient declines colonoscopy. Average-risk individuals should be screened with colonoscopy starting at the age of 50, with subsequent examinations every 10 years. Surveillance examinations should occur based on polyp findings on the index colonoscopy. There is no recommendation to continue screening after the age of 75, though physicians may individualize this decision based on a patient's health and risk/benefit profile. Current guidelines recommend against offering any screening to patients over the age of 85. Despite these recommendations, almost half of the eligible screening population has yet to undergo appropriate CRC screening. Future work should include public health efforts to improve the access to and appeal of CRC screening regardless of modality. While colonoscopy is considered the most effective screening test, the best test is still the one the patient gets.

 

Corresponding author: Audrey H. Calderwood, MD, MS, 85 E. Concord St., Rm. 7724, Boston, MA 02118, [email protected].

Financial disclosures: None.

References

1. American Cancer Society. Colorectal cancer facts & figures 2014–2016. Atlanta: American Cancer Society; 2014.

2. Ries L, Melbert D, Krapcho M, et al. SEER cancer statistics review, 1975–2011. Bethesda, MD: National Cancer Institute; 2014.

3. Levine JS, Ahnen DJ. Clinical practice. Adenomatous polyps of the colon. N Engl J Med 2006;355:2551–7.

4. Bond JH. Polyp guideline: diagnosis, treatment, and surveillance for patients with colorectal polyps. Practice Parameters Committee of the American College of Gastroenterology. Am J Gastroenterol 2000;95:3053–63.

5. Schatzkin A, Freedman LS, Dawsey SM, Lanza E. Interpreting precursor studies: what polyp trials tell us about large-bowel cancer. J Natl Cancer Inst 1994;86:1053–7.

6. DevCan: Probability of developing or dying of cancer software, version 6.5.0; Statistical Research and Applications Branch, National Cancer Institute, 2005. http://srab.cancer.gov/devcan [computer program].

7. Surveillance, Epidemiology, and End Results (SEER) Program (www.seer.cancer.gov), National Cancer Institute, DCCPS, Surveillance Research Program, Cancer Statistics Branch, released April 2010, based on the November 2009 submission.

8. Murphy G, Devesa SS, Cross AJ, et al. Sex disparities in colorectal cancer incidence by anatomic subsite, race and age. Int J Cancer 2011;128:1668–75.

9. Edwards BK, Ward E, Kohler BA, et al. Annual report to the nation on the status of cancer, 1975-2006, featuring colorectal cancer trends and impact of interventions (risk factors, screening, and treatment) to reduce future rates. Cancer 2010;116:544–73.

10. Irby K, Anderson WF, Henson DE, Devesa SS. Emerging and widening colorectal carcinoma disparities between Blacks and Whites in the United States (1975-2002). Cancer Epidemiol Biomarkers Prev 2006;15:792–7.

11. Levin B, Lieberman DA, McFarland B, et al. Screening and surveillance for the early detection of colorectal cancer and adenomatous polyps, 2008: a joint guideline from the American Cancer Society, the US Multi-Society Task Force on Colorectal Cancer, and the American College of Radiology. CA Cancer J Clin 2008;58:130–60.

12. Kirkegaard H, Johnsen NF, Christensen J, et al. Association of adherence to lifestyle recommendations and risk of colorectal cancer: a prospective Danish cohort study. BMJ 2010;341:c5504.

13. Butterworth AS, Higgins JP, Pharoah P. Relative and absolute risk of colorectal cancer for individuals with a family history: a meta-analysis. Eur J Cancer 2006;42:216–27.

14. Johns LE, Houlston RS. A systematic review and meta-analysis of familial colorectal cancer risk. Am J Gastroenterol 2001;96:2992–3003.

15. Lynch HT, de la Chapelle A. Hereditary colorectal cancer. N Engl J Med 2003;348:919–32.

16. Jasperson KW, Tuohy TM, Neklason DW, Burt RW. Hereditary and familial colon cancer. Gastroenterology 2010;138:2044–58.

17. Stoffel E, Mukherjee B, Raymond VM, et al. Calculation of risk of colorectal and endometrial cancer among patients with Lynch syndrome. Gastroenterology 2009;137:1621–7.

18. Bernstein CN, Blanchard JF, Kliewer E, Wajda A. Cancer risk in patients with inflammatory bowel disease: a population-based study. Cancer 2001;91:854–62.

19. Eaden JA, Abrams KR, Mayberry JF. The risk of colorectal cancer in ulcerative colitis: a meta-analysis. Gut 2001;48:526–35.

20. Larsson SC, Orsini N, Wolk A. Diabetes mellitus and risk of colorectal cancer: a meta-analysis. J Natl Cancer Inst 2005;97:1679–87.

21. Campbell PT, Deka A, Jacobs EJ, et al. Prospective study reveals associations between colorectal cancer and type 2 diabetes mellitus or insulin use in men. Gastroenterology 2010;139:1138–46.

22. Larsson SC, Giovannucci E, Wolk A. Diabetes and colorectal cancer incidence in the cohort of Swedish men. Diabetes Care 2005;28:1805–7.

23. Huxley RR, Ansary-Moghaddam A, Clifton P, et al. The impact of dietary and lifestyle risk factors on risk of colorectal cancer: a quantitative overview of the epidemiological evidence. Int J Cancer 2009;125:171–80.

24. Larsson SC, Wolk A. Obesity and colon and rectal cancer risk: a meta-analysis of prospective studies. Am J Clin Nutr 2007;86:556–65.

25. Wang Y, Jacobs EJ, Patel AV, et al. A prospective study of waist circumference and body mass index in relation to colorectal cancer incidence. Cancer Causes Control 2008;19:783–92.

26. Chao A, Thun MJ, Connell CJ, et al. Meat consumption and risk of colorectal cancer. JAMA 2005;293:172–82.

27. Cross AJ, Ferrucci LM, Risch A, et al. A large prospective study of meat consumption and colorectal cancer risk: an investigation of potential mechanisms underlying this association. Cancer Res 2010;70:2406–14.

28. Chan AT, Giovannucci EL. Primary prevention of colorectal cancer. Gastroenterology 2010;138:2029–43.

29. Food, nutrition, physical activity, and the prevention of cancer: a global perspective. Washington DC: World Cancer Research Fund/American Institute for Cancer Research; 2007.

30. McCullough ML, Robertson AS, Chao A, et al. A prospective study of whole grains, fruits, vegetables and colon cancer risk. Cancer Causes Control 2003;14:959–70.

31. Terry P, Giovannucci E, Michels KB, et al. Fruit, vegetables, dietary fiber, and risk of colorectal cancer. J Natl Cancer Inst 2001;93:525–33.

32. Cho E, Smith-Warner SA, Spiegelman D, et al. Dairy foods, calcium, and colorectal cancer: a pooled analysis of 10 cohort studies. J Natl Cancer Inst 2004;96:1015–22.

33. Secretan B, Straif K, Baan R, et al. A review of human carcinogens--Part E: tobacco, areca nut, alcohol, coal smoke, and salted fish. Lancet Oncol 2009;10:1033–4.

34. Ferrari P, Jenab M, Norat T, et al. Lifetime and baseline alcohol intake and risk of colon and rectal cancers in the European prospective investigation into cancer and nutrition (EPIC). Int J Cancer 2007;121:2065–72.

35. Samad AK, Taylor RS, Marshall T, Chapman MA. A meta-analysis of the association of physical activity with reduced risk of colorectal cancer. Colorectal Dis 2005;7:204–13.

36. Flossmann E, Rothwell PM. Effect of aspirin on long-term risk of colorectal cancer: consistent evidence from randomised and observational studies. Lancet 2007;369:1603–13.

37. Rothwell PM, Wilson M, Elwin CE, et al. Long-term effect of aspirin on colorectal cancer incidence and mortality: 20-year follow-up of five randomised trials. Lancet 2010;376:1741–50.

38. Hildebrand JS, Jacobs EJ, Campbell PT, et al. Colorectal cancer incidence and postmenopausal hormone use by type, recency, and duration in cancer prevention study II. Cancer Epidemiol Biomarkers Prev 2009;18:2835–41.

39. Heiss G, Wallace R, Anderson GL, et al. Health risks and benefits 3 years after stopping randomized treatment with estrogen and progestin. JAMA 2008;299:1036–45.

40. Rossouw JE, Anderson GL, Prentice RL, et al. Risks and benefits of estrogen plus progestin in healthy postmenopausal women: principal results From the Women’s Health Initiative randomized controlled trial. JAMA 2002;288:321–33.

41. Lieberman DA, Weiss DG. One-time screening for colorectal cancer with combined fecal occult-blood testing and examination of the distal colon. N Engl J Med 2001;345:555–60.

42. Lafata JE, Divine G, Moon C, Williams LK. Patient-physician colorectal cancer screening discussions and screening use. Am J Prev Med 2006;31:202–9.

43. Rex DK, Johnson DA, Anderson JC, et al. American College of Gastroenterology guidelines for colorectal cancer screening 2008. Am J Gastroenterol 2009;104:739–50.

44. U.S. Preventive Services Task Force. Screening for colorectal cancer: U.S. Preventive Services Task Force recommendation statement. Ann Intern Med 2008;149:627–37.

45. Smith RA, von Eschenbach AC, Wender R, et al. American Cancer Society guidelines for the early detection of cancer: update of early detection guidelines for prostate, colorectal, and endometrial cancers. Also: update 2001—testing for early lung cancer detection. CA Cancer J Clin 2001;51:38–75.

46. Winawer S, Fletcher R, Rex D, et al. Colorectal cancer screening and surveillance: clinical guidelines and rationale—update based on new evidence. Gastroenterology 2003;124:544–60.

47. Rex DK, Kahi CJ, Levin B, et al. Guidelines for colonoscopy surveillance after cancer resection: a consensus update by the American Cancer Society and the US Multi-Society Task Force on Colorectal Cancer. Gastroenterology 2006;130:1865–71.

48. Ransohoff DF, Lang CA. Screening for colorectal cancer with the fecal occult blood test: a background paper. American College of Physicians. Ann Intern Med 1997;126:811–22.

49. Hardcastle JD, Chamberlain JO, Robinson MH, et al. Randomised controlled trial of faecal-occult blood screening for colorectal cancer. Lancet 1996;348:1472–7.

50. Kronborg O, Fenger C, Olsen J, et al. Randomised study of screening for colorectal cancer with faecal-occult blood test. Lancet 1996;348:1467–71.

51. Wilson JMG, Junger G. Principles and practice of screening for disease. Geneva: World Health Organization; 1968.

52. Allison JE, Tekawa IS, Ransom LJ, Adrain AL. A comparison of fecal occult-blood tests for colorectal-cancer screening. N Engl J Med 1996;334:155–9.

53. Caligiore P, Macrae FA, St John DJ, et al. Peroxidase levels in food: relevance to colorectal cancer screening. Am J Clin Nutr 1982;35:1487–9.

54. Nakajima M, Saito H, Soma Y, et al. Prevention of advanced colorectal cancer by screening using the immunochemical faecal occult blood test: a case-control study. Br J Cancer 2003;89:23–8.

55. Lee KJ, Inoue M, Otani T, et al. Colorectal cancer screening using fecal occult blood test and subsequent risk of colorectal cancer: a prospective cohort study in Japan. Cancer Detect Prev 2007;31:3–11.

56. Zappa M, Castiglione G, Grazzini G, et al. Effect of faecal occult blood testing on colorectal mortality: results of a population-based case-control study in the district of Florence, Italy. Int J Cancer 1997;73:208–10.

57. van Rossum LG, van Rijn AF, Laheij RJ, et al. Random comparison of guaiac and immunochemical fecal occult blood tests for colorectal cancer in a screening population. Gastroenterology 2008;135:82–90.

58. Hol L, van Leerdam ME, van Ballegooijen M, et al. Attendance to screening for colorectal cancer in the Netherlands; randomized controlled trial comparing two different forms of faecal occult blood tests and sigmoidoscopy. Gastroenterology 2008;134:A87.

59. Johnson CD, Chen MH, Toledano AY, et al. Accuracy of CT colonography for detection of large adenomas and cancers. N Engl J Med 2008;359:1207–17.

60. Brenner DJ, Georgsson MA. Mass screening with CT colonography: should the radiation exposure be of concern? Gastroenterology 2005;129:328–37.

61. Brenner DJ, Hall EJ. Computed tomography—an increasing source of radiation exposure. N Engl J Med 2007;357:2277–84.

62. Hur C, Chung DC, Schoen RE, et al. The management of small polyps found by virtual colonoscopy: results of a decision analysis. Clin Gastroenterol Hepatol 2007;5:237–44.

63. Chu KC, Tarone RE, Chow WH, et al. Temporal patterns in colorectal cancer incidence, survival, and mortality from 1950 through 1990. J Natl Cancer Inst 1994;86:997–1006.

64. Vasen HF, Moslein G, Alonso A, et al. Guidelines for the clinical management of familial adenomatous polyposis (FAP). Gut 2008;57:704–13.

65. Umar A, Boland CR, Terdiman JP, et al. Revised Bethesda guidelines for hereditary nonpolyposis colorectal cancer (Lynch syndrome) and microsatellite instability. J Natl Cancer Inst 2004;96:261–8.

66. Lieberman DA, Rex DK, Winawer SJ, et al. Guidelines for colonoscopy surveillance after screening and polypectomy: a consensus update by the US Multi-Society Task Force on Colorectal Cancer. Gastroenterology 2012;143:844–57.

67. Winawer SJ, Zauber AG, Fletcher RH, et al. Guidelines for colonoscopy surveillance after polypectomy: a consensus update by the US Multi-Society Task Force on colorectal cancer and the American Cancer Society. Gastroenterology 2006;130:1872–85.

68. Lieberman DA, Weiss DG, Harford WV, et al. Five year colon surveillance after screening colonoscopy. Gastroenterology 2007;133:1077–85.

69. Imperiale TF, Glowinski EA, Lin-Cooper C, et al. Five-year risk of colorectal neoplasia after negative screening colonoscopy. N Engl J Med 2008;359:1218–24.

70. Leung WK, Lau JYW, Suen BY, et al. Repeat screening colonoscopy 5 years after normal baseline screening colonoscopy in average-risk Chinese: a prospective study. Am J Gastroenterol 2009;104:2028–34.

71. Brenner H, Haug U, Arndt V, et al. Low risk of colorectal cancer and advanced adenomas more than 10 years after negative colonoscopy. Gastroenterology 2010;138:870–6.

72. Miller H, Mukherjee R, Tian J, et al. Colonoscopy surveillance after polypectomy may be extended beyond five years. J Clin Gastroenterol 2010;44:e162–e166.

73. Chung SJ, Kim YS, Yang SY, et al. Five-year risk for advanced colorectal neoplasia after initial colonoscopy according to the baseline risk stratification: a prospective study in 2452 asymptomatic Koreans. Gut 2011;60:1537–43.

74. Atkin WS, Edwards R, Kralj-Hans I, et al. Once-only flexible sigmoidoscopy screening in prevention of colorectal cancer: a multicentre randomised controlled trial. Lancet 2010;375:1624–33.

75. Saini SD, Kim HM, Schoenfeld P. Incidence of advanced adenomas at surveillance colonoscopy in patients with a personal history of colon adenomas: a meta-analysis and systematic review. Gastrointest Endosc 2006;64:614–26.

76. Martinez ME, Baron JA, Lieberman DA, et al. A pooled analysis of advanced colorectal neoplasia diagnoses following colonoscopic polypectomy. Gastroenterology 2009;136:832–41.

77. Leggett B, Whitehall V. Role of the serrated pathway in colorectal cancer pathogenesis. Gastroenterology 2010;138:
2088–100.

78. Schreiner MA, Weiss DG, Lieberman DA. Proximal and large nonneoplastic serrated polyps: association with synchronous neoplasia at screening colonoscopy and with interval neoplasia at follow- up colonoscopy. Gastroenterology 2010;139:1497–502.

79. Hiraoka S, Kato J, Fujiki S, et al. The presence of large serrated polyps increases risk for colorectal cancer. Gastroenterology 2010;139:1503–10.

80. Robertson DJ, Burke CA, Welch HG, et al. Using the results of a baseline and a surveillance colonoscopy to predict recurrent adenomas with high-risk characteristics. Ann Intern Med 2009;151:103–9.

81. Warren JL, Klabunde CN, Mariotto AB, et al. Adverse events after outpatient colonoscopy in the Medicare population. Ann Intern Med 2009;150:849–57.

82. Ko CW, Riffle S, Michaels L, et al. Serious complications within 30 days of screening and surveillance colonoscopy: a multicenter study. Clin Gastroenterol Hepatol 2010;8:166–73.



Psychosocial Variables May Predict Likelihood of Weight Regain After Weight Loss


Study Overview

Objective. To identify psychosocial predictors of weight loss maintenance in a multi-site clinical trial following a group-based weight loss program.

Design. Secondary analysis of a 2-phase randomized controlled trial. The first phase was a 6-month group-based weight loss program and the second phase was a 30-month trial comparing weight loss maintenance strategies.

Setting and participants. The patients studied were participants in the Weight Loss Maintenance trial [1], which was conducted at 4 US clinical centers. Eligible participants were overweight or obese adults with a BMI between 25 and 45 who were actively taking medication for hypertension, dyslipidemia, or both. A total of 1685 patients were recruited into the weight-loss phase, and those who lost at least 4 kg (n = 1032) were then randomly assigned to 1 of 3 maintenance arms: (a) self-directed with minimal intervention (control), (b) interactive technology, consisting of unlimited access to an interactive study website, or (c) personal contact, consisting of monthly personalized telephone calls and quarterly face-to-face contact with a study interventionist. There was 1 death in each treatment group, so a total of 1029 participants were included in the analyses.

Main outcome measures. The researchers examined the associations between psychosocial variables and weight change outcomes at 12 and 30 months. Patients completed 5 self-report measures at the time of randomization into phase 2 of the study: a social support and exercise survey, a social support and eating habits survey, the SF-36, the Patient Health Questionnaire Depression Scale, and the Perceived Stress Scale.

Results. Of the 1029 participants initially included for analyses, 2 failed to provide complete data on the social support scales and 2 were identified as outliers at both 12 and 30 months. This resulted in a final sample size of 1025 participants; 63% were women, 61% were non-Latino white, and 38% were black. The mean age was 55.6 years. All groups regained weight, with the personal contact group having the least amount of gain. However, the mean weight at 30 months remained significantly lower than the mean weight at entry into phase 1.

Only 3 psychosocial variables were significantly related to weight change at 12 and 30 months. At both time points, less weight regain was associated with higher SF-36 mental health composite scores (P < 0.01). Interestingly, for black participants at 12 months, more weight regain was associated with greater exercise encouragement from friends (P < 0.05). At 30 months, more weight regain was associated with friends' encouragement for healthy eating (P < 0.05).
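For readers interested in how such associations are typically tested, the sketch below illustrates one plausible approach in Python, regressing 12-month weight change on baseline psychosocial scores with a race interaction. The file and variable names are hypothetical stand-ins; the trial's actual modeling strategy is not fully described in this summary.

```python
# Illustrative only: association between baseline psychosocial scores and
# 12-month weight change, with a race interaction. All names below
# (wlm_phase2.csv, weight_change_12m, sf36_mental, exercise_support, race)
# are hypothetical, not the trial's actual variables.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("wlm_phase2.csv")  # hypothetical analysis file

model = smf.ols(
    "weight_change_12m ~ sf36_mental + exercise_support * C(race)",
    data=df,
).fit()
print(model.summary())  # coefficients and P values for each predictor
```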

Conclusion. The psychosocial variables that were self-reported upon entering phase 2 may predict the ability of an individual to maintain weight loss at 12 and 30 months. The significant, complex interactions between these variables, race, sex, and treatment interventions need to be further studied for proper incorporation into a weight loss maintenance program.

Commentary

The obesity epidemic is well established. Unfortunately, the causes of weight gain can be complicated and multifactorial. Factors associated with weight gain include nonmodifiable factors such as age, sex, and race, as well as modifiable factors like lifestyle, eating habits, and perceived stress [2]. The CDC estimates that about 78.6 million American adults are obese [3], and about 25% to 40% of US adults attempt to lose weight each year [4]. It is unclear what proportion of those who lose weight are successful at maintaining their weight loss [5,6].

Researchers and practitioners alike understand that maintaining weight loss is difficult. Most studies on weight regain have focused on biological and lifestyle factors [7]. This article did a good job of detailing gaps in knowledge and supporting the need for further study of psychosocial variables. The results demonstrated complex, interactive relationships among multiple factors. Three psychosocial variables were statistically significant in relation to weight change, but additional significant relationships were found among race, perceived stress, and weight loss.

As a secondary analysis, this study carries the strengths of the initial trial, including a 30-month follow-up duration. In addition, the parent study was randomized and included 3 different treatment arms. Lastly, black participants were well represented, and the authors suggested that the results may offer an initial characterization of this population.

A limitation of this study was its use of self-reported data, which may be subject to bias and less reliable than directly measured data. Also, these measures were apparently taken only once, prior to the beginning of phase 2, with no rationale provided. Psychosocial variables such as social support, quality of life, and perceived stress are dynamic and cannot be accurately encapsulated in isolated moments in time. The sample's diversity was also limited, with few Hispanic and no Asian participants.

Applications for Clinical Practice

This analysis identified a few significant, interactive relationships that require further study. Continued research and a better understanding of these complex relationships are needed before psychosocial measures can be usefully incorporated into weight loss maintenance programs.

—Angela M. Godwin Beoku-Betts, MSN, FNP–BC

References

1. Svetkey LP, Stevens VJ, Brantley PJ, et al; Weight Loss Maintenance Collaborative Research Group. Comparison of strategies for sustaining weight loss: the weight loss maintenance randomized controlled trial. JAMA 2008;299:1139–48.

2. Grundy SM. Multifactorial causation of obesity: implications for prevention. Am J Clin Nutr 1998;67(3 Suppl):563S–72S.

3. Ogden CL, Carroll MD, Kit BK, Flegal KM. Prevalence of childhood and adult obesity in the United States, 2011–2012. JAMA 2014;311:806–14.

4. Williamson DF, Serdula MK, Anda RF, et al. Weight loss attempts in adults: goals, duration, and rate of weight loss. Am J Public Health 1992;82:1251–7.

5. The National Weight Control Registry. (1994). Accessed 5 Oct 2014 at www.nwcr.ws/default.htm.

6. Kraschnewski JL, Boan J, Esposito J, et al. Long-term weight loss maintenance in the United States. Int J Obes (Lond) 2010;34:1644–54.

7. Maclean PS, Bergouignan A, Cornier MA, Jackman MR. Biology’s response to dieting: the impetus for weight regain. Am J Physiol Regul Integr Comp Physiol 2011;301:R581–600.


Are Mortality Benefits from Bariatric Surgery Observed in a Nontraditional Surgical Population? Evidence from a VA Dataset


Study Overview

Objective. To determine the association between bariatric surgery and long-term mortality rates among patients with severe obesity.

Design. Retrospective cohort study.

Setting and participants. This analysis relied upon data from Veterans Administration (VA) patients undergoing bariatric surgery between 2000 and 2011 and a group of matched controls. For this data-only study, a waiver of informed consent was obtained. Investigators first used the VA Surgical Quality Improvement Program (SQIP) dataset to identify all bariatric surgical procedures performed at VA hospitals between 2000 and the end of 2011, excluding patients who had any evidence of a body mass index (BMI) less than 35 kg/m2, those with certain baseline diagnoses considered contraindications for surgery, and those who had prolonged inpatient stays immediately prior to their surgical date. No upper or lower age limits appear to have been specified, and no upper BMI limit appears to have been set.
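The cohort construction described above amounts to a series of filters on the surgical dataset. The following is a minimal sketch of those exclusions, assuming a hypothetical SQIP extract with boolean flag columns; the authors' actual variable names and contraindication logic are not reported.

```python
# Minimal sketch of the stated exclusions; all column names are
# hypothetical stand-ins for fields in a SQIP extract.
import pandas as pd

sqip = pd.read_csv("va_sqip_bariatric.csv")  # hypothetical extract

surgical = sqip[
    sqip["surgery_year"].between(2000, 2011)     # study window
    & (sqip["min_recorded_bmi"] >= 35)           # no evidence of BMI < 35
    & ~sqip["contraindication_dx"]               # no contraindicating diagnoses
    & ~sqip["prolonged_preop_stay"]              # no long preoperative stays
]
```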

Once all surgical patients were identified, the investigators attempted to find a group of similar control patients who had not undergone surgery. Initially, they pulled candidate matches for each surgical patient based on having the same sex, age-group (within 5 years), BMI category (35-40, 40-50, >50), diabetes status (present or absent), racial category, and VA region. From these candidates, they selected up to 3 of the closest matches on age, BMI, and a composite comorbidity score based on inpatient and outpatient claims in the year prior to surgery. The authors specified that controls could convert to surgical patients during the follow-up period, in which case their data were censored beginning with the surgical procedure. However, if a control patient underwent surgery during 2012 or 2013, censoring was not possible, given that the dataset for identifying surgeries contained only procedures performed through the end of 2011.
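A greedy version of the described matching, exact on the stratification variables and then nearest on age, BMI, and comorbidity score, might look like the sketch below. All column names are hypothetical, and the investigators' exact algorithm (eg, how ties or reuse of controls were handled) may differ.

```python
# Hedged sketch of the matching: exact strata on sex, age-group, BMI
# category, diabetes, race, and VA region, then up to 3 not-yet-used
# controls per surgical patient by closest standardized distance on
# age, BMI, and comorbidity score. Column names are hypothetical.
import numpy as np
import pandas as pd

STRATA = ["sex", "age_group", "bmi_cat", "diabetes", "race", "region"]
FINE = ["age", "bmi", "comorbidity_score"]

def match_controls(surgical: pd.DataFrame, controls: pd.DataFrame, k: int = 3) -> pd.DataFrame:
    pooled = pd.concat([surgical[FINE], controls[FINE]])
    mu, sd = pooled.mean(), pooled.std()

    matched, used = [], set()
    for _, pt in surgical.iterrows():
        # Candidates share every stratum value and have not been used yet
        cand = controls[(controls[STRATA] == pt[STRATA]).all(axis=1)
                        & ~controls.index.isin(used)]
        if cand.empty:
            continue
        target = ((pt[FINE].astype(float) - mu) / sd).values
        dist = np.linalg.norm(((cand[FINE] - mu) / sd).values - target, axis=1)
        best = cand.index[np.argsort(dist)[:k]]
        used.update(best)
        matched.append(controls.loc[best])
    return pd.concat(matched) if matched else controls.iloc[0:0]
```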

Main outcome measures. The primary outcome of interest was time to death (any cause) beginning at the date of surgery (or baseline date for nonsurgical controls) through the end of 2013. The investigators built multivariable Cox proportional hazards models to evaluate survival, adjusting for baseline characteristics, including those involved in the matching process as well as others that might have differentially impacted both the likelihood of undergoing surgery and mortality risk. These included marital status, insurance markers of low income or disability, and a number of comorbid medical and psychiatric diagnoses.
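As an illustration of this modeling step, the sketch below fits a multivariable Cox model with the lifelines package. The file and covariate names are hypothetical stand-ins for the adjustment set described above.

```python
# A minimal sketch of the multivariable Cox model for the
# surgery-mortality association. Covariate names are invented.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("matched_cohort.csv")  # hypothetical analysis file

cph = CoxPHFitter()
cph.fit(
    df[["followup_years", "died", "surgery", "age", "bmi", "diabetes",
        "comorbidity_score", "married", "low_income"]],
    duration_col="followup_years",
    event_col="died",
)
cph.print_summary()                   # coefficients, HRs, 95% CIs
print(cph.hazard_ratios_["surgery"])  # adjusted HR, surgery vs. control
```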

In addition to the main analyses, the investigators also looked for effect modification of the surgery-mortality relationship by a patient’s sex and presence or absence of diabetes at the time of surgery, as well as the time period in which their surgery was conducted, dichotomized around the year 2006. This year was selected for several reasons, including that it was the year in which a VA-wide comprehensive weight management and surgical selection program was instituted.
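Effect modification of this kind is commonly tested with interaction terms. The hypothetical sketch below adds surgery-by-sex, surgery-by-diabetes, and surgery-by-era product terms to the Cox model; it assumes the lifelines formula interface (version 0.25 or later), and the variable names are again invented.

```python
# Hypothetical effect-modification check via product terms between
# surgery and sex, diabetes, and surgical era (pre-/post-2006).
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("matched_cohort.csv")  # hypothetical analysis file

cph = CoxPHFitter()
cph.fit(
    df,
    duration_col="followup_years",
    event_col="died",
    formula="surgery * male + surgery * diabetes + surgery * era_post2006 + age + bmi",
)
cph.print_summary()  # interaction rows carry the effect-modification tests
```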

Results. The surgical cohort comprised 2500 patients, and there were 7462 matched controls. The surgical and control groups were similar with respect to matched baseline characteristics, tested using standardized differences (as opposed to t tests or chi-square tests). Mean (SD) age was 52 (8.8) years for surgical patients versus 53 (8.7) years for controls. In both groups, 74% of patients were men and 81% were white (ethnicity not specified). Mean (SD) baseline BMI was 47 (7.9) kg/m2 in the surgical group and 46 (7.3) kg/m2 for controls.

Some between-group differences were present for baseline characteristics that had not been included in the matching protocol. More surgical patients than controls had diagnoses of hypertension (80% surgical vs. 70% control), dyslipidemia (61% vs. 52%), arthritis (27% vs. 15%), depression (44% vs. 32%), GERD (35% vs. 19%), and fatty liver disease (6.6% vs. 0.6%). In contrast, more control patients than surgical patients had diagnoses of alcohol abuse (6.2% in controls vs. 3.9% in surgical) and schizophrenia (4.9% vs. 1.8%). Also, although a number of different surgical types were represented in the cohort, the vast majority of procedures were Roux-en-Y gastric bypasses (RYGB): 53% of the procedures were open RYGB, 21% were laparoscopic RYGB, 10% were adjustable gastric bands (AGB), and 15% were vertical sleeve gastrectomies (VSG).

Mortality was lower among surgical patients than among matched controls during a mean follow-up time of 6.9 years for surgical patients and 6.6 years for controls. The 1-, 5-, and 10-year cumulative mortality rates for surgical patients were 2.4%, 6.4%, and 13.8%, respectively. Unadjusted mortality rates for nonsurgical controls were lower initially (1.7% at 1 year) but then much higher at 5 years (10.4%) and 10 years (23.9%). In multivariable Cox models, the hazard ratio (HR) for mortality in bariatric patients versus controls was nonsignificant at 1 year of follow-up. However, between 1 and 5 years after surgery (or after baseline), multivariable models showed an HR (95% CI) of 0.45 (0.36–0.56) for mortality among surgical patients versus controls. For those with more than 5 years of follow-up, the HR was similar (0.47, 95% CI 0.39–0.58) for death among surgical versus control patients. The investigators found that the year during which a patient underwent surgery (before or after 2006) did impact mortality during the first postoperative year, with those who had earlier procedures (2000-2005) exhibiting a significantly higher risk of death in that year relative to nonoperative controls (HR 1.66, 95% CI 1.19–2.33). No significant sex or diabetes interactions were observed for the surgery-mortality relationship in multivariable Cox models. No information was provided on the breakdown of causes of death within the larger all-cause mortality outcome.
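Cumulative mortality figures of the kind quoted above are what a Kaplan-Meier estimator produces. As a hedged illustration, the sketch below computes 1-, 5-, and 10-year cumulative mortality by group from a hypothetical matched-cohort file with a 0/1 surgery indicator.

```python
# Illustrative Kaplan-Meier computation of cumulative mortality by group.
# File and column names (followup_years, died, surgery) are hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter

df = pd.read_csv("matched_cohort.csv")  # hypothetical analysis file

for flag, grp in df.groupby("surgery"):
    name = "surgical" if flag == 1 else "control"
    kmf = KaplanMeierFitter()
    kmf.fit(grp["followup_years"], event_observed=grp["died"], label=name)
    cum_mort = 1 - kmf.survival_function_at_times([1, 5, 10])
    print(name, cum_mort.round(3).tolist())  # eg, surgical ~ [0.024, 0.064, 0.138]
```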

Conclusion. Bariatric surgery was associated with significantly lower all-cause mortality among surgical patients in the VA over a 5- to 14-year follow-up period compared with a group of severely obese VA patients who did not undergo surgery.

Commentary

Rates of severe obesity (BMI ≥ 35 kg/m2) have risen at a faster pace than those of obesity overall in the United States over the past decade [1], driving clinicians, patients, and payers to search for effective methods of treating this condition. Bariatric surgery has emerged as the most effective treatment for severe obesity; however, the existing surgical literature is dominated by studies with short- or medium-term postoperative follow-up and homogeneous participant populations containing large numbers of younger non-Hispanic white women. Research from the Swedish Obese Subjects (SOS) study, as well as smaller US-based studies, has suggested that severely obese patients who undergo bariatric surgery have better long-term survival than their nonsurgical counterparts [2,3]. Counter to this finding, a previous medium-term study utilizing data from VA hospitals did not find that surgery conferred a mortality benefit in this largely male, older, and sicker patient population [4]. The current study, by the same group of investigators, attempts to update the previous finding by including more recent surgical data and a longer follow-up period, to see whether a survival benefit emerges for VA patients undergoing bariatric surgery.

A major strength of this study was the use of a large and comprehensive clinical dataset, a strength of many studies utilizing data from the VA. The availability of clinical data such as BMI, as well as diagnostic codes and sociodemographic variables, allowed the authors to match and adjust for a number of potential confounders of the surgery-mortality relationship. Another unique feature of VA data is that members of this health care system can often be followed regardless of their location, as the unified medical record transfers between states. This is in contrast to many claims-based or single-center studies of surgery, in which patients are lost to follow-up if they move or change insurance providers. This study clearly benefited from this aspect of VA data, with a mean postoperative follow-up period of over 5 years in both study groups, much longer than is typically observed in bariatric surgical studies and probably a necessary feature for examining a rarer outcome such as mortality (as opposed to comparing weight loss or diabetes remission). Another clear contribution of this study is its focus on a group of patients not typical of bariatric cohorts: this group was slightly older and sicker, with far more men than women, and therefore at a much higher risk of mortality than the typically younger women enrolled in most studies.

Although the authors did adjust for many factors when comparing the surgical and nonsurgical groups, it is possible, as with any observational study, that unmeasured confounders were present. Of particular concern are psychosocial and behavioral features that may be linked both to a person's likelihood of undergoing surgery and to their mortality risk. It is worth noting, for example, that far more patients in the nonsurgical group were identified as schizophrenic, and that the rate of schizophrenia in that severely obese group was much higher than in the general population. This pattern may have some relationship to the weight gain-promoting effect of antipsychotic medications and the unfortunate reality that patients with severe obesity and severe mental illness may not be as well equipped to seek out surgery (or viewed as acceptable candidates) as those without severe mental illness. One possible limitation mentioned by the authors was that control group patients who underwent surgery in 2012 or 2013 would not have been recognized, and thus their data would not have been censored, possibly leading to misclassification of exposure status for some amount of person-time during follow-up. In general, though, there is a low likelihood of this phenomenon impacting the findings, given both the relative infrequency of crossover observed in the cohort prior to 2011 and the relatively short amount of person-time any later crossovers would have contributed in the later years of the study.

Although codes for baseline disease states were adjusted for in multivariable analyses, the surgical patients were in general a medically sicker group at baseline than control patients. As the authors point out, if anything, this should have biased the findings toward a higher mortality rate in the surgical group, the opposite of what was found. Further strengthening the finding of an association between surgery and survival is the mix of procedure types included in this study. Over half of the procedures were open RYGB surgeries, with far fewer of the more modern and lower-risk procedures (eg, laparoscopic RYGB) represented. Again, this mix of procedures would be expected to result in an overestimation of mortality in surgical patients relative to what might be observed if all patients had been drawn from later years of the cohort, as surgical technique evolved.

Applications for Clinical Practice

This study adds to the evidence that patients with severe obesity who undergo bariatric surgery have a lower risk of death up to 10 years after surgery compared with patients who do not have these procedures. The findings should provide encouragement, particularly for managing older adults with more longstanding comorbidities. Those who are strongly motivated to pursue weight loss surgery, and who are deemed good candidates by bariatric teams, may add years to their lives by undergoing one of these procedures. As always, however, the quality of life experienced by patients after surgery, and a realistic expectation of the ways in which surgery will fundamentally change their lifestyle, must be a critical part of the discussion.

—Kristina Lewis, MD, MPH

References

1. Sturm R, Hattori A. Morbid obesity rates continue to rise rapidly in the United States. Int J Obes (Lond) 2013;37:889–91.

2. Sjostrom L, Narbro K, Sjostrom CD, et al. Effects of bariatric surgery on mortality in Swedish obese subjects. N Engl J Med 2007;357:741–52.

3. Adams TD, Gress RE, Smith SC, et al. Long-term mortality after gastric bypass surgery. N Engl J Med 2007;357:753–61.

4. Maciejewski ML, Livingston EH, Smith VA, et al. Survival among high-risk patients after bariatric surgery. JAMA 2011;305:2419–26.

Issue
Journal of Clinical Outcomes Management - March 2015, VOL. 22, NO. 3
Publications
Topics
Sections

Study Overview

Objective. To determine the association between bariatric surgery and long-term mortality rates among patients with severe obesity.

Design. Retrospective cohort study.

Setting and participants. This analysis relied upon data from Veteran’s Administration (VA) patients undergoing bariatric surgery between 2000 and 2011 and a group of matched controls. For this data-only study, a waiver of informed consent was obtained. Investigators first used the VA Surgical Quality Improvement Program (SQIP) dataset to identify all bariatric surgical procedures performed at VA hospitals between 2000 and the end of 2011, excluding patients who had any evidence of body mass index (BMI) less than 35 kg/m2 and those with certain baseline diagnoses that would be considered contraindications for surgery, as well as those who had prolonged inpatient stays immediately prior to their surgical date. No upper or lower age limits appear to have been specified, and no upper BMI limit appeared to have been set.

Once all surgical patients were identified, the investigators attempted to find a group of similar control patients who had not undergone surgery. Initially they pulled candidate matches for each surgical patient based on having the same sex, age-group (within 5 years), BMI category (35-40, 40-50, >50), diabetes status (present or absent), racial category, and VA region. From these candidates, they selected up to 3 of the closest matches on age, BMI, and a composite comorbidity score based on inpatient and outpatient claims in the year prior to surgery. The authors specified that controls could convert to surgical patients during the follow-up period, in which case their data was censored beginning with the surgical procedure. However, if a control patient underwent surgery during 2012 or 2013, censoring was not possible given that the dataset for identifying surgeries contained only procedures performed through the end of 2011.

Main outcome measures. The primary outcome of interest was time to death (any cause) beginning at the date of surgery (or baseline date for nonsurgical controls) through the end of 2013. The investigators built Cox proportional hazards models to evaluate survival using multivariable models to adjust for baseline characteristics, including those involved in the matching process, as well as others that might have differentially impacted both likelihood of undergoing surgery and mortality risk. These included marital status, insurance markers of low income or disability, and a number of comorbid medical and psychiatric diagnoses.

In addition to the main analyses, the investigators also looked for effect modification of the surgery-mortality relationship by a patient’s sex and presence or absence of diabetes at the time of surgery, as well as the time period in which their surgery was conducted, dichotomized around the year 2006. This year was selected for several reasons, including that it was the year in which a VA-wide comprehensive weight management and surgical selection program was instituted.

Results. The surgical cohort was made up of 2500 patients, and there were 7462 matched controls. The surgical and control groups were similar with respect to matched baseline characteristics, tested using standardized differences (as opposed to t test or chi-square). Mean (SD) age was 52 (8.8) years for surgical patients versus 53 (8.7) years for controls. 74% of patients in both the surgical and control groups were men, and 81% in both groups were white (ethnicity not specified). Mean (SD) baseline BMI was 47 (7.9) kg/m2 in the surgical group and 46 (7.3) kg/m2 for controls.

Some between-group differences were present for baseline characteristics that had not been included in the matching protocol. More surgical patients than controls had diagnoses of hypertension (80% surgical vs. 70% control), dyslipidemia (61% vs. 52%), arthritis (27% vs. 15%), depression (44% vs. 32%), GERD (35% vs.19%), and fatty liver disease (6.6% vs. 0.6%). In contrast, more control patients than surgical patients had diagnoses of alcohol abuse (6.2% in controls vs. 3.9% in surgical) and schizophrenia (4.9% vs. 1.8%). Also, although a number of different surgical types were represented in the cohort, the vast majority of procedures were classified as Roux-en-Y gastric bypasses (RYGB). 53% of the procedures were open RYGB, 21% were laparoscopic RYGB, 10% were adjustable gastric bands (AGB), and 15% were vertical sleeve gastrectomies (VSG).

Mortality was lower among surgical patients than among matched controls during a mean follow-up time of 6.9 years for surgical patients and 6.6 years for controls. Namely, the 1-, 5- and 10-year cumulative mortality rates for surgical patients were: 2.4%, 6.4%, and 13.8%. Unadjusted mortality rates for nonsurgical controls were lower initially (1.7% at 1 year), but then much higher at years 5 (10.4%), and 10 (23.9%). In multivariable Cox models, the hazard ratio (HR) for mortality in bariatric patients versus controls was nonsignificant at 1 year of follow-up. However, between 1 and 5 years after surgery (or after baseline), multivariable models showed an HR (95% CI) of 0.45 (0.36–0.56) for mortality among surgical patients versus controls. For those with more than 5 years of follow up, the HR was similar (0.47, 95% CI 0.39–0.58) for death among surgical versus control patients. The investigators found that the year during which a patient underwent surgery (before or after 2006) did impact mortality during the first postoperative year, with those who had earlier procedures (2000-2005) exhibiting a significantly higher risk of death in that year relative to non-operative controls (HR 1.66, 95% CI 1.19–2.33). No significant sex or diabetes interactions were observed for the surgery-mortality relationship in multivariable Cox models. There was no information provided as to the breakdown of cause of death within the larger “all-cause mortality” outcome.

Conclusion. Bariatric surgery was associated with significantly lower all-cause mortality among surgical patients in the VA over a 5- to 14-year follow-up period compared with a group of severely obese VA patients who did not undergo surgery.

Commentary

Rates of severe obesity (BMI ≥ 35 kg/m2) have risen at a faster pace than those of obesity in the United States over the past decade [1], driving clinicians, patients and payers to search for effective methods of treating this condition. Bariatric surgery has emerged as the most effective treatment for severe obesity; however, the existing surgical literature is predominated by studies with short- or medium-term postoperative follow-up and homogenous participant populations containing large numbers of younger non-Hispanic white women. Research from the Swedish Obesity Study (SOS), as well as smaller US-based studies, has suggested that severely obese patients who undergo bariatric surgery have better long-term survival than their nonsurgical counterparts [2,3].Counter to this finding, a previous medium-term study utilizing data from VA hospitals did not find that surgery conferred a mortality benefit among this largely male, older, and sicker patient population [4].The current study, by the same group of investigators, attempts to update the previous finding by including more recent surgical data and a longer follow-up period, to see whether or not a survival benefit appears to emerge for VA patients undergoing bariatric surgery.

A major strength of this study was its use of a large and comprehensive clinical dataset, a strength shared by many studies utilizing VA data. The availability of clinical data such as BMI, along with diagnostic codes and sociodemographic variables, allowed the authors to match and adjust for a number of potential confounders of the surgery-mortality relationship. Another unique feature of VA data is that members of this health care system can often be followed regardless of their location, as the unified medical record transfers between states. This stands in contrast to many claims-based or single-center studies of surgery, in which patients are lost to follow-up if they move or change insurance providers. This study clearly benefited from that feature, with a mean postoperative follow-up of over 5 years in both study groups, much longer than is typically observed in bariatric surgical studies and probably necessary for examining a rarer outcome such as mortality (as opposed to weight loss or diabetes remission). Another clear contribution of this study is its focus on a group of patients not typical of bariatric cohorts: older, sicker, and predominantly male, and therefore at much higher risk of mortality than the younger, mostly female populations of most studies.

Although the authors adjusted for many factors when comparing the surgical and nonsurgical groups, it is possible, as with any observational study, that unmeasured confounders were present. Of particular concern are psychosocial and behavioral features that may be linked both to a person's likelihood of undergoing surgery and to their mortality risk. It is worth noting, for example, that far more patients in the nonsurgical group carried a diagnosis of schizophrenia, and that the rate of schizophrenia in that severely obese group was much higher than in the general population. This pattern may reflect the weight gain promoted by antipsychotic medications and the unfortunate reality that patients with severe obesity and severe mental illness may be less equipped to seek out surgery (or less likely to be viewed as acceptable candidates) than those without severe mental illness. One limitation mentioned by the authors was that control group patients who underwent surgery in 2012 or 2013 would not have been recognized (and thus would not have had their data censored), possibly leading to misclassification of exposure for some person-time during follow-up. In general, though, this phenomenon is unlikely to have affected the findings, given both the relative infrequency of crossover observed in the cohort before 2011 and the relatively short amount of person-time any later crossovers would have contributed in the final years of the study.

Although codes for baseline disease states were adjusted for in multivariable analyses, the surgical patients were in general a medically sicker group at baseline than control patients. As the authors point out, if anything, this should have biased the findings toward a higher mortality rate in the surgical group, the opposite of what was found. The mix of procedure types included in this study further strengthens the observed association between surgery and survival. Over half of the procedures were open RYGB surgeries, with far fewer of the more modern and lower-risk procedures (eg, laparoscopic RYGB) represented. Again, this mix of procedures would be expected to overestimate mortality in surgical patients relative to what might be observed if all patients had been drawn from later years of the cohort, as surgical technique evolved.

Applications for Clinical Practice

This study adds to the evidence that patients with severe obesity who undergo bariatric surgery have a lower risk of death up to 10 years after surgery compared with patients who do not have these procedures. The findings should provide encouragement, particularly for managing older adults with longstanding comorbidities. Those who are strongly motivated to pursue weight loss surgery, and who are deemed good candidates by bariatric teams, may add years to their lives by undergoing one of these procedures. As always, however, the quality of life patients can expect after surgery, and a realistic understanding of the ways in which surgery will fundamentally change their lifestyle, must be a critical part of the discussion.

—Kristina Lewis, MD, MPH

References

1. Sturm R, Hattori A. Morbid obesity rates continue to rise rapidly in the United States. Int J Obesity 2013;37:889-91.

2. Sjostrom L, Narbo K, Sjostrom CD, et al. Effects of bariatric surgery on mortality in Swedish obese subjects. N Engl J Med 2007;357:741–52.

3. Adams TD, Gress RE, Smith SC, et al. Long-term mortality after gastric bypass surgery. N Engl J Med 2007;357:753–61.

4. Maciejewski ML, Livingston EH, Smith VA, et al. Survival among high-risk patients after bariatric surgery. JAMA 2011;305:2419–26.


Perfect Depression Care Spread: The Traction of Zero Suicides


From The Menninger Clinic, Houston, TX.


Abstract

  • Objective: To summarize the Perfect Depression Care initiative and describe recent work to spread this quality improvement initiative.
  • Methods: We summarize the background and methodology of the Perfect Depression Care initiative within the specialty behavioral health care setting and then describe the application of this methodology to 2 examples of spreading Perfect Depression Care to general medical settings: primary care and general hospitals.
  • Results: In the primary care setting, Perfect Depression Care spread successfully in association with the development and implementation of a practice guideline for managing the potentially suicidal patient. In the general hospital setting, Perfect Depression Care is spreading successfully in association with the development and implementation of a simple and efficient tool to screen not for suicide risk specifically, but for common psychiatric conditions associated with increased risk of suicide.
  • Conclusion: Both examples of spreading Perfect Depression Care to general medical settings illustrate the social traction of “zero suicides,” the audacious and transformative goal of the Perfect Depression Care Initiative.

Each year depression affects roughly 10% of adults in the United States [1]. The leading cause of disability in developed countries, depression results in substantial medical care expenditures, lost productivity, and absenteeism [1]. It is a chronic condition, and one that is associated with tremendous comorbidity from multiple chronic general medical conditions, including congestive heart failure, coronary artery disease, and diabetes [2]. Moreover, the presence of depression has deleterious effects on the outcomes of those comorbid conditions [2]. Untreated or poorly treated, depression can be deadly—each year as many as 10% of patients with major depression die from suicide [1].

In 1999 the Behavioral Health Services (BHS) division of Henry Ford Health System in Detroit, Michigan, set out to eliminate suicide among all patients with depression in our HMO network. This audacious goal was a key lever in a broader aim, which was to build a system of perfect depression care. We aimed to achieve breakthrough improvement in quality and safety by completely redesigning the delivery of depression care using the 6 aims and 10 new rules set forth in the Institute of Medicine’s (IOM) report Crossing the Quality Chasm [3]. To communicate our bold vision, we called the initiative Perfect Depression Care. Today, we can report a dramatic and sustained reduction in suicide that is unprecedented in the clinical and quality improvement literature [4].

In the Chasm report, the IOM cast a spotlight on behavioral health care, placing depression and anxiety disorders on the short list of priority conditions for immediate national attention and improvement. Importantly, the IOM called for a focus on not only behavioral health care benefits and coverage, but access and quality of care for all persons with depression. Finding inspiration from our success in the specialty behavioral health care setting, we decided to answer the IOM’s call. We set out to build a system of depression care that is not confined to the specialty behavioral health care setting, a system that delivers perfect care to every patient with depression, regardless of general medical comorbidity or care setting. We called this work Perfect Depression Care Spread.

In this article, we first summarize the background and methodology of the Perfect Depression Care initiative. We then describe the application of this methodology to spreading Perfect Depression Care into 2 nonspecialty care settings—primary care and general hospitals. Finally, we review some of the challenges and lessons learned from our efforts to sustain this important work.

Building a System of Perfect Depression Care

The bedrock of Perfect Depression Care was a cultural intervention. The first step in the intervention was to commit to the goal of “zero defects.” Such a commitment is not just to the goal of improving, but to the ideal that perfect care is—indeed, must be—attainable. It is designed to take devoted yet average performers through complete organizational transformation. We began our transformation within BHS by establishing a “zero defects” goal for each of the IOM’s 6 aims (Table). We then used “pursuing perfection” methodology to work continually towards each goal [5].

One example of the transformative power of a “zero defects” approach is the case of the Effectiveness aim. Our team engaged in vigorous debate about the goal for this aim. While some team members eagerly embraced the “zero defects” ambition and argued that truly perfect care could only mean “no suicides,” others challenged it, viewing it as lofty but unrealistic. After all, we had been taught that for some number of individuals with depression, suicide was the tragic yet inevitable outcome of their illness. How could it be possible to eliminate every single suicide? The debate was ultimately resolved when one team member asked, “If zero isn’t the right number of suicides, then what is? Two? Four? Forty?” The answer was obvious and undeniable. It was at that moment that setting “zero suicides” as the goal became a galvanizing force within BHS for the Perfect Depression Care initiative.

The pursuit of zero defects must take place within a “just culture,” an organizational environment in which frontline staff feel comfortable disclosing errors, especially their own, while still maintaining professional accountability [6]. Without a just culture, good but imperfect performance can breed disengagement and resentment. By contrast, within a just culture, it becomes possible to implement specific strategies and tactics to pursue perfection. Along the way, each step towards “zero defects” is celebrated because each defect that does occur is identified as an opportunity for learning.

One core strategy for Perfect Depression Care was organizing care according to the planned care model, a locally tailored version of the chronic care model [7]. We developed a clear vision for how each patient’s care would change in a system of Perfect Depression Care. We partnered with patients to ensure their voice in the redesign of our depression care services. We then conceptualized, designed, and tested strategies for improvement in 4 high-leverage domains (patient partnership, clinical practice, access to care, and information systems), which were identified through mapping our current care processes. Once this new model of care was in place, we implemented relevant measures of care quality and began continually assessing progress and then adjusting the plan as needed (ie, following the Model for Improvement).

The multiple changes we implemented during each layer of transformation (Figure 1) have been described elsewhere in detail [8,9]. The challenge of spreading Perfect Depression Care was to apply all that we learned to new and different social systems in which suicide is not seen as a key measure of the quality of daily work.

Spread to Primary Care

The spread to primary care began in 2005, about 5 years after the initial launch of Perfect Depression Care in BHS. (Some previous work had aimed at integrating depression screening into a small number of specialty chronic disease management initiatives, although that work was not sustained.) We based the overall clinical structure on the IMPACT model of integrated behavioral health care [10]. Primary care providers collaborated with depression care managers, typically nurses, who had been trained to provide education to primary care providers and problem-solving therapy to patients. The care managers were supervised by a project leader (a full-time clinical psychologist) and supported by 2 full-time psychiatric nurse practitioners who were embedded in each clinic during the early phases of implementation. An electronic medical record (EMR) was already well established and facilitated the delivery of evidence-based depression care, as well as the collection of relevant process and outcome measures, which were fed back to the care teams on a regular basis. And, importantly, the primary care leadership team formally sanctioned the spread of depression care to all 27 primary care clinics.

Overcoming the Challenges of the Primary Care Visit

From 2005 to 2010, the model spread tenuously to 5 primary care clinics. At that rate (1 clinic per year), it would have taken over 20 years to spread depression care to all 27 primary care clinics. Not satisfied with this progress, we stepped back to consider why adoption was happening so slowly. First, we spoke with leaders. Although the project was on a shoestring budget, our leaders understood the business case for integrating some version of depression care into the primary care setting [11]. They advised limiting the scope of the project to adults with 1 of 6 chronic diseases: diabetes mellitus, congestive heart failure, coronary artery disease, chronic obstructive pulmonary disease (COPD), asthma, and chronic kidney disease. This narrower focus was aimed at using the project's limited resources more effectively on behalf of patients who were more frequent utilizers of care and statistically more likely to have a comorbid depressive illness. Through time studies, however, we learned that the time spent each day discerning which patients were eligible for depression screening created untenable delays in clinic workflow. It turned out that screening all patients was far more efficient than identifying which patients "should" be screened and then screening only those identified. This pragmatic approach to daily workflow in the clinics was a key driver of successful spread.

Next, we spoke to patients. In an effort to assess patient engagement, we reviewed the records of 830 patients who had been seen in one of the clinics where depression care was up and running. Among this group, less than 1% had declined to receive depression screening. In fact, during informal discussions with patients and clinic staff, patients were thanking their primary care providers for talking with them about depression. When it came to spreading depression care, patient engagement was not the problem.

Finally, we spoke with primary care providers, physicians who were viewed as leaders in their clinics. They described trepidation among their teams about adopting an innovation that would lead to patients being identified as at risk for suicide. Their concern was not that integrating depression care was the wrong thing to do in the primary care setting; indeed, they had a strong and genuine desire to provide better depression care for their patients. Their concern was that the primary care clinic was not equipped to manage a suicidal patient safely and effectively. This concern was real, and it was pervasive. After all, the typical primary care office visit was already replete with problem lists too long to be managed effectively in the diminishing amount of time allotted to each visit. Screening for depression would only make matters worse [12]. Furthermore, identifying a patient at risk for suicide was not uncommon in our primary care setting. Between 2006 and 2012, an average of 16% of primary care patients screened each year reported some degree of suicidal ideation (as measured by a positive response on question 9 of the PHQ-9). These discussions showed us that the model of depression care we were trying to spread into primary care was not designed with an explicit and confident approach to suicide—it was not Perfect Depression Care.
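To make the screening logic concrete, the sketch below shows how a positive response on question 9 of the PHQ-9 might be flagged in code. It is a minimal illustration, not our production tooling: the function names are invented, and the scoring convention (nine items scored 0-3, with any nonzero response on item 9 counted as a positive suicidal ideation screen) follows the standard use of the instrument described above.

# Illustrative sketch: flagging suicidal ideation on PHQ-9 item 9.
# Each PHQ-9 item is scored 0-3 (0 = "not at all" ... 3 = "nearly
# every day"); any nonzero answer on item 9 is treated as a positive
# suicidal ideation screen, per the convention described in the text.
from typing import Sequence

def phq9_total(items: Sequence[int]) -> int:
    """Sum of the 9 item scores (range 0-27)."""
    if len(items) != 9 or any(not 0 <= s <= 3 for s in items):
        raise ValueError("PHQ-9 requires nine item scores of 0-3")
    return sum(items)

def suicidal_ideation_flag(items: Sequence[int]) -> bool:
    """True if item 9 (thoughts of death or self-harm) is nonzero."""
    return items[8] > 0

responses = [2, 1, 0, 1, 0, 0, 1, 0, 1]   # hypothetical patient
print(phq9_total(responses))               # 6
print(suicidal_ideation_flag(responses))   # True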

Leveraging Suicide As a Driver of Spread

When we realized that the anxiety surrounding the management of a suicidal patient was the biggest obstacle to Perfect Depression Care spread to primary care, we decided to turn this obstacle into an opportunity. First, an interdisciplinary team developed a practice guideline for managing the suicidal patient in general medical settings. The guideline was based on the World Health Organization’s evidence-based guidelines for addressing mental health disorders in nonspecialized health settings [13] and modified into a single page to make it easy to adopt. Following the guideline was not at all a requirement, but doing so made it very easy to identify patients at potential risk for suicide and to refer them safely and seamlessly to the next most appropriate level of care.

Second, and most importantly, BHS made a formal commitment to provide immediate access for any patient referred by a primary care provider following the practice guideline. BHS pledged to perform the evaluation on the same day as the referral was made and without any questions asked. Delivering on this promise required BHS to develop and implement reliable processes for its ambulatory centers to receive same-day referrals from any one of 27 primary care clinics. Success meant delighting our customers in primary care while obviating the expense and trauma associated with sending patients to local emergency departments. This work was hard. And it was made possible by the culture within BHS of pursuing perfection.

The practice guideline was adopted readily and rapidly, and its implementation was followed by much success. During the 5 years of Perfect Depression Care spread when there was no practice guideline for managing the suicidal patient in general medical settings, we achieved a spread rate of 1 clinic per year. From 2010 to 2012, after the practice guideline was implemented, the model was spread to 22 primary care clinics, a rate of 7.3 clinics per year. This operational improvement brought with it powerful clinical improvement as well. After the implementation of the practice guideline, the average number of primary care patients receiving Perfect Depression Care increased from 835 per month to 9186 per month (Figure 2).

During this time of successful spread, project resources remained similar, no new or additional financial support was provided, and no new leadership directives had been communicated. The only new features of Perfect Depression Care spread were a 1-page practice guideline and a promise. Making suicide an explicit target of the intervention, and doing so in a ruthlessly practical way, created the conditions for the intervention to diffuse and be adopted more readily.

Spread to General Hospitals

In 2006, the Joint Commission established National Patient Safety Goal (NPSG) 15.01.01 for hospitals and health care facilities “to identify patients at risk for suicide” [14]. NPSG 15.01.01 applies not just to patients in psychiatric hospitals, but to all patients “being treated for emotional or behavioral disorders in general hospitals,” including emergency departments. As a measure of safety, suicide is the second most common sentinel event among hospitalized patients—only wrong-site surgery occurs more often. And when a suicide does take place in a hospital, the impact on patients, families, health care workers, and administrators is profound.

Still, completed suicide among hospitalized patients is statistically a very rare event. As a result, general hospitals find it challenging to meet the expectations set forth in NPSG 15.01.01, which seemingly asks hospitals to search for a needle in a haystack. Is it really valuable to ask a patient about suicide when that patient is a 16-year-old who presented to the emergency department with minor scrapes and bruises sustained while skateboarding? Should all patients with "do not resuscitate" orders receive a mandatory, comprehensive suicide risk assessment? In 2010, general hospitals in our organization enlisted our Perfect Depression Care team to help them develop a meaningful approach to NPSG 15.01.01, and so Perfect Depression Care spread to general hospitals began.

The goal of NPSG 15.01.01 is “to identify patients at risk for suicide.” To accomplish this goal, hospital care teams need simple, efficient, evidence-based tools for identifying such patients and responding appropriately to the identified risk. In a general hospital setting, implementing targeted suicide risk assessments is simply not feasible. Assessing every single hospitalized patient for suicide risk seems clinically unnecessary, if not wasteful, and yet the processes needed to identify reliably which patients ought to be assessed end up taking far longer than simply screening everybody. With these considerations in mind, our Perfect Depression Care team took a different approach.

The DAPS Tool

We developed a simple, easy-to-use tool to screen not for suicide risk specifically, but for common psychiatric conditions associated with increased risk of suicide. The Depression, Anxiety, Polysubstance Use, and Suicide screen (DAPS) [15] consists of 7 questions drawn from 5 individual evidence-based screening measures: the PHQ-2 for depression, the GAD-2 for anxiety, question 9 from the PHQ-9 for suicidal ideation, the SASQ for problem alcohol use, and a single drug use question for substance use. Each of these questionnaires has been validated as a sensitive screening measure for the psychiatric condition of interest (eg, major depression, generalized anxiety, current problem drinking). Some have been validated specifically in general medical settings or among general medical patient populations. Moreover, each questionnaire is valid whether clinician-administered or self-completed. Some have also been validated in languages other than English.

The DAPS tool bundles these separate screening measures into one efficient, easy-to-use instrument. As a bundle, the DAPS tool offers 3 major advantages over traditional screening tools. First, the tool takes a broader approach to suicide risk with the aim of increasing utility. Suicide is a statistically rare event, especially in general medical settings. On the other hand, psychiatric conditions that themselves increase people's risk of suicide are quite common, particularly in hospital settings. Rather than screening exclusively for suicidal thoughts and behavior, the DAPS tool screens for psychiatric conditions associated with an increased risk of suicide that are common in general medical settings. This approach to suicide screening is novel. It allows for the recognition of a higher number of patients who may benefit from behavioral health interventions, whether or not they are "actively suicidal" at that moment. By not including extensive assessments of numerous suicide risk factors, the DAPS tool offers practical utility without losing much specificity. After all, persons in general hospital settings who are at acutely increased risk of suicide (eg, a person admitted to the hospital following a suicide attempt via overdose) are already being identified.

The second advantage of the DAPS tool is that the information it obtains is actionable. Suicide screening tools, whether brief or comprehensive, are not immediately predictive and arrive at essentially the same conclusion—the person screened is deemed to fall into some risk stratification (eg, high, medium, low risk; acute vs non-acute risk). In general hospital settings, the responses to these stratifications are limited (eg, order a sitter, call a psychiatry consultation) and not specific to the level of risk. Furthermore, persons with psychiatric disorders may be at increased risk of suicide even if they deny having suicidal thoughts. The DAPS tool allows for the recognition of these persons, thus identifying opportunities for intervention. For example, a person who screens positive on the PHQ-2 portion of the DAPS but who denies having recent suicidal thoughts or behavior may not benefit from an immediate safety measure (eg, ordering a sitter) but may benefit from an evaluation and, if indicated, treatment for depression. Treating that person’s depression would decrease the longitudinal risk of suicide. If another person screens negative on the PHQ-2 but positive on the SASQ, then that person may benefit most from interventions targeting problem alcohol use, such as the initiation of a CIWA protocol in order to prevent the emergence of alcohol withdrawal during the hospitalization, but not necessarily from depression treatment.
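As a rough illustration of this kind of actionable branching, the sketch below maps hypothetical DAPS-style results to candidate responses. The thresholds used (PHQ-2 ≥ 3, GAD-2 ≥ 3, and any positive response on the suicide, alcohol, and drug items) are commonly used screening cutoffs, not the published DAPS scoring rules, and the suggested actions are examples patterned on the scenarios in this paragraph rather than an actual hospital protocol.

# Illustrative sketch of domain-to-action branching. The cutoffs
# below are common screening thresholds, NOT the published DAPS
# scoring rules; the suggested actions are examples only.
def daps_actions(phq2: int, gad2: int, item9_positive: bool,
                 sasq_positive: bool, drug_positive: bool) -> list[str]:
    actions = []
    if item9_positive:
        actions.append("immediate safety assessment (eg, sitter, psychiatry consult)")
    if phq2 >= 3:
        actions.append("evaluate for depression and consider treatment")
    if gad2 >= 3:
        actions.append("evaluate for an anxiety disorder")
    if sasq_positive:
        actions.append("assess alcohol use; consider a CIWA withdrawal protocol")
    if drug_positive:
        actions.append("assess substance use; consider withdrawal monitoring")
    return actions or ["no behavioral health action triggered by screen"]

# A patient who screens negative for depression but positive for
# problem drinking, mirroring the example in the text:
print(daps_actions(phq2=1, gad2=0, item9_positive=False,
                   sasq_positive=True, drug_positive=False))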

The third main advantage of the DAPS tool is its ease of use. There are a limited number of psychiatrists and other mental health care workers in general hospitals, and that number is not adequate to have all psychiatric screens and assessments performed by a specialist. The DAPS tool consists of scripted questions that any health care provider can read and follow. This type of instruction may be especially beneficial to health care providers who are unsure or uncomfortable about how to screen patients for suicide or psychiatric disorders. The DAPS tool provides these clinicians with language they can use comfortably when talking with patients. Alternatively, patients themselves can complete the DAPS questions, which frees up valuable time for providers to deliver other types of care. During a pilot project at one of our general hospitals, 20 general floor nurses were asked to administer the DAPS to their patients after receiving only a very brief set of instructions. On average, it took a nurse less than 4 minutes to complete the DAPS. Ninety percent of the nurses stated the DAPS tool would take "less time" or "no additional time" compared with the behavioral health questions in the nursing admission assessment they were required to complete on every patient. Eighty-five percent found the tool "easy" or "very easy" to use.

At the time of publication of this article, one of our general hospitals is set to roll out DAPS screening hospital wide with the goal of prospectively identifying patients who might benefit from some form of behavioral health intervention and thus reducing length of stay. Another of our general hospitals is already using the DAPS to reduce hospital readmissions [15]. What started out as an initiative simply to meet a regulatory requirement turned into a novel and efficient means to bring mental health care services to hospitalized patients.

Lessons Learned

Our goal in the Perfect Depression Care initiative was to eliminate suicide, and we have come remarkably close to achieving that goal. Our determination to strive for perfection rather than incremental goals had a powerful effect on our results. To move to a different order of performance required us to challenge our most basic assumptions and required new learning and new behavior.

This social aspect of our improvement work was fundamental to every effort made to spread Perfect Depression Care outside of the specialty behavioral health care setting. Indeed, the diffusion of all innovation occurs within a social context [16]. Ideas do not spread by themselves—they are spread from one person (the messenger) to another (the adopter). Successful spread, therefore, depends in large part on the communication between messenger and adopter.

Implementing Perfect Depression Care within BHS involved like-minded messengers and adopters from the same department, whereas spreading the initiative to the general medical setting involved messengers from one specialty and adopters from another. The nature of such a social system demands that the goals of the messenger be aligned with the incentives of the adopter. In health service organizations, such alignment requires effective leadership, not just local champions [17]. For example, spreading the initiative to the primary care setting really only became possible when our departmental leaders made a public promise to the leaders of primary care that BHS would see any patient referred from primary care on the same day of referral with no questions asked. And while it is true that operationalizing that promise was a more arduous task than articulating it, the promise itself is what created a social space within which the innovation could diffuse.

Even if leaders are successful at aligning the messenger’s goals and the adopter’s incentives, spread still must actually occur locally between 2 people. This social context means that a “good” idea in the mind of the messenger must be a “better” idea in the mind of the adopter. In other words, an idea or innovation is more likely to be adopted if it is better than the status quo [18]. And it is the adopter’s definition of “better” that matters. For example, our organization’s primary care clinics agreed that improving their depression care was a good idea. However, specific interventions were not adopted (or adoptable) until they became a way to make daily life easier for the front-line clinic staff (eg, by facilitating more efficient referrals to BHS). Furthermore, because daily life in each clinic was a little bit different, the specific interventions adopted were allowed to vary. Similarly, in the general hospital setting, DAPS screening was nothing more than a good idea until the nurses learned that it took less time and yielded more actionable results than the long list of behavioral health screening questions they were currently required to complete on every patient being admitted. When replacing those questions with the DAPS screen saved time and added value, the DAPS became better than the status quo, a tipping point was reached, and spread took place.

Future Spread

The 2 examples of Perfect Depression Care Spread described herein are testaments to the social traction of "zero suicides." Importantly, the success of each effort has hinged on its creative, practical approach to suicide, even though there is scant scientific evidence to support suicide prevention initiatives in general medical settings [19].

As it turns out, there is also little scientific knowledge about how innovations in health service organizations are successfully sustained [16]. It is our hope that the 15 years of Perfect Depression Care shed some light on this question, and that the initiative can continue to be sustained in today’s turbulent and increasingly austere health care environment. We are confident that we will keep improving as long as we keep learning.

In addition, we find tremendous inspiration in the many others who are learning and improving with us. In 2012, for instance, the US Surgeon General promoted the adoption of "zero suicides" as a national strategic objective [1]. And in 2015, the Deputy Prime Minister of the United Kingdom called for the adoption of "zero suicides" across the entire National Health Service [20]. As the Perfect Depression Care team continues to grow, the pursuit of perfection becomes even more stirring.


Acknowledgment: The author acknowledges Brian K. Ahmedani, PhD, Charles E. Coffey, MD, MS, C. Edward Coffey, MD, Terri Robertson, PhD, and the entire Perfect Depression Care team.

Corresponding author: M. Justin Coffey, MD, The Menninger Clinic, 12301 S. Main St., Houston, TX 77035, [email protected].

Financial disclosures: None.

References

1. U.S. Department of Health and Human Services (HHS) Office of the Surgeon General and National Action Alliance for Suicide Prevention. 2012 National Strategy for Suicide Prevention: goals and objectives for action. Washington, DC: HHS; 2012.

2. Druss BG, Walker ER. Mental disorders and medical comorbidity: research synthesis report no. 21. Robert Wood Johnson Foundation 2011.

3. Committee on Quality Health Care in America, Institute of Medicine. Crossing the Quality Chasm. Washington, DC: National Academy Press; 2001.

4. Coffey CE, Coffey MJ, Ahmedani BK. An update on Perfect Depression Care. Psychiatric Services 2013;64:396.

5. Robert Wood Johnson Foundation. Pursuing Perfection: Raising the bar in health care performance. Robert Wood Johnson Foundation; 2014.

6. Marx D. Patient safety and the “just culture”: a primer for health care executives. New York: Columbia University; 2001.

7. Coleman K, Austin BT, Brach C, Wagner EH. Evidence on the chronic care model in the new millennium. Health Aff 2009;28:75–85.

8. Coffey CE. Building a system of perfect depression care in behavioral health. Jt Comm J Qual Patient Saf 2007;33:193–9.

9. Hampton T. Depression care effort brings dramatic drop in large HMO population’s suicide rate. JAMA 2010;303:1903–5.

10. Unützer J, Powers D, Katon W, Langston C. From establishing an evidence-based practice to implementation in real-world settings: IMPACT as a case study. Psychiatr Clin North Am 2005;28:1079–92.

11. Melek SP, Norris DT, Paulus J. Economic impact of integrated medical-behavioral healthcare: implications for psychiatry. Milliman; 2014.

12. Schmitt MR, Miller MJ, Harrison DL, Touchet BK. Relationship of depression screening and physician office visit duration in a national sample. Psych Svc 2010;61:1126–31.

13. mhGAP intervention guide for mental, neurological, and substance use disorders in non-specialized health settings: Mental Health Gap Action Programme (mhGAP). World Health Organization; 2010.

14. National Patient Safety Goals 2008. The Joint Commission. Oakbrook, IL.

15. Coffey CE, Johns J, Veliz S, Coffey MJ. The DAPS tool: an actionable screen for psychiatric risk factors for rehospitalization. J Hosp Med 2012;7(suppl 2):S100–101.

16. Greenhalgh T, Robert G, Macfarlane F, et al. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q 2004;82:581–629.

17. Berwick DM. Disseminating innovations in health care. JAMA 2003;289:1969–75.

18. Rogers EM. Diffusion of innovations. 4th ed. New York: The Free Press; 1995.

19. LeFevre MF. Screening for suicide risk in adolescents, adults, and older adults in primary care: US Preventive Services Task Force Recommendation Statement. Ann Intern Med 2014;160:719–26.

20. Clegg N. Speech at mental health conference. Available at www.gov.uk/government/speeches/nick-clegg-at-mental-health-conference.


Second, and most importantly, BHS made a formal commitment to provide immediate access for any patient referred by a primary care provider following the practice guideline. BHS pledged to perform the evaluation on the same day as the referral was made and without any questions asked. Delivering on this promise required BHS to develop and implement reliable processes for its ambulatory centers to receive same-day referrals from any one of 27 primary care clinics. Success meant delighting our customers in primary care while obviating the expense and trauma associated with sending patients to local emergency departments. This work was hard. And it was made possible by the culture within BHS of pursuing perfection.

The practice guideline was adopted readily and rapidly, and its implementation was followed by much success. During the 5 years of Perfect Depression Care spread when there was no practice guideline for managing the suicidal patient in general medical settings, we achieved a spread rate of 1 clinic per year. From 2010 to 2012, after the practice guideline was implemented, the model was spread to 22 primary care clinics, a rate of 7.3 clinics per year. This operational improvement brought with it powerful clinical improvement as well. After the implementation of the practice guideline, the average number of primary care patients receiving Perfect Depression Care increased from 835 per month to 9186 per month (Figure 2).

During this time of successful spread, project resources remained similar, no new or additional financial support was provided, and no new leadership directives had been communicated. The only new features of Perfect Depression Care spread were a 1-page practice guideline and a promise. Making suicide an explicit target of the intervention, and doing so in a ruthlessly practical way, created the conditions for the intervention to diffuse and be adopted more readily.

Spread to General Hospitals

In 2006, the Joint Commission established National Patient Safety Goal (NPSG) 15.01.01 for hospitals and health care facilities “to identify patients at risk for suicide” [14]. NPSG 15.01.01 applies not just to patients in psychiatric hospitals, but to all patients “being treated for emotional or behavioral disorders in general hospitals,” including emergency departments. As a measure of safety, suicide is the second most common sentinel event among hospitalized patients—only wrong-site surgery occurs more often. And when a suicide does take place in a hospital, the impact on patients, families, health care workers, and administrators is profound.

Still, completed suicide among hospitalized patients is statistically a very rare event. As a result, general hospitals find it challenging to meet the expectations set forth in NPSG 15.01.01, which seemingly asks hospitals to search for a needle in a haystack. Is it really valuable to ask a patient about suicide when that patient is a 16-year-old teenager who presented to the emergency department for minor scrapes and bruises sustained while skateboarding? Should all patients with “do not resuscitate” orders receive a mandatory, comprehensive suicide risk assessment? In 2010, general hospitals in our organization enlisted our  Perfect Depression Care team to help them develop a meaningful approach to NPSG 15.01.01, and so Perfect Depression Care spread to general hospitals began.

The goal of NPSG 15.01.01 is “to identify patients at risk for suicide.” To accomplish this goal, hospital care teams need simple, efficient, evidence-based tools for identifying such patients and responding appropriately to the identified risk. In a general hospital setting, implementing targeted suicide risk assessments is simply not feasible. Assessing every single hospitalized patient for suicide risk seems clinically unnecessary, if not wasteful, and yet the processes needed to identify reliably which patients ought to be assessed end up taking far longer than simply screening everybody. With these considerations in mind, our Perfect Depression Care team took a different approach.

The DAPS Tool

We developed a simple and easy tool to screen, not for suicide risk specifically, but for common psychiatric conditions associated with increased risk of suicide. The Depression, Anxiety, Polysubstance Use, and Suicide screen (DAPS) [15] consists of 7 questions coming from 5 individual evidence-based screening measures: the PHQ-2 for depression, the GAD-2 for anxiety, question 9 from the PHQ-9 for suicidal ideation, the SASQ for problem alcohol use, and a single drug use question for substance use. Each of these questionnaires has been validated as a sensitive screening measure for the psychiatric condition of interest (eg, major depression, generalized anxiety, current problem drinking). Some of them have been validated specifically in general medical settings or among general medical patient populations. Moreover, each questionnaire is valid whether clinician-administered or self-completed. Some have also been validated in languages other than English.

The DAPS tool bundles together these separate screening measures into one easy to use and efficient tool. As a bundle, the DAPS tool offers 3 major advantages over traditional screening tools. First, the tool takes a broader approach to suicide risk with the aim of increasing utility. Suicide is a statistically rare event, especially in general medical settings. On the other hand, psychiatric conditions that themselves increase people’s risk of suicide are quite common, particularly in hospital settings. Rather than screening exclusively for suicidal thoughts and behavior, the DAPS tool screens for psychiatric conditions associated with an increased risk of suicide that are common in general medical settings. This approach to suicide screening is novel. It allows for the recognition of higher number of patients who may benefit from behavioral health interventions, whether or not they are “actively suicidal” at that moment. By not including extensive assessments of numerous suicide risk factors, the DAPS tool offers practical utility without losing much specificity. After all, persons in general hospital settings who at acutely increased risk of suicide (eg, a person admitted to the hospital following a suicide attempt via overdose) are already being identified.

The second advantage of the DAPS tool is that the information it obtains is actionable. Suicide screening tools, whether brief or comprehensive, are not immediately predictive and arrive at essentially the same conclusion—the person screened is deemed to fall into some risk stratification (eg, high, medium, low risk; acute vs non-acute risk). In general hospital settings, the responses to these stratifications are limited (eg, order a sitter, call a psychiatry consultation) and not specific to the level of risk. Furthermore, persons with psychiatric disorders may be at increased risk of suicide even if they deny having suicidal thoughts. The DAPS tool allows for the recognition of these persons, thus identifying opportunities for intervention. For example, a person who screens positive on the PHQ-2 portion of the DAPS but who denies having recent suicidal thoughts or behavior may not benefit from an immediate safety measure (eg, ordering a sitter) but may benefit from an evaluation and, if indicated, treatment for depression. Treating that person’s depression would decrease the longitudinal risk of suicide. If another person screens negative on the PHQ-2 but positive on the SASQ, then that person may benefit most from interventions targeting problem alcohol use, such as the initiation of a CIWA protocol in order to prevent the emergence of alcohol withdrawal during the hospitalization, but not necessarily from depression treatment.

The third main advantage of the DAPS tool is its ease of use. There are a limited number of psychiatrists and other mental health care workers in general hospitals, and that number is not adequate to have all psychiatric screens and assessments in performed by a specialist. The DAPS tool consists of scripted questions that any health care provider can read and follow. This type of instruction may be especially beneficial to health care providers who are unsure or uncomfortable about how to screen patients for suicide or psychiatric disorders. The DAPS tool provides these clinicians with language they can use comfortably when talking with patients. Alternatively, patients themselves can complete the DAPS questions, which frees up valuable time for providers to deliver other types of care. During a pilot project at one of our general hospitals, 20 general floor nurses were asked to implement the DAPS with their patients after receiving only a very brief set of instructions. On average, it took a nurse less than 4 minutes to complete the DAPS. Ninety percent of the nurses stated the DAPS tool would take “less time” or “no additional time” compared with the behavioral health questions in the current nursing admission assessment they were required to complete on every patient. Eighty-five percent found the tool “easy” or “very easy” to use.

At the time of publication of this article, one of our general hospitals is set to roll out DAPS screening hospital wide with the goal of prospectively identifying patients who might benefit from some form of behavioral health intervention and thus reducing length of stay. Another of our general hospitals is already using the DAPS to reduce hospital readmissions [15]. What started out as an initiative simply to meet a regulatory requirement turned into a novel and efficient means to bring mental health care services to hospitalized patients.

Lessons Learned

Our goal in the Perfect Depression Care initiative was to eliminate suicide, and we have come remarkably close to achieving that goal. Our determination to strive for perfection rather than incremental goals had a powerful effect on our results. To move to a different order of performance required us to challenge our most basic assumptions and required new learning and new behavior.

This social aspect of our improvement work was fundamental to every effort made to spread Perfect Depression Care outside of the specialty behavioral health care setting. Indeed, the diffusion of all innovation occurs within a social context [16]. Ideas do not spread by themselves—they are spread from one person (the messenger) to another (the adopter). Successful spread, therefore, depends in large part on the communication between messenger and adopter.

Implementing Perfect Depression Care within BHS involved like-minded messengers and adopters from the same department, whereas spreading the initiative to the general medical setting involved messengers from one specialty and adopters from another. The nature of such a social system demands that the goals of the messenger be aligned with the incentives of the adopter. In health service organizations, such alignment requires effective leadership, not just local champions [17]. For example, spreading the initiative to the primary care setting really only became possible when our departmental leaders made a public promise to the leaders of primary care that BHS would see any patient referred from primary care on the same day of referral with no questions asked. And while it is true that operationalizing that promise was a more arduous task than articulating it, the promise itself is what created a social space within which the innovation could diffuse.

Even if leaders are successful at aligning the messenger’s goals and the adopter’s incentives, spread still must actually occur locally between 2 people. This social context means that a “good” idea in the mind of the messenger must be a “better” idea in the mind of the adopter. In other words, an idea or innovation is more likely to be adopted if it is better than the status quo [18]. And it is the adopter’s definition of “better” that matters. For example, our organization’s primary care clinics agreed that improving their depression care was a good idea. However, specific interventions were not adopted (or adoptable) until they became a way to make daily life easier for the front-line clinic staff (eg, by facilitating more efficient referrals to BHS). Furthermore, because daily life in each clinic was a little bit different, the specific interventions adopted were allowed to vary. Similarly, in the general hospital setting, DAPS screening was nothing more than a good idea until the nurses learned that it took less time and yielded more actionable results than the long list of behavioral health screening questions they were currently required to complete on every patient being admitted. When replacing those questions with the DAPS screen saved time and added value, the DAPS became better than the status quo, a tipping point was reached, and spread took place.

Future Spread

The 2 examples of Perfect Depression Care Spread described herein are testaments to the social traction of  “zero suicides.” Importantly, the success of each effort has hinged on its creative, practical approach to suicide, even though there is scant scientific evidence to support suicide prevention initiatives in general medical settings [19].

As it turns out, there is also little scientific knowledge about how innovations in health service organizations are successfully sustained [16]. It is our hope that the 15 years of Perfect Depression Care shed some light on this question, and that the initiative can continue to be sustained in today’s turbulent and increasingly austere health care environment. We are confident that we will keep improving as long as we keep learning.

In addition, we find tremendous inspiration in the many others who are learning and improving with us. In 2012, for instance, the US Surgeon General promoted the adoption “zero suicides” as a national strategic objective [1]. And in 2015, the Deputy Prime Minister of the United Kingdom called for the adoption of “zero suicides” across the entire National Health Service [20]. As the Perfect Depression Care team continues to grow, the pursuit of perfection becomes even more stirring.

 

Acknowledgment: The author acknowledges Brian K. Ahmedani, PhD, Charles E. Coffey, MD, MS, C. Edward Coffey, MD, Terri Robertson, PhD, and the entire Perfect Depression Care team.

Corresponding author: M. Justin Coffey, MD, The Menninger Clinic, 12301 S. Main St., Houston, TX 77035, [email protected].

Financial disclosures: None.

From The Menninger Clinic, Houston, TX.

 

Abstract

  • Objective: To summarize the Perfect Depression Care initiative and describe recent work to spread this quality improvement initiative.
  • Methods: We summarize the background and methodology of the Perfect Depression Care initiative within the specialty behavioral health care setting and then describe the application of this methodology to 2 examples of spreading Perfect Depression Care to general medical settings: primary care and general hospitals.
  • Results: In the primary care setting, Perfect Depression Care spread successfully in association with the development and implementation of a practice guideline for managing the potentially suicidal patient. In the general hospital setting, Perfect Depression Care is spreading successfully in association with the development and implementation of a simple and efficient tool to screen not for suicide risk specifically, but for common psychiatric conditions associated with increased risk of suicide.
  • Conclusion: Both examples of spreading Perfect Depression Care to general medical settings illustrate the social traction of “zero suicides,” the audacious and transformative goal of the Perfect Depression Care Initiative.

Each year depression affects roughly 10% of adults in the United States [1]. The leading cause of disability in developed countries, depression results in substantial medical care expenditures, lost productivity, and absenteeism [1]. It is a chronic condition, and one that is associated with tremendous comorbidity from multiple chronic general medical conditions, including congestive heart failure, coronary artery disease, and diabetes [2]. Moreover, the presence of depression has deleterious effects on the outcomes of those comorbid conditions [2]. Untreated or poorly treated, depression can be deadly—each year as many as 10% of patients with major depression die from suicide [1].

In 1999 the Behavioral Health Services (BHS) division of Henry Ford Health System in Detroit, Michigan, set out to eliminate suicide among all patients with depression in our HMO network. This audacious goal was a key lever in a broader aim, which was to build a system of perfect depression care. We aimed to achieve breakthrough improvement in quality and safety by completely redesigning the delivery of depression care using the 6 aims and 10 new rules set forth in the Institute of Medicine’s (IOM) report Crossing the Quality Chasm [3]. To communicate our bold vision, we called the initiative Perfect Depression Care. Today, we can report a dramatic and sustained reduction in suicide that is unprecedented in the clinical and quality improvement literature [4].

In the Chasm report, the IOM cast a spotlight on behavioral health care, placing depression and anxiety disorders on the short list of priority conditions for immediate national attention and improvement. Importantly, the IOM called for a focus on not only behavioral health care benefits and coverage, but access and quality of care for all persons with depression. Finding inspiration from our success in the specialty behavioral health care setting, we decided to answer the IOM’s call. We set out to build a system of depression care that is not confined to the specialty behavioral health care setting, a system that delivers perfect care to every patient with depression, regardless of general medical comorbidity or care setting. We called this work Perfect Depression Care Spread.

In this article, we first summarize the background and methodology of the Perfect Depression Care initiative. We then describe the application of this methodology to spreading Perfect Depression Care into 2 nonspecialty care settings—primary care and general hospitals. Finally, we review some of the challenges and lessons learned from our efforts to sustain this important work.

Building a System of Perfect Depression Care

The bedrock of Perfect Depression Care was a cultural intervention. The first step in the intervention was to commit to the goal of “zero defects.” Such a commitment is not just to the goal of improving, but to the ideal that perfect care is—indeed, must be—attainable. It is designed to take devoted yet average performers through complete organizational transformation. We began our transformation within BHS by establishing a “zero defects” goal for each of the IOM’s 6 aims (Table). We then used “pursuing perfection” methodology to work continually towards each goal [5].

One example of the transformative power of a “zero defects” approach is the case of the Effectiveness aim. Our team engaged in vigorous debate about the goal for this aim. While some team members eagerly embraced the “zero defects” ambition and argued that truly perfect care could only mean “no suicides,” others challenged it, viewing it as lofty but unrealistic. After all, we had been taught that for some number of individuals with depression, suicide was the tragic yet inevitable outcome of their illness. How could it be possible to eliminate every single suicide? The debate was ultimately resolved when one team member asked, “If zero isn’t the right number of suicides, then what is? Two? Four? Forty?” The answer was obvious and undeniable. It was at that moment that setting “zero suicides” as the goal became a galvanizing force within BHS for the Perfect Depression Care initiative.

The pursuit of zero defects must take place within a “just culture,” an organizational environment in which frontline staff feel comfortable disclosing errors, especially their own, while still maintaining professional accountability [6]. Without a just culture, good but imperfect performance can breed disengagement and resentment. By contrast, within a just culture, it becomes possible to implement specific strategies and tactics to pursue perfection. Along the way, each step towards “zero defects” is celebrated because each defect that does occur is identified as an opportunity for learning.

One core strategy for Perfect Depression Care was organizing care according to the planned care model, a locally tailored version of the chronic care model [7]. We developed a clear vision for how each patient’s care would change in a system of Perfect Depression Care. We partnered with patients to ensure their voice in the redesign of our depression care services. We then conceptualized, designed, and tested strategies for improvement in 4 high-leverage domains (patient partnership, clinical practice, access to care, and information systems), which were identified through mapping our current care processes. Once this new model of care was in place, we implemented relevant measures of care quality and began continually assessing progress and then adjusting the plan as needed (ie, following the Model for Improvement).

The multiple changes we implemented during each layer of transformation (Figure 1) have been described elsewhere in detail [8,9]. The challenge of spreading Perfect Depression Care was to apply all that we learned to new and different social systems, where suicide is not seen as a key measure of the quality of daily work.

Spread to Primary Care

The spread to primary care began in 2005, about 5 years after the initial launch of Perfect Depression Care in BHS. (There had been some previous work aimed at integrating depression screening into a small number of specialty chronic disease management initiatives, although that work was not sustained.) We based the overall clinical structure on the IMPACT model of integrated behavioral health care [10]. Primary care providers collaborated with depression care managers, typically nurses, who had been trained to provide education to primary care providers and problem-solving therapy to patients. The care managers were supervised by a project leader (a full-time clinical psychologist) and supported by 2 full-time psychiatric nurse practitioners who were embedded in each clinic during the early phases of implementation. An electronic medical record (EMR) was comfortably in place and facilitated the delivery of evidence-based depression care, as well as the collection of relevant process and outcome measures, which were fed back to the care teams on a regular basis. And, importantly, the primary care leadership team formally sanctioned depression care to be spread to all 27 primary care clinics.

Overcoming the Challenges of Primary Care Visits

From 2005 to 2010, the model was spread tenuously to 5 primary care clinics. At that rate (1 clinic per year), it would have taken over 20 years to spread depression care through all 27 primary care clinics. Not satisfied with this progress, we stepped back to consider why adoption was happening so slowly. First, we spoke with leaders. Although the project was on a shoestring budget, our leaders understood the business case for integrating some version of depression care into the primary care setting [11]. They advised limiting the scope of the project to focus only on adults with 1 of 6 chronic diseases: diabetes mellitus, congestive heart failure, coronary artery disease, chronic obstructive pulmonary disease (COPD), asthma, and chronic kidney disease. This narrower focus was aimed at using the project’s limited resources more effectively on behalf of patients who were more frequent utilizers of care and statistically more likely to have a comorbid depressive illness. Through the use of time studies, however, we learned that the time consumed each day in discerning which patients were eligible for depression screening created untenable delays in clinic workflow. It turned out that the process of screening all patients was far more efficient than the process of identifying which patients “should” be screened and then screening only those who were identified. This pragmatic approach to daily workflow in the clinics was a key driver of successful spread.
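
A back-of-envelope comparison makes the time-study finding intuitive. All of the numbers below are invented purely for illustration; the article reports only the direction of the result, not the underlying times.

```python
# Hypothetical arithmetic illustrating why universal screening can beat
# targeted screening. Every figure here is an assumption for illustration.

patients_per_day = 100
eligible_fraction = 0.4   # assumed share meeting the chronic-disease criteria
triage_min = 2.0          # assumed minutes to check eligibility per patient
screen_min = 3.0          # assumed minutes to administer the screen

targeted = (patients_per_day * triage_min
            + patients_per_day * eligible_fraction * screen_min)  # 320 min/day
universal = patients_per_day * screen_min                         # 300 min/day

print(f"targeted: {targeted:.0f} min/day vs universal: {universal:.0f} min/day")
# With these assumptions, the eligibility check itself consumes the
# expected savings of screening fewer patients.
```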

Next, we spoke to patients. In an effort to assess patient engagement, we reviewed the records of 830 patients who had been seen in one of the clinics where depression care was up and running. Among this group, less than 1% had declined to receive depression screening. In fact, during informal discussions with patients and clinic staff, patients were thanking their primary care providers for talking with them about depression. When it came to spreading depression care, patient engagement was not the problem.

Finally, we spoke with primary care providers, physicians who were viewed as leaders in their clinics. They described trepidation among their teams about adopting an innovation that would lead to patients being identified as at risk for suicide. Their concern was not that integrating depression care was the wrong thing to do in the primary care setting; indeed, they had a strong and genuine desire to provide better depression care for their patients. Their concern was that the primary care clinic was not equipped to manage a suicidal patient safely and effectively. This concern was real, and it was pervasive. After all, the typical primary care office visit was already replete with problem lists too long to be managed effectively in the diminishing amount of time allotted to each visit. Screening for depression would only make matters worse [12]. Furthermore, identifying a patient at risk for suicide was not uncommon in our primary care setting. Between 2006 and 2012, an average of 16% of primary care patients screened each year reported some degree of suicidal ideation (as measured by a positive response to question 9 of the PHQ-9). These discussions showed us that the model of depression care we were trying to spread into primary care was not designed with an explicit and confident approach to suicide—it was not Perfect Depression Care.

Leveraging Suicide As a Driver of Spread

When we realized that the anxiety surrounding the management of a suicidal patient was the biggest obstacle to Perfect Depression Care spread to primary care, we decided to turn this obstacle into an opportunity. First, an interdisciplinary team developed a practice guideline for managing the suicidal patient in general medical settings. The guideline was based on the World Health Organization’s evidence-based guidelines for addressing mental health disorders in nonspecialized health settings [13] and modified into a single page to make it easy to adopt. Following the guideline was not at all a requirement, but doing so made it very easy to identify patients at potential risk for suicide and to refer them safely and seamlessly to the next most appropriate level of care.

Second, and most importantly, BHS made a formal commitment to provide immediate access for any patient referred by a primary care provider following the practice guideline. BHS pledged to perform the evaluation on the same day as the referral was made and without any questions asked. Delivering on this promise required BHS to develop and implement reliable processes for its ambulatory centers to receive same-day referrals from any one of 27 primary care clinics. Success meant delighting our customers in primary care while obviating the expense and trauma associated with sending patients to local emergency departments. This work was hard. And it was made possible by the culture within BHS of pursuing perfection.

The practice guideline was adopted readily and rapidly, and its implementation was followed by much success. During the 5 years of Perfect Depression Care spread when there was no practice guideline for managing the suicidal patient in general medical settings, we achieved a spread rate of 1 clinic per year. From 2010 to 2012, after the practice guideline was implemented, the model was spread to 22 primary care clinics, a rate of 7.3 clinics per year. This operational improvement brought with it powerful clinical improvement as well. After the implementation of the practice guideline, the average number of primary care patients receiving Perfect Depression Care increased from 835 per month to 9186 per month (Figure 2).
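
The spread rates follow directly from the counts reported above; a quick check of the arithmetic, using only figures from the text:

```python
# Spread-rate arithmetic for Perfect Depression Care in primary care.

clinics_total = 27

# 2005-2010 (pre-guideline): 5 clinics in 5 years.
rate_before = 5 / 5                          # 1 clinic per year
years_needed = clinics_total / rate_before   # 27 years to reach all 27 clinics

# 2010-2012 (post-guideline): 22 more clinics in roughly 3 years.
rate_after = 22 / 3                          # ~7.3 clinics per year

print(f"before: {rate_before:.1f} clinics/yr (~{years_needed:.0f} yr "
      f"for all {clinics_total}); after: {rate_after:.1f} clinics/yr")
```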

During this time of successful spread, project resources remained similar, no new or additional financial support was provided, and no new leadership directives had been communicated. The only new features of Perfect Depression Care spread were a 1-page practice guideline and a promise. Making suicide an explicit target of the intervention, and doing so in a ruthlessly practical way, created the conditions for the intervention to diffuse and be adopted more readily.

Spread to General Hospitals

In 2006, the Joint Commission established National Patient Safety Goal (NPSG) 15.01.01 for hospitals and health care facilities “to identify patients at risk for suicide” [14]. NPSG 15.01.01 applies not just to patients in psychiatric hospitals, but to all patients “being treated for emotional or behavioral disorders in general hospitals,” including emergency departments. As a measure of safety, suicide is the second most common sentinel event among hospitalized patients—only wrong-site surgery occurs more often. And when a suicide does take place in a hospital, the impact on patients, families, health care workers, and administrators is profound.

Still, completed suicide among hospitalized patients is statistically a very rare event. As a result, general hospitals find it challenging to meet the expectations set forth in NPSG 15.01.01, which seemingly asks hospitals to search for a needle in a haystack. Is it really valuable to ask a patient about suicide when that patient is a 16-year-old who presented to the emergency department for minor scrapes and bruises sustained while skateboarding? Should all patients with “do not resuscitate” orders receive a mandatory, comprehensive suicide risk assessment? In 2010, general hospitals in our organization enlisted our Perfect Depression Care team to help them develop a meaningful approach to NPSG 15.01.01, and so Perfect Depression Care spread to general hospitals began.

The goal of NPSG 15.01.01 is “to identify patients at risk for suicide.” To accomplish this goal, hospital care teams need simple, efficient, evidence-based tools for identifying such patients and responding appropriately to the identified risk. In a general hospital setting, implementing targeted suicide risk assessments is simply not feasible. Assessing every single hospitalized patient for suicide risk seems clinically unnecessary, if not wasteful, and yet the processes needed to identify reliably which patients ought to be assessed end up taking far longer than simply screening everybody. With these considerations in mind, our Perfect Depression Care team took a different approach.

The DAPS Tool

We developed a simple, easy-to-use tool to screen, not for suicide risk specifically, but for common psychiatric conditions associated with increased risk of suicide. The Depression, Anxiety, Polysubstance Use, and Suicide screen (DAPS) [15] consists of 7 questions drawn from 5 individual evidence-based screening measures: the PHQ-2 for depression, the GAD-2 for anxiety, question 9 from the PHQ-9 for suicidal ideation, the SASQ for problem alcohol use, and a single drug use question for substance use. Each of these questionnaires has been validated as a sensitive screening measure for the psychiatric condition of interest (eg, major depression, generalized anxiety, current problem drinking). Some of them have been validated specifically in general medical settings or among general medical patient populations. Moreover, each questionnaire is valid whether clinician-administered or self-completed. Some have also been validated in languages other than English.
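
A minimal sketch of the DAPS bundle as described above is shown below: 7 questions drawn from 5 evidence-based screens. The positive-screen cutoffs are the commonly published ones for each instrument (PHQ-2 ≥3, GAD-2 ≥3, any endorsement of PHQ-9 item 9 or of the alcohol and drug items); the article itself does not specify scoring rules, so treat the cutoffs as assumptions.

```python
# Structure of the DAPS bundle: 7 questions from 5 screening measures.
# Cutoffs are commonly published values, assumed here, not stated in the text.

DAPS_ITEMS = {
    "depression":          {"instrument": "PHQ-2",         "questions": 2, "cutoff": 3},
    "anxiety":             {"instrument": "GAD-2",         "questions": 2, "cutoff": 3},
    "suicidal_ideation":   {"instrument": "PHQ-9 item 9",  "questions": 1, "cutoff": 1},
    "problem_alcohol_use": {"instrument": "SASQ",          "questions": 1, "cutoff": 1},
    "drug_use":            {"instrument": "drug-use item", "questions": 1, "cutoff": 1},
}

# Sanity check: the bundle totals 7 questions, as the text states.
assert sum(item["questions"] for item in DAPS_ITEMS.values()) == 7

def daps_screen(scores: dict) -> dict:
    """Map per-domain scores to positive/negative screens."""
    return {domain: scores.get(domain, 0) >= item["cutoff"]
            for domain, item in DAPS_ITEMS.items()}
```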

The DAPS tool bundles these separate screening measures together into one easy-to-use and efficient tool. As a bundle, the DAPS tool offers 3 major advantages over traditional screening tools. First, the tool takes a broader approach to suicide risk with the aim of increasing utility. Suicide is a statistically rare event, especially in general medical settings. On the other hand, psychiatric conditions that themselves increase people’s risk of suicide are quite common, particularly in hospital settings. Rather than screening exclusively for suicidal thoughts and behavior, the DAPS tool screens for psychiatric conditions associated with an increased risk of suicide that are common in general medical settings. This approach to suicide screening is novel. It allows for the recognition of a higher number of patients who may benefit from behavioral health interventions, whether or not they are “actively suicidal” at that moment. By not including extensive assessments of numerous suicide risk factors, the DAPS tool offers practical utility without losing much specificity. After all, persons in general hospital settings who are at acutely increased risk of suicide (eg, a person admitted to the hospital following a suicide attempt via overdose) are already being identified.

The second advantage of the DAPS tool is that the information it obtains is actionable. Suicide screening tools, whether brief or comprehensive, are not immediately predictive and arrive at essentially the same conclusion—the person screened is deemed to fall into some risk stratification (eg, high, medium, low risk; acute vs non-acute risk). In general hospital settings, the responses to these stratifications are limited (eg, order a sitter, call a psychiatry consultation) and not specific to the level of risk. Furthermore, persons with psychiatric disorders may be at increased risk of suicide even if they deny having suicidal thoughts. The DAPS tool allows for the recognition of these persons, thus identifying opportunities for intervention. For example, a person who screens positive on the PHQ-2 portion of the DAPS but who denies having recent suicidal thoughts or behavior may not benefit from an immediate safety measure (eg, ordering a sitter) but may benefit from an evaluation and, if indicated, treatment for depression. Treating that person’s depression would decrease the longitudinal risk of suicide. If another person screens negative on the PHQ-2 but positive on the SASQ, then that person may benefit most from interventions targeting problem alcohol use, such as the initiation of a CIWA protocol in order to prevent the emergence of alcohol withdrawal during the hospitalization, but not necessarily from depression treatment.
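
The routing logic this paragraph illustrates can be made concrete in code. The rules below mirror the paragraph's two examples; the function name and the completeness of the rule set are hypothetical, not the authors' protocol.

```python
# Illustrative translation of the DAPS examples above into routing rules.
# Keys match the DAPS_ITEMS sketch earlier; the rule set is an assumption.

def route_daps_result(positives: dict) -> list:
    actions = []
    if positives.get("suicidal_ideation"):
        # Recent suicidal thoughts or behavior: immediate safety response.
        actions += ["safety precautions (eg, sitter)", "psychiatry consult"]
    elif positives.get("depression"):
        # Depressed but denying suicidal ideation: evaluate and treat the
        # depression to reduce longitudinal suicide risk, rather than
        # defaulting to a sitter.
        actions.append("evaluate and, if indicated, treat depression")
    if positives.get("problem_alcohol_use"):
        # Problem drinking: prevent inpatient alcohol withdrawal.
        actions.append("consider initiating a CIWA protocol")
    return actions

# PHQ-2 negative but SASQ positive -> withdrawal prophylaxis, not
# depression treatment, matching the second example in the text.
print(route_daps_result({"problem_alcohol_use": True}))
```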

The third main advantage of the DAPS tool is its ease of use. There are a limited number of psychiatrists and other mental health care workers in general hospitals, and that number is not adequate to have all psychiatric screens and assessments performed by a specialist. The DAPS tool consists of scripted questions that any health care provider can read and follow. This type of instruction may be especially beneficial to health care providers who are unsure or uncomfortable about how to screen patients for suicide or psychiatric disorders. The DAPS tool provides these clinicians with language they can use comfortably when talking with patients. Alternatively, patients themselves can complete the DAPS questions, which frees up valuable time for providers to deliver other types of care. During a pilot project at one of our general hospitals, 20 general floor nurses were asked to implement the DAPS with their patients after receiving only a very brief set of instructions. On average, it took a nurse less than 4 minutes to complete the DAPS. Ninety percent of the nurses stated the DAPS tool would take “less time” or “no additional time” compared with the behavioral health questions in the current nursing admission assessment they were required to complete on every patient. Eighty-five percent found the tool “easy” or “very easy” to use.

At the time of publication of this article, one of our general hospitals is set to roll out DAPS screening hospital wide with the goal of prospectively identifying patients who might benefit from some form of behavioral health intervention and thus reducing length of stay. Another of our general hospitals is already using the DAPS to reduce hospital readmissions [15]. What started out as an initiative simply to meet a regulatory requirement turned into a novel and efficient means to bring mental health care services to hospitalized patients.

Lessons Learned

Our goal in the Perfect Depression Care initiative was to eliminate suicide, and we have come remarkably close to achieving that goal. Our determination to strive for perfection rather than incremental goals had a powerful effect on our results. To move to a different order of performance required us to challenge our most basic assumptions and required new learning and new behavior.

This social aspect of our improvement work was fundamental to every effort made to spread Perfect Depression Care outside of the specialty behavioral health care setting. Indeed, the diffusion of all innovation occurs within a social context [16]. Ideas do not spread by themselves—they are spread from one person (the messenger) to another (the adopter). Successful spread, therefore, depends in large part on the communication between messenger and adopter.

Implementing Perfect Depression Care within BHS involved like-minded messengers and adopters from the same department, whereas spreading the initiative to the general medical setting involved messengers from one specialty and adopters from another. The nature of such a social system demands that the goals of the messenger be aligned with the incentives of the adopter. In health service organizations, such alignment requires effective leadership, not just local champions [17]. For example, spreading the initiative to the primary care setting really only became possible when our departmental leaders made a public promise to the leaders of primary care that BHS would see any patient referred from primary care on the same day of referral with no questions asked. And while it is true that operationalizing that promise was a more arduous task than articulating it, the promise itself is what created a social space within which the innovation could diffuse.

Even if leaders are successful at aligning the messenger’s goals and the adopter’s incentives, spread still must actually occur locally between 2 people. This social context means that a “good” idea in the mind of the messenger must be a “better” idea in the mind of the adopter. In other words, an idea or innovation is more likely to be adopted if it is better than the status quo [18]. And it is the adopter’s definition of “better” that matters. For example, our organization’s primary care clinics agreed that improving their depression care was a good idea. However, specific interventions were not adopted (or adoptable) until they became a way to make daily life easier for the front-line clinic staff (eg, by facilitating more efficient referrals to BHS). Furthermore, because daily life in each clinic was a little bit different, the specific interventions adopted were allowed to vary. Similarly, in the general hospital setting, DAPS screening was nothing more than a good idea until the nurses learned that it took less time and yielded more actionable results than the long list of behavioral health screening questions they were currently required to complete on every patient being admitted. When replacing those questions with the DAPS screen saved time and added value, the DAPS became better than the status quo, a tipping point was reached, and spread took place.

Future Spread

The 2 examples of Perfect Depression Care Spread described herein are testaments to the social traction of “zero suicides.” Importantly, the success of each effort has hinged on its creative, practical approach to suicide, even though there is scant scientific evidence to support suicide prevention initiatives in general medical settings [19].

As it turns out, there is also little scientific knowledge about how innovations in health service organizations are successfully sustained [16]. It is our hope that the 15 years of Perfect Depression Care shed some light on this question, and that the initiative can continue to be sustained in today’s turbulent and increasingly austere health care environment. We are confident that we will keep improving as long as we keep learning.

In addition, we find tremendous inspiration in the many others who are learning and improving with us. In 2012, for instance, the US Surgeon General promoted the adoption of “zero suicides” as a national strategic objective [1]. And in 2015, the Deputy Prime Minister of the United Kingdom called for the adoption of “zero suicides” across the entire National Health Service [20]. As the Perfect Depression Care team continues to grow, the pursuit of perfection becomes even more stirring.

 

Acknowledgment: The author acknowledges Brian K. Ahmedani, PhD, Charles E. Coffey, MD, MS, C. Edward Coffey, MD, Terri Robertson, PhD, and the entire Perfect Depression Care team.

Corresponding author: M. Justin Coffey, MD, The Menninger Clinic, 12301 S. Main St., Houston, TX 77035, [email protected].

Financial disclosures: None.

References

1. U.S. Department of Health and Human Services (HHS) Office of the Surgeon General and National Action Alliance for Suicide Prevention. 2012 National Strategy for Suicide Prevention: goals and objectives for action. Washington, DC: HHS; 2012.

2. Druss BG, Walker ER. Mental disorders and medical comorbidity: research synthesis report no. 21. Robert Wood Johnson Foundation 2011.

3. Committee on Quality Health Care in America, Institute of Medicine. Crossing the Quality Chasm. Washington, DC: National Academy Press; 2001.

4. Coffey CE, Coffey MJ, Ahmedani BK. An update on Perfect Depression Care. Psychiatric Services 2013;64:396.

5. Robert Wood Johnson Foundation. Pursuing Perfection: Raising the bar in health care performance. Robert Wood Johnson Foundation; 2014.

6. Marx D. Patient safety and the “just culture”: a primer for health care executives. New York: Columbia University; 2001.

7. Coleman K, Austin BT, Brach C, Wagner EH. Evidence on the chronic care model in the new millennium. Health Aff 2009;28:75–85.

8. Coffey CE. Building a system of perfect depression care in behavioral health. Jt Comm J Qual Patient Saf 2007;33:193–9.

9. Hampton T. Depression care effort brings dramatic drop in large HMO population’s suicide rate. JAMA 2010;303:1903–5.

10. Unützer J, Powers D, Katon W, Langston C. From establishing an evidence-based practice to implementation in real-world settings: IMPACT as a case study. Psychiatr Clin North Am 2005;28:1079–92.

11. Melek SP, Norris DT, Paulus J. Economic impact of integrated medical-behavioral healthcare: implications for psychiatry. Milliman; 2014.

12. Schmitt MR, Miller MJ, Harrison DL, Touchet BK. Relationship of depression screening and physician office visit duration in a national sample. Psychiatr Serv 2010;61:1126–31.

13. mhGAP intervention guide for mental, neurological, and substance use disorders in non-specialized health settings: Mental Health Gap Action Programme (mhGAP). World Health Organization; 2010.

14. National Patient Safety Goals 2008. Oakbrook Terrace, IL: The Joint Commission; 2008.

15. Coffey CE, Johns J, Veliz S, Coffey MJ. The DAPS tool: an actionable screen for psychiatric risk factors for rehospitalization. J Hosp Med 2012;7(suppl 2):S100–101.

16. Greenhalgh T, Robert G, Macfarlane F, et al. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q 2004;82:581–629.

17. Berwick DM. Disseminating innovations in health care. JAMA 2003;289:1969–75.

18. Rogers EM. Diffusion of innovations. 4th ed. New York: The Free Press; 1995.

19. LeFevre MF. Screening for suicide risk in adolescents, adults, and older adults in primary care: US Preventive Services Task Force Recommendation Statement. Ann Intern Med 2014;160:719–26.

20. Clegg N. Speech at mental health conference. Available at www.gov.uk/government/speeches/nick-clegg-at-mental-health-conference.


Issue
Journal of Clinical Outcomes Management - March 2015, VOL. 22, NO. 3
Display Headline
Perfect Depression Care Spread: The Traction of Zero Suicides

Cost Drivers Associated with Clostridium difficile-Associated Diarrhea in a Hospital Setting


From HealthCore, Wilmington, DE, and Cubist Pharmaceuticals, San Diego, CA.

 

Abstract

  • Objectives: To describe trends in inpatient resource utilization and potential cost drivers of Clostridium difficile-associated diarrhea (CDAD) treated in the hospital.
  • Methods: Retrospective medical record review included 500 patients with ≥1 inpatient medical claim diagnosis of CDAD (ICD-9-CM: 008.45) between 01/01/2005 and 10/31/2010. Information was collected on patient demographics, admission diagnoses, laboratory data, and CDAD-related characteristics and discharge. Hospital costs were evaluated for the entire inpatient episode and prorated for the duration of the CDAD episode (ie, CDAD diagnosis date to diarrhea resolution/discharge date).
  • Results: The cohort was mostly female (62%) and Caucasian (72%), with a mean (SD) age of 66 (±17.6) years. Sixty percent had a diagnosis of CDAD or presence of diarrhea at admission. CDAD diagnosis was confirmed with a laboratory test in 92% of patients. Approximately 44% had mild CDAD and 35% had severe CDAD. Following CDAD diagnosis, approximately 53% of patients were isolated for ≥1 day, and 12% were transferred to the ICU for a median (Q1–Q3) length of stay of 8 (5–15) days. Two-thirds received a gastrointestinal or infectious disease consult. Median time from CDAD diagnosis to discharge was 6 (4–9) days: 5.5 (4–8) days for patients admitted with CDAD and 6.5 (4–10) days for those with hospital-acquired CDAD. The mean and median costs (2011 USD) for the CDAD-associated hospitalization were $35,621 and $13,153, respectively.
  • Conclusion: Patients with CDAD utilize numerous expensive resources during hospitalization including laboratory tests, isolation, prolonged ICU stay, and specialist consultations.

 

Clostridium difficile, classified as an urgent public health threat by the Centers for Disease Control and Prevention (CDC), causes approximately 250,000 hospitalizations and an estimated 14,000 deaths per year in the United States [1]. An estimated 15% to 25% of patients with C. difficile-associated diarrhea (CDAD) will experience at least 1 recurrence [2-4], frequently requiring rehospitalization [5]. The high incidence of primary and recurrent infections contributes to a substantial burden associated with CDAD in terms of extended and repeat hospital stays [6,7].

Conservative estimates of the direct annual costs of CDAD in the United States over the past 15 years range from $1.1 billion [8] to $3.2 billion, with an average cost per stay of $10,212 for patients hospitalized with a principal diagnosis of CDAD or a CDAD-related symptom [5]. O’Brien et al estimated that costs associated with rehospitalizations accounted for 11% of overall CDAD-related hospital costs; when considering all CDAD-related hospitalizations, including both initial and subsequent rehospitalizations for recurrent infection and not accounting for post-acute or outpatient care, the 2-year cumulative cost was estimated to be $51.2 million. While studies have yielded varying assessments of the actual CDAD burden [5–10], they all suggest that the CDAD burden is considerable and that extended hospital stays are the major component of CDAD-associated costs [9,10]. In a claims-based study by Quimbo et al [11], when multiple and diverse cohorts of CDAD patients at elevated risk for recurrence were matched with patients with similar underlying at-risk condition(s) but no CDAD, the CDAD at-risk groups had an incremental length of stay (LOS) per hospitalization ranging from approximately 3 to 18 days and an incremental cost burden ranging from a mean of $11,179 to $115,632 (2011 USD) per stay.

While it is recognized that CDAD carries a significant cost burden driven largely by LOS, the current literature offers little detail on the characteristics of these hospital stays. Building on the Quimbo et al study, the current study was designed to probe further into the nature of the burden (ie, resource use) incurred during the course of CDAD hospitalizations. As such, the objective of this study was to identify common trends in hospital-related resource utilization and describe the potential drivers of the cost burden of CDAD using hospital medical record data.

 

 

Methods

Population

Patients were selected for this retrospective medical record review from the HealthCore Integrated Research Database (HealthCore, Wilmington, DE). The database contains a broad, clinically rich, and geographically diverse spectrum of longitudinal claims information from one of the largest commercially insured populations in the United States, representing 48 million lives. We identified 21,177 adult (≥18 years) patients with at least 1 inpatient claim with an International Classification of Diseases, 9th Edition, Clinical Modification (ICD-9-CM) diagnosis code for C. difficile infection (CDI; 008.45) between 1 January 2005 and 31 October 2010 (intake period). All patients had at least 12 months of continuous medical and pharmacy health plan eligibility prior to the incident CDAD-associated hospitalization within the database. Additional details regarding this cohort identification have been published previously [11]. The study was undertaken in accordance with Health Insurance Portability and Accountability Act (HIPAA) guidelines and the necessary central institutional review board approval was obtained prior to medical record identification and abstraction.
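
As a rough illustration, the claims-based selection step might look like the sketch below. The column names (`patient_id`, `setting`, `icd9_code`, `age`, `service_date`) are hypothetical stand-ins rather than the actual HealthCore schema, and the 12-month continuous-eligibility check is omitted for brevity.

```python
# Sketch of the cohort selection described above, under assumed column names.
import pandas as pd

def earliest_cdi_stay(claims: pd.DataFrame) -> pd.Series:
    """Earliest inpatient CDI-coded stay per adult patient in the intake window.

    Assumes `service_date` is a datetime64 column.
    """
    in_window = claims["service_date"].between("2005-01-01", "2010-10-31")
    is_cdi = (claims["setting"] == "inpatient") & (claims["icd9_code"] == "008.45")
    is_adult = claims["age"] >= 18
    hits = claims[in_window & is_cdi & is_adult]
    # One incident (index) hospitalization per patient.
    return hits.groupby("patient_id")["service_date"].min()
```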

Sampling Strategy

To achieve the target sample of 500 fully abstracted medical records, 2500 patients were randomly selected via systematic random sampling without replacement from the original CDAD population of 21,177; each patient’s earliest hospital stay with a CDI diagnosis during the intake period was identified from medical claims data and targeted for medical record abstraction. To be eligible for abstraction, medical records were required to include physicians’ and nurses’ notes/reports, discharge summary notes, a medication administration record (MAR), and confirmation of CDAD via a documented CDI ICD-9-CM diagnosis or a written note by a physician or nurse. Records with invalid or missing hospital or patient names were deemed ineligible for abstraction. Medical record retrieval continued until the requisite 500 CDAD-validated medical records were abstracted (Figure).
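
The sampling step can be illustrated with a short sketch. The study reports only that 2500 of the 21,177 patients were drawn via systematic random sampling without replacement, so the Python below is a minimal illustration under that description, with synthetic patient IDs rather than the study's actual data or program:

import random

def systematic_sample(population_ids, n):
    # Systematic random sampling without replacement: order the frame,
    # draw a random start within the first interval, then take every
    # k-th member, where k = N / n.
    ordered = sorted(population_ids)
    k = len(ordered) / n                    # sampling interval (about 8.47 here)
    start = random.uniform(0, k)            # random start in [0, k)
    return [ordered[int(start + i * k)] for i in range(n)]

population = list(range(21_177))            # synthetic stand-ins for patient IDs
selected = systematic_sample(population, 2_500)
assert len(set(selected)) == 2_500          # no patient drawn twice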

Medical Record Abstraction

During the record abstraction process, information was collected on patients’ race/ethnicity, body mass index (BMI), admission diagnosis and other conditions, point of entry and prior location, body temperature and laboratory data (eg, creatinine and albumin values, white blood cell [WBC] count), diarrhea and stool characteristics, CDAD diagnosis date, CDAD-specific characteristics, severity, complications, and related tests/procedures, CDAD treatments (eg, dose, duration, and formulation of medications), hospital LOS (including stays in the intensive care unit [ICU] or cardiac care unit [CCU] following CDAD diagnosis), consultations provided by gastrointestinal, infectious disease, intensivist, or surgical specialists, and the discharge summary covering disposition, CDAD status, and medications prescribed. Standardized data collection forms were used by trained nurses or pharmacists to collect information from the medical records, and inter-rater reliability testing with a 0.9 cutoff was required to confirm accuracy. To ensure consistency, the first 20 abstracted records were re-abstracted by the research team as a pilot test. Finally, quality checks were implemented throughout the abstraction process to identify inconsistencies and data entry errors, including coding errors and atypical, unrealistic entry patterns (eg, identical values for a particular data field entered on multiple records, implausible or erratic inputs, or a high percentage of missing data points). Missing data were not imputed.
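
As a sketch of the kind of automated quality check described above, the hypothetical helper below scans an abstraction table for two of the named red flags: a single value repeated across nearly all records, and a high share of missing data. The thresholds are illustrative placeholders, not values taken from the study:

import pandas as pd

def quality_flags(records: pd.DataFrame, max_share_identical=0.95, max_missing=0.20):
    # Flag abstraction fields showing atypical data entry patterns.
    flags = {}
    for col in records.columns:
        missing = records[col].isna().mean()
        if missing > max_missing:
            flags[col] = f"{missing:.0%} of values missing"
            continue
        if records[col].notna().any():
            top = records[col].value_counts(normalize=True).iloc[0]
            if top > max_share_identical:
                flags[col] = f"one value on {top:.0%} of records"
    return flags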

Study Definitions

Diarrhea was defined as 3 or more unformed (bloody, watery, thin, loose, and/or soft) bowel movements per day. CDAD severity was classified as mild (4–5 unformed bowel movements per day or WBC ≤ 12,000/mm3), moderate (6–9 unformed bowel movements per day or WBC between 12,001/mm3 and 15,000/mm3), or severe (≥ 10 unformed bowel movements per day or WBC ≥ 15,001/mm3) [12,13]. Diarrhea was considered resolved when the patient had no more than 3 unformed stools for 2 consecutive days, with resolution maintained through the completion of treatment and no additional CDAD therapy required as of the second day after the end of the course of therapy [2,14]. A CDAD episode was defined as the period from the date of CDAD diagnosis or confirmation (whichever occurred first) to the date of diarrhea resolution (where documented) or the discharge date.
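
Read as a rule that grades each marker separately and reports the worse of the two, the severity definition can be written out as follows (a sketch of one reading of the criteria, not code from the study):

def cdad_severity(stools_per_day: int, wbc_per_mm3: int) -> str:
    # One reading of the study's "or" criteria: grade each marker
    # separately, then report the worse of the two grades.
    order = ["mild", "moderate", "severe"]

    if stools_per_day >= 10:
        by_stools = "severe"
    elif stools_per_day >= 6:
        by_stools = "moderate"
    else:
        by_stools = "mild"              # 4-5 per day (3 is the diarrhea floor)

    if wbc_per_mm3 >= 15_001:
        by_wbc = "severe"
    elif wbc_per_mm3 >= 12_001:
        by_wbc = "moderate"
    else:
        by_wbc = "mild"                 # WBC <= 12,000/mm3

    return max(by_stools, by_wbc, key=order.index)

print(cdad_severity(5, 16_000))         # -> "severe": the WBC marker dominates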

Cost Measures

The total hospital health plan paid costs for the entire inpatient episode (including treatment costs, diagnostics, services provided, etc.) were estimated using medical claims present in the database and pertaining to the hospitalization from which medical records were abstracted. The proportionate amount for the duration of the CDAD episode (from CDAD diagnosis to the diarrhea resolution date, or the discharge date where the resolution date could not be ascertained) was then calculated to estimate the average CDAD-associated in-hospital costs.
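
The proration itself is simple arithmetic. Assuming, as the description implies but does not state outright, that paid amounts accrue evenly across inpatient days, a minimal sketch would be:

def prorated_cdad_cost(total_plan_paid: float, stay_days: int, episode_days: int) -> float:
    # Apportion the plan-paid amount for the whole stay to the CDAD
    # episode, assuming costs accrue evenly per inpatient day.
    return total_plan_paid * (episode_days / stay_days)

# eg, a $30,000, 15-day stay with a 9-day CDAD episode attributes $18,000 to CDAD
print(prorated_cdad_cost(30_000, 15, 9))   # -> 18000.0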

Analysis

Means (± standard deviation [SD]) and medians (interquartile range, Q1–Q3) were calculated for continuous data, and relative frequencies for categorical data. This analysis was descriptive in nature; hence, no statistical tests of significance were conducted.
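
For concreteness, a hypothetical helper producing exactly these summaries (mean with sample SD, median with Q1–Q3) might read:

import numpy as np

def describe(values):
    # Mean (SD) for continuous data plus median with Q1-Q3, as reported.
    v = np.asarray(values, dtype=float)
    q1, median, q3 = np.percentile(v, [25, 50, 75])
    return {"mean": v.mean(), "sd": v.std(ddof=1),   # ddof=1: sample SD
            "median": median, "q1": q1, "q3": q3}

print(describe([4, 5, 5, 7, 12]))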

Results

Medical records were successfully obtained from 55.3% of the contacted hospitals; the hospital’s refusal to participate or consent was the most frequent reason for failure, accounting for 3 of every 4 unobtained records. An additional 39.3% attrition was observed among the medical records received, with absence of a MAR form (23.9%) or of a confirmatory CDAD diagnosis or note (9.1%) being the most frequent reasons for discarding an available record prior to abstraction (Figure).

Patient Characteristics

Consistent with the characteristics of the overall CDAD population within the original database, the randomly selected patients whose records were abstracted were predominantly women and elderly, with a mean age of 66 (± 17.6) years (Table 1). Patients had a mean BMI of 26.7 (± 7.6), with 44% classified as overweight or obese. Most of the cohort had either CDAD or diarrhea as a primary diagnosis at admission. Among those with no admission diagnosis of CDAD or diarrhea, the mean time to CDAD acquisition was approximately 1 week after admission.

CDAD Characteristics and Complications

Using a derived definition of severity, most CDAD cases were classified as either mild or severe (Table 2). Among those with available diarrhea information, the majority of patients, as expected, reported thin/loose/soft or watery stools during the course of their CDAD episode. Patients had on average 5.5 (± 13.3) stools per day during the CDAD episode. In addition to diarrhea, stomach pain, vomiting, and dehydration were commonly reported. A relatively low proportion of patients had serious complications, including mucosal inflammation and colectomy. One of every 5 patients was a recurrent case, with documented prior CDAD.

CDAD-Related Resource Utilization

Following CDAD diagnosis, more than half of the study patients were isolated for 1 or more days. While the majority of patients with CDAD (74.0%) stayed in a general hospital room, 12.4% stayed in the ICU for a mean duration of 12.1 (± 12.3) days (Table 3). Half of these ICU patients required isolation for at least 1 day. Another 5.6% stayed in the CCU, in a private or semi-private room, for 5 to 7 days during the CDAD episode.

About one-third of patients consulted a gastrointestinal (GI) or infectious disease (ID) specialist at least once. Among these patients, assuming that a patient would have at least one follow-up visit (formal or informal) per day after the initial specialist consultation for the remainder of the CDAD episode, we estimate an average of 8.7 (± 15.6) GI and 11.6 (± 19.4) ID specialist visits during the CDAD episode.
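
Under that assumption, the visit count reduces to the initial consultation plus one visit per remaining day of the episode; a one-function sketch with hypothetical dates:

from datetime import date

def estimated_visits(first_consult: date, episode_end: date) -> int:
    # Initial consultation plus one assumed (formal or informal)
    # follow-up per day until the CDAD episode ends.
    return 1 + (episode_end - first_consult).days

print(estimated_visits(date(2010, 3, 1), date(2010, 3, 9)))   # -> 9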

Nearly all patients had their CDAD diagnosis confirmed by laboratory tests, with toxin A and/or toxin B identified in 47.6% of samples. In addition, nearly three-fifths of patients underwent 1 or more nondiagnostic tests, including endoscopy, colonoscopy, computed axial tomography (CAT) or magnetic resonance imaging (MRI) scans, sigmoidoscopy, and/or other obstructive series tests, during the CDAD episode.

CDAD Treatment

About 4.2% of patients received no antibiotic treatment following CDAD diagnosis. Overall, twice-daily metronidazole was the most frequently used antibiotic for CDAD (87.2%), with oral administration the preferred route among these patients; the median duration of treatment and median daily dose were 4 (3–7) days and 1000 mg, respectively. Oral vancomycin was administered to nearly half of the patients, with a median duration and daily dose of 5 (3–8) days and 500 mg, respectively, and a mean administration frequency of 2.5 (± 1.2) times per day (Table 4). One-third of patients (33.6%) augmented their first-line therapy, most frequently adding vancomycin to initial metronidazole treatment or vice versa. Only 5.2% of patients switched completely from first-line therapy, predominantly from metronidazole to vancomycin (4%).

CDAD at Discharge

Overall, the mean time from CDAD diagnosis to hospital discharge was 8.8 (± 13.3) days (Table 5). Notably, CDAD was documented to persist in 84.4% of patients at the time of discharge, and 82.5% of patients received prescriptions for post-discharge antibiotic treatment with metronidazole, vancomycin, or rifaximin (Table 6). Among the 7.6% of patients who died while hospitalized, CDAD was identified as the cause of death in one-fifth of cases.

Hospitalization Costs

Based on claims data, the mean (± SD) and median (Q1–Q3) plan costs for the duration of a CDAD-associated hospitalization (2011 USD) for these 500 patients were $35,621 (± $100,502) and $13,153 ($8,209–$26,893), respectively.

Discussion

While multiple studies have documented the considerable economic burden associated with CDAD [5–10], this study is, to our knowledge, the first to evaluate the specific hospital resources used during an extended hospital stay for CDAD. This real-world analysis, in conjunction with the Quimbo et al claims analysis, demonstrates the significant burden associated with CDAD in terms of both fixed costs (eg, the hospital stay itself) and the variable components that drive these expenditures (eg, consultations, ICU stays).

The mean ($35,621) and median ($13,153) total costs associated with the CDAD segment of the hospitalization, as measured via claims, were quite high even though mild CDAD was more prevalent than severe infection and most patients required only a general hospital room. Both cost measures were well above the mean US general hospitalization cost of $11,666 and the median cost of $7334 derived from Healthcare Cost and Utilization Project data [15]. The mean cost of hospitalization reported in the current study nevertheless falls within the range of previously reported costs for CDAD-associated hospitalizations [5,8,10]. While the mean cost may have been disproportionately inflated by a few extreme cases, the median CDAD-associated hospitalization cost was nearly twice the median cost of an average general hospital stay in the US [15]. Both our finding of elevated costs among patients with mild CDAD and the relative magnitude of those costs (approximately 3-fold higher than average hospitalization costs) are consistent with the literature. For instance, Pakyz and colleagues reported that, relative to patients without CDAD, hospital costs were tripled for patients with low-severity CDAD and 10% higher for those with more severe CDAD, presumably because CDAD resulted in costly complications that prolonged what would otherwise have been a short, simple hospital stay [10].

Type of hospital room could also be an important driver of cost. While most patients stayed in general hospital rooms, more than half were isolated for at least a day, and 12% of patients required nearly 2 weeks of intensive care. Taken together, 26% of patients in the current study were required to stay in a special care unit or a non–general hospital room for 5.5 to 12.2 days. This is consistent with the 28% of patients with CDAD who required a stay in a special care unit as previously reported by O’Brien et al [5]. Additionally, previous research using Canadian health care data has shown that an ICU stay costs an average of $7000 more per patient per day than a general hospital room (1992 Canadian dollars), or $9589 (2013 USD, calculated using historical exchange rate data and adjusted for inflation) [16]. Despite this additional cost and resource burden, it appears that overall only 53.4% of all patients received care in an isolated setting as guidelines recommend.
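
That conversion is a two-step calculation: apply the historical exchange rate, then inflate by the US consumer price index. The sketch below uses approximate inputs for illustration only; they land near, but are not claimed to be, the study's exact figures:

def to_2013_usd(amount_1992_cad: float, usd_per_cad_1992: float,
                us_cpi_1992: float, us_cpi_2013: float) -> float:
    # Convert at the historical exchange rate, then inflate by the CPI ratio.
    return amount_1992_cad * usd_per_cad_1992 * (us_cpi_2013 / us_cpi_1992)

# Approximate inputs: ~0.83 USD per CAD in 1992; US CPI ~140.3 (1992) vs ~233.0 (2013).
print(round(to_2013_usd(7_000, 0.83, 140.3, 233.0)))   # -> 9649, near the cited $9,589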

Repeated specialist visits, procedures, and multiple testing (concomitant diagnostic enzyme immunoassay [EIA] and nondiagnostic tests) potentially added to health care resource utilization and costs, along with the extra resources associated with specialized hospital care. We found that roughly one-third of patients consulted a specialist, although we did not distinguish between ‘formal’ and ‘informal’ consultations. Numerous studies published over the past 2 decades have demonstrated increased costs and resource utilization associated with specialist consultations [17–21]. Although the focused knowledge and experience of specialists may reduce morbidity and mortality [18,21], specialists are more likely than generalists to order diagnostic tests, perform procedures, and keep patients hospitalized longer and in ICUs, all of which contribute to higher costs without necessarily leading to improved health outcomes [21].

Limitations

One major limitation of this study was the inability to assess the individual costs of the resources used for each patient, either through the medical charts or via claims. Additionally, the burden of CDAD was found to continue beyond the hospital stay, with documented evidence of persisting infection in 84% of patients at the point of discharge. Since the medical records obtained were limited to a single hospitalization and a single place of service, capture of the entire CDAD episode remains potentially incomplete for patients who had recurrences or who visited sites of care in addition to the hospital (ie, emergency department or outpatient facility). The transition to outpatient care is often multifaceted and challenging for patients, especially those who are elderly and have multiple underlying conditions [18]. Access to care becomes more difficult, and patients become wholly responsible for taking their medication as prescribed and following other post-discharge treatment strategies. Furthermore, no differentiation was made between patients with a primary versus a secondary CDAD diagnosis.

Another limitation is that the costs of hospitalization were calculated from claims and as such do not include either patient-paid costs (eg, deductibles) or indirect costs (eg, lost work or productivity, caregiver costs) due to CDAD; this study therefore likely underestimates the true costs associated with CDAD. Finally, the patients included in this analysis were all members of large commercial health plans in the US and thus tend to be working and relatively healthy. These results may not be generalizable to patients with other types of health insurance or no insurance, or to those living outside the United States.

It is important to note that the trends and drivers described in this study are potential contributors to the burden of CDAD; given the descriptive nature of the study, formal analyses aimed at confirming these factors as drivers should be conducted in the future. CDAD-related hospitalizations have previously been shown to be associated with increased inpatient LOS and a substantial economic burden. Our study demonstrates that the CDAD-associated cost burden in hospital settings may be driven by the use of numerous high-cost hospital resources, including prolonged ICU stays, isolation, frequent GI and ID consultations, CDAD-related nondiagnostic tests/procedures, and symptomatic CDAD treatment.

Acknowledgments: The authors acknowledge Cheryl Jones for her editorial assistance in preparing this manuscript.

Corresponding author: Swetha Rao Palli, CTI Clinical Trial and Consulting, 1775 Lexington Ave, Ste. 200, Cincinnati, OH 45209, [email protected]

Funding/support: Funding for this study was provided by Cubist Pharmaceuticals.

Financial disclosures: Ms. Palli and Mr. Quimbo are former and current employees of HealthCore, respectively. HealthCore is an independent research organization that received funding from Cubist Pharmaceuticals for the conduct of this study. Dr. Broderick is an employee of Cubist Pharmaceuticals. Ms. Strauss was an employee of Optimer Pharmaceuticals during the time the study was carried out.

References

1. Centers for Disease Control and Prevention. Antibiotic resistance threats in the United States, 2013. Available at: www.cdc.gov/drugresistance/threat-report-2013/pdf/ar-threats-2013-508.pdf. Accessed March 6, 2013.

2. Louie TJ, Miller MA, Mullane KM, et al. Fidaxomicin versus vancomycin for Clostridium difficile infection. N Engl J Med 2011;364:422–31.

3. Lowy I, Molrine DC, Leav BA, et al. Treatment with monoclonal antibodies against Clostridium difficile toxins. N Engl J Med 2010;362:197–205.

4. Bouza E, Dryden M, Mohammed R, et al. Results of a phase III trial comparing tolevamer, vancomycin and metronidazole in patients with Clostridium difficile-associated diarrhoea [ECCMID abstract O464]. Clin Microbiol Infect 2008;14(Suppl s7):S103–4.

5. O’Brien JA, Lahue BJ, Caro JJ, Davidson DM. The emerging infectious challenge of Clostridium difficile-associated disease in Massachusetts hospitals: clinical and economic consequences. Infect Control Hosp Epidemiol 2007;28:1219–27.

6. Dubberke ER, Wertheimer AI. Review of current literature on the economic burden of Clostridium difficile infection. Infect Control Hosp Epidemiol 2009;30:57–66.

7. Ghantoji SS, Sail K, Lairson DR, et al. Economic healthcare costs of Clostridium difficile infection: a systematic review. J Hosp Infect 2010;74:309–18.

8. Kyne L, Hamel MB, Polavaram R, Kelly CP. Health care costs and mortality associated with nosocomial diarrhea due to Clostridium difficile. Clin Infect Dis 2002;34:346–53.

9. Forster AJ, Taljaard M, Oake N, et al. The effect of hospital-acquired infection with Clostridium difficile on length of stay in hospital. CMAJ 2012;184:37–42.

10. Pakyz A, Carroll NV, Harpe SE, et al. Economic impact of Clostridium difficile infection in a multihospital cohort of academic health centers. Pharmacotherapy 2011;31:546–51.

11. Quimbo RA, Palli SR, Singer J, et al. Burden of Clostridium difficile-associated diarrhea among hospitalized patients at high risk of recurrent infection. J Clin Outcomes Manag 2013;20:544–54.

12. Golan Y, Mullane KM, Miller MA, et al. Low recurrence rate among patients with C. difficile infection treated with fidaxomicin. Poster presented at: 49th Annual Interscience Conference on Antimicrobial Agents and Chemotherapy; 12–15 Sep 2009; San Francisco, CA.

13. Lewis SJ, Heaton KW. Stool form scale as a useful guide to intestinal transit time. Scand J Gastroenterol 1997;32:920–4.

14. Cornely OA, Crook DW, Esposito R, et al. Fidaxomicin versus vancomycin for infection with Clostridium difficile in Europe, Canada, and the USA: a double-blind, non-inferiority, randomised controlled trial. Lancet Infect Dis 2012;12:281–9.

15. Palli SR, Strauss M, Quimbo RA, et al. Cost drivers associated with Clostridium difficile infection in a hospital setting. Poster presented at: American Society of Health-System Pharmacists Midyear Clinical Meeting; December 2012; Las Vegas, NV.

16. Noseworthy TW, Konopad E, Shustack A, et al. Cost accounting of adult intensive care: methods and human and capital inputs. Crit Care Med 1996;24:1168–72.

17. Classen DC, Burke JP, Wenzel RP. Infectious diseases consultation: impact on outcomes for hospitalized patients and results of a preliminary study. Clin Infect Dis 1997;24:468–70.

18. Petrak RM, Sexton DJ, Butera ML, et al. The value of an infectious diseases specialist. Clin Infect Dis 2003;36:1013–7.

19. Sellier E, Pavese P, Gennai S, et al. Factors and outcomes associated with physicians’ adherence to recommendations of infectious disease consultations for patients. J Antimicrob Chemother 2010;65:156–62.

20. Jollis JG, DeLong ER, Peterson ED, et al. Outcome of acute myocardial infarction according to the specialty of the admitting physician. N Engl J Med 1996;335:1880–7.

21. Harrold LR, Field TS, Gurwitz JH. Knowledge, patterns of care, and outcomes of care for generalists and specialists. J Gen Intern Med 1999;14:499–511.

Journal of Clinical Outcomes Management - March 2015, Vol. 22, No. 3

From HealthCore, Wilmington, DE, and Cubist Pharmaceuticals, San Diego, CA.

Abstract

  • Objectives: To describe trends in inpatient resource utilization and potential cost drivers of Clostridium difficile-associated diarrhea (CDAD) treated in the hospital.
  • Methods: This retrospective medical record review included 500 patients with ≥ 1 inpatient medical claim with a diagnosis of CDAD (ICD-9-CM: 008.45) between 01/01/2005 and 10/31/2010. Information was collected on patient demographics, admission diagnoses, laboratory data, CDAD-related characteristics, and discharge. Hospital costs were evaluated for the entire inpatient episode and prorated for the duration of the CDAD episode (ie, CDAD diagnosis date to diarrhea resolution/discharge date).
  • Results: The cohort was mostly female (62%) and Caucasian (72%), with a mean (SD) age of 66 (± 17.6) years; 60% had a diagnosis of CDAD or presence of diarrhea at admission. CDAD diagnosis was confirmed with a laboratory test in 92% of patients. Approximately 44% had mild CDAD and 35% had severe CDAD. Following CDAD diagnosis, approximately 53% of patients were isolated for ≥ 1 day, and 12% were transferred to the ICU for a median (Q1–Q3) length of stay of 8 (5–15) days. Two-thirds received a gastrointestinal or infectious disease consult. Median time from CDAD diagnosis to discharge was 6 (4–9) days: 5.5 (4–8) days for patients admitted with CDAD and 6.5 (4–10) days for those with hospital-acquired CDAD. The mean and median costs (2011 USD) for a CDAD-associated hospitalization were $35,621 and $13,153, respectively.
  • Conclusion: Patients with CDAD utilize numerous expensive resources during hospitalization, including laboratory tests, isolation, prolonged ICU stays, and specialist consultations.
Clostridium difficile, classified as an urgent public health threat by the Centers for Disease Control and Prevention (CDC), causes approximately 250,000 hospitalizations and an estimated 14,000 deaths per year in the United States [1]. An estimated 15% to 25% of patients with C. difficile-associated diarrhea (CDAD) will experience at least 1 recurrence [2-4], frequently requiring rehospitalization [5]. The high incidence of primary and recurrent infections contributes to a substantial burden associated with CDAD in terms of extended and repeat hospital stays [6,7].

Conservative estimates of the direct annual costs of CDAD in the United States over the past 15 years range from $1.1 billion [8] to $3.2 billion, with an average cost per stay of $10,212 for patients hospitalized with a principal diagnosis of CDAD or a CDAD-related symptom [5]. O’Brien et al estimated that costs associated with rehospitalizations accounted for 11% of overall CDAD-related hospital costs;when considering all CDAD-related hospitalizations, including both initial and subsequent rehospitalizations for recurrent infection and not accounting for post-acute or outpatient care, the 2-year cumulative cost was estimated to be $51.2 million. While studies have yielded varying assessments of the actual CDAD burden [5–10], they all suggest that CDAD burden is considerable and that extended hospital stays are the major component of CDAD-associated costs [9,10]. In a claims-based study by Quimbo et al [11], when multiple and diverse cohorts of CDAD patients at elevated risk for recurrence were matched with patients with similar underlying at-risk condition(s) but no CDAD, the CDAD at-risk groups had an incremental LOS per hospitalization ranging from approximately 3 to 18 days and an incremental cost burden ranging from a mean of $11,179 to $115,632 (2011 USD) per stay.

While it is recognized that CDAD carries significant cost burden and is driven by LOS, current literature is lacking regarding the characteristics of these hospital stays. Building on the Quimbo et al study, the current study was designed to probe further into the nature of the burden (ie, resource use) incurred during the course of CDAD hospitalizations. As such, the objective of this study was to identify the common trends in hospital-related resource utilization and describe the potential drivers that affect the cost burden of CDAD using hospital medical record data.

 

 

Methods

Population

Patients were selected for this retrospective medical record review from the HealthCore Integrated Research Database (HealthCore, Wilmington, DE). The database contains a broad, clinically rich and geographically diverse spectrum of longitudinal claims information from one of the largest commercially insured populations in the United States, representing 48 million lives. We identified 21,177 adult (≥ 18 years) patients with at least 1 inpatient claim with an International Classification of Diseases, 9th Edition, Clinical Modification (ICD-9-CM) diagnosis code for C. difficile infection (CDI; 008.45) between 1 January 2005 and 31 October 2010 (intake period). All patients had at least 12 months of prior and continuous medical and pharmacy health plan eligibility prior to the incident CDAD-associated hospitalization within the database. Additional details regarding this cohort identification has been published previously [11]. The study was undertaken in accordance with Health Insurance Portability and Accountability Act (HIPAA) guidelines and the necessary central institutional review board approval was obtained prior to medical record identification and abstraction.

Sampling Strategy

To achieve the target sample of 500 fully abstracted medical records, 2500 patients were randomly selected via systematic random sampling without replacement from the original CDAD population of 21,177; their earliest hospital stay during the intake period with a diagnosis of CDI was identified using medical claims’ data and targeted for medical record abstraction. To be considered eligible for abstraction, medical records were required to include physicians’ and nurses’ notes/reports, discharge summary notes, medication administration record (MAR), and confirmation of CDAD via a documented CDI ICD-9-CM diagnosis or written note by a physician or nurse. Records with invalid or missing hospital or patient names were deemed ineligible for abstraction. Medical record retrieval continued until the requisite 500 CDAD-validated medical records were abstracted (Figure).

Medical Record Abstraction

During the record abstraction process, information was collected on patients’ race/ethnicity, body mass index (BMI), admission diagnosis and other conditions, point of entry and prior location, body temperature and laboratory data (eg, creatinine and albumin values, white blood cell [WBC] count), diarrhea and stool characteristics, CDAD diagnosis date, CDAD-specific characteristics, severity, complications, and related tests/procedures, CDAD treatments (eg, dose, duration, and formulation of medications), hospital LOS, including stays in the intensive care unit (ICU), cardiac care unit (CCU) following CDAD diagnosis; consultations provided by gastrointestinal, infectious disease, intensivists, or surgery care specialists, and discharge summary on disposition, CDAD status, and medications prescribed. Standardized data collection forms were used by trained nurses or pharmacists to collect information from the medical records and inter-rater reliability testing with a 0.9 cutoff was required to confirm accuracy. To ensure consistency, a pilot test of the first 20 abstracted records were re-abstracted by the research team. Last, quality checks were implemented throughout the abstraction process to identify any inconsistencies or data entry errors including coding errors and atypical, unrealistic data entry patterns (eg, identical values for a particular data field entered on multiple records; implausible or erratic inputs; or a high percentage of missing data points). Missing data were not imputed.

Study Definitions

Diarrhea was defined as 3 or more unformed (includes bloody, watery, thin, loose, soft, and/or unformed stool) bowel movements per day.CDAD severity was classified as mild (4–5 unformed bowel movements per day or WBC ≤ 2000/mm3); moderate (6–9 unformed bowel movements per day or WBC between 12,001/mm3 and 15,000/mm3); or severe (≥10 unformed bowel movements per day or WBC ≥15,001/mm3) [12,13]. Diarrhea was considered to be resolved when the patient had no more than 3 unformed stools for 2 consecutive days and lasting until treatment was completed, with no additional therapy required for CDAD as of the second day after the end of the course of therapy [2,14].CDAD episode was defined as the duration from the date of the CDAD diagnosis or confirmation (whichever occurred first), to the date of diarrhea resolution (where documented) or discharge date.

Cost Measures

The total hospital health plan paid costs for the entire inpatient episode (includes treatment costs, diagnostics, services provided, etc.) were estimated using medical claims present in the database and pertaining to the hospitalization from where medical records were abstracted. Then the proportionate amount for the duration of the CDAD episode (from CDAD diagnosis to the diarrhea resolution date or the discharge date in cases where the resolution date could not be ascertained) was calculated to estimate the average CDAD associated in-hospital costs.

Analysis

Means (± standard deviation [SD]), medians (interquartile range Q1 to Q3), and relative frequencies were calculated for continuous and categorical data, respectively. This analysis was descriptive in nature; hence, no statistical tests to determine significance were conducted.

 

 

Results

We had a 55.3% success rate in obtaining the medical records from the contacted hospitals with refusal to participate/consent by the hospital in question being the most frequent reason for failure in 3 out of 4 cases. An additional attrition of 39.3% was observed among the medical records received, with absence of a MAR form (23.9%) and confirmatory CDAD diagnosis or note (9.1%) being the most frequent criteria for discarding an available record prior to abstraction (Figure).

Patient Characteristics

Consistent with the characteristics of the overall CDAD population within the original database, the randomly selected patients whose records were abstracted were predominantly women and elderly with a mean age of 66 (± 17.6) years (Table 1). Patients had a mean BMI of 26.7 (± 7.6), with 44% classified as being either overweight or obese. Most of the cohort had either CDAD or diarrhea as a primary diagnosis at admission. Among those with no admission diagnosis of CDAD or diarrhea, the mean time to CDAD acquisition was about approximately 1 week after admission.

CDAD Characteristics and Complications

Using a derived definition of severity, most CDAD cases were classified either as 

mild or severe (Table 2). For those with available diarrhea information, as expected the majority of patients reported thin/loose/soft or watery stools during the course of their CDAD episode. Patients had on average 5.5 (± 13.3) stools per day during the CDAD episode. In addition to diarrhea, stomach pain, vomiting, and dehydration were commonly reported. A relatively low proportion of patients had serious complications including mucosal inflammation and colectomy. One of every 5 patients was a recurrent CDAD case with documented prior CDAD.

CDAD-Related Resource Utilization

Following CDAD diagnosis, more than half of the study patients were isolated for 1 or more days. While the majority of patients with CDAD (74.0%) stayed in a general hospital room, 12.4% stayed in the ICU for a mean duration of 12.1 (± 12.3) days (Table 3). Half of these ICU patients required 

isolation for at least 1 day. 5.6% stayed in the CCU in a private or semi-private room for 5 to 7 days during the CDAD episode.

About one-third of patients consulted a gastrointestinal or infectious disease specialist at least once. Among these patients, assuming that a patient following an initial specialist consultation would have follow-up visits at least once a day (formal or informal) for the remainder of the CDAD episode, we estimate that there were an average of 8.7 (± 15.6) and 11.6 (± 19.4) GI or ID specialist visits respectively during the CDAD episode.

Nearly all patients had their CDAD diagnosis confirmed by laboratory tests. CDAD virulence was identified as toxin A and/or toxin B in 47.6% of the samples. However, nearly three-fifths of patients also underwent 1 or more nondiagnostic tests including endoscopy, colonoscopy, computed axial tomography (CAT), or magnetic resonance imaging (MRI) scans, sigmoidoscopy, and/or other obstructive series tests during the CDAD episode.

CDAD Treatment

About 4.2% patients received no antibiotic treatment following CDAD diagnosis. Nevertheless, twice-daily metro-nidazole was the most frequently used antibiotic for CDAD (87.2%), with oral being the preferred route of administration among these patients. The median duration of treatment and daily dose were 4 (3–7) days and 1000 mg, respectively. Oral vancomycin was administered to nearly half of the patients for a median duration and daily dose of 5 (3–8) days and 500 mg, respectively; mean frequency of administration was 2.5 (± 1.2) times per day (Table 4). One-third of the patients (33.6%) augmented their first-line therapy, most frequently adding vancomycin to the initial metronidazole treatment or vice-versa. Only 5.2% of patients switched completely from the first-line therapy, predominantly from metronidazole to vancomycin (4%).

CDAD at Discharge

Overall, the mean time from CDAD diagnosis to hospital discharge was 8.8 (± 13.3) days (Table 5). Notably, CDAD was documented to persist in 84.4% of patients at the time of discharge, with 82.5% of patients obtaining prescriptions for post-discharge antibiotic treatment involving metronidazole, vancomycin, or rifaximin (Table 6). Among the 7.6% of patients who died while hospitalized, CDAD was identified as the cause of death in one-fifth of these cases.

 

 

Hospitalization Costs

Based on claims data, the mean (±SD) and median (Q1–Q3) plan costs for the duration of a CDAD-associated hospitalization (2011 USD) for these 500 patients were found to be $35,621 (± $100,502) and $13,153 ($8,209–$26,893), respectively.

 

Discussion

While multiple studies have documented the considerable economic burden associated with CDAD [5–10], this study was the first to our knowledge to evaluate the specific hospital resources that are used during an extended hospital stay for CDAD. This real-world analysis, in conjunction with the Quimbo et al claims analysis, demonstrated the significant burden associated with CDAD in terms of both fixed costs (eg, hospital stay) as well as the variable components that drive these expenditures (eg, consultations, ICU stay).

The mean ($35,621) and median ($13,153) total costs associated with the CDAD segment of the hospitalization, as measured via the claims, were quite high despite a greater prevalence of mild CDAD rather than severe infection, and required only a general hospital room stay. Both of the above CDAD hospital cost measures were well above the mean US general hospitalization cost of $11,666 and the median cost of $7334 measured from Healthcare Cost and Utilization Project data [15]. However, the mean cost of hospitalization reported in the current study falls within the range of previously reported costs for CDAD-associated hospitalizations [5,8,10]. While the mean cost may have been disproportionately inflated by a few extreme cases, the median CDAD-associated hospitalization cost was nearly twice the median cost of an average general hospital stay in the US [15]. Our finding that these elevated costs were observed among patients with mild CDAD and its relative magnitude compared with the average hospitalization costs (approximately 3-fold higher) were also consistent with the literature. For instance, Pakyz and colleagues reported that relative to patients without CDAD, hospital costs were tripled for patients with low-severity CDAD and 10% higher for those with more severe CDAD, presumably because CDAD resulted in costly complications that prolonged what would have otherwise been a short, simple hospital stay [10].

Type of hospital room could also be an important driver of cost. While most patients stayed in general hospital rooms, more than half were isolated for at least a day, and 12% of patients required nearly 2 weeks of intensive care. Taken together, 26% of patients in the current study were required to stay in a special care unit or a non–general hospital room for 5.5 to 12.2 days. This is consistent with the 28% of patients with CDAD that required stay on a special care unit previously reported by O’Brien et al [5].Additionally, previous research from Canadian health care data has shown that a single ICU stay costs an average of $7000 more per patient per day than a general hospital room (1992 Canadian dollars) or $9589 (2013 USD calculated using historical exchange rate data and adjusted for inflation) [16].However, despite this additional cost and resource burden, it appears that overall only 53.4% of all patients received care within an isolated setting as guidelines recommended.

Repeated specialist visits, procedures and multiple testing (concomitant diagnostic EIA and nondiagnostic tests) potentially added to the health care resource utilization and costs, along with the extra resources associated with specialized hospital care. We found that roughly one-third of patients consulted a specialist, although we did not distinguish between ‘formal’ and ‘informal’ consultations. Numerous studies published over the past 2 decades have demonstrated increased costs and resource utilization associated with specialist consultations [17–21]. Although the focused knowledge and experience of specialists may reduce morbidity and mortality [18,21], specialists are more likely than generalists to order more diagnostic tests, perform more procedures, and keep patients hospitalized longer and in ICUs, all of which contribute to higher costs without necessarily leading to improved health outcomes [21].

Limitations

One major limitation of this study was the inability to assess the individual costs of the resources used for each individual patient either through the medical charts or via claims. Additionally, the burden of CDAD was found to continue beyond the hospital stay, with documented evidence of persisting infection in 84% of patients at the point of discharge. Since the medical records obtained were limited to a single hospitalization and a single place of service, the data capture of an entire CDAD episode remains potentially incomplete for a number of patients who had recurrences or who had visited multiple sites of care in addition to the hospital (ie, emergency department or outpatient facility). The transition to outpatient care is often multifaceted and challenging for patients, especially those who are elderly and have multiple underlying conditions [18]. Access to care become more difficult, and patients become wholly responsible for taking their medication as prescribed and following other post-discharge treatment stratagems. Furthermore, no differentiation was made between patients having a primary versus secondary CDAD diagnosis.

 

 

Another limitation is that the costs of the hospitalization was calculated from claims and as such do not include either patient paid costs (eg, deductible) or indirect costs (eg, lost work or productivity or caregiver costs) due to CDAD. This study likely underestimates the true costs associated with CDAD. Finally, the patients included in this analysis were all members of large commercial health plans in the US and who are also working and relatively healthy. Therefore, these results may not be generalizable to patients with other types of health insurance or no insurance or to those living outside of the United States.

It is important to note that the trends and drivers described in this study are “potential” influencers contributing to the burden of CDAD. Given that this study is descriptive in nature, formal analyses aimed at confirming these factors as “drivers” should be conducted in future. CDAD-related hospitalizations have previously been shown to be associated with increased inpatient LOS and a substantial economic burden. Our study demonstrates that the CDAD-associated cost burden in hospital settings may be driven by the use of numerous high-cost hospital resources including prolonged ICU stays, isolation, frequent GI and ID consultations, CDAD-related non-diagnostic tests/procedures, and symptomatic CDAD treatment.

 

Acknowledgments: The authors acknowledge Cheryl Jones for her editorial assistance in preparing this manuscript.

Corresponding author: Swetha Rao Palli, CTI Clinical Trial and Consulting, 1775 Lexington Ave, Ste. 200, Cincinnati, OH 45209, [email protected]

Funding/support: Funding for this study was provided Cubist Pharmaceuticals.

Financial disclosures: Ms. Palli and Mr. Quimbo are former and current employees of HealthCore, respectively. HealthCore is an independent research organization that received funding from Cubist Pharmaceuticals for the conduct of this study. Dr. Broderick is an employee of Cubist Pharmaceuticals. Ms. Strauss was an employee of Optimer Pharmaceuticals during the time the study was carried out.

From HealthCore, Wilmington, DE, and Cubist Pharma-ceuticals, San Diego, CA.

 

Abstract

  • Objectives: To describe trends in inpatient resource utilization and potential cost drivers of Clostridium difficile-associated diarrhea (CDAD) treated in the hospital.
  • Methods: Retrospective medical record review included 500 patients with ≥1 inpatient medical claim diagnosis of CDAD (ICD-9-CM: 008.45) between 01/01/2005-10/31/2010. Information was collected on patient demographics, admission diagnoses, laboratory data, and CDAD-related characteristics and discharge. Hospital costs were evaluated for the entire inpatient episode and prorated for the duration of the CDAD episode (ie, CDAD diagnosis date to diarrhea resolution/discharge date).
  • Results: The cohort was mostly female (62%), Caucasian (72%), with mean (SD) age 66 (±17.6) years. 60% had diagnosis of CDAD or presence of diarrhea at admission. CDAD diagnosis was confirmed with laboratory test in 92% of patients. ~44% had mild CDAD, 35% had severe CDAD. Following CDAD diagnosis, approximately 53% of patients were isolated for ≥1 days, 12% transferred to the ICU for a median (Q1–Q3) length of stay of 8 (5–15) days. Two-thirds received gastrointestinal or infectious disease consult. Median time from CDAD diagnosis to discharge was 6 (4–9) days; 5.5 (4–8) days for patients admitted with CDAD, 6.5 (4–10) days for those with hospital-acquired CDAD. The mean and median costs (2011 USD) for CDAD-associated hospitalization were $35,621 and $13,153, respectively.
  • Conclusion: Patients with CDAD utilize numerous expensive resources during hospitalization including laboratory tests, isolation, prolonged ICU stay, and specialist consultations.

 

Clostridium difficile, classified as an urgent public health threat by the Centers for Disease Control and Prevention (CDC), causes approximately 250,000 hospitalizations and an estimated 14,000 deaths per year in the United States [1]. An estimated 15% to 25% of patients with C. difficile-associated diarrhea (CDAD) will experience at least 1 recurrence [2-4], frequently requiring rehospitalization [5]. The high incidence of primary and recurrent infections contributes to a substantial burden associated with CDAD in terms of extended and repeat hospital stays [6,7].

Conservative estimates of the direct annual costs of CDAD in the United States over the past 15 years range from $1.1 billion [8] to $3.2 billion, with an average cost per stay of $10,212 for patients hospitalized with a principal diagnosis of CDAD or a CDAD-related symptom [5]. O’Brien et al estimated that costs associated with rehospitalizations accounted for 11% of overall CDAD-related hospital costs;when considering all CDAD-related hospitalizations, including both initial and subsequent rehospitalizations for recurrent infection and not accounting for post-acute or outpatient care, the 2-year cumulative cost was estimated to be $51.2 million. While studies have yielded varying assessments of the actual CDAD burden [5–10], they all suggest that CDAD burden is considerable and that extended hospital stays are the major component of CDAD-associated costs [9,10]. In a claims-based study by Quimbo et al [11], when multiple and diverse cohorts of CDAD patients at elevated risk for recurrence were matched with patients with similar underlying at-risk condition(s) but no CDAD, the CDAD at-risk groups had an incremental LOS per hospitalization ranging from approximately 3 to 18 days and an incremental cost burden ranging from a mean of $11,179 to $115,632 (2011 USD) per stay.

While it is recognized that CDAD carries significant cost burden and is driven by LOS, current literature is lacking regarding the characteristics of these hospital stays. Building on the Quimbo et al study, the current study was designed to probe further into the nature of the burden (ie, resource use) incurred during the course of CDAD hospitalizations. As such, the objective of this study was to identify the common trends in hospital-related resource utilization and describe the potential drivers that affect the cost burden of CDAD using hospital medical record data.

 

 

Methods

Population

Patients were selected for this retrospective medical record review from the HealthCore Integrated Research Database (HealthCore, Wilmington, DE). The database contains a broad, clinically rich and geographically diverse spectrum of longitudinal claims information from one of the largest commercially insured populations in the United States, representing 48 million lives. We identified 21,177 adult (≥ 18 years) patients with at least 1 inpatient claim with an International Classification of Diseases, 9th Edition, Clinical Modification (ICD-9-CM) diagnosis code for C. difficile infection (CDI; 008.45) between 1 January 2005 and 31 October 2010 (intake period). All patients had at least 12 months of prior and continuous medical and pharmacy health plan eligibility prior to the incident CDAD-associated hospitalization within the database. Additional details regarding this cohort identification has been published previously [11]. The study was undertaken in accordance with Health Insurance Portability and Accountability Act (HIPAA) guidelines and the necessary central institutional review board approval was obtained prior to medical record identification and abstraction.

Sampling Strategy

To achieve the target sample of 500 fully abstracted medical records, 2500 patients were randomly selected via systematic random sampling without replacement from the original CDAD population of 21,177; their earliest hospital stay during the intake period with a diagnosis of CDI was identified using medical claims’ data and targeted for medical record abstraction. To be considered eligible for abstraction, medical records were required to include physicians’ and nurses’ notes/reports, discharge summary notes, medication administration record (MAR), and confirmation of CDAD via a documented CDI ICD-9-CM diagnosis or written note by a physician or nurse. Records with invalid or missing hospital or patient names were deemed ineligible for abstraction. Medical record retrieval continued until the requisite 500 CDAD-validated medical records were abstracted (Figure).

Medical Record Abstraction

During the record abstraction process, information was collected on patients’ race/ethnicity, body mass index (BMI), admission diagnosis and other conditions, point of entry and prior location, body temperature and laboratory data (eg, creatinine and albumin values, white blood cell [WBC] count), diarrhea and stool characteristics, CDAD diagnosis date, CDAD-specific characteristics, severity, complications, and related tests/procedures, CDAD treatments (eg, dose, duration, and formulation of medications), hospital LOS, including stays in the intensive care unit (ICU), cardiac care unit (CCU) following CDAD diagnosis; consultations provided by gastrointestinal, infectious disease, intensivists, or surgery care specialists, and discharge summary on disposition, CDAD status, and medications prescribed. Standardized data collection forms were used by trained nurses or pharmacists to collect information from the medical records and inter-rater reliability testing with a 0.9 cutoff was required to confirm accuracy. To ensure consistency, a pilot test of the first 20 abstracted records were re-abstracted by the research team. Last, quality checks were implemented throughout the abstraction process to identify any inconsistencies or data entry errors including coding errors and atypical, unrealistic data entry patterns (eg, identical values for a particular data field entered on multiple records; implausible or erratic inputs; or a high percentage of missing data points). Missing data were not imputed.

Study Definitions

Diarrhea was defined as 3 or more unformed (includes bloody, watery, thin, loose, soft, and/or unformed stool) bowel movements per day.CDAD severity was classified as mild (4–5 unformed bowel movements per day or WBC ≤ 2000/mm3); moderate (6–9 unformed bowel movements per day or WBC between 12,001/mm3 and 15,000/mm3); or severe (≥10 unformed bowel movements per day or WBC ≥15,001/mm3) [12,13]. Diarrhea was considered to be resolved when the patient had no more than 3 unformed stools for 2 consecutive days and lasting until treatment was completed, with no additional therapy required for CDAD as of the second day after the end of the course of therapy [2,14].CDAD episode was defined as the duration from the date of the CDAD diagnosis or confirmation (whichever occurred first), to the date of diarrhea resolution (where documented) or discharge date.

Cost Measures

The total hospital health plan paid costs for the entire inpatient episode (includes treatment costs, diagnostics, services provided, etc.) were estimated using medical claims present in the database and pertaining to the hospitalization from where medical records were abstracted. Then the proportionate amount for the duration of the CDAD episode (from CDAD diagnosis to the diarrhea resolution date or the discharge date in cases where the resolution date could not be ascertained) was calculated to estimate the average CDAD associated in-hospital costs.

Analysis

Means (± standard deviation [SD]), medians (interquartile range Q1 to Q3), and relative frequencies were calculated for continuous and categorical data, respectively. This analysis was descriptive in nature; hence, no statistical tests to determine significance were conducted.

 

 

Results

We had a 55.3% success rate in obtaining the medical records from the contacted hospitals with refusal to participate/consent by the hospital in question being the most frequent reason for failure in 3 out of 4 cases. An additional attrition of 39.3% was observed among the medical records received, with absence of a MAR form (23.9%) and confirmatory CDAD diagnosis or note (9.1%) being the most frequent criteria for discarding an available record prior to abstraction (Figure).

Patient Characteristics

Consistent with the characteristics of the overall CDAD population within the original database, the randomly selected patients whose records were abstracted were predominantly women and elderly with a mean age of 66 (± 17.6) years (Table 1). Patients had a mean BMI of 26.7 (± 7.6), with 44% classified as being either overweight or obese. Most of the cohort had either CDAD or diarrhea as a primary diagnosis at admission. Among those with no admission diagnosis of CDAD or diarrhea, the mean time to CDAD acquisition was about approximately 1 week after admission.

CDAD Characteristics and Complications

Using a derived definition of severity, most CDAD cases were classified either as 

mild or severe (Table 2). For those with available diarrhea information, as expected the majority of patients reported thin/loose/soft or watery stools during the course of their CDAD episode. Patients had on average 5.5 (± 13.3) stools per day during the CDAD episode. In addition to diarrhea, stomach pain, vomiting, and dehydration were commonly reported. A relatively low proportion of patients had serious complications including mucosal inflammation and colectomy. One of every 5 patients was a recurrent CDAD case with documented prior CDAD.

CDAD-Related Resource Utilization

Following CDAD diagnosis, more than half of the study patients were isolated for 1 or more days. While the majority of patients with CDAD (74.0%) stayed in a general hospital room, 12.4% stayed in the ICU for a mean duration of 12.1 (± 12.3) days (Table 3). Half of these ICU patients required 

isolation for at least 1 day. 5.6% stayed in the CCU in a private or semi-private room for 5 to 7 days during the CDAD episode.

About one-third of patients consulted a gastrointestinal or infectious disease specialist at least once. Among these patients, assuming that a patient following an initial specialist consultation would have follow-up visits at least once a day (formal or informal) for the remainder of the CDAD episode, we estimate that there were an average of 8.7 (± 15.6) and 11.6 (± 19.4) GI or ID specialist visits respectively during the CDAD episode.

Nearly all patients had their CDAD diagnosis confirmed by laboratory tests. CDAD virulence was identified as toxin A and/or toxin B in 47.6% of the samples. However, nearly three-fifths of patients also underwent 1 or more nondiagnostic tests including endoscopy, colonoscopy, computed axial tomography (CAT), or magnetic resonance imaging (MRI) scans, sigmoidoscopy, and/or other obstructive series tests during the CDAD episode.

CDAD Treatment

About 4.2% patients received no antibiotic treatment following CDAD diagnosis. Nevertheless, twice-daily metro-nidazole was the most frequently used antibiotic for CDAD (87.2%), with oral being the preferred route of administration among these patients. The median duration of treatment and daily dose were 4 (3–7) days and 1000 mg, respectively. Oral vancomycin was administered to nearly half of the patients for a median duration and daily dose of 5 (3–8) days and 500 mg, respectively; mean frequency of administration was 2.5 (± 1.2) times per day (Table 4). One-third of the patients (33.6%) augmented their first-line therapy, most frequently adding vancomycin to the initial metronidazole treatment or vice-versa. Only 5.2% of patients switched completely from the first-line therapy, predominantly from metronidazole to vancomycin (4%).

CDAD at Discharge

Overall, the mean time from CDAD diagnosis to hospital discharge was 8.8 (± 13.3) days (Table 5). Notably, CDAD was documented to persist in 84.4% of patients at the time of discharge, with 82.5% of patients obtaining prescriptions for post-discharge antibiotic treatment involving metronidazole, vancomycin, or rifaximin (Table 6). Among the 7.6% of patients who died while hospitalized, CDAD was identified as the cause of death in one-fifth of these cases.

 

 

Hospitalization Costs

Based on claims data, the mean (±SD) and median (Q1–Q3) plan costs for the duration of a CDAD-associated hospitalization (2011 USD) for these 500 patients were found to be $35,621 (± $100,502) and $13,153 ($8,209–$26,893), respectively.

 

Discussion

While multiple studies have documented the considerable economic burden associated with CDAD [5–10], this study was the first to our knowledge to evaluate the specific hospital resources that are used during an extended hospital stay for CDAD. This real-world analysis, in conjunction with the Quimbo et al claims analysis, demonstrated the significant burden associated with CDAD in terms of both fixed costs (eg, hospital stay) as well as the variable components that drive these expenditures (eg, consultations, ICU stay).

The mean ($35,621) and median ($13,153) total costs associated with the CDAD segment of the hospitalization, as measured via the claims, were quite high despite a greater prevalence of mild CDAD rather than severe infection, and required only a general hospital room stay. Both of the above CDAD hospital cost measures were well above the mean US general hospitalization cost of $11,666 and the median cost of $7334 measured from Healthcare Cost and Utilization Project data [15]. However, the mean cost of hospitalization reported in the current study falls within the range of previously reported costs for CDAD-associated hospitalizations [5,8,10]. While the mean cost may have been disproportionately inflated by a few extreme cases, the median CDAD-associated hospitalization cost was nearly twice the median cost of an average general hospital stay in the US [15]. Our finding that these elevated costs were observed among patients with mild CDAD and its relative magnitude compared with the average hospitalization costs (approximately 3-fold higher) were also consistent with the literature. For instance, Pakyz and colleagues reported that relative to patients without CDAD, hospital costs were tripled for patients with low-severity CDAD and 10% higher for those with more severe CDAD, presumably because CDAD resulted in costly complications that prolonged what would have otherwise been a short, simple hospital stay [10].

Type of hospital room could also be an important driver of cost. While most patients stayed in general hospital rooms, more than half were isolated for at least a day, and 12% of patients required nearly 2 weeks of intensive care. Taken together, 26% of patients in the current study were required to stay in a special care unit or a non–general hospital room for 5.5 to 12.2 days. This is consistent with the 28% of patients with CDAD who required a stay in a special care unit as previously reported by O’Brien et al [5]. Additionally, previous research from Canadian health care data has shown that an ICU stay costs an average of $7,000 more per patient per day than a general hospital room (1992 Canadian dollars), or $9,589 (2013 USD, calculated using historical exchange rate data and adjusted for inflation) [16]. However, despite this additional cost and resource burden, it appears that overall only 53.4% of all patients received care within an isolated setting, as guidelines recommend.
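
The two-step restatement of the 1992 Canadian-dollar figure in 2013 USD can be sketched as follows; the exchange rate and CPI values below are approximate assumptions for illustration, not the authors' exact inputs:

```python
# Incremental ICU cost per patient per day, in 1992 Canadian dollars [16]
cad_1992 = 7_000.0

# Assumed conversion inputs (illustrative approximations only)
usd_per_cad_1992 = 0.83            # ~1992 CAD-to-USD exchange rate
cpi_1992, cpi_2013 = 140.3, 233.0  # ~US CPI-U annual averages

usd_1992 = cad_1992 * usd_per_cad_1992       # convert at the historical rate
usd_2013 = usd_1992 * (cpi_2013 / cpi_1992)  # then adjust for US inflation
print(f"≈ ${usd_2013:,.0f} in 2013 USD")     # on the order of the $9,589 cited
```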

Repeated specialist visits, procedures, and multiple tests (concomitant diagnostic EIA and nondiagnostic tests) potentially added to health care resource utilization and costs, along with the extra resources associated with specialized hospital care. We found that roughly one-third of patients consulted a specialist, although we did not distinguish between ‘formal’ and ‘informal’ consultations. Numerous studies published over the past 2 decades have demonstrated increased costs and resource utilization associated with specialist consultations [17–21]. Although the focused knowledge and experience of specialists may reduce morbidity and mortality [18,21], specialists are more likely than generalists to order more diagnostic tests, perform more procedures, and keep patients hospitalized longer and in ICUs, all of which contribute to higher costs without necessarily leading to improved health outcomes [21].

Limitations

One major limitation of this study was the inability to assess the individual costs of the resources used for each patient, either through the medical charts or via claims. Additionally, the burden of CDAD was found to continue beyond the hospital stay, with documented evidence of persisting infection in 84% of patients at the point of discharge. Since the medical records obtained were limited to a single hospitalization and a single place of service, the data capture of an entire CDAD episode remains potentially incomplete for patients who had recurrences or who had visited sites of care in addition to the hospital (ie, emergency department or outpatient facility). The transition to outpatient care is often multifaceted and challenging for patients, especially those who are elderly and have multiple underlying conditions [18]. Access to care becomes more difficult, and patients become wholly responsible for taking their medication as prescribed and following other post-discharge treatment strategies. Furthermore, no differentiation was made between patients having a primary versus secondary CDAD diagnosis.

Another limitation is that the costs of hospitalization were calculated from claims and as such do not include either patient-paid costs (eg, deductible) or indirect costs (eg, lost work or productivity, or caregiver costs) due to CDAD. This study therefore likely underestimates the true costs associated with CDAD. Finally, the patients included in this analysis were all members of large commercial health plans in the US and were working and relatively healthy. Therefore, these results may not be generalizable to patients with other types of health insurance or no insurance, or to those living outside of the United States.

It is important to note that the trends and drivers described in this study are “potential” influencers contributing to the burden of CDAD. Given that this study is descriptive in nature, formal analyses aimed at confirming these factors as “drivers” should be conducted in the future. CDAD-related hospitalizations have previously been shown to be associated with increased inpatient LOS and a substantial economic burden. Our study demonstrates that the CDAD-associated cost burden in hospital settings may be driven by the use of numerous high-cost hospital resources, including prolonged ICU stays, isolation, frequent GI and ID consultations, CDAD-related nondiagnostic tests/procedures, and symptomatic CDAD treatment.

Acknowledgments: The authors acknowledge Cheryl Jones for her editorial assistance in preparing this manuscript.

Corresponding author: Swetha Rao Palli, CTI Clinical Trial and Consulting, 1775 Lexington Ave, Ste. 200, Cincinnati, OH 45209, [email protected]

Funding/support: Funding for this study was provided by Cubist Pharmaceuticals.

Financial disclosures: Ms. Palli and Mr. Quimbo are former and current employees of HealthCore, respectively. HealthCore is an independent research organization that received funding from Cubist Pharmaceuticals for the conduct of this study. Dr. Broderick is an employee of Cubist Pharmaceuticals. Ms. Strauss was an employee of Optimer Pharmaceuticals during the time the study was carried out.

References

1. Centers for Disease Control and Prevention. Antibiotic resistance threats in the United States, 2013. Available at: www.cdc.gov/drugresistance/threat-report-2013/pdf/ar-threats-2013-508.pdf. Accessed March 6, 2013.

2. Louie TJ, Miller MA, Mullane KM, et al. Fidaxomicin versus vancomycin for Clostridium difficile infection. N Engl J Med 2011;364:422–31.

3. Lowy I, Molrine DC, Leav BA, et al. Treatment with monoclonal antibodies against Clostridium difficile toxins. N Engl J Med 2010;362:197–205.

4. Bouza E, Dryden M, Mohammed R, et al. Results of a phase III trial comparing tolevamer, vancomycin and metronidazole in patients with Clostridium difficile-associated diarrhoea [ECCMID abstract O464]. Clin Microbiol Infect 2008;14(Suppl s7):S103–4.

5. O’Brien JA, Betsy JL, Caro J, Davidson DM. The emerging infectious challenge of Clostridium difficile-associated disease in Massachusetts hospitals: clinical and economic consequences. Infect Control Hosp Epidemiol 2007;28:1219–27.

6. Dubberke ER, Wertheimer AI. Review of current literature on the economic burden of Clostridium difficile infection. Infect Control Hosp Epidemiol 2009;30:57–66.

7. Ghantoji SS, Sail K, Lairson DR, et al. Economic healthcare costs of Clostridium difficile infection: a systematic review. J Hosp Infect 2010;74:309–18.

8. Kyne L, Hamel MB, Polavaram R, Kelly CP. Health care costs and mortality associated with nosocomial diarrhea due to Clostridium difficile. Clin Infect Dis 2002;34:346–53.

9. Forster AJ, Taljaard M, Oake N, et al. The effect of hospital-acquired infection with Clostridium difficile on length of stay in hospital. CMAJ 2012;184:37–42.

10. Pakyz A, Carroll NV, Harpe SE, et al. Economic impact of Clostridium difficile infection in a multihospital cohort of academic health centers. Pharmacotherapy 2011;31:546–51.

11. Quimbo RA, Palli SR, Singer J, et al. Burden of Clostridium difficile-associated diarrhea among hospitalized patients at high risk of recurrent infection. J Clin Outcomes Manag 2013;20:544–54.

12. Golan Y, Mullane KM, Miller MA, et al. Low recurrence rate among patients with C. difficile infection treated with fidaxomicin. Poster presented at: 49th Annual Interscience Conference on Antimicrobial Agents and Chemotherapy; 12–15 Sep 2009; San Francisco, CA.

13. Lewis SJ, Heaton KW. Stool form scale as a useful guide to intestinal transit time. Scand J Gastroenterol 1997;32:920–4.

14. Cornely OA, Crook DW, Esposito R, et al. Fidaxomicin versus vancomycin for infection with Clostridium difficile in Europe, Canada, and the USA: a double-blind, non-inferiority, randomised controlled trial. Lancet Infect Dis 2012;12:281–9.

15. Palli SR, Strauss M, Quimbo RA, et al. Cost drivers associated with Clostridium difficile infection in a hospital setting. Poster presented at: American Society of Health-System Pharmacists Midyear Clinical Meeting; December 2012; Las Vegas, NV.

16. Noseworthy TW, Konopad E, Shustack A, et al. Cost accounting of adult intensive care: methods and human and capital inputs. Crit Care Med 1996;24:1168–72.

17. Classen DC, Burke JP, Wenzel RP. Infectious diseases consultation: impact on outcomes for hospitalized patients and results of a preliminary study. Clin Infect Dis 1997;24:468–70.

18. Petrak RM, Sexton DJ, Butera ML, et al. The value of an infectious diseases specialist. Clin Infect Dis 2003;36:1013–7.

19. Sellier E, Pavese P, Gennai S, et al. Factors and outcomes associated with physicians’ adherence to recommendations of infectious disease consultations for patients. J Antimicrob Chemother 2010;65:156–62.

20. Jollis JG, DeLong ER, Peterson ED, et al. Outcome of acute myocardial infarction according to the specialty of the admitting physician. N Engl J Med 1996;335:1880–7.

21. Harrold LR, Field TS, Gurwitz JH. Knowledge, patterns of care, and outcomes of care for generalists and specialists. J Gen Intern Med 1999;14:499–511.

Issue
Journal of Clinical Outcomes Management - March 2015, VOL. 22, NO. 3
Display Headline
Cost Drivers Associated with Clostridium difficile-Associated Diarrhea in a Hospital Setting

Newer antifungals shorten tinea pedis treatment duration, promote adherence

Article Type
Changed
Fri, 01/18/2019 - 14:33
Display Headline
Newer antifungals shorten tinea pedis treatment duration, promote adherence

MIAMI BEACH – Two new antifungal agents on the market – luliconazole and naftifine – each have something unique to offer when it comes to treating tinea pedis, according to Dr. Boni E. Elewski.

Luliconazole is an azole drug, meaning it is broad spectrum and kills dermatophytes, yeast, and molds. Also, like all azoles, it has some antibacterial activity, she said at the South Beach Symposium.

Naftifine is an allylamine drug, and is mainly an antidermatophyte agent – albeit a “very, very potent antidermatophyte” – with no antibacterial activity, she said.

Both are approved for once-daily use for 2 weeks, and that’s good because the short treatment duration improves adherence to the regimen, especially compared with other drugs that require 4-6 weeks of treatment to eradicate the problem, noted Dr. Elewski, professor of dermatology and director of clinical trials research at the University of Alabama at Birmingham.

Both drugs also stay in the skin and continue working after treatment stops.

The choice of which drug or class of drugs to use depends on the patient’s symptoms.

“First of all, tinea pedis may not be obvious. People don’t often tell you, ‘This is what I have – it’s tinea pedis,’ ” she said.

Keep in mind that tinea pedis and onychomycosis are related. If you have a patient who you think has onychomycosis, look at the bottom of their foot, she advised.

“If they don’t have tinea pedis, they probably don’t have onychomycosis unless they’ve had tinea pedis recently and got rid of it,” she said.

Also, look for collarettes of scale, which may be very subtle and may look like “tiny little circular pieces of scale on the medial or lateral foot.”

“If you are not sure, just keep looking harder because you might see it,” Dr. Elewski said.

Interdigital tinea pedis will be a little more obvious, with scaling and crusting between the toes, as well as maceration and oozing in many cases.

When the toe web is oozing, you’re likely dealing with intertrigo, she said.

In such cases, an azole cream is the better treatment choice, because azoles will kill Candida, bacteria, and dermatophytes that are there, she said.

“So when I have a moist macerated space, I like an azole. If you have a dry scaly process – with or without the collarettes – you’re probably better with an allylamine, particularly if you use a keratolytic with it, something that has urea or a lactic acid,” she said.

Dr. Elewski is a consultant for Valeant Pharmaceuticals International and a contracted researcher for Anacor Pharmaceuticals.

Article Source

AT THE SOUTH BEACH SYMPOSIUM

Acute renal failure biggest short-term risk in I-EVAR explantation

Article Type
Changed
Wed, 01/02/2019 - 09:09
Display Headline
Acute renal failure biggest short-term risk in I-EVAR explantation

SCOTTSDALE, ARIZ. – Acute renal failure occurred postoperatively in one-third of patients who underwent endograft explantation after endovascular abdominal aortic aneurysm repair (EVAR), according to the results of a small retrospective study.

Perioperative mortality for infected EVAR (I-EVAR) across the study’s 36 patient records (83% male, average age 69 years), culled from four surgery centers’ data from 1997 to 2014, was 8%; overall mortality was 25%, according to Dr. Victor J. Davila of Mayo Clinic Arizona, Phoenix, and his colleagues. Dr. Davila presented the findings at the Southern Association for Vascular Surgery annual meeting.

“These data show that I-EVAR explantation can be performed safely, with acceptable morbidity and mortality,” said Dr. Davila, who noted that while acceptable, the rates were still high, particularly for acute renal failure.

“We did not find any difference between the patients who developed renal failure and the type of graft, whether or not there was suprarenal fixation, and an incidence of postoperative acute renal failure,” Dr. Davila said. “However, because acute renal failure is multifactorial, we need to minimize aortic clamp time, as well as minimize the aortic intimal disruption around the renal arteries.”

Three deaths occurred within 30 days of operation, all from anastomotic dehiscence. Additional short-term morbidities included respiratory failure requiring tracheostomy in three patients, and bleeding and sepsis in two patients each. Six patients required re-exploration because of infected hematoma, lymphatic leak, small-bowel perforation, open abdomen at initial operation, and anastomotic bleeding. Six more deaths occurred at a mean follow-up of 402 days: one death was attributable to a ruptured aneurysm, another to a progressive inflammatory illness, and four were of indeterminate cause.

Only three of the explantations reviewed by Dr. Davila and his colleagues were considered emergent. The rest (92%) were either elective or urgent. Infected patients tended to present with leukocytosis (63%), pain (58%), and fever (56%), usually about 65 days prior to explantation. The average time between EVAR and presentation with infection was 589 days.

Although most underwent total graft excision, two patients underwent partial excision, including one with a distal iliac limb infection that showed no sign of infection within the main portion of the endograft. Nearly three-quarters of patients had in situ reconstruction.

While nearly a third of patients had positive preoperative blood cultures indicating infection, 81% of intraoperative cultures taken from the explanted graft, aneurysm wall, or sac contents indicated infection.

The gram-positive Staphylococcus and Streptococcus were the most common organisms found in cultures (33% and 17%, respectively), although anaerobes were found in a third of patients, gram-negative organisms in a quarter of patients, and fungal infections in 14%. A majority (58%) of patients received long-term suppressive antibiotic therapy.

Surgeons should reserve the option to keep a graft in situ only in infected EVAR patients who likely would not survive surgical explantation and reconstruction, Dr. Davila said. “Although I believe [medical management] is an alternative, the best course of action is to remove the endograft.”

[email protected]

On Twitter @whitneymcknight

Article Source

AT THE SAVS ANNUAL MEETING

Vitals

Key clinical point: Minimizing cross-clamp time may reduce the rate of acute renal failure at 30 days post op in patients undergoing infected EVAR explantation.

Major finding: One-third of I-EVAR patients had postoperative acute renal failure; perioperative mortality in I-EVAR was 8%, and overall mortality was 25%.

Data source: Retrospective analysis of 36 patients with infected EVAR explants performed between 1997 and 2014 across four surgical centers.

Disclosures: Dr. Davila reported he had no relevant disclosures.

Achieving pregnancy after gynecological cancer

Article Type
Changed
Fri, 01/04/2019 - 12:52
Display Headline
Achieving pregnancy after gynecological cancer

Gynecological cancer in a woman of reproductive age is devastating news. Many women facing cancer treatment are interested in maintaining fertility. Fortunately, fertility-sparing treatment options are increasingly available and successful pregnancies have been reported.

These pregnancies present unique challenges to optimizing care of the mother and the fetus. In this article, we review the literature on pregnancies after successful treatment of ovarian, cervical, and endometrial cancer, and gestational trophoblastic disease.

Ovarian cancer

For young women diagnosed with ovarian cancer, the question of fertility preservation is often paramount. The American Society for Reproductive Medicine and the American Society of Clinical Oncology have published guidelines endorsing embryo and oocyte cryopreservation as viable strategies for maintaining fertility (J. Clin. Oncol. 2013;31:2500-10; Fertil. Steril. 2013;100:1224-31).

Particularly with non–epithelial cell (germ cell) and borderline tumors, innovations in cryopreservation have become more widely available. Cryopreservation of immature oocytes in young girls is still considered investigational and should be undertaken as part of a research protocol. In a study of 62 women with epithelial ovarian cancer who underwent oocyte cryopreservation, there were 19 conceptions and 22 deliveries – all at term with no anomalies (Gynecol. Oncol. 2008;110:345-53).

However, pregnancies resulting from in vitro fertilization are at increased risk for anomalies, and a targeted ultrasound and fetal echocardiogram are recommended.

Cervical cancer

In the United States, 43% of women diagnosed with cervical cancer are under age 45. For women with early-stage cancer with radiographically negative lymph nodes, tumors less than 2 cm, and no deep stromal invasion, fertility-sparing procedures include radical trachelectomy and simple vaginal trachelectomy.

Trachelectomy for appropriately selected patients is safe, with recurrence rates of 2%-3% and death rates of 2%-5%. For women with bulky disease (greater than 2 cm), neoadjuvant chemotherapy followed by trachelectomy has been reported, although this approach remains experimental (Gynecol. Oncol. 2014;135:213-6). While there is no consensus, most experts recommend waiting 6 months to 1 year after surgery before attempting conception.

Conception rates after trachelectomy are promising, with 60%-80% of women able to conceive. Approximately 10%-15% of these women will experience cervical stenosis, often attributed to the cerclage, resulting in menstrual or fertility issues (Gynecol. Oncol. 2005;99:S152-6; Gynecol. Oncol. 2013;131:77-82). Placement of an intrauterine cannula (Smith sleeve) at the time of trachelectomy decreases the rate of stenosis (Gynecol. Oncol. 2012;124:276-80).

Pregnancy outcomes in several case series after trachelectomy have demonstrated a rate of first trimester loss of 13%-20%, second trimester loss of 5%-8%, and preterm delivery of 27%-51%, mostly secondary to preterm premature rupture of membranes (PPROM) and/or chorioamnionitis. Both preterm deliveries and midtrimester losses are thought to be secondary to cervical insufficiency, decreased cervical mucus, and ascending infection.

Women who have undergone fertility-sparing treatment for cervical cancer should be counseled about the challenges of pregnancy, including decreased fertility, risk of early and late miscarriage, and preterm delivery. Practitioners should consider cervical length surveillance, especially for those without a cerclage, and vaginal progesterone. The potential utility of preemptive antibiotics in this population is unclear, though early treatment of urinary or genital tract infections is prudent.

Endometrial cancer

As a consequence of the obesity epidemic, younger women are being diagnosed with endometrial hyperplasia and cancer. Approximately 25% of early-stage endometrial cancers are diagnosed in premenopausal women, and 5% in women under age 40.

While hysterectomy is standard, fertility-sparing treatment with progestin for well-differentiated grade 1 stage 1A endometrial cancer has been successful and is not associated with any increase in disease progression and/or death (Obstet. Gynecol. 2013; 121:136-42).

Nearly two-thirds of the successfully treated women will require fertility medications and/or assisted reproductive technology (ART). Among those who conceive, 25% will miscarry. Following childbearing, definitive hysterectomy is recommended given the high recurrence rate (Gynecol. Oncol. 2014;133:229-33).

Gestational trophoblastic disease

Women with a history of complete and partial molar pregnancies and persistent gestational trophoblastic neoplasia (GTN) often pursue subsequent pregnancy. In a large cohort of more than 2,400 pregnancies after GTN, pregnancy outcomes were similar to those of the general population (J. Reprod. Med. 2014;59:188-94).

Among women with a history of a complete or partial mole, 1.7% had a subsequent pregnancy complicated by another molar pregnancy. Women who received chemotherapy for GTN may have a slightly higher risk of stillbirth (1.3%) and higher rates of anxiety in subsequent pregnancies (BJOG 2003;110:560-6).

Young women with gynecologic malignancies are often concerned about the safety of pregnancy. In appropriately selected patients, fertility preservation is safe and pregnancy outcomes overall are favorable, although women should be counseled regarding reduced fertility, the need for ART, and the risks of prematurity and stillbirth.

Pregnant women with a history of cancer or gestational trophoblastic disease are also at high risk for depression and anxiety. Women with a personal history of gynecologic cancer or GTD should be followed by a multidisciplinary team that can address the obstetrical, oncological, and psychological aspects of pregnancy.

Dr. Smid is a second-year fellow in maternal-fetal medicine at the University of North Carolina at Chapel Hill. Dr. Ivester is an associate professor of maternal-fetal medicine and an associate professor of maternal child health at UNC-Chapel Hill. The authors reported having no financial disclosures.
