Chest tightness and wheezing
This patient's physical examination and imaging findings are consistent with a diagnosis of acute severe asthma. Agitation, breathlessness at rest, and a respiratory rate > 30 breaths/min are among the manifestations of an acute severe episode. During severe episodes, accessory muscles of respiration are usually used and suprasternal retractions are often present. The heart rate is > 120 beats/min, loud biphasic (expiratory and inspiratory) wheezing can be heard, and pulsus paradoxus is often present (20-40 mm Hg). Oxyhemoglobin saturation on room air is < 91%. As severity increases, the patient increasingly assumes a hunched-over sitting position with the hands supporting the torso, termed the tripod position.
Asthma is a chronic, heterogeneous inflammatory airway disorder characterized by variable expiratory flow; airway wall thickening; respiratory symptoms; and exacerbations, which sometimes require hospitalization. According to the World Health Organization, asthma affected an estimated 262 million people in 2019. Airway hyperresponsiveness, or bronchial hyperreactivity, in asthma is an exaggerated response to various exogenous and endogenous stimuli. Mechanisms implicated in the development of asthma include direct stimulation of airway smooth muscle and indirect stimulation by pharmacologically active substances from mediator-secreting cells, such as mast cells or nonmyelinated sensory neurons. The degree of airway hyperresponsiveness is associated with the clinical severity of asthma.
Acute severe asthma is a life-threatening emergency characterized by severe airflow limitation that is unresponsive to initial appropriate bronchodilator therapy. As a result of pathophysiologic changes, airflow is severely restricted, leading to premature airway closure on expiration; impaired gas exchange; and dynamic hyperinflation, or air trapping. In such cases, urgent action is essential to avert serious outcomes, including the need for mechanical ventilation and death.
Asthma severity is defined by the level of treatment required to control a patient's symptoms and exacerbations. According to the 2022 Global Initiative for Asthma (GINA) guidelines, a severe asthma exacerbation describes a patient who talks in words (rather than sentences); leans forward; is agitated; uses accessory respiratory muscles; and has a respiratory rate > 30 breaths/min, heart rate > 120 beats/min, oxygen saturation on air < 90%, and peak expiratory flow ≤ 50% of their best or of predicted value. Given the heterogeneity of asthma, patients with acute severe asthma may present with a variety of signs and symptoms, including dyspnea, chest tightness, cough and wheezing, agitation, drowsiness or signs of confusion, and significant breathlessness at rest.
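For readers who want the thresholds in that GINA description gathered in one place, the sketch below restates them as a simple checklist in Python. It is illustrative only and not a clinical decision tool; the field names are hypothetical, and the cutoffs are simply those listed in the preceding paragraph.

from dataclasses import dataclass

@dataclass
class ExamFindings:
    respiratory_rate: float   # breaths/min
    heart_rate: float         # beats/min
    oxygen_saturation: float  # % on room air
    pef_percent_best: float   # peak expiratory flow, % of personal best or predicted
    talks_in_words: bool      # talks in words rather than sentences
    accessory_muscle_use: bool
    agitated: bool

def severe_exacerbation_features(f: ExamFindings) -> list:
    """Return the severe-exacerbation features (per the 2022 GINA description above) present in these findings."""
    features = []
    if f.talks_in_words:
        features.append("talks in words rather than sentences")
    if f.agitated:
        features.append("agitation")
    if f.accessory_muscle_use:
        features.append("accessory respiratory muscle use")
    if f.respiratory_rate > 30:
        features.append("respiratory rate > 30 breaths/min")
    if f.heart_rate > 120:
        features.append("heart rate > 120 beats/min")
    if f.oxygen_saturation < 90:
        features.append("oxygen saturation on air < 90%")
    if f.pef_percent_best <= 50:
        features.append("peak expiratory flow <= 50% of best or predicted")
    return features

# Example roughly based on the accompanying case vignette (respiratory rate 48,
# heart rate 135, oxygen saturation 87%); talks_in_words and the PEF percentage
# are assumed here for illustration, since the vignette reports PEF only in L/min.
case = ExamFindings(respiratory_rate=48, heart_rate=135, oxygen_saturation=87,
                    pef_percent_best=45, talks_in_words=True,
                    accessory_muscle_use=True, agitated=True)
print(severe_exacerbation_features(case))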
Exposure to external agents, such as indoor and outdoor allergens, air pollutants, and respiratory tract infections (primarily viral), is the most common cause of asthma exacerbations, which vary in severity. Numerous other factors can trigger an asthma exacerbation, including exercise, weather changes, certain foods, additives, drugs, extreme emotional expressions, rhinitis, sinusitis, polyposis, gastroesophageal reflux, menstruation, and pregnancy. Importantly, a patient with asthma of any level of severity can experience an exacerbation, including patients with mild or well-controlled disease.
Patients with a history of poorly controlled asthma or a recent exacerbation are at risk for an acute asthma exacerbation. Other risk factors include poor perception of airflow limitation, regular use or overuse of short-acting beta-agonists, incorrect inhaler technique, and suboptimal adherence to therapy. Comorbidities associated with risk for an acute asthma exacerbation include obesity, chronic rhinosinusitis, inducible laryngeal obstruction (vocal cord dysfunction), gastroesophageal reflux disease, chronic obstructive pulmonary disease, obstructive sleep apnea, bronchiectasis, cardiac disease, and kyphosis due to osteoporosis (which may follow corticosteroid overuse). The lack of a written asthma action plan and socioeconomic factors are also associated with increased risk for a severe exacerbation.
In the emergency department setting, pharmacologic therapy for acute severe asthma should consist of a short-acting beta-agonist, ipratropium bromide, systemic corticosteroids (oral or intravenous), and controlled oxygen therapy. Clinicians may also consider intravenous magnesium sulfate and high-dose inhaled corticosteroids. Once stable, patients should be treated with optimal asthma-controlling therapy, as outlined in the GINA guidelines. Optimizing patients' inhaler technique and adherence to therapy is imperative, and comorbidities should be appropriately managed. Nonpharmacologic interventions, such as smoking cessation, pulmonary rehabilitation, exercise, weight loss, and influenza/COVID-19 vaccination, are also recommended as indicated.
Zab Mosenifar, MD, Medical Director, Women's Lung Institute; Executive Vice Chairman, Department of Medicine, Cedars Sinai Medical Center, Los Angeles, California.
Zab Mosenifar, MD, has disclosed no relevant financial relationships.
Image Quizzes are fictional or fictionalized clinical scenarios intended to provide evidence-based educational takeaways.
A 32-year-old Black man presents to the emergency department with severe dyspnea, chest tightness, and wheezing. The patient is sitting forward in the tripod position and appears agitated and confused. Use of accessory respiratory muscles and suprasternal retractions are noted. He reports an approximately 2-week history of rhinorrhea, cough, and mild fever, for which he has been taking an over-the-counter nonsteroidal anti-inflammatory agent and a cough suppressant. His prior medical history is notable for obesity, type 2 diabetes, allergic rhinitis, mild asthma, and hypercholesterolemia. The patient is a current smoker (17 pack-years). Pertinent physical examination reveals a respiratory rate of 48 breaths/min, heart rate of 135 beats/min, oxygen saturation of 87%, and peak expiratory flow of 300 L/min. Loud biphasic wheezing can be heard. Rapid antigen and PCR tests for SARS-CoV-2 performed on nasopharyngeal swabs both come back negative. Chest radiography is ordered and shows pulmonary hyperinflation with bronchial wall thickening.
New and Improved Devices Add More Therapeutic Options for Treatment of Migraine
Since the mid-2010s, the US Food and Drug Administration (FDA) has approved or cleared no fewer than 10 migraine treatments in the form of orals, injectables, nasal sprays, and devices. The medical achievements of the last decade in the field of migraine have been nothing less than stunning for physicians and their patients, whether they relied on off-label medications or those sanctioned by the FDA to treat patients living with migraine.
That said, the newer orals and injectables cannot help everyone living with migraine. The small-molecule calcitonin gene-related peptide (CGRP) receptor antagonists (gepants) and the monoclonal antibodies that target the CGRP ligand or receptor, while well received by patients and physicians alike, have drawbacks for some patients, including lack of efficacy, slow response rate, and adverse events that prevent some patients from taking them. The gepants, which are oral medications—as opposed to the CGRP monoclonal antibody injectables—can occasionally cause enough nausea, drowsiness, and constipation for patients to choose to discontinue their use.
Certain patients have other reasons to shun orals and injectables. Some cannot swallow pills, while others fear or do not tolerate injections. Insurance companies limit the quantity of acute care medications, so some patients cannot treat every migraine attack. Then there are those who have failed so many therapies in the past that they will not try the latest one. Consequently, some lie in bed, vomiting until the pain is gone, and some take too many over-the-counter or migraine-specific products, which can make migraine symptoms worse if medication overuse headache develops. And lastly, there are patients who have never walked through a physician’s door to secure a migraine diagnosis and get appropriate treatment.
Noninterventional medical devices cleared by the FDA now allow physicians to offer relief to patients with migraine. They work either by delivering various types of electrical neuromodulation to nerves outside the brain or by applying magnetic stimulation to the back of the brain itself to reach pain-associated pathways. A 2019 report on pain management from the US Department of Health and Human Services noted that some randomized controlled trials (RCTs) and other studies “have demonstrated that noninvasive vagal nerve stimulation can be effective in ameliorating pain in various types of cluster headaches and migraines.”
At least 3 devices are FDA cleared to treat episodic and chronic migraine: 1 designed to stimulate both the occipital and trigeminal nerves (eCOT-NS, Relivion, Neurolief Ltd), 1 that stimulates the vagus nerve noninvasively (nVNS, gammaCORE, electroCore), and 1 that stimulates peripheral nerves in the upper arm (remote electrical neuromodulation [REN], Nerivio, Theranica Bio-Electronics Ltd). nVNS is also cleared to treat migraine, to treat episodic cluster headache acutely, and to treat chronic cluster headache acutely in conjunction with medication.
Real-world studies on all migraine treatments, especially the devices, are flooding PubMed. As for a physician’s observation, we will get to that shortly.
The Devices
Nerivio
Theranica Bio-Electronics Ltd makes a REN device, Nerivio, which was FDA cleared in January 2021 to treat episodic migraine acutely in adults and adolescents. Studies have shown its effectiveness in patients with chronic migraine who are treated acutely, and it has also helped patients with menstrual migraine. The patient wears the device on the upper arm. Once sensory fibers in the arm are stimulated, they send an impulse to the brainstem that engages the serotonin- and norepinephrine-modulated descending inhibitory pathway to disrupt incoming pain messaging. Theranica has applied to the FDA for clearance to treat patients with chronic migraine, as well as for prevention.
Relivion
Neurolief Ltd created the external combined occipital and trigeminal nerve stimulation device (eCOT-NS), which stimulates both the occipital and trigeminal nerves. It has multiple output electrodes, which are placed on the forehead to stimulate the trigeminal supraorbital and supratrochlear nerve branches bilaterally, and over the occipital nerves at the back of the head. It is worn like a tiara, as it must be in good contact with the forehead and the back of the head simultaneously. It is FDA cleared for the acute treatment of migraine.
gammaCORE
gammaCORE is an nVNS device that is FDA cleared for acute and preventive treatment of migraine in adolescents and adults, and for acute and preventive treatment of episodic cluster headache in adults. It is also cleared to treat chronic cluster headache acutely along with medication. The patient applies gel to the device’s 2 electrical contacts, locates the vagus nerve on the side of the neck, and applies the electrodes to the area to be treated. Patients can adjust the intensity so that the stimulation is barely perceptible; it has not been reported to be painful. nVNS is also an FDA-cleared treatment for paroxysmal hemicrania and hemicrania continua.
SAVI Dual
The s-TMS (SAVI Dual, formerly called the Spring TMS and the sTMS mini), made by eNeura, is a single-pulse transcranial magnetic stimulation device applied to the back of the head to stimulate the occipital lobes of the brain. It was FDA cleared in February 2019 for acute and preventive treatment of migraine in adults and in adolescents older than 12 years. The patient holds the handheld magnetic device against the occiput, and when it is discharged, a brief magnetic pulse interrupts the pattern of neuronal firing (probably cortical spreading depression) that can trigger migraine and the visual aura that accompanies migraine in one-third of patients.
Cefaly
The e-TNS (Cefaly) works by external trigeminal nerve stimulation of the supraorbital and supratrochlear nerves bilaterally in the forehead. The stimulation gradually and automatically increases in intensity and can be controlled by the patient. The device is FDA cleared for acute and preventive treatment of migraine, and, unlike the other devices, it is sold over the counter without a prescription. According to the company website, there are 3 devices: 1 for acute treatment, 1 for preventive treatment, and 1 with 2 settings for both acute and preventive treatment.
The Studies
While most of the published studies on devices are company-sponsored, these device makers have underwritten numerous, sometimes very well-designed, studies on their products. A review by VanderPluym et al described those studies and their various risks of bias.
There are at least 10 studies on REN published so far. These include 2 randomized, sham-controlled trials looking at pain freedom and pain relief at 2 hours after stimulation begins. Another study detailed treatment reports from many patients, 66.5% of whom experienced pain relief at 2 hours after treatment initiation in half of their treatments. A subgroup of 16% of those patients had been prescribed REN by their primary care physicians; of that group, 77.8% experienced pain relief in half of their treatments. That figure was very close to that of another study, which found that 23 of 31 (74.2%) patients treated virtually by non-headache providers found relief in 50% of their headaches. REN comes with an education and behavioral medicine app that is used during treatment. A study done by the company shows that when patients use the relaxation app along with the standard stimulation, they do considerably better than with stimulation alone.
The eCOT-NS has also been tested in an RCT. At 2 hours, the responder rate was twice as high as in the sham group (66.7% vs 32%), and overall headache relief at 2 hours was higher with active treatment (76% vs 31.6%). In a study collecting real-world data on the efficacy of eCOT-NS in the preventive treatment of migraine (abstract data were presented at the American Headache Society meeting in June 2022), there was a 65.3% reduction in monthly migraine days (MMD) from baseline through 6 months: treatment reduced MMD by 10.0 (from 15.3 to 5.3) and reduced acute medication use days from 12.5 at baseline to 2.9 (a 76.8% reduction) at 6 months.
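As a quick arithmetic check of those real-world figures (assuming simple percentage reductions from the reported baselines): (15.3 - 5.3) / 15.3 ≈ 0.653, matching the 65.3% reduction in MMD, and (12.5 - 2.9) / 12.5 = 0.768, matching the 76.8% reduction in acute medication use days.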
Users of nVNS discussed their experiences with the device, which is the size of a large bar of soap, in a patient registry. They reported 192 attacks, with a mean pain score starting at 2.7 and dropping to 1.3 after 30 minutes. The pain level in 70% of the attacks dropped to either mild or nonexistent. In a multicenter study of nVNS that included 48 active-treatment and 44 sham-treatment patients with episodic or chronic cluster headache, there was no significant difference between nVNS and sham in the primary endpoint of pain freedom at 15 minutes, and no difference in the chronic cluster headache subgroup. The episodic cluster headache subgroup, however, did show a difference: nVNS was superior to sham, 48% to 6%.
The e-TNS device is cleared for treating adults with migraine, acutely and preventively. It received initial clearance in 2017; in 2020, Cefaly Technology received clearance from the FDA to sell its products over the counter. The device, which resembles a large diamond that affixes to the forehead, has received differing reviews in patient reports (found online at major retailer sites) and in study results. In a blinded, intent-to-treat study involving 538 patients, 25.5% of the verum group reported they were pain free at 2 hours, as did 18.3% of the sham group. Additionally, 56.4% of the subjects in the verum group reported they were free of their most bothersome migraine symptom, as opposed to 42.3% of the sham group.
Adverse Events
The adverse events observed with these devices were, overall, relatively mild, and disappeared once the device was shut off. A few nVNS users said they experienced discomfort at the application site. With REN, 59 of 12,368 patients reported device-related issues; the vast majority were considered mild and consisted mostly of a sensation of warmth under the device. Of the 259 e-TNS users, 8.5% reported minor and reversible occurrences, such as treatment-related discomfort, paresthesia, and burning.
Patients in the Clinic
A few observations from the clinic regarding these devices:
Some devices are easier to use than others. I know this, because at a recent demonstration session in a course for physicians on headache treatment, I agreed to be the person on whom the device was demonstrated. The physician applying the device had difficulty aligning the device’s sensors with the appropriate nerves. Making sure your patients use these devices correctly is essential, and you or your staff should demonstrate their use to the patient. No doubt, this could be time-consuming in some cases, and patients who are reading the device’s instructions while in pain will likely get frustrated if they cannot get the device to work.
Some patients who have failed every medication class can occasionally find partial relief with these devices. One longtime patient of mine came to me severely disabled from chronic migraine and medication overuse headache but was somewhat better with 2 preventive medications. Triptans worked acutely, but she developed nearly every side effect imaginable. I was able to reverse her medication overuse headache, but the gepants, although they worked somewhat, took too long to take effect. We agreed the next step would be to use REN for each migraine attack, combined with acute care medication if necessary. (She uses REN alone for a milder headache and adds a gepant with naproxen if necessary.) She has found using the relaxation module on the REN app increases her chances of eliminating the migraine. She is not pain free all the time, but she appreciates the pain-free intervals.
One chronic cluster patient has relied on subcutaneous sumatriptan and breathing 100% oxygen at 12 liters per minute through a mask over his nose and mouth for acute relief from his headaches. His headache pain can climb from a 3 to a 10 in a matter of minutes. It starts behind and a bit above the right eye where he feels a tremendous pressure building up. He says that at times it feels like a screwdriver has been thrust into his eye and is being turned. Along with the pain, the eye becomes red, the pupil constricts, and the eyelid droops. He also has dripping from the right nostril, which stuffs up when the pain abates. The pain lasts for 1 to 2 hours, then returns 3 to 5 times a day for 5 days a week, on average. The pain never goes away for more than 3 weeks in a year’s time, hence the reason for his chronic cluster headache diagnosis. He is now using nVNS as soon as he feels the pain coming on. If the device does not provide sufficient relief, he uses oxygen or takes the sumatriptan injection.
Some patients who get cluster headaches think of suicide if the pain cannot be stopped; but in my experience, most can become pain free, or at least realize some partial relief from a variety of treatments (sometimes given at the same time).
Doctors often do not think of devices as options, and some doctors think devices do not work even though they have no experience with using them. Devices can give good relief on their own, and when a severe headache needs stronger treatment, medications added to a device usually work better than either treatment alone.
Quality of Life and Population Health in Behavioral Health Care: A Retrospective, Cross-Sectional Study
From Milwaukee County Behavioral Health Services, Milwaukee, WI.
Abstract
Objectives: The goal of this study was to determine whether a single-item quality of life (QOL) measure could serve as a useful population health–level metric within the Quadruple Aim framework in a publicly funded behavioral health system.
Design: This was a retrospective, cross-sectional study that examined the correlation between the single-item QOL measure and several other key measures of the social determinants of health and a composite measure of acute service utilization for all patients receiving mental health and substance use services in a community behavioral health system.
Methods: Data were collected for 4488 patients who had at least 1 assessment between October 1, 2020, and September 30, 2021. Data on social determinants of health were obtained through patient self-report; acute service use data were obtained from electronic health records.
Results: Statistical analyses revealed results in the expected direction for all relationships tested. Patients with higher QOL were more likely to report “Good” or better self-rated physical health, be employed, have a private residence, and report recent positive social interactions, and were less likely to have received acute services in the previous 90 days.
Conclusion: A single-item QOL measure shows promise as a general, minimally burdensome whole-system metric that can function as a target for population health management efforts in a large behavioral health system. Future research should explore whether this QOL measure is sensitive to change over time and examine its temporal relationship with other key outcome metrics.
Keywords: Quadruple Aim, single-item measures, social determinants of health, acute service utilization metrics.
The Triple Aim for health care—improving the individual experience of care, increasing the health of populations, and reducing the costs of care—was first proposed in 2008.1 More recently, some have advocated for an expanded focus to include a fourth aim: the quality of staff work life.2 Since this seminal paper was published, many health care systems have endeavored to adopt and implement the Quadruple Aim3,4; however, the concepts representing each of the aims are not universally defined,3 nor are the measures needed to populate the Quadruple Aim always available within the health system in question.5
Although several assessment models and frameworks that provide guidance to stakeholders have been developed,6,7 it is ultimately up to organizations themselves to determine which measures they should deploy to best represent the different quadrants of the Quadruple Aim.6 Evidence suggests, however, that quality measurement, and the administrative time required to conduct it, can be both financially and emotionally burdensome to providers and health systems.8-10 Thus, it is incumbent on organizations to select a set of measures that are not only meaningful but as parsimonious as possible.6,11,12
Quality of life (QOL) is a potential candidate to assess the aim of population health. Brief health-related QOL questions have long been used in epidemiological surveys, such as the Behavioral Risk Factor Surveillance System survey.13 Such questions are also a key component of community health frameworks, such as the County Health Rankings developed by the University of Wisconsin Population Health Institute.14 Furthermore, Humana recently revealed that increasing the number of physical and mental health “Healthy Days” (which are among the Centers for Disease Control and Prevention’s Health-Related Quality of Life questions15) among the members enrolled in their insurance plan would become a major goal for the organization.16,17 Many of these measures, while brief, focus on QOL as a function of health, often as a self-rated construct (from “Poor” to “Excellent”) or in the form of days of poor physical or mental health in the past 30 days,15 rather than evaluating QOL itself; however, several authors have pointed out that health status and QOL are related but distinct concepts.18,19
Brief single-item assessments focused specifically on QOL have been developed and implemented within nonclinical20 and clinical populations, including individuals with cancer,21 adults with disabilities,22 individuals with cystic fibrosis,23 and children with epilepsy.24 Despite the long history of QOL assessment in behavioral health treatment,25 single-item measures have not been widely implemented in this population.
Milwaukee County Behavioral Health Services (BHS), a publicly funded, county-based behavioral health care system in Milwaukee, Wisconsin, provides inpatient and ambulatory treatment, psychiatric emergency care, withdrawal management, care management, crisis services, and other support services to individuals in Milwaukee County. In 2018 the community services arm of BHS began implementing a single QOL question from the World Health Organization’s WHOQOL-BREF26: On a 5-point rating scale of “Very Poor” to “Very Good,” “How would you rate your overall quality of life right now?” Previous research by Atroszko and colleagues,20 which used a similar approach with the same item from the WHOQOL-BREF, reported correlations in the expected direction of the single-item QOL measure with perceived stress, depression, anxiety, loneliness, and daily hours of sleep. This study’s sample, however, comprised opportunistically recruited college students, not a clinical population. Further, the researchers did not examine the relationship of QOL with acute service utilization or other measures of the social determinants of health, such as housing, employment, or social connectedness.
The following study was designed to extend these results by focusing on a clinical population—individuals with mental health or substance use issues—being served in a large, publicly funded behavioral health system in Milwaukee, Wisconsin. The objective of this study was to determine whether a single-item QOL measure could be used as a brief, parsimonious measure of overall population health by examining its relationship with other key outcome measures for patients receiving services from BHS. This study was reviewed and approved by BHS’s Institutional Review Board.
Methods
All patients engaged in nonacute community services are offered a standardized assessment that includes, among other measures, items related to QOL, housing status, employment status, self-rated physical health, and social connectedness. This assessment is administered at intake, discharge, and every 6 months while patients are enrolled in services. Patients who received at least 1 assessment between October 1, 2020, and September 30, 2021, were included in the analyses. Patients receiving crisis, inpatient, or withdrawal management services alone (ie, did not receive any other community-based services) were not offered the standard assessment and thus were not included in the analyses. If patients had more than 1 assessment during this time period, QOL data from the last assessment were used. Data on housing (private residence status, defined as adults living alone or with others without supervision in a house or apartment), employment status, self-rated physical health, and social connectedness (measured by asking people whether they have had positive interactions with family or friends in the past 30 days) were extracted from the same timepoint as well.
Also included in the analyses were rates of acute service utilization, in which any patient with at least 1 visit to BHS’s psychiatric emergency department, withdrawal management facility, or psychiatric inpatient facility in the 90 days prior to the date of the assessment received a code of “Yes,” and any patient who did not receive any of these services received a code of “No.” Chi-square analyses were conducted to determine the relationship between QOL rankings (“Very Poor,” “Poor,” “Neither Good nor Poor,” “Good,” and “Very Good”) and housing, employment, self-rated physical health, social connectedness, and 90-day acute service use. All acute service utilization data were obtained from BHS’s electronic health records system. All data used in the study were stored on a secure, password-protected server. All analyses were conducted with SPSS software (SPSS 28; IBM).
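For readers who wish to reproduce this type of analysis, the sketch below shows how a chi-square test of independence could be run on a 5 x 2 cross-tabulation of QOL ratings against the binary 90-day acute service indicator. The counts are hypothetical (not the study’s data), and the use of Python with scipy is an assumption for illustration only; the study itself used SPSS.

# Minimal sketch: chi-square test of independence on a hypothetical
# 5 x 2 contingency table (QOL rating x 90-day acute service use).
from scipy.stats import chi2_contingency

# Rows: "Very Poor" through "Very Good"; columns: counts coded "Yes" and "No"
# for at least 1 acute service visit in the prior 90 days. Illustrative counts only.
table = [
    [41, 159],    # Very Poor
    [66, 434],    # Poor
    [120, 1280],  # Neither Good nor Poor
    [60, 1440],   # Good
    [14, 486],    # Very Good
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.1f}, df = {dof}, p = {p:.4f}")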
Results
Data were available for 4488 patients who received an assessment between October 1, 2020, and September 30, 2021 (total numbers per item vary because some items had missing data; see supplementary eTables 1-3 for sample size per item). Demographics of the patient sample are listed in Table 1; the demographics of the patients who were missing data for specific outcomes are presented in eTables 1-3.
Statistical analyses revealed results in the expected direction for all relationships tested (Table 2). As patients’ self-reported QOL improved, so did the likelihood of self-reported “Good” or better physical health, which was 576% higher among individuals who reported “Very Good” QOL than among those who reported “Very Poor” QOL. Similarly, compared with individuals with “Very Poor” QOL, individuals who reported “Very Good” QOL were 21.91% more likely to report having a private residence, 126.7% more likely to report being employed, and 29.17% more likely to report having had positive social interactions with family and friends in the past 30 days. There was an inverse relationship between QOL and the likelihood that a patient had received at least 1 admission for an acute service in the previous 90 days: patients who reported “Very Good” QOL were 86.34% less likely to have had an admission than patients with “Very Poor” QOL (2.8% vs 20.5%, respectively). The relationships among the criterion variables used in this study are presented in Table 3.
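As a check on the arithmetic, the 86.34% figure follows directly from the two admission rates reported above: (20.5% − 2.8%) / 20.5% ≈ 0.8634, or an 86.34% lower likelihood of an acute admission among patients reporting “Very Good” versus “Very Poor” QOL.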

Discussion
The results of this preliminary analysis suggest that self-rated QOL is related to key health, social determinants of health, and acute service utilization metrics. These data are important for several reasons. First, because QOL is a diagnostically agnostic measure, it is a cross-cutting measure to use with clinically diverse populations receiving an array of different services. Second, at 1 item, the QOL measure is extremely brief and therefore minimally onerous to implement for both patients and administratively overburdened providers. Third, its correlation with other key metrics suggests that it can function as a broad population health measure for health care organizations because individuals with higher QOL will also likely have better outcomes in other key areas. This suggests that it has the potential to broadly represent the overall status of a population of patients, thus functioning as a type of “whole system” measure, which the Institute for Healthcare Improvement describes as “a small set of measures that reflect a health system’s overall performance on core dimensions of quality guided by the Triple Aim.”7 These whole system measures can help focus an organization’s strategic initiatives and efforts on the issues that matter most to the patients and community it serves.
The relationship of QOL to acute service utilization deserves special mention. As an administrative measure, utilization is not susceptible to the same response bias as the other self-reported variables. Furthermore, acute services are costly to health systems, and hospital readmissions are associated with payment reductions in the Centers for Medicare and Medicaid Services (CMS) Hospital Readmissions Reduction Program for hospitals that fail to meet certain performance targets.27 Thus, because of its alignment with federal mandates, improved QOL (and potentially concomitant decreases in acute service use) may have significant financial implications for health systems as well.
This study was limited by several factors. First, it was focused on a population receiving publicly funded behavioral health services with strict eligibility requirements, one of which stipulated that individuals must be at 200% or less of the Federal Poverty Level; therefore, the results might not be applicable to health systems with a more clinically or socioeconomically diverse patient population. Second, because these data are cross-sectional, it was not possible to determine whether QOL improved over time or whether changes in QOL covaried longitudinally with the other metrics under observation. For example, if patients’ QOL improved from the first to last assessment, did their employment or residential status improve as well, or were these patients more likely to be employed at their first assessment? Furthermore, if there was covariance, did changes in employment, housing status, and so on precede changes in QOL or vice versa? Multiple longitudinal observations would help to address these questions and will be the focus of future analyses.
Conclusion
This preliminary study suggests that a single-item QOL measure may be a valuable population health–level metric for health systems. It requires little administrative effort on the part of either the clinician or patient. It is also agnostic with regard to clinical issue or treatment approach and can therefore admit of a range of diagnoses or patient-specific, idiosyncratic recovery goals. It is correlated with other key health, social determinants of health, and acute service utilization indicators and can therefore serve as a “whole system” measure because of its ability to broadly represent improvements in an entire population. Furthermore, QOL is patient-centered in that data are obtained through patient self-report, which is a high priority for CMS and other health care organizations.28 In summary, a single-item QOL measure holds promise for health care organizations looking to implement the Quadruple Aim and assess the health of the populations they serve in a manner that is simple, efficient, and patient-centered.
Acknowledgments: The author thanks Jennifer Wittwer for her thoughtful comments on the initial draft of this manuscript and Gary Kraft for his help extracting the data used in the analyses.
Corresponding author: Walter Matthew Drymalski, PhD; [email protected]
Disclosures: None reported.
1. Berwick DM, Nolan TW, Whittington J. The triple aim: care, health, and cost. Health Aff (Millwood). 2008;27(3):759-769. doi:10.1377/hlthaff.27.3.759
2. Bodenheimer T, Sinsky C. From triple to quadruple aim: care of the patient requires care of the provider. Ann Fam Med. 2014;12(6):573-576. doi:10.1370/afm.1713
3. Hendrikx RJP, Drewes HW, Spreeuwenberg M, et al. Which triple aim related measures are being used to evaluate population management initiatives? An international comparative analysis. Health Policy. 2016;120(5):471-485. doi:10.1016/j.healthpol.2016.03.008
4. Whittington JW, Nolan K, Lewis N, Torres T. Pursuing the triple aim: the first 7 years. Milbank Q. 2015;93(2):263-300. doi:10.1111/1468-0009.12122
5. Ryan BL, Brown JB, Glazier RH, Hutchison B. Examining primary healthcare performance through a triple aim lens. Healthc Policy. 2016;11(3):19-31.
6. Stiefel M, Nolan K. A guide to measuring the Triple Aim: population health, experience of care, and per capita cost. Institute for Healthcare Improvement; 2012. Accessed November 1, 2022. https://nhchc.org/wp-content/uploads/2019/08/ihiguidetomeasuringtripleaimwhitepaper2012.pdf
7. Martin L, Nelson E, Rakover J, Chase A. Whole system measures 2.0: a compass for health system leaders. Institute for Healthcare Improvement; 2016. Accessed November 1, 2022. http://www.ihi.org:80/resources/Pages/IHIWhitePapers/Whole-System-Measures-Compass-for-Health-System-Leaders.aspx
8. Casalino LP, Gans D, Weber R, et al. US physician practices spend more than $15.4 billion annually to report quality measures. Health Aff (Millwood). 2016;35(3):401-406. doi:10.1377/hlthaff.2015.1258
9. Rao SK, Kimball AB, Lehrhoff SR, et al. The impact of administrative burden on academic physicians: results of a hospital-wide physician survey. Acad Med. 2017;92(2):237-243. doi:10.1097/ACM.0000000000001461
10. Woolhandler S, Himmelstein DU. Administrative work consumes one-sixth of U.S. physicians’ working hours and lowers their career satisfaction. Int J Health Serv. 2014;44(4):635-642. doi:10.2190/HS.44.4.a
11. Meyer GS, Nelson EC, Pryor DB, et al. More quality measures versus measuring what matters: a call for balance and parsimony. BMJ Qual Saf. 2012;21(11):964-968. doi:10.1136/bmjqs-2012-001081
12. Vital Signs: Core Metrics for Health and Health Care Progress. National Academies Press; 2015. doi:10.17226/19402
13. Centers for Disease Control and Prevention. BRFSS questionnaires. Accessed November 1, 2022. https://www.cdc.gov/brfss/questionnaires/index.htm
14. County Health Rankings and Roadmaps. Measures & data sources. University of Wisconsin Population Health Institute. Accessed November 1, 2022. https://www.countyhealthrankings.org/explore-health-rankings/measures-data-sources
15. Centers for Disease Control and Prevention. Healthy days core module (CDC HRQOL-4). Accessed November 1, 2022. https://www.cdc.gov/hrqol/hrqol14_measure.htm
16. Cordier T, Song Y, Cambon J, et al. A bold goal: more healthy days through improved community health. Popul Health Manag. 2018;21(3):202-208. doi:10.1089/pop.2017.0142
17. Slabaugh SL, Shah M, Zack M, et al. Leveraging health-related quality of life in population health management: the case for healthy days. Popul Health Manag. 2017;20(1):13-22. doi:10.1089/pop.2015.0162
18. Karimi M, Brazier J. Health, health-related quality of life, and quality of life: what is the difference? Pharmacoeconomics. 2016;34(7):645-649. doi:10.1007/s40273-016-0389-9
19. Smith KW, Avis NE, Assmann SF. Distinguishing between quality of life and health status in quality of life research: a meta-analysis. Qual Life Res. 1999;8(5):447-459. doi:10.1023/a:1008928518577
20. Atroszko PA, Baginska P, Mokosinska M, et al. Validity and reliability of single-item self-report measures of general quality of life, general health and sleep quality. In: CER Comparative European Research 2015. Sciemcee Publishing; 2015:207-211.
21. Singh JA, Satele D, Pattabasavaiah S, et al. Normative data and clinically significant effect sizes for single-item numerical linear analogue self-assessment (LASA) scales. Health Qual Life Outcomes. 2014;12:187. doi:10.1186/s12955-014-0187-z
22. Siebens HC, Tsukerman D, Adkins RH, et al. Correlates of a single-item quality-of-life measure in people aging with disabilities. Am J Phys Med Rehabil. 2015;94(12):1065-1074. doi:10.1097/PHM.0000000000000298
23. Yohannes AM, Dodd M, Morris J, Webb K. Reliability and validity of a single item measure of quality of life scale for adult patients with cystic fibrosis. Health Qual Life Outcomes. 2011;9:105. doi:10.1186/1477-7525-9-105
24. Conway L, Widjaja E, Smith ML. Single-item measure for assessing quality of life in children with drug-resistant epilepsy. Epilepsia Open. 2017;3(1):46-54. doi:10.1002/epi4.12088
25. Barry MM, Zissi A. Quality of life as an outcome measure in evaluating mental health services: a review of the empirical evidence. Soc Psychiatry Psychiatr Epidemiol. 1997;32(1):38-47. doi:10.1007/BF00800666
26. Skevington SM, Lotfy M, O’Connell KA. The World Health Organization’s WHOQOL-BREF quality of life assessment: psychometric properties and results of the international field trial. Qual Life Res. 2004;13(2):299-310. doi:10.1023/B:QURE.0000018486.91360.00
27. Centers for Medicare & Medicaid Services. Hospital readmissions reduction program (HRRP). Accessed November 1, 2022. https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/AcuteInpatientPPS/Readmissions-Reduction-Program
28. Centers for Medicare & Medicaid Services. Patient-reported outcome measures. CMS Measures Management System. Published May 2022. Accessed November 1, 2022. https://www.cms.gov/files/document/blueprint-patient-reported-outcome-measures.pdf
Neurosurgery Operating Room Efficiency During the COVID-19 Era
From the Department of Neurological Surgery, Vanderbilt University Medical Center, Nashville, TN (Stefan W. Koester, Puja Jagasia, and Drs. Liles, Dambrino IV, Feldman, and Chambless), and the Department of Anesthesiology, Vanderbilt University Medical Center, Nashville, TN (Drs. Mathews and Tiwari).
ABSTRACT
Background: The COVID-19 pandemic has had broad effects on surgical care, including operating room (OR) staffing, personal protective equipment (PPE) utilization, and newly implemented anti-infective measures. Our aim was to assess neurosurgery OR efficiency before the COVID-19 pandemic, during peak COVID-19, and during current times.
Methods: Institutional perioperative databases at a single, high-volume neurosurgical center were queried for operations performed from December 2019 until October 2021. March 12, 2020, the day that the state of Tennessee declared a state of emergency, was chosen as the onset of the COVID-19 pandemic. The 90-day periods before and after this day were used to define the pre-COVID-19, peak-COVID-19, and post-peak restrictions time periods for comparative analysis. Outcomes included delay in first-start and OR turnover time between neurosurgical cases. Preset threshold times were used in analyses to adjust for normal leniency in OR scheduling (15 minutes for first start and 90 minutes for turnover). Univariate analysis used Wilcoxon rank-sum test for continuous outcomes, while chi-square test and Fisher’s exact test were used for categorical comparisons. Significance was defined as P < .05.
Results: First-start time was analyzed in 426 pre-COVID-19, 357 peak-restrictions, and 2304 post-peak-restrictions cases. The unadjusted mean delay length differed significantly between the time periods, but the magnitude of increase in minutes was immaterial (mean [SD] minutes, 6 [18] vs 10 [21] vs 8 [20], respectively; P = .004). The adjusted average delay length and proportion of cases delayed beyond the 15-minute threshold were not significantly different. The proportion of cases that started early, as well as the proportion that started more than 15 minutes early, was not affected. There was no significant change in turnover time during peak restrictions relative to the pre-COVID-19 period (88 [100] minutes vs 85 [95] minutes), and turnover time has since remained unchanged (83 [87] minutes).
Conclusion: Our center was able to maintain OR efficiency before, during, and after peak restrictions even while instituting advanced infection-control strategies. While there were significant changes, delays were relatively small in magnitude.
Keywords: operating room timing, hospital efficiency, socioeconomics, pandemic.
The COVID-19 pandemic has led to major changes in patient care both from a surgical perspective and in regard to inpatient hospital course. Safety protocols nationwide have been implemented to protect both patients and providers. Some elements of surgical care have drastically changed, including operating room (OR) staffing, personal protective equipment (PPE) utilization, and increased sterilization measures. Furloughs, layoffs, and reassignments due to the focus on nonelective and COVID-19–related cases challenged OR staffing and efficiency. Operating room staff with COVID-19 exposures or COVID-19 infections also caused last-minute changes in staffing. All of these scenarios can cause issues due to actual understaffing or due to staff members being pushed into highly specialized areas, such as neurosurgery, in which they have very little experience. Further obstacles to OR efficiency included policy changes involving PPE utilization and sterilization measures, as well as supply chain shortages of necessary resources such as PPE.
Neurosurgery in particular has been susceptible to COVID-19–related system-wide changes given operator proximity to the patient's respiratory passages, frequency of emergent cases, and varying anesthetic needs, as well as the high level of specialization needed to perform neurosurgical care. Previous studies have shown a change in the makeup of neurosurgical patients seeking care, as well as in the acuity of their neurological consults.1 A study in orthopedic surgery by Andreata et al demonstrated worsened OR efficiency, with significantly increased first-start and turnover times.2 In the COVID-19 era, OR quality and safety are crucially important to both patients and providers. Providing this safe and effective care in an efficient manner is important for optimal neurosurgical management in the long term.3 Moreover, the financial burden of implementing new protocols and standards can be compounded by additional financial losses due to reduced OR efficiency.
Methods
To analyze the effect of COVID-19 on neurosurgical OR efficiency, institutional perioperative databases at a single high-volume center were queried for operations performed from December 2019 until October 2021. March 12, 2020, was chosen as the onset of COVID-19 for analytic purposes, as this was the date when the state of Tennessee declared a state of emergency. The 90-day periods before and after this date were used to define the pre-COVID-19 and peak-COVID-19 time periods for comparative analysis; the peak-COVID-19 period was thus the 90-day period following the initial onset of COVID-19 and the surge of cases. For comparison purposes, the post-peak-COVID-19 period was defined as the months following the first peak until October 2021 (approximately 17 months). COVID-19 burden was estimated from a single-institution census of cases confirmed by polymerase chain reaction (PCR), from which the average number of COVID-19 cases during a given month was calculated. This number represents a scaled trend; the true number of COVID-19 cases in our hospital is not reported.
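For illustration only, the following minimal sketch in R (the analysis environment used for this study, as noted below) shows one way such a period assignment could be implemented; it is not the authors' code, and the data frame and column names (case_id, case_date) are hypothetical.

## Minimal sketch (not the authors' code): assigning cases to the three
## analysis periods around the March 12, 2020 state-of-emergency date.
## The example data frame and column names are hypothetical.
onset <- as.Date("2020-03-12")

cases <- data.frame(
  case_id   = 1:4,
  case_date = as.Date(c("2020-01-15", "2020-04-02", "2020-09-10", "2021-06-01"))
)

## Days relative to the onset date: [-90, 0) = pre-COVID-19,
## [0, 90) = peak COVID-19, [90, Inf) = post-peak restrictions
cases$period <- cut(
  as.numeric(cases$case_date - onset),
  breaks = c(-90, 0, 90, Inf),
  labels = c("pre-COVID-19", "peak COVID-19", "post-peak restrictions"),
  right  = FALSE
)

table(cases$period)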
Neurosurgical and neuroendovascular cases were included in the analysis. Outcomes included delay in first start and OR turnover time between neurosurgical cases, defined as the time from one patient leaving the room until the next patient entering the room. Preset threshold times were used in analyses to adjust for normal leniency in OR scheduling (15 minutes for first start and 90 minutes for turnover, the standard thresholds at our perioperative center). Statistical analyses, including data aggregation, were performed using R, version 4.0.1 (R Foundation for Statistical Computing). Patients' demographic and clinical characteristics were analyzed using an independent 2-sample t-test for interval variables and a chi-square test for categorical variables. Significance was defined as P < .05.
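The sketch below, again for illustration only and not the authors' code, shows the general form of the univariate comparisons described in the abstract (Wilcoxon rank-sum for continuous outcomes, Fisher's exact test for threshold-based categorical outcomes) applied to simulated data; the column names (period, delay_min) and values are hypothetical assumptions.

## Minimal sketch (not the authors' code) of the univariate comparisons,
## using simulated first-start delays for two of the three periods.
set.seed(1)
delays <- data.frame(
  period    = rep(c("pre-COVID-19", "peak COVID-19"), each = 50),
  delay_min = c(rnorm(50, mean = 6, sd = 18), rnorm(50, mean = 10, sd = 21))
)

## Continuous outcome: unadjusted first-start delay, Wilcoxon rank-sum test
wilcox.test(delay_min ~ period, data = delays)

## Categorical outcome: proportion of cases delayed beyond the 15-minute
## scheduling-leniency threshold, Fisher's exact test on the 2 x 2 table
delays$late <- delays$delay_min > 15
fisher.test(table(delays$period, delays$late))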
Results
First-Start Time
First-start time was analyzed in 426 pre-COVID-19, 357 peak-COVID-19, and 2304 post-peak-COVID-19 cases. The unadjusted mean delay length was significantly different between the time periods, but the magnitude of increase in minutes was immaterial (mean [SD] minutes, 6 [18] vs 10 [21] vs 8 [20], respectively; P = .004) (Table 1).
The adjusted average delay length and proportion of cases delayed beyond the 15-minute threshold were not significantly different, though both have been slightly higher since the onset of COVID-19. The proportion of cases that started early, as well as the proportion that started more than 15 minutes early, has also trended down since the onset of the COVID-19 pandemic, but this difference was again not significant. The temporal relationship of first-start delay, both unadjusted and adjusted, from December 2019 to October 2021 is shown in Figure 1. The trend of increasing delay is loosely associated with the COVID-19 burden experienced by our hospital; the start of the pandemic and both subsequent COVID-19 peaks were associated with increased delays.
Turnover Time
Turnover time was assessed in 437 pre-COVID-19, 278 peak-restrictions, and 2411 post-peak-restrictions cases. Turnover time during peak restrictions was not significantly different from pre-COVID-19 (mean [SD], 88 [100] vs 85 [95] minutes) and has since remained relatively unchanged (83 [87] minutes, P = .78). A similar trend held for comparisons of the proportion of cases with turnover time past 90 minutes and of average times past the 90-minute threshold (Table 2). The temporal relationship between COVID-19 burden and turnover time, both unadjusted and adjusted, from December 2019 to October 2021 is shown in Figure 2. Both figures demonstrate a slight initial increase in turnover time delay at the start of COVID-19, which stabilized with little variation thereafter.
Discussion
We analyzed the OR efficiency metrics of first-start and turnover time during the 90-day period before COVID-19 (pre-COVID-19), the 90 days following Tennessee declaring a state of emergency (peak COVID-19), and the time following this period (post-COVID-19) for all neurosurgical and neuroendovascular cases at Vanderbilt University Medical Center (VUMC). We found a significant difference in unadjusted mean delay length in first-start time between the time periods, but the magnitude of increase in minutes was immaterial (mean [SD] minutes for pre-COVID-19, peak-COVID-19, and post-COVID-19: 6 [18] vs 10 [21] vs 8 [20], respectively; P = .004). No significant increase in turnover time between cases was found between these 3 time periods. Based on metrics from first-start delay and turnover time, our center was able to maintain OR efficiency before, during, and after peak COVID-19.
After the Centers for Disease Control and Prevention released guidelines recommending deferring elective procedures to conserve beds and PPE, VUMC made the decision to suspend all elective surgical procedures from March 18 to April 24, 2020. Prior research conducted during the COVID-19 pandemic identified more than 400 types of surgical procedures whose outcomes were worse than outcomes for the same procedures during the same time frame in 2018 and 2019.4 For more than 20 of these types of procedures, there was a significant association between procedure delay and adverse patient outcomes.4 Testing protocols for patients prior to surgery varied throughout the pandemic based on vaccination status and type of procedure. Before vaccines became widely available, all patients were required to obtain a PCR SARS-CoV-2 test within 48 to 72 hours of the scheduled procedure. If the patient's procedure was urgent and testing was not feasible, the patient was treated as a SARS-CoV-2–positive patient, which required all health care workers involved in the case to wear gowns, gloves, surgical masks, and eye protection. Preoperative testing likely helped to maintain OR efficiency: when patients did not receive test results before the scheduled procedure, their cases were cancelled, leaving more staff available for the remaining cases.
After vaccines became widely available to the public, preoperative testing requirements were relaxed, and only patients who were not fully vaccinated or who were severely immunocompromised were required to test prior to procedures. However, only approximately 40% of the Tennessee population was fully vaccinated in 2021, a rate reflected in the VUMC patient population.5 As a result, many patients who received care at VUMC were still tested prior to procedures.
Adopting adequate safety protocols was found to be key for OR efficiency during the COVID-19 pandemic since performing surgery increased the risk of infection for each health care worker in the OR.6 VUMC protocols identified procedures that required enhanced safety measures to prevent infection of health care workers and avoid staffing shortages, which would decrease OR efficiency. Protocols mandated that only anesthesia team members were allowed to be in the OR during intubation and extubation of patients, which could be one factor leading to increased delays and decreased efficiency for some institutions. Methods for neurosurgeons to decrease risk of infection in the OR include postponing all nonurgent cases, reappraising the necessity for general anesthesia and endotracheal intubation, considering alternative surgical approaches that avoid the respiratory tract, and limiting the use of aerosol-generating instruments.7,8 VUMC’s success in implementing these protocols likely explains why our center was able to maintain OR efficiency throughout the COVID-19 pandemic.
A study conducted by Andreata et al showed a significantly increased mean first-case delay and a nonsignificant increased turnover time in orthopedic surgeries in Northern Italy when comparing surgeries performed during the COVID-19 pandemic to those performed prior to COVID-19.2 Other studies have indicated a similar trend in decreased OR efficiency during COVID-19 in other areas around the world.9,10 These findings are not consistent with our own findings for neurosurgical and neuroendovascular surgeries at VUMC, and any change at our institution was relatively immaterial. Factors that threatened to change OR efficiency—but did not result in meaningful changes in our institutional experience—include delays due to pending COVID-19 test results, safety procedures such as PPE donning, and planning difficulties to ensure the existence of teams with non-overlapping providers in the case of a surgeon being infected.2,11-13
Globally, many surgery centers halted all elective surgeries during the initial COVID-19 spike to prevent a PPE shortage and mitigate risk of infection of patients and health care workers.8,12,14 However, there is no centralized definition of which neurosurgical procedures are elective, so that decision was made on a surgeon or center level, which could lead to variability in efficiency trends.14 One study on neurosurgical procedures during COVID-19 found a 30% decline in all cases and a 23% decline in emergent procedures, showing that the decrease in volume was not only due to cancellation of elective procedures.15 This decrease in elective and emergent surgeries created a backlog of surgeries as well as a loss in health care revenue, and caused many patients to go without adequate health care.10 Looking forward, it is imperative that surgical centers study trends in OR efficiency from COVID-19 and learn how to better maintain OR efficiency during future pandemic conditions to prevent a backlog of cases, loss of health care revenue, and decreased health care access.
Limitations
Our data are from a single center and therefore may not be representative of other hospitals' experiences, given differing populations and differing impacts from COVID-19. However, given our center's high volume and diverse patient population, we believe our analysis highlights important trends in neurosurgery practice. Notably, although patient and OR timing data are digitally generated, they are entered manually by nurses into the electronic medical record, making them prone to error and variability. In our experience, however, any such error is minimal.
Conclusion
The COVID-19 pandemic has had far-reaching effects on health care worldwide, including neurosurgical care. OR efficiency across the United States generally worsened given the stresses of supply chain issues, staffing shortages, and cancellations. At our institution, we were able to maintain OR efficiency during the known COVID-19 peaks until October 2021. Continuously functioning neurosurgical ORs are important for preventing delays in care and maintaining steady revenue so that hospitals and other health care entities remain solvent. Further study of OR efficiency is needed for health care systems to prepare for future pandemics and other resource-straining events in order to provide optimal patient care.
Corresponding author: Campbell Liles, MD, Vanderbilt University Medical Center, Department of Neurological Surgery, 1161 21st Ave. South, T4224 Medical Center North, Nashville, TN 37232-2380; [email protected]
Disclosures: None reported.
1. Koester SW, Catapano JS, Ma KL, et al. COVID-19 and neurosurgery consultation call volume at a single large tertiary center with a propensity-adjusted analysis. World Neurosurg. 2021;146:e768-e772. doi:10.1016/j.wneu.2020.11.017
2. Andreata M, Faraldi M, Bucci E, Lombardi G, Zagra L. Operating room efficiency and timing during coronavirus disease 2019 outbreak in a referral orthopaedic hospital in Northern Italy. Int Orthop. 2020;44(12):2499-2504. doi:10.1007/s00264-020-04772-x
3. Dexter F, Abouleish AE, Epstein RH, et al. Use of operating room information system data to predict the impact of reducing turnover times on staffing costs. Anesth Analg. 2003;97(4):1119-1126. doi:10.1213/01.ANE.0000082520.68800.79
4. Zheng NS, Warner JL, Osterman TJ, et al. A retrospective approach to evaluating potential adverse outcomes associated with delay of procedures for cardiovascular and cancer-related diagnoses in the context of COVID-19. J Biomed Inform. 2021;113:103657. doi:10.1016/j.jbi.2020.103657
5. Alcendor DJ. Targeting COVID-19 vaccine hesitancy in rural communities in Tennessee: implications for extending the COVID-19 pandemic in the South. Vaccines (Basel). 2021;9(11):1279. doi:10.3390/vaccines9111279
6. Perrone G, Giuffrida M, Bellini V, et al. Operating room setup: how to improve health care professionals safety during pandemic COVID-19: a quality improvement study. J Laparoendosc Adv Surg Tech A. 2021;31(1):85-89. doi:10.1089/lap.2020.0592
7. Iorio-Morin C, Hodaie M, Sarica C, et al. Letter: the risk of COVID-19 infection during neurosurgical procedures: a review of severe acute respiratory distress syndrome coronavirus 2 (SARS-CoV-2) modes of transmission and proposed neurosurgery-specific measures for mitigation. Neurosurgery. 2020;87(2):E178-E185. doi:10.1093/neuros/nyaa157
8. Gupta P, Muthukumar N, Rajshekhar V, et al. Neurosurgery and neurology practices during the novel COVID-19 pandemic: a consensus statement from India. Neurol India. 2020;68(2):246-254. doi:10.4103/0028-3886.283130
9. Mercer ST, Agarwal R, Dayananda KSS, et al. A comparative study looking at trauma and orthopaedic operating efficiency in the COVID-19 era. Perioper Care Oper Room Manag. 2020;21:100142. doi:10.1016/j.pcorm.2020.100142
10. Rozario N, Rozario D. Can machine learning optimize the efficiency of the operating room in the era of COVID-19? Can J Surg. 2020;63(6):E527-E529. doi:10.1503/cjs.016520
11. Toh KHQ, Barazanchi A, Rajaretnam NS, et al. COVID-19 response by New Zealand general surgical departments in tertiary metropolitan hospitals. ANZ J Surg. 2021;91(7-8):1352-1357. doi:10.1111/ans.17044
12. Moorthy RK, Rajshekhar V. Impact of COVID-19 pandemic on neurosurgical practice in India: a survey on personal protective equipment usage, testing, and perceptions on disease transmission. Neurol India. 2020;68(5):1133-1138. doi:10.4103/0028-3886.299173
13. Meneghini RM. Techniques and strategies to optimize efficiencies in the office and operating room: getting through the patient backlog and preserving hospital resources. J Arthroplasty. 2021;36(7S):S49-S51. doi:10.1016/j.arth.2021.03.010
14. Jean WC, Ironside NT, Sack KD, et al. The impact of COVID-19 on neurosurgeons and the strategy for triaging non-emergent operations: a global neurosurgery study. Acta Neurochir (Wien). 2020;162(6):1229-1240. doi:10.1007/s00701-020-04342-5
15. Raneri F, Rustemi O, Zambon G, et al. Neurosurgery in times of a pandemic: a survey of neurosurgical services during the COVID-19 outbreak in the Veneto region in Italy. Neurosurg Focus. 2020;49(6):E9. doi:10.3171/2020.9.FOCUS20691
From the Department of Neurological Surgery, Vanderbilt University Medical Center, Nashville, TN (Stefan W. Koester, Puja Jagasia, and Drs. Liles, Dambrino IV, Feldman, and Chambless), and the Department of Anesthesiology, Vanderbilt University Medical Center, Nashville, TN (Drs. Mathews and Tiwari).
ABSTRACT
Background: The COVID-19 pandemic has had broad effects on surgical care, including operating room (OR) staffing, personal protective equipment (PPE) utilization, and newly implemented anti-infective measures. Our aim was to assess neurosurgery OR efficiency before the COVID-19 pandemic, during peak COVID-19, and during current times.
Methods: Institutional perioperative databases at a single, high-volume neurosurgical center were queried for operations performed from December 2019 until October 2021. March 12, 2020, the day that the state of Tennessee declared a state of emergency, was chosen as the onset of the COVID-19 pandemic. The 90-day periods before and after this day were used to define the pre-COVID-19, peak-COVID-19, and post-peak restrictions time periods for comparative analysis. Outcomes included delay in first-start and OR turnover time between neurosurgical cases. Preset threshold times were used in analyses to adjust for normal leniency in OR scheduling (15 minutes for first start and 90 minutes for turnover). Univariate analysis used Wilcoxon rank-sum test for continuous outcomes, while chi-square test and Fisher’s exact test were used for categorical comparisons. Significance was defined as P < .05.
Results: First-start time was analyzed in 426 pre-COVID-19, 357 peak-restrictions, and 2304 post-peak-restrictions cases. The unadjusted mean delay length was found to be significantly different between the time periods, but the magnitude of increase in minutes was immaterial (mean [SD] minutes, 6 [18] vs 10 [21] vs 8 [20], respectively; P = .004). The adjusted average delay length and proportion of cases delayed beyond the 15-minute threshold were not significantly different. The proportion of cases that started early, as well as significantly early past a 15-minute threshold, have not been impacted. There was no significant change in turnover time during peak restrictions relative to the pre-COVID-19 period (88 [100] minutes vs 85 [95] minutes), and turnover time has since remained unchanged (83 [87] minutes).
Conclusion: Our center was able to maintain OR efficiency before, during, and after peak restrictions even while instituting advanced infection-control strategies. While there were significant changes, delays were relatively small in magnitude.
Keywords: operating room timing, hospital efficiency, socioeconomics, pandemic.
The COVID-19 pandemic has led to major changes in patient care both from a surgical perspective and in regard to inpatient hospital course. Safety protocols nationwide have been implemented to protect both patients and providers. Some elements of surgical care have drastically changed, including operating room (OR) staffing, personal protective equipment (PPE) utilization, and increased sterilization measures. Furloughs, layoffs, and reassignments due to the focus on nonelective and COVID-19–related cases challenged OR staffing and efficiency. Operating room staff with COVID-19 exposures or COVID-19 infections also caused last-minute changes in staffing. All of these scenarios can cause issues due to actual understaffing or due to staff members being pushed into highly specialized areas, such as neurosurgery, in which they have very little experience. A further obstacle to OR efficiency included policy changes involving PPE utilization, sterilization measures, and supply chain shortages of necessary resources such as PPE.
Neurosurgery in particular has been susceptible to COVID-19–related system-wide changes given operator proximity to the patient’s respiratory passages, frequency of emergent cases, and varying anesthetic needs, as well as the high level of specialization needed to perform neurosurgical care. Previous studies have shown a change in the makeup of neurosurgical patients seeking care, as well as in the acuity of neurological consult of these patients.1 A study in orthopedic surgery by Andreata et al demonstrated worsened OR efficiency, with significantly increased first-start and turnover times.2 In the COVID-19 era, OR quality and safety are crucially important to both patients and providers. Providing this safe and effective care in an efficient manner is important for optimal neurosurgical management in the long term.3 Moreover, the financial burden of implementing new protocols and standards can be compounded by additional financial losses due to reduced OR efficiency.
Methods
To analyze the effect of COVID-19 on neurosurgical OR efficiency, institutional perioperative databases at a single high-volume center were queried for operations performed from December 2019 until October 2021. March 12, 2020, was chosen as the onset of COVID-19 for analytic purposes, as this was the date when the state of Tennessee declared a state of emergency. The 90-day periods before and after this date were used for comparative analysis for pre-COVID-19, peak COVID-19, and post-peak-restrictions time periods. The peak COVID-19 period was defined as the 90-day period following the initial onset of COVID-19 and the surge of cases. For comparison purposes, post-peak COVID-19 was defined as the months following the first peak until October 2021 (approximately 17 months). COVID-19 burden was determined using a COVID-19 single-institution census of confirmed cases by polymerase chain reaction (PCR) for which the average number of cases of COVID-19 during a given month was determined. This number is a scaled trend, and a true number of COVID-19 cases in our hospital was not reported.
Neurosurgical and neuroendovascular cases were included in the analysis. Outcomes included delay in first-start and OR turnover time between neurosurgical cases, defined as the time from the patient leaving the room until the next patient entered the room. Preset threshold times were used in analyses to adjust for normal leniency in OR scheduling (15 minutes for first start and 90 minutes for turnover, which is a standard for our single-institution perioperative center). Statistical analyses, including data aggregation, were performed using R, version 4.0.1 (R Foundation for Statistical Computing). Patients’ demographic and clinical characteristics were analyzed using an independent 2-sample t-test for interval variables and a chi-square test for categorical variables. Significance was defined as P < .05.
Results
First-Start Time
First-start time was analyzed in 426 pre-COVID-19, 357 peak-COVID-19, and 2304 post-peak-COVID-19 cases. The unadjusted mean delay length was significantly different between the time periods, but the magnitude of increase in minutes was immaterial (mean [SD] minutes, 6 [18] vs 10 [21] vs 8 [20], respectively; P = .004) (Table 1).
The adjusted average delay length and proportion of cases delayed beyond the 15-minute threshold were not significantly different, but they have been slightly higher since the onset of COVID-19. The proportion of cases that have started early, as well as significantly early past a 15-minute threshold, have also trended down since the onset of the COVID-19 pandemic, but this difference was again not significant. The temporal relationship of first-start delay, both unadjusted and adjusted, from December 2019 to October 2021 is shown in Figure 1. The trend of increasing delay is loosely associated with the COVID-19 burden experienced by our hospital. The start of COVID-19 as well as both COVID-19 peaks have been associated with increased delays in our hospital.
Turnover Time
Turnover time was assessed in 437 pre-COVID-19, 278 peak-restrictions, and 2411 post-peak-restrictions cases. Turnover time during peak restrictions was not significantly different from pre-COVID-19 (88 [100] vs 85 [95]) and has since remained relatively unchanged (83 [87], P = .78). A similar trend held for comparisons of proportion of cases with turnover time past 90 minutes and average times past the 90-minute threshold (Table 2). The temporal relationship between COVID-19 burden and turnover time, both unadjusted and adjusted, from December 2019 to October 2021 is shown in Figure 2. Both figures demonstrate a slight initial increase in turnover time delay at the start of COVID-19, which stabilized with little variation thereafter.
Discussion
We analyzed the OR efficiency metrics of first-start and turnover time during the 90-day period before COVID-19 (pre-COVID-19), the 90 days following Tennessee declaring a state of emergency (peak COVID-19), and the time following this period (post-COVID-19) for all neurosurgical and neuroendovascular cases at Vanderbilt University Medical Center (VUMC). We found a significant difference in unadjusted mean delay length in first-start time between the time periods, but the magnitude of increase in minutes was immaterial (mean [SD] minutes for pre-COVID-19, peak-COVID-19, and post-COVID-19: 6 [18] vs 10 [21] vs 8 [20], respectively; P = .004). No significant increase in turnover time between cases was found between these 3 time periods. Based on metrics from first-start delay and turnover time, our center was able to maintain OR efficiency before, during, and after peak COVID-19.
After the Centers for Disease Control and Prevention released guidelines recommending deferring elective procedures to conserve beds and PPE, VUMC made the decision to suspend all elective surgical procedures from March 18 to April 24, 2020. Prior research conducted during the COVID-19 pandemic has demonstrated more than 400 types of surgical procedures with negatively impacted outcomes when compared to surgical outcomes from the same time frame in 2018 and 2019.4 For more than 20 of these types of procedures, there was a significant association between procedure delay and adverse patient outcomes.4 Testing protocols for patients prior to surgery varied throughout the pandemic based on vaccination status and type of procedure. Before vaccines became widely available, all patients were required to obtain a PCR SARS-CoV-2 test within 48 to 72 hours of the scheduled procedure. If the patient’s procedure was urgent and testing was not feasible, the patient was treated as a SARS-CoV-2–positive patient, which required all health care workers involved in the case to wear gowns, gloves, surgical masks, and eye protection. Testing patients preoperatively likely helped to maintain OR efficiency since not all patients received test results prior to the scheduled procedure, leading to cancellations of cases and therefore more staff available for fewer cases.
After vaccines became widely available to the public, testing requirements for patients preoperatively were relaxed, and only patients who were not fully vaccinated or severely immunocompromised were required to test prior to procedures. However, approximately 40% of the population in Tennessee was fully vaccinated in 2021, which reflects the patient population of VUMC.5 This means that many patients who received care at VUMC were still tested prior to procedures.
Adopting adequate safety protocols was found to be key for OR efficiency during the COVID-19 pandemic since performing surgery increased the risk of infection for each health care worker in the OR.6 VUMC protocols identified procedures that required enhanced safety measures to prevent infection of health care workers and avoid staffing shortages, which would decrease OR efficiency. Protocols mandated that only anesthesia team members were allowed to be in the OR during intubation and extubation of patients, which could be one factor leading to increased delays and decreased efficiency for some institutions. Methods for neurosurgeons to decrease risk of infection in the OR include postponing all nonurgent cases, reappraising the necessity for general anesthesia and endotracheal intubation, considering alternative surgical approaches that avoid the respiratory tract, and limiting the use of aerosol-generating instruments.7,8 VUMC’s success in implementing these protocols likely explains why our center was able to maintain OR efficiency throughout the COVID-19 pandemic.
A study conducted by Andreata et al showed a significantly increased mean first-case delay and a nonsignificant increased turnover time in orthopedic surgeries in Northern Italy when comparing surgeries performed during the COVID-19 pandemic to those performed prior to COVID-19.2 Other studies have indicated a similar trend in decreased OR efficiency during COVID-19 in other areas around the world.9,10 These findings are not consistent with our own findings for neurosurgical and neuroendovascular surgeries at VUMC, and any change at our institution was relatively immaterial. Factors that threatened to change OR efficiency—but did not result in meaningful changes in our institutional experience—include delays due to pending COVID-19 test results, safety procedures such as PPE donning, and planning difficulties to ensure the existence of teams with non-overlapping providers in the case of a surgeon being infected.2,11-13
Globally, many surgery centers halted all elective surgeries during the initial COVID-19 spike to prevent a PPE shortage and mitigate risk of infection of patients and health care workers.8,12,14 However, there is no centralized definition of which neurosurgical procedures are elective, so that decision was made on a surgeon or center level, which could lead to variability in efficiency trends.14 One study on neurosurgical procedures during COVID-19 found a 30% decline in all cases and a 23% decline in emergent procedures, showing that the decrease in volume was not only due to cancellation of elective procedures.15 This decrease in elective and emergent surgeries created a backlog of surgeries as well as a loss in health care revenue, and caused many patients to go without adequate health care.10 Looking forward, it is imperative that surgical centers study trends in OR efficiency from COVID-19 and learn how to better maintain OR efficiency during future pandemic conditions to prevent a backlog of cases, loss of health care revenue, and decreased health care access.
Limitations
Our data are from a single center and therefore may not be representative of experiences of other hospitals due to different populations and different impacts from COVID-19. However, given our center’s high volume and diverse patient population, we believe our analysis highlights important trends in neurosurgery practice. Notably, data for patient and OR timing are digitally generated and are entered manually by nurses in the electronic medical record, making it prone to errors and variability. This is in our experience, and if any error is present, we believe it is minimal.
Conclusion
The COVID-19 pandemic has had far-reaching effects on health care worldwide, including neurosurgical care. OR efficiency across the United States generally worsened given the stresses of supply chain issues, staffing shortages, and cancellations. At our institution, we were able to maintain OR efficiency during the known COVID-19 peaks until October 2021. Continually functional neurosurgical ORs are important in preventing delays in care and maintaining a steady revenue in order for hospitals and other health care entities to remain solvent. Further study of OR efficiency is needed for health care systems to prepare for future pandemics and other resource-straining events in order to provide optimal patient care.
Corresponding author: Campbell Liles, MD, Vanderbilt University Medical Center, Department of Neurological Surgery, 1161 21st Ave. South, T4224 Medical Center North, Nashville, TN 37232-2380; [email protected]
Disclosures: None reported.
From the Department of Neurological Surgery, Vanderbilt University Medical Center, Nashville, TN (Stefan W. Koester, Puja Jagasia, and Drs. Liles, Dambrino IV, Feldman, and Chambless), and the Department of Anesthesiology, Vanderbilt University Medical Center, Nashville, TN (Drs. Mathews and Tiwari).
ABSTRACT
Background: The COVID-19 pandemic has had broad effects on surgical care, including operating room (OR) staffing, personal protective equipment (PPE) utilization, and newly implemented anti-infective measures. Our aim was to assess neurosurgery OR efficiency before the COVID-19 pandemic, during peak COVID-19, and during current times.
Methods: Institutional perioperative databases at a single, high-volume neurosurgical center were queried for operations performed from December 2019 until October 2021. March 12, 2020, the day that the state of Tennessee declared a state of emergency, was chosen as the onset of the COVID-19 pandemic. The 90-day periods before and after this day were used to define the pre-COVID-19, peak-COVID-19, and post-peak restrictions time periods for comparative analysis. Outcomes included delay in first-start and OR turnover time between neurosurgical cases. Preset threshold times were used in analyses to adjust for normal leniency in OR scheduling (15 minutes for first start and 90 minutes for turnover). Univariate analysis used Wilcoxon rank-sum test for continuous outcomes, while chi-square test and Fisher’s exact test were used for categorical comparisons. Significance was defined as P < .05.
Results: First-start time was analyzed in 426 pre-COVID-19, 357 peak-restrictions, and 2304 post-peak-restrictions cases. The unadjusted mean delay length was found to be significantly different between the time periods, but the magnitude of increase in minutes was immaterial (mean [SD] minutes, 6 [18] vs 10 [21] vs 8 [20], respectively; P = .004). The adjusted average delay length and proportion of cases delayed beyond the 15-minute threshold were not significantly different. The proportion of cases that started early, as well as significantly early past a 15-minute threshold, have not been impacted. There was no significant change in turnover time during peak restrictions relative to the pre-COVID-19 period (88 [100] minutes vs 85 [95] minutes), and turnover time has since remained unchanged (83 [87] minutes).
Conclusion: Our center was able to maintain OR efficiency before, during, and after peak restrictions even while instituting advanced infection-control strategies. While there were significant changes, delays were relatively small in magnitude.
Keywords: operating room timing, hospital efficiency, socioeconomics, pandemic.
The COVID-19 pandemic has led to major changes in patient care both from a surgical perspective and in regard to inpatient hospital course. Safety protocols nationwide have been implemented to protect both patients and providers. Some elements of surgical care have drastically changed, including operating room (OR) staffing, personal protective equipment (PPE) utilization, and increased sterilization measures. Furloughs, layoffs, and reassignments due to the focus on nonelective and COVID-19–related cases challenged OR staffing and efficiency. Operating room staff with COVID-19 exposures or COVID-19 infections also caused last-minute changes in staffing. All of these scenarios can cause issues due to actual understaffing or due to staff members being pushed into highly specialized areas, such as neurosurgery, in which they have very little experience. A further obstacle to OR efficiency included policy changes involving PPE utilization, sterilization measures, and supply chain shortages of necessary resources such as PPE.
Neurosurgery in particular has been susceptible to COVID-19–related system-wide changes given operator proximity to the patient’s respiratory passages, frequency of emergent cases, and varying anesthetic needs, as well as the high level of specialization needed to perform neurosurgical care. Previous studies have shown a change in the makeup of neurosurgical patients seeking care, as well as in the acuity of neurological consult of these patients.1 A study in orthopedic surgery by Andreata et al demonstrated worsened OR efficiency, with significantly increased first-start and turnover times.2 In the COVID-19 era, OR quality and safety are crucially important to both patients and providers. Providing this safe and effective care in an efficient manner is important for optimal neurosurgical management in the long term.3 Moreover, the financial burden of implementing new protocols and standards can be compounded by additional financial losses due to reduced OR efficiency.
Methods
To analyze the effect of COVID-19 on neurosurgical OR efficiency, institutional perioperative databases at a single high-volume center were queried for operations performed from December 2019 until October 2021. March 12, 2020, was chosen as the onset of COVID-19 for analytic purposes, as this was the date when the state of Tennessee declared a state of emergency. The 90-day periods before and after this date were used for comparative analysis for pre-COVID-19, peak COVID-19, and post-peak-restrictions time periods. The peak COVID-19 period was defined as the 90-day period following the initial onset of COVID-19 and the surge of cases. For comparison purposes, post-peak COVID-19 was defined as the months following the first peak until October 2021 (approximately 17 months). COVID-19 burden was determined using a COVID-19 single-institution census of confirmed cases by polymerase chain reaction (PCR) for which the average number of cases of COVID-19 during a given month was determined. This number is a scaled trend, and a true number of COVID-19 cases in our hospital was not reported.
Neurosurgical and neuroendovascular cases were included in the analysis. Outcomes included delay in first-start and OR turnover time between neurosurgical cases, defined as the time from the patient leaving the room until the next patient entered the room. Preset threshold times were used in analyses to adjust for normal leniency in OR scheduling (15 minutes for first start and 90 minutes for turnover, which is a standard for our single-institution perioperative center). Statistical analyses, including data aggregation, were performed using R, version 4.0.1 (R Foundation for Statistical Computing). Patients’ demographic and clinical characteristics were analyzed using an independent 2-sample t-test for interval variables and a chi-square test for categorical variables. Significance was defined as P < .05.
Results
First-Start Time
First-start time was analyzed in 426 pre-COVID-19, 357 peak-COVID-19, and 2304 post-peak-COVID-19 cases. The unadjusted mean delay length was significantly different between the time periods, but the magnitude of increase in minutes was immaterial (mean [SD] minutes, 6 [18] vs 10 [21] vs 8 [20], respectively; P = .004) (Table 1).
The adjusted average delay length and proportion of cases delayed beyond the 15-minute threshold were not significantly different, but they have been slightly higher since the onset of COVID-19. The proportion of cases that have started early, as well as significantly early past a 15-minute threshold, have also trended down since the onset of the COVID-19 pandemic, but this difference was again not significant. The temporal relationship of first-start delay, both unadjusted and adjusted, from December 2019 to October 2021 is shown in Figure 1. The trend of increasing delay is loosely associated with the COVID-19 burden experienced by our hospital. The start of COVID-19 as well as both COVID-19 peaks have been associated with increased delays in our hospital.
Turnover Time
Turnover time was assessed in 437 pre-COVID-19, 278 peak-restrictions, and 2411 post-peak-restrictions cases. Turnover time during peak restrictions was not significantly different from pre-COVID-19 (88 [100] vs 85 [95]) and has since remained relatively unchanged (83 [87], P = .78). A similar trend held for comparisons of proportion of cases with turnover time past 90 minutes and average times past the 90-minute threshold (Table 2). The temporal relationship between COVID-19 burden and turnover time, both unadjusted and adjusted, from December 2019 to October 2021 is shown in Figure 2. Both figures demonstrate a slight initial increase in turnover time delay at the start of COVID-19, which stabilized with little variation thereafter.
Discussion
We analyzed the OR efficiency metrics of first-start and turnover time during the 90-day period before COVID-19 (pre-COVID-19), the 90 days following Tennessee declaring a state of emergency (peak COVID-19), and the time following this period (post-COVID-19) for all neurosurgical and neuroendovascular cases at Vanderbilt University Medical Center (VUMC). We found a significant difference in unadjusted mean delay length in first-start time between the time periods, but the magnitude of increase in minutes was immaterial (mean [SD] minutes for pre-COVID-19, peak-COVID-19, and post-COVID-19: 6 [18] vs 10 [21] vs 8 [20], respectively; P = .004). No significant increase in turnover time between cases was found between these 3 time periods. Based on metrics from first-start delay and turnover time, our center was able to maintain OR efficiency before, during, and after peak COVID-19.
After the Centers for Disease Control and Prevention released guidelines recommending deferring elective procedures to conserve beds and PPE, VUMC made the decision to suspend all elective surgical procedures from March 18 to April 24, 2020. Prior research conducted during the COVID-19 pandemic has demonstrated more than 400 types of surgical procedures with negatively impacted outcomes when compared to surgical outcomes from the same time frame in 2018 and 2019.4 For more than 20 of these types of procedures, there was a significant association between procedure delay and adverse patient outcomes.4 Testing protocols for patients prior to surgery varied throughout the pandemic based on vaccination status and type of procedure. Before vaccines became widely available, all patients were required to obtain a PCR SARS-CoV-2 test within 48 to 72 hours of the scheduled procedure. If the patient’s procedure was urgent and testing was not feasible, the patient was treated as a SARS-CoV-2–positive patient, which required all health care workers involved in the case to wear gowns, gloves, surgical masks, and eye protection. Testing patients preoperatively likely helped to maintain OR efficiency since not all patients received test results prior to the scheduled procedure, leading to cancellations of cases and therefore more staff available for fewer cases.
After vaccines became widely available to the public, testing requirements for patients preoperatively were relaxed, and only patients who were not fully vaccinated or severely immunocompromised were required to test prior to procedures. However, approximately 40% of the population in Tennessee was fully vaccinated in 2021, which reflects the patient population of VUMC.5 This means that many patients who received care at VUMC were still tested prior to procedures.
Adopting adequate safety protocols was found to be key for OR efficiency during the COVID-19 pandemic since performing surgery increased the risk of infection for each health care worker in the OR.6 VUMC protocols identified procedures that required enhanced safety measures to prevent infection of health care workers and avoid staffing shortages, which would decrease OR efficiency. Protocols mandated that only anesthesia team members were allowed to be in the OR during intubation and extubation of patients, which could be one factor leading to increased delays and decreased efficiency for some institutions. Methods for neurosurgeons to decrease risk of infection in the OR include postponing all nonurgent cases, reappraising the necessity for general anesthesia and endotracheal intubation, considering alternative surgical approaches that avoid the respiratory tract, and limiting the use of aerosol-generating instruments.7,8 VUMC’s success in implementing these protocols likely explains why our center was able to maintain OR efficiency throughout the COVID-19 pandemic.
A study conducted by Andreata et al showed a significantly increased mean first-case delay and a nonsignificant increased turnover time in orthopedic surgeries in Northern Italy when comparing surgeries performed during the COVID-19 pandemic to those performed prior to COVID-19.2 Other studies have indicated a similar trend in decreased OR efficiency during COVID-19 in other areas around the world.9,10 These findings are not consistent with our own findings for neurosurgical and neuroendovascular surgeries at VUMC, and any change at our institution was relatively immaterial. Factors that threatened to change OR efficiency—but did not result in meaningful changes in our institutional experience—include delays due to pending COVID-19 test results, safety procedures such as PPE donning, and planning difficulties to ensure the existence of teams with non-overlapping providers in the case of a surgeon being infected.2,11-13
Globally, many surgery centers halted all elective surgeries during the initial COVID-19 spike to prevent a PPE shortage and mitigate risk of infection of patients and health care workers.8,12,14 However, there is no centralized definition of which neurosurgical procedures are elective, so that decision was made on a surgeon or center level, which could lead to variability in efficiency trends.14 One study on neurosurgical procedures during COVID-19 found a 30% decline in all cases and a 23% decline in emergent procedures, showing that the decrease in volume was not only due to cancellation of elective procedures.15 This decrease in elective and emergent surgeries created a backlog of surgeries as well as a loss in health care revenue, and caused many patients to go without adequate health care.10 Looking forward, it is imperative that surgical centers study trends in OR efficiency from COVID-19 and learn how to better maintain OR efficiency during future pandemic conditions to prevent a backlog of cases, loss of health care revenue, and decreased health care access.
Limitations
Our data are from a single center and therefore may not be representative of experiences of other hospitals due to different populations and different impacts from COVID-19. However, given our center’s high volume and diverse patient population, we believe our analysis highlights important trends in neurosurgery practice. Notably, data for patient and OR timing are digitally generated and are entered manually by nurses in the electronic medical record, making it prone to errors and variability. This is in our experience, and if any error is present, we believe it is minimal.
Conclusion
The COVID-19 pandemic has had far-reaching effects on health care worldwide, including neurosurgical care. OR efficiency across the United States generally worsened given the stresses of supply chain issues, staffing shortages, and cancellations. At our institution, we were able to maintain OR efficiency during the known COVID-19 peaks until October 2021. Continually functional neurosurgical ORs are important in preventing delays in care and maintaining a steady revenue in order for hospitals and other health care entities to remain solvent. Further study of OR efficiency is needed for health care systems to prepare for future pandemics and other resource-straining events in order to provide optimal patient care.
Corresponding author: Campbell Liles, MD, Vanderbilt University Medical Center, Department of Neurological Surgery, 1161 21st Ave. South, T4224 Medical Center North, Nashville, TN 37232-2380; [email protected]
Disclosures: None reported.
1. Koester SW, Catapano JS, Ma KL, et al. COVID-19 and neurosurgery consultation call volume at a single large tertiary center with a propensity-adjusted analysis. World Neurosurg. 2021;146:e768-e772. doi:10.1016/j.wneu.2020.11.017
2. Andreata M, Faraldi M, Bucci E, Lombardi G, Zagra L. Operating room efficiency and timing during coronavirus disease 2019 outbreak in a referral orthopaedic hospital in Northern Italy. Int Orthop. 2020;44(12):2499-2504. doi:10.1007/s00264-020-04772-x
3. Dexter F, Abouleish AE, Epstein RH, et al. Use of operating room information system data to predict the impact of reducing turnover times on staffing costs. Anesth Analg. 2003;97(4):1119-1126. doi:10.1213/01.ANE.0000082520.68800.79
4. Zheng NS, Warner JL, Osterman TJ, et al. A retrospective approach to evaluating potential adverse outcomes associated with delay of procedures for cardiovascular and cancer-related diagnoses in the context of COVID-19. J Biomed Inform. 2021;113:103657. doi:10.1016/j.jbi.2020.103657
5. Alcendor DJ. Targeting COVID-19 vaccine hesitancy in rural communities in Tennessee: implications for extending the COVID-19 pandemic in the South. Vaccines (Basel). 2021;9(11):1279. doi:10.3390/vaccines9111279
6. Perrone G, Giuffrida M, Bellini V, et al. Operating room setup: how to improve health care professionals safety during pandemic COVID-19: a quality improvement study. J Laparoendosc Adv Surg Tech A. 2021;31(1):85-89. doi:10.1089/lap.2020.0592
7. Iorio-Morin C, Hodaie M, Sarica C, et al. Letter: the risk of COVID-19 infection during neurosurgical procedures: a review of severe acute respiratory distress syndrome coronavirus 2 (SARS-CoV-2) modes of transmission and proposed neurosurgery-specific measures for mitigation. Neurosurgery. 2020;87(2):E178-E185. doi:10.1093/neuros/nyaa157
8. Gupta P, Muthukumar N, Rajshekhar V, et al. Neurosurgery and neurology practices during the novel COVID-19 pandemic: a consensus statement from India. Neurol India. 2020;68(2):246-254. doi:10.4103/0028-3886.283130
9. Mercer ST, Agarwal R, Dayananda KSS, et al. A comparative study looking at trauma and orthopaedic operating efficiency in the COVID-19 era. Perioper Care Oper Room Manag. 2020;21:100142. doi:10.1016/j.pcorm.2020.100142
10. Rozario N, Rozario D. Can machine learning optimize the efficiency of the operating room in the era of COVID-19? Can J Surg. 2020;63(6):E527-E529. doi:10.1503/cjs.016520
11. Toh KHQ, Barazanchi A, Rajaretnam NS, et al. COVID-19 response by New Zealand general surgical departments in tertiary metropolitan hospitals. ANZ J Surg. 2021;91(7-8):1352-1357. doi:10.1111/ans.17044
12. Moorthy RK, Rajshekhar V. Impact of COVID-19 pandemic on neurosurgical practice in India: a survey on personal protective equipment usage, testing, and perceptions on disease transmission. Neurol India. 2020;68(5):1133-1138. doi:10.4103/0028-3886.299173
13. Meneghini RM. Techniques and strategies to optimize efficiencies in the office and operating room: getting through the patient backlog and preserving hospital resources. J Arthroplasty. 2021;36(7S):S49-S51. doi:10.1016/j.arth.2021.03.010
14. Jean WC, Ironside NT, Sack KD, et al. The impact of COVID-19 on neurosurgeons and the strategy for triaging non-emergent operations: a global neurosurgery study. Acta Neurochir (Wien). 2020;162(6):1229-1240. doi:10.1007/s00701-020-04342-5
15. Raneri F, Rustemi O, Zambon G, et al. Neurosurgery in times of a pandemic: a survey of neurosurgical services during the COVID-19 outbreak in the Veneto region in Italy. Neurosurg Focus. 2020;49(6):E9. doi:10.3171/2020.9.FOCUS20691
Best Practice Implementation and Clinical Inertia
From the Department of Medicine, Brigham and Women’s Hospital, and Harvard Medical School, Boston, MA.
Clinical inertia is defined as the failure of clinicians to initiate or escalate guideline-directed medical therapy to achieve treatment goals for well-defined clinical conditions.1,2 Evidence-based guidelines recommend optimal disease management with readily available medical therapies throughout the phases of clinical care. Unfortunately, the care provided to individual patients undergoes multiple modifications over the disease course, resulting in divergent pathways, significant deviations from treatment guidelines, and failure of “safeguard” checkpoints to reinstate, initiate, optimize, or stop treatments. Clinical inertia generally describes rigidity or resistance to change in implementing evidence-based guidelines. The term describes treatment behavior on the part of an individual clinician, as distinct from organizational inertia, which encompasses both internal factors (the immediate clinical practice setting) and external factors (national and international guidelines and recommendations) and ultimately produces resistance to optimizing disease treatment and therapeutic regimens. Individual clinicians’ resistance to implementing guidelines and evidence-based principles can be one factor that drives organizational inertia; in turn, such individual behavior can be shaped by personal beliefs, knowledge, interpretation, skills, management principles, and biases. The terms therapeutic inertia and clinical inertia should not be confused with nonadherence on the patient’s part when the clinician follows best practice guidelines.3
Clinical inertia has been described in several clinical domains, including diabetes,4,5 hypertension,6,7 heart failure,8 depression,9 pulmonary medicine,10 and complex disease management.11 Clinicians can set suboptimal treatment goals because of specific beliefs and attitudes about optimal therapeutic goals. For example, when treating a patient with a chronic disease that is presently stable, a clinician may elect to initiate suboptimal treatment, as escalation of treatment might not seem a priority in stable disease; the clinician may also have concerns about overtreatment. Other factors that can contribute to clinical inertia (ie, undertreatment despite an indication for treatment) include those related to the patient, the clinical setting, and the organization, as well as the need to individualize therapy for specific patients. Organizational inertia is the initial global resistance by the system to implementation; it can slow the dissemination and adoption of best practices but generally declines over time. Individual clinical inertia, on the other hand, is likely to persist after the system-level rollout of guideline-based approaches.
The trajectory of dissemination, implementation, and adoption of innovations and best practices is illustrated in the Figure. Even after regulatory bodies have established the benefits of an innovation or practice change and guidelines and medical societies have endorsed its adoption, uptake can be hindered by both organizational and clinical inertia. Overcoming inertia to system-level change requires addressing individual clinicians, along with practice and organizational factors, to ensure systematic adoption. From the clinician’s perspective, training and cognitive interventions can improve understanding of treatment options through standardized educational and behavioral-modification tools, direct and indirect feedback on performance, and decision support, applied through a continuous improvement approach at both the individual and system levels.
Addressing inertia in clinical practice requires a deep understanding of the individual and organizational elements that foster resistance to adopting best practice models. Research that explores tools and approaches to overcome inertia in managing complex diseases is a key step in advancing clinical innovation and disseminating best practices.
Corresponding author: Ebrahim Barkoudah, MD, MPH; [email protected]
Disclosures: None reported.
1. Phillips LS, Branch WT, Cook CB, et al. Clinical inertia. Ann Intern Med. 2001;135(9):825-834. doi:10.7326/0003-4819-135-9-200111060-00012
2. Allen JD, Curtiss FR, Fairman KA. Nonadherence, clinical inertia, or therapeutic inertia? J Manag Care Pharm. 2009;15(8):690-695. doi:10.18553/jmcp.2009.15.8.690
3. Zafar A, Davies M, Azhar A, Khunti K. Clinical inertia in management of T2DM. Prim Care Diabetes. 2010;4(4):203-207. doi:10.1016/j.pcd.2010.07.003
4. Khunti K, Davies MJ. Clinical inertia—time to reappraise the terminology? Prim Care Diabetes. 2017;11(2):105-106. doi:10.1016/j.pcd.2017.01.007
5. O’Connor PJ. Overcome clinical inertia to control systolic blood pressure. Arch Intern Med. 2003;163(22):2677-2678. doi:10.1001/archinte.163.22.2677
6. Faria C, Wenzel M, Lee KW, et al. A narrative review of clinical inertia: focus on hypertension. J Am Soc Hypertens. 2009;3(4):267-276. doi:10.1016/j.jash.2009.03.001
7. Jarjour M, Henri C, de Denus S, et al. Care gaps in adherence to heart failure guidelines: clinical inertia or physiological limitations? JACC Heart Fail. 2020;8(9):725-738. doi:10.1016/j.jchf.2020.04.019
8. Henke RM, Zaslavsky AM, McGuire TG, et al. Clinical inertia in depression treatment. Med Care. 2009;47(9):959-67. doi:10.1097/MLR.0b013e31819a5da0
9. Cooke CE, Sidel M, Belletti DA, Fuhlbrigge AL. Clinical inertia in the management of chronic obstructive pulmonary disease. COPD. 2012;9(1):73-80. doi:10.3109/15412555.2011.631957
10. Whitford DL, Al-Anjawi HA, Al-Baharna MM. Impact of clinical inertia on cardiovascular risk factors in patients with diabetes. Prim Care Diabetes. 2014;8(2):133-138. doi:10.1016/j.pcd.2013.10.007
The Role of Revascularization and Viability Testing in Patients With Multivessel Coronary Artery Disease and Severely Reduced Ejection Fraction
Study 1 Overview (STICHES Investigators)
Objective: To assess the survival benefit of coronary-artery bypass grafting (CABG) added to guideline-directed medical therapy, compared to optimal medical therapy (OMT) alone, in patients with coronary artery disease, heart failure, and severe left ventricular dysfunction.
Design: Multicenter, randomized, prospective study with extended follow-up (median duration of 9.8 years).
Setting and participants: A total of 1212 patients with left ventricular ejection fraction (LVEF) of 35% or less and coronary artery disease were randomized to medical therapy plus CABG or OMT alone at 127 clinical sites in 26 countries.
Main outcome measures: The primary endpoint was death from any cause. Main secondary endpoints were death from cardiovascular causes and a composite outcome of death from any cause or hospitalization for cardiovascular causes.
Main results: The primary outcome of death from any cause occurred in 359 patients (58.9%) in the CABG group and 398 patients (66.1%) in the medical therapy group (hazard ratio [HR], 0.84; 95% CI, 0.73-0.97; P = .02). Death from cardiovascular causes was reported in 247 patients (40.5%) in the CABG group and 297 patients (49.3%) in the medical therapy group (HR, 0.79; 95% CI, 0.66-0.93; P < .01). The composite outcome of death from any cause or hospitalization for cardiovascular causes occurred in 467 patients (76.6%) in the CABG group and 524 patients (87.0%) in the medical therapy group (HR, 0.72; 95% CI, 0.64-0.82; P < .01).
Conclusion: Over a median follow-up of 9.8 years in patients with ischemic cardiomyopathy with severely reduced ejection fraction, the rates of death from any cause, death from cardiovascular causes, and the composite of death from any cause or hospitalization for cardiovascular causes were significantly lower in patients undergoing CABG than in patients receiving medical therapy alone.
Study 2 Overview (REVIVED BCIS Trial Group)
Objective: To assess whether percutaneous coronary intervention (PCI) can improve survival and left ventricular function in patients with severe left ventricular systolic dysfunction as compared to OMT alone.
Design: Multicenter, randomized, prospective study.
Setting and participants: A total of 700 patients with LVEF <35%, severe coronary artery disease amenable to PCI, and demonstrable myocardial viability were randomly assigned to either PCI plus optimal medical therapy (PCI group) or OMT alone (OMT group).
Main outcome measures: The primary outcome was death from any cause or hospitalization for heart failure. The main secondary outcomes were LVEF at 6 and 12 months and quality of life (QOL) scores.
Main results: Over a median follow-up of 41 months, the primary outcome was reported in 129 patients (37.2%) in the PCI group and in 134 patients (38.0%) in the OMT group (HR, 0.99; 95% CI, 0.78-1.27; P = .96). The LVEF was similar in the 2 groups at 6 months (mean difference, –1.6 percentage points; 95% CI, –3.7 to 0.5) and at 12 months (mean difference, 0.9 percentage points; 95% CI, –1.7 to 3.4). QOL scores at 6 and 12 months favored the PCI group, but the difference had diminished at 24 months.
Conclusion: In patients with severe ischemic cardiomyopathy, revascularization by PCI in addition to OMT did not result in a lower incidence of death from any cause or hospitalization for heart failure.
Commentary
Coronary artery disease is the most common cause of heart failure with reduced ejection fraction and an important cause of mortality.1 Patients with ischemic cardiomyopathy with reduced ejection fraction are often considered for revascularization in addition to OMT and device therapies. Although there have been multiple retrospective studies and registries suggesting that cardiac outcomes and LVEF improve with revascularization, the number of large-scale prospective studies that assessed this clinical question and randomized patients to revascularization plus OMT compared to OMT alone has been limited.
In the Surgical Treatment for Ischemic Heart Failure (STICH) study,2,3 eligible patients had coronary artery disease amenable to CABG and an LVEF of 35% or less. Patients (N = 1212) were randomly assigned to CABG plus OMT or OMT alone between July 2002 and May 2007. The original study, with a median follow-up of 5 years, did not show a survival benefit, but when follow-up of the same study population was extended to a median of 9.8 years, the investigators reported that the primary outcome of death from any cause was significantly lower in the CABG group than with OMT alone (58.9% vs 66.1%, P = .02). The findings from this study led to a class I guideline recommendation of CABG over medical therapy in patients with multivessel disease and low ejection fraction.4
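To put this mortality difference in more familiar terms, the absolute risk reduction (ARR) and number needed to treat (NNT) can be derived from the reported event rates; this is a back-of-the-envelope calculation from the published percentages, not a result reported by the STICHES investigators:

% ARR and NNT from the reported all-cause mortality rates (58.9% with CABG vs 66.1% with OMT alone)
\[
\mathrm{ARR} = 66.1\% - 58.9\% = 7.2\ \text{percentage points}, \qquad
\mathrm{NNT} = \frac{1}{0.072} \approx 14
\]

In other words, over roughly a decade of follow-up, approximately 1 death from any cause would be averted for every 14 patients treated with CABG plus OMT rather than OMT alone, assuming the trial population is representative.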
Since the STICH trial was designed, there have been significant improvements in the devices and techniques used for PCI, and the procedure is now widely performed in patients with multivessel disease.5 The advantages of PCI over CABG include shorter recovery times and a lower risk of immediate complications. In this context, the recently reported Revascularization for Ischemic Ventricular Dysfunction (REVIVED) study assessed clinical outcomes in patients with severe coronary artery disease and reduced ejection fraction by randomizing patients to either PCI with OMT or OMT alone.6 At a median follow-up of 41 months, the investigators found no difference in the primary outcome of death from any cause or hospitalization for heart failure (37.2% vs 38.0%; HR, 0.99; 95% CI, 0.78-1.27; P = .96). Moreover, follow-up echocardiograms read by the core lab showed no difference in the degree of LVEF improvement between groups at 6 and 12 months. Finally, although results of the QOL assessment using the Kansas City Cardiomyopathy Questionnaire (KCCQ), a validated, patient-reported, heart-failure-specific QOL scale, favored the PCI group at 6 and 12 months of follow-up, the difference had diminished at 24 months.
The main strength of the REVIVED study was that it targeted a patient population with severe coronary artery disease, including left main disease, and severely reduced ejection fraction that has historically been excluded from large-scale randomized controlled studies comparing PCI plus OMT with OMT alone.7 However, there are several points to consider when interpreting the results of this study. First, further details of the PCI procedures are necessary. The REVIVED study recommended revascularization of all territories with viable myocardium; the anatomical revascularization index using the British Cardiovascular Intervention Society (BCIS) Jeopardy Score was 71%. It is important to note that this jeopardy score was operator-reported, and the core lab–adjudicated anatomical revascularization rate may be lower. Although viability testing, primarily with cardiac magnetic resonance imaging, was performed in most patients, the correlation between the revascularized territories and the viable segments has yet to be reported. Moreover, procedural details such as the use of intravascular ultrasound and physiological testing, which are known to improve clinical outcomes, need to be reported.8,9
Second, although ischemic cardiomyopathy is highly prevalent, the patients included in this study were highly selected from daily clinical practice, as evidenced by the prolonged enrollment period (8 years). They were largely stable patients with less complex coronary anatomy, as suggested by the median interval of 80 days from angiography to randomization. Considering the degree of left ventricular dysfunction in this population, only 14% of patients had left main disease and half had only 2-vessel disease. The severity of the left main disease also needs to be clarified, as patients whose disease the operator judged to be critical were likely not enrolled. Furthermore, because the standard of care based on the STICH trial is to refer patients with severe multivessel coronary artery disease to CABG, patients with more severe and complex disease were likely not included in this trial. It is also important to note that this study enrolled patients with stable ischemic heart disease, so the data do not apply to patients presenting with acute coronary syndrome.
Third, although the primary outcome was similar between the groups, the secondary outcome of unplanned revascularization was lower in the PCI group. In addition, the rate of acute myocardial infarction (MI) was similar between the 2 groups, but the rate of spontaneous MI was lower in the PCI group than in the OMT group (5.2% vs 9.3%), as 40% of the MI cases in the PCI group were periprocedural. The correlation between periprocedural MI and long-term outcomes has been modest compared with that of spontaneous MI. Moreover, with longer follow-up, the number of spontaneous MI cases is expected to rise while the number of periprocedural MI cases is not. Extending the follow-up period is also important, as the STICH extension trial showed a statistically significant difference at 10-year follow-up despite negative results at the time of the original publication.
Fourth, the REVIVED trial randomized significantly fewer patients than the STICH trial, and the authors reported fewer primary-outcome events than the number estimated to be needed to achieve adequate power to test the primary hypothesis. In addition, significant improvements in medical treatment for heart failure with reduced ejection fraction since the STICH trial make a comparison of PCI vs CABG in this patient population unfeasible.
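The importance of accruing enough primary-outcome events can be illustrated with the standard Schoenfeld approximation for a 1:1 randomized time-to-event comparison; the target hazard ratio (0.80), two-sided alpha (.05), and power (90%) below are illustrative assumptions, not the REVIVED design parameters:

% Schoenfeld approximation for the required number of events, D, with equal allocation
\[
D = \frac{4\,(z_{1-\alpha/2} + z_{1-\beta})^{2}}{(\ln \theta)^{2}}
  = \frac{4\,(1.96 + 1.28)^{2}}{(\ln 0.80)^{2}} \approx 840\ \text{events}
\]

Because power in a time-to-event analysis depends on the number of events rather than the number of patients randomized, observing fewer events than planned leaves a trial unable to exclude a clinically meaningful treatment effect, regardless of enrollment.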
Finally, although severe angina was not an exclusion criterion, two-thirds of the patients enrolled had no angina, and only 2% of the patients had baseline severe angina. This is important to consider when interpreting the results of the patient-reported health status as previous studies have shown that patients with worse angina at baseline derive the largest improvement in their QOL,10,11 and symptom improvement is the main indication for PCI in patients with stable ischemic heart disease.
Applications for Clinical Practice and System Implementation
In patients with severe left ventricular systolic dysfunction and multivessel stable ischemic heart disease who are well compensated and have little or no angina at baseline, OMT alone may be a reasonable initial strategy, rather than the addition of PCI, after a careful discussion of risks and benefits. Further details about revascularization and extended follow-up data from the REVIVED trial are needed.
Practice Points
- Patients with ischemic cardiomyopathy with reduced ejection fraction have been an understudied population in previous studies.
- Further studies are necessary to understand the benefits of revascularization and the role of viability testing in this population.
– Taishi Hirai, MD, and Ziad Sayed Ahmad, MD
University of Missouri, Columbia, MO
1. Nowbar AN, Gitto M, Howard JP, et al. Mortality from ischemic heart disease. Circ Cardiovasc Qual Outcomes. 2019;12(6):e005375. doi:10.1161/CIRCOUTCOMES
2. Velazquez EJ, Lee KL, Deja MA, et al; for the STICH Investigators. Coronary-artery bypass surgery in patients with left ventricular dysfunction. N Engl J Med. 2011;364(17):1607-1616. doi:10.1056/NEJMoa1100356
3. Velazquez EJ, Lee KL, Jones RH, et al. Coronary-artery bypass surgery in patients with ischemic cardiomyopathy. N Engl J Med. 2016;374(16):1511-1520. doi:10.1056/NEJMoa1602001
4. Lawton JS, Tamis-Holland JE, Bangalore S, et al. 2021 ACC/AHA/SCAI guideline for coronary artery revascularization: a report of the American College of Cardiology/American Heart Association Joint Committee on Clinical Practice Guidelines. J Am Coll Cardiol. 2022;79(2):e21-e129. doi:10.1016/j.jacc.2021.09.006
5. Kirtane AJ, Doshi D, Leon MB, et al. Treatment of higher-risk patients with an indication for revascularization: evolution within the field of contemporary percutaneous coronary intervention. Circulation. 2016;134(5):422-431. doi:10.1161/CIRCULATIONAHA
6. Perera D, Clayton T, O’Kane PD, et al. Percutaneous revascularization for ischemic left ventricular dysfunction. N Engl J Med. 2022;387(15):1351-1360. doi:10.1056/NEJMoa2206606
7. Maron DJ, Hochman JS, Reynolds HR, et al. Initial invasive or conservative strategy for stable coronary disease. Circulation. 2020;142(18):1725-1735. doi:10.1161/CIRCULATIONAHA
8. De Bruyne B, Pijls NH, Kalesan B, et al. Fractional flow reserve-guided PCI versus medical therapy in stable coronary disease. N Engl J Med. 2012;367(11):991-1001. doi:10.1056/NEJMoa1205361
9. Zhang J, Gao X, Kan J, et al. Intravascular ultrasound versus angiography-guided drug-eluting stent implantation: The ULTIMATE trial. J Am Coll Cardiol. 2018;72(24):3126-3137. doi:10.1016/j.jacc.2018.09.013
10. Spertus JA, Jones PG, Maron DJ, et al. Health-status outcomes with invasive or conservative care in coronary disease. N Engl J Med. 2020;382(15):1408-1419. doi:10.1056/NEJMoa1916370
11. Hirai T, Grantham JA, Sapontis J, et al. Quality of life changes after chronic total occlusion angioplasty in patients with baseline refractory angina. Circ Cardiovasc Interv. 2019;12:e007558. doi:10.1161/CIRCINTERVENTIONS.118.007558
Study 1 Overview (STICHES Investigators)
Objective: To assess the survival benefit of coronary-artery bypass grafting (CABG) added to guideline-directed medical therapy, compared to optimal medical therapy (OMT) alone, in patients with coronary artery disease, heart failure, and severe left ventricular dysfunction. Design: Multicenter, randomized, prospective study with extended follow-up (median duration of 9.8 years).
Setting and participants: A total of 1212 patients with left ventricular ejection fraction (LVEF) of 35% or less and coronary artery disease were randomized to medical therapy plus CABG or OMT alone at 127 clinical sites in 26 countries.
Main outcome measures: The primary endpoint was death from any cause. Main secondary endpoints were death from cardiovascular causes and a composite outcome of death from any cause or hospitalization for cardiovascular causes.
Main results: There were 359 primary outcome all-cause deaths (58.9%) in the CABG group and 398 (66.1%) in the medical therapy group (hazard ratio [HR], 0.84; 95% CI, 0.73-0.97; P = .02). Death from cardiovascular causes was reported in 247 patients (40.5%) in the CABG group and 297 patients (49.3%) in the medical therapy group (HR, 0.79; 95% CI, 0.66-0.93; P < .01). The composite outcome of death from any cause or hospitalization for cardiovascular causes occurred in 467 patients (76.6%) in the CABG group and 467 patients (87.0%) in the medical therapy group (HR, 0.72; 95% CI, 0.64-0.82; P < .01).
Conclusion: Over a median follow-up of 9.8 years in patients with ischemic cardiomyopathy with severely reduced ejection fraction, the rates of death from any cause, death from cardiovascular causes, and the composite of death from any cause or hospitalization for cardiovascular causes were significantly lower in patients undergoing CABG than in patients receiving medical therapy alone.
Study 2 Overview (REVIVED BCIS Trial Group)
Objective: To assess whether percutaneous coronary intervention (PCI) can improve survival and left ventricular function in patients with severe left ventricular systolic dysfunction as compared to OMT alone.
Design: Multicenter, randomized, prospective study.
Setting and participants: A total of 700 patients with LVEF <35% with severe coronary artery disease amendable to PCI and demonstrable myocardial viability were randomly assigned to either PCI plus optimal medical therapy (PCI group) or OMT alone (OMT group).
Main outcome measures: The primary outcome was death from any cause or hospitalization for heart failure. The main secondary outcomes were LVEF at 6 and 12 months and quality of life (QOL) scores.
Main results: Over a median follow-up of 41 months, the primary outcome was reported in 129 patients (37.2%) in the PCI group and in 134 patients (38.0%) in the OMT group (HR, 0.99; 95% CI, 0.78-1.27; P = .96). The LVEF was similar in the 2 groups at 6 months (mean difference, –1.6 percentage points; 95% CI, –3.7 to 0.5) and at 12 months (mean difference, 0.9 percentage points; 95% CI, –1.7 to 3.4). QOL scores at 6 and 12 months favored the PCI group, but the difference had diminished at 24 months.
Conclusion: In patients with severe ischemic cardiomyopathy, revascularization by PCI in addition to OMT did not result in a lower incidence of death from any cause or hospitalization from heart failure.
Commentary
Coronary artery disease is the most common cause of heart failure with reduced ejection fraction and an important cause of mortality.1 Patients with ischemic cardiomyopathy with reduced ejection fraction are often considered for revascularization in addition to OMT and device therapies. Although there have been multiple retrospective studies and registries suggesting that cardiac outcomes and LVEF improve with revascularization, the number of large-scale prospective studies that assessed this clinical question and randomized patients to revascularization plus OMT compared to OMT alone has been limited.
In the Surgical Treatment for Ischemic Heart Failure (STICH) study,2,3 eligible patients had coronary artery disease amendable to CABG and a LVEF of 35% or less. Patients (N = 1212) were randomly assigned to CABG plus OMT or OMT alone between July 2002 and May 2007. The original study, with a median follow-up of 5 years, did not show survival benefit, but the investigators reported that the primary outcome of death from any cause was significantly lower in the CABG group compared to OMT alone when follow-up of the same study population was extended to 9.8 years (58.9% vs 66.1%, P = .02). The findings from this study led to a class I guideline recommendation of CABG over medical therapy in patients with multivessel disease and low ejection fraction.4
Since the STICH trial was designed, there have been significant improvements in devices and techniques used for PCI, and the procedure is now widely performed in patients with multivessel disease.5 The advantages of PCI over CABG include shorter recovery times and lower risk of immediate complications. In this context, the recently reported Revascularization for Ischemic Ventricular Dysfunction (REVIVED) study assessed clinical outcomes in patients with severe coronary artery disease and reduced ejection fraction by randomizing patients to either PCI with OMT or OMT alone.6 At a median follow-up of 3.5 years, the investigators found no difference in the primary outcome of death from any cause or hospitalization for heart failure (37.2% vs 38.0%; 95% CI, 0.78-1.28; P = .96). Moreover, the degree of LVEF improvement, assessed by follow-up echocardiogram read by the core lab, showed no difference in the degree of LVEF improvement between groups at 6 and 12 months. Finally, although results of the QOL assessment using the Kansas City Cardiomyopathy Questionnaire (KCCQ), a validated, patient-reported, heart-failure-specific QOL scale, favored the PCI group at 6 and 12 months of follow-up, the difference had diminished at 24 months.
The main strength of the REVIVED study was that it targeted a patient population with severe coronary artery disease, including left main disease and severely reduced ejection fraction, that historically have been excluded from large-scale randomized controlled studies evaluating PCI with OMT compared to OMT alone.7 However, there are several points to consider when interpreting the results of this study. First, further details of the PCI procedures are necessary. The REVIVED study recommended revascularization of all territories with viable myocardium; the anatomical revascularization index utilizing the British Cardiovascular Intervention Society (BCIS) Jeopardy Score was 71%. It is important to note that this jeopardy score was operator-reported and the core-lab adjudicated anatomical revascularization rate may be lower. Although viability testing primarily utilizing cardiac magnetic resonance imaging was performed in most patients, correlation between the revascularization territory and the viable segments has yet to be reported. Moreover, procedural details such as use of intravascular ultrasound and physiological testing, known to improve clinical outcome, need to be reported.8,9
Second, there is a high prevalence of ischemic cardiomyopathy, and it is important to note that the patients included in this study were highly selected from daily clinical practice, as evidenced by the prolonged enrollment period (8 years). Individuals were largely stable patients with less complex coronary anatomy as evidenced by the median interval from angiography to randomization of 80 days. Taking into consideration the degree of left ventricular dysfunction for patients included in the trial, only 14% of the patients had left main disease and half of the patients only had 2-vessel disease. The severity of the left main disease also needs to be clarified as it is likely that patients the operator determined to be critical were not enrolled in the study. Furthermore, the standard of care based on the STICH trial is to refer patients with severe multivessel coronary artery disease to CABG, making it more likely that patients with more severe and complex disease were not included in this trial. It is also important to note that this study enrolled patients with stable ischemic heart disease, and the data do not apply to patients presenting with acute coronary syndrome.
Third, although the primary outcome was similar between the groups, the secondary outcome of unplanned revascularization was lower in the PCI group. In addition, the rate of acute myocardial infarction (MI) was similar between the 2 groups, but the rate of spontaneous MI was lower in the PCI group compared to the OMT group (5.2% vs 9.3%) as 40% of MI cases in the PCI group were periprocedural MIs. The correlation between periprocedural MI and long-term outcomes has been modest compared to spontaneous MI. Moreover, with the longer follow-up, the number of spontaneous MI cases is expected to rise while the number of periprocedural MI cases is not. Extending the follow-up period is also important, as the STICH extension trial showed a statistically significant difference at 10-year follow up despite negative results at the time of the original publication.
Fourth, the REVIVED trial randomized a significantly lower number of patients compared to the STICH trial, and the authors reported fewer primary-outcome events than the estimated number needed to achieve the power to assess the primary hypothesis. In addition, significant improvements in medical treatment for heart failure with reduced ejection fraction since the STICH trial make comparison of PCI vs CABG in this patient population unfeasible.
Finally, although severe angina was not an exclusion criterion, two-thirds of the patients enrolled had no angina, and only 2% of the patients had baseline severe angina. This is important to consider when interpreting the results of the patient-reported health status as previous studies have shown that patients with worse angina at baseline derive the largest improvement in their QOL,10,11 and symptom improvement is the main indication for PCI in patients with stable ischemic heart disease.
Applications for Clinical Practice and System Implementation
In patients with severe left ventricular systolic dysfunction and multivessel stable ischemic heart disease who are well compensated and have little or no angina at baseline, OMT alone as an initial strategy may be considered against the addition of PCI after careful risk and benefit discussion. Further details about revascularization and extended follow-up data from the REVIVED trial are necessary.
Practice Points
- Patients with ischemic cardiomyopathy with reduced ejection fraction have been an understudied population in previous studies.
- Further studies are necessary to understand the benefits of revascularization and the role of viability testing in this population.
– Taishi Hirai MD, and Ziad Sayed Ahmad, MD
University of Missouri, Columbia, MO
Study 1 Overview (STICHES Investigators)
Objective: To assess the survival benefit of coronary-artery bypass grafting (CABG) added to guideline-directed medical therapy, compared to optimal medical therapy (OMT) alone, in patients with coronary artery disease, heart failure, and severe left ventricular dysfunction. Design: Multicenter, randomized, prospective study with extended follow-up (median duration of 9.8 years).
Setting and participants: A total of 1212 patients with left ventricular ejection fraction (LVEF) of 35% or less and coronary artery disease were randomized to medical therapy plus CABG or OMT alone at 127 clinical sites in 26 countries.
Main outcome measures: The primary endpoint was death from any cause. Main secondary endpoints were death from cardiovascular causes and a composite outcome of death from any cause or hospitalization for cardiovascular causes.
Main results: There were 359 primary outcome all-cause deaths (58.9%) in the CABG group and 398 (66.1%) in the medical therapy group (hazard ratio [HR], 0.84; 95% CI, 0.73-0.97; P = .02). Death from cardiovascular causes was reported in 247 patients (40.5%) in the CABG group and 297 patients (49.3%) in the medical therapy group (HR, 0.79; 95% CI, 0.66-0.93; P < .01). The composite outcome of death from any cause or hospitalization for cardiovascular causes occurred in 467 patients (76.6%) in the CABG group and 467 patients (87.0%) in the medical therapy group (HR, 0.72; 95% CI, 0.64-0.82; P < .01).
Conclusion: Over a median follow-up of 9.8 years in patients with ischemic cardiomyopathy with severely reduced ejection fraction, the rates of death from any cause, death from cardiovascular causes, and the composite of death from any cause or hospitalization for cardiovascular causes were significantly lower in patients undergoing CABG than in patients receiving medical therapy alone.
Study 2 Overview (REVIVED BCIS Trial Group)
Objective: To assess whether percutaneous coronary intervention (PCI) can improve survival and left ventricular function in patients with severe left ventricular systolic dysfunction as compared to OMT alone.
Design: Multicenter, randomized, prospective study.
Setting and participants: A total of 700 patients with LVEF <35% with severe coronary artery disease amendable to PCI and demonstrable myocardial viability were randomly assigned to either PCI plus optimal medical therapy (PCI group) or OMT alone (OMT group).
Main outcome measures: The primary outcome was death from any cause or hospitalization for heart failure. The main secondary outcomes were LVEF at 6 and 12 months and quality of life (QOL) scores.
Main results: Over a median follow-up of 41 months, the primary outcome was reported in 129 patients (37.2%) in the PCI group and in 134 patients (38.0%) in the OMT group (HR, 0.99; 95% CI, 0.78-1.27; P = .96). The LVEF was similar in the 2 groups at 6 months (mean difference, –1.6 percentage points; 95% CI, –3.7 to 0.5) and at 12 months (mean difference, 0.9 percentage points; 95% CI, –1.7 to 3.4). QOL scores at 6 and 12 months favored the PCI group, but the difference had diminished at 24 months.
Conclusion: In patients with severe ischemic cardiomyopathy, revascularization by PCI in addition to OMT did not result in a lower incidence of death from any cause or hospitalization from heart failure.
Commentary
Coronary artery disease is the most common cause of heart failure with reduced ejection fraction and an important cause of mortality.1 Patients with ischemic cardiomyopathy with reduced ejection fraction are often considered for revascularization in addition to OMT and device therapies. Although there have been multiple retrospective studies and registries suggesting that cardiac outcomes and LVEF improve with revascularization, the number of large-scale prospective studies that assessed this clinical question and randomized patients to revascularization plus OMT compared to OMT alone has been limited.
In the Surgical Treatment for Ischemic Heart Failure (STICH) study,2,3 eligible patients had coronary artery disease amendable to CABG and a LVEF of 35% or less. Patients (N = 1212) were randomly assigned to CABG plus OMT or OMT alone between July 2002 and May 2007. The original study, with a median follow-up of 5 years, did not show survival benefit, but the investigators reported that the primary outcome of death from any cause was significantly lower in the CABG group compared to OMT alone when follow-up of the same study population was extended to 9.8 years (58.9% vs 66.1%, P = .02). The findings from this study led to a class I guideline recommendation of CABG over medical therapy in patients with multivessel disease and low ejection fraction.4
Since the STICH trial was designed, there have been significant improvements in devices and techniques used for PCI, and the procedure is now widely performed in patients with multivessel disease.5 The advantages of PCI over CABG include shorter recovery times and lower risk of immediate complications. In this context, the recently reported Revascularization for Ischemic Ventricular Dysfunction (REVIVED) study assessed clinical outcomes in patients with severe coronary artery disease and reduced ejection fraction by randomizing patients to either PCI with OMT or OMT alone.6 At a median follow-up of 3.5 years, the investigators found no difference in the primary outcome of death from any cause or hospitalization for heart failure (37.2% vs 38.0%; 95% CI, 0.78-1.28; P = .96). Moreover, the degree of LVEF improvement, assessed by follow-up echocardiogram read by the core lab, showed no difference in the degree of LVEF improvement between groups at 6 and 12 months. Finally, although results of the QOL assessment using the Kansas City Cardiomyopathy Questionnaire (KCCQ), a validated, patient-reported, heart-failure-specific QOL scale, favored the PCI group at 6 and 12 months of follow-up, the difference had diminished at 24 months.
The main strength of the REVIVED study was that it targeted a patient population with severe coronary artery disease, including left main disease and severely reduced ejection fraction, that historically have been excluded from large-scale randomized controlled studies evaluating PCI with OMT compared to OMT alone.7 However, there are several points to consider when interpreting the results of this study. First, further details of the PCI procedures are necessary. The REVIVED study recommended revascularization of all territories with viable myocardium; the anatomical revascularization index utilizing the British Cardiovascular Intervention Society (BCIS) Jeopardy Score was 71%. It is important to note that this jeopardy score was operator-reported and the core-lab adjudicated anatomical revascularization rate may be lower. Although viability testing primarily utilizing cardiac magnetic resonance imaging was performed in most patients, correlation between the revascularization territory and the viable segments has yet to be reported. Moreover, procedural details such as use of intravascular ultrasound and physiological testing, known to improve clinical outcome, need to be reported.8,9
Second, there is a high prevalence of ischemic cardiomyopathy, and it is important to note that the patients included in this study were highly selected from daily clinical practice, as evidenced by the prolonged enrollment period (8 years). Individuals were largely stable patients with less complex coronary anatomy as evidenced by the median interval from angiography to randomization of 80 days. Taking into consideration the degree of left ventricular dysfunction for patients included in the trial, only 14% of the patients had left main disease and half of the patients only had 2-vessel disease. The severity of the left main disease also needs to be clarified as it is likely that patients the operator determined to be critical were not enrolled in the study. Furthermore, the standard of care based on the STICH trial is to refer patients with severe multivessel coronary artery disease to CABG, making it more likely that patients with more severe and complex disease were not included in this trial. It is also important to note that this study enrolled patients with stable ischemic heart disease, and the data do not apply to patients presenting with acute coronary syndrome.
Third, although the primary outcome was similar between the groups, the secondary outcome of unplanned revascularization was lower in the PCI group. In addition, the rate of acute myocardial infarction (MI) was similar between the 2 groups, but the rate of spontaneous MI was lower in the PCI group compared to the OMT group (5.2% vs 9.3%) as 40% of MI cases in the PCI group were periprocedural MIs. The correlation between periprocedural MI and long-term outcomes has been modest compared to spontaneous MI. Moreover, with the longer follow-up, the number of spontaneous MI cases is expected to rise while the number of periprocedural MI cases is not. Extending the follow-up period is also important, as the STICH extension trial showed a statistically significant difference at 10-year follow up despite negative results at the time of the original publication.
Fourth, the REVIVED trial randomized a significantly lower number of patients compared to the STICH trial, and the authors reported fewer primary-outcome events than the estimated number needed to achieve the power to assess the primary hypothesis. In addition, significant improvements in medical treatment for heart failure with reduced ejection fraction since the STICH trial make comparison of PCI vs CABG in this patient population unfeasible.
Finally, although severe angina was not an exclusion criterion, two-thirds of the patients enrolled had no angina, and only 2% had severe angina at baseline. This is important to consider when interpreting the patient-reported health status results, as previous studies have shown that patients with worse angina at baseline derive the largest improvement in quality of life (QOL),10,11 and symptom improvement is the main indication for PCI in patients with stable ischemic heart disease.
Applications for Clinical Practice and System Implementation
In patients with severe left ventricular systolic dysfunction and multivessel stable ischemic heart disease who are well compensated and have little or no angina at baseline, OMT alone may be considered as an initial strategy, rather than the addition of PCI, after a careful discussion of risks and benefits. Further details about revascularization and extended follow-up data from the REVIVED trial are needed.
Practice Points
- Patients with ischemic cardiomyopathy and reduced ejection fraction have been understudied in previous randomized trials.
- Further studies are necessary to understand the benefits of revascularization and the role of viability testing in this population.
–Taishi Hirai, MD, and Ziad Sayed Ahmad, MD
University of Missouri, Columbia, MO
1. Nowbar AN, Gitto M, Howard JP, et al. Mortality from ischemic heart disease. Circ Cardiovasc Qual Outcomes. 2019;12(6):e005375. doi:10.1161/CIRCOUTCOMES
2. Velazquez EJ, Lee KL, Deja MA, et al; for the STICH Investigators. Coronary-artery bypass surgery in patients with left ventricular dysfunction. N Engl J Med. 2011;364(17):1607-1616. doi:10.1056/NEJMoa1100356
3. Velazquez EJ, Lee KL, Jones RH, et al. Coronary-artery bypass surgery in patients with ischemic cardiomyopathy. N Engl J Med. 2016;374(16):1511-1520. doi:10.1056/NEJMoa1602001
4. Lawton JS, Tamis-Holland JE, Bangalore S, et al. 2021 ACC/AHA/SCAI guideline for coronary artery revascularization: a report of the American College of Cardiology/American Heart Association Joint Committee on Clinical Practice Guidelines. J Am Coll Cardiol. 2022;79(2):e21-e129. doi:10.1016/j.jacc.2021.09.006
5. Kirtane AJ, Doshi D, Leon MB, et al. Treatment of higher-risk patients with an indication for revascularization: evolution within the field of contemporary percutaneous coronary intervention. Circulation. 2016;134(5):422-431. doi:10.1161/CIRCULATIONAHA
6. Perera D, Clayton T, O’Kane PD, et al. Percutaneous revascularization for ischemic left ventricular dysfunction. N Engl J Med. 2022;387(15):1351-1360. doi:10.1056/NEJMoa2206606
7. Maron DJ, Hochman JS, Reynolds HR, et al. Initial invasive or conservative strategy for stable coronary disease. Circulation. 2020;142(18):1725-1735. doi:10.1161/CIRCULATIONAHA
8. De Bruyne B, Pijls NH, Kalesan B, et al. Fractional flow reserve-guided PCI versus medical therapy in stable coronary disease. N Engl J Med. 2012;367(11):991-1001. doi:10.1056/NEJMoa1205361
9. Zhang J, Gao X, Kan J, et al. Intravascular ultrasound versus angiography-guided drug-eluting stent implantation: The ULTIMATE trial. J Am Coll Cardiol. 2018;72(24):3126-3137. doi:10.1016/j.jacc.2018.09.013
10. Spertus JA, Jones PG, Maron DJ, et al. Health-status outcomes with invasive or conservative care in coronary disease. N Engl J Med. 2020;382(15):1408-1419. doi:10.1056/NEJMoa1916370
11. Hirai T, Grantham JA, Sapontis J, et al. Quality of life changes after chronic total occlusion angioplasty in patients with baseline refractory angina. Circ Cardiovasc Interv. 2019;12:e007558. doi:10.1161/CIRCINTERVENTIONS.118.007558
Anesthetic Choices and Postoperative Delirium Incidence: Propofol vs Sevoflurane
Study 1 Overview (Chang et al)
Objective: To assess the incidence of postoperative delirium (POD) following propofol- vs sevoflurane-based anesthesia in geriatric spine surgery patients.
Design: Retrospective, single-blinded observational study of propofol- and sevoflurane-based anesthesia cohorts.
Setting and participants: Patients eligible for this study were aged 65 years or older and admitted to SMG-SNU Boramae Medical Center (Seoul, South Korea). All patients underwent general anesthesia with either intravenous propofol or inhalational sevoflurane for spine surgery between January 2015 and December 2019. Patients were retrospectively identified via electronic medical records. Exclusion criteria included preoperative delirium, history of dementia, psychiatric disease, alcoholism, hepatic or renal dysfunction, postoperative mechanical ventilation dependence, other surgery within the preceding 6 months, maintenance of intraoperative anesthesia with combined anesthetics, or an incomplete medical record.
Main outcome measures: The primary outcome was the incidence of POD after administration of propofol- and sevoflurane-based anesthesia during hospitalization. Patients were screened for POD regularly by attending nurses using the Nursing Delirium Screening Scale (disorientation, inappropriate behavior, inappropriate communication, hallucination, and psychomotor retardation) during the entirety of the patient’s hospital stay; if 1 or more screening criteria were met, a psychiatrist was consulted for the proper diagnosis and management of delirium. A psychiatric diagnosis was required for a case to be counted toward the incidence of POD in this study. Secondary outcomes included postoperative 30-day complications (angina, myocardial infarction, transient ischemic attack/stroke, pneumonia, deep vein thrombosis, pulmonary embolism, acute kidney injury, or infection) and length of postoperative hospital stay.
Main results: POD occurred in 29 (10.3%) of the 281 patients in the total cohort. POD was more common in the sevoflurane group than in the propofol group (15.7% vs 5.0%; P = .003). In multivariable logistic regression, inhalational sevoflurane was associated with an increased risk of POD compared with propofol-based anesthesia (odds ratio [OR], 4.120; 95% CI, 1.549-10.954; P = .005). There was no association between choice of anesthetic and postoperative 30-day complications or the length of postoperative hospital stay. Both older age (OR, 1.242; 95% CI, 1.130-1.366; P < .001) and higher pain score on postoperative day 1 (OR, 1.338; 95% CI, 1.056-1.696; P = .016) were associated with increased risk of POD.
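The adjusted odds ratios reported here come from a multivariable logistic regression; for readers interested in how such estimates are produced, the sketch below fits an analogous model with statsmodels on simulated data. The variable names, simulated dataset, and effect sizes are illustrative assumptions, not the study's actual data or model specification.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 281  # same order of magnitude as the study cohort

# Simulated covariates: anesthetic (1 = sevoflurane, 0 = propofol), age, day-1 pain score
df = pd.DataFrame({
    "sevoflurane": rng.integers(0, 2, size=n),
    "age": rng.normal(73, 5, size=n),
    "pain_day1": rng.integers(0, 8, size=n),
})

# Simulated outcome generated from assumed (illustrative) effect sizes
linpred = -14.0 + 1.4 * df["sevoflurane"] + 0.15 * df["age"] + 0.3 * df["pain_day1"]
df["pod"] = rng.binomial(1, (1.0 / (1.0 + np.exp(-linpred))).to_numpy())

X = sm.add_constant(df[["sevoflurane", "age", "pain_day1"]])
result = sm.Logit(df["pod"], X).fit(disp=False)

# Exponentiating the coefficients and their confidence bounds yields adjusted ORs with
# 95% CIs, the same form of result reported in the study (eg, OR 4.120 for sevoflurane)
odds_ratios = np.exp(result.params).rename("OR")
conf_int = np.exp(result.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
print(pd.concat([odds_ratios, conf_int], axis=1))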
Conclusion: Propofol-based anesthesia was associated with a lower incidence of and risk for POD than sevoflurane-based anesthesia in older patients undergoing spine surgery.
Study 2 Overview (Mei et al)
Objective: To determine the incidence and duration of POD in older patients after total knee/hip replacement (TKR/THR) under intravenous propofol or inhalational sevoflurane general anesthesia.
Design: Randomized clinical trial of propofol and sevoflurane groups.
Setting and participants: This study was conducted at the Shanghai Tenth People’s Hospital and involved 209 participants enrolled between June 2016 and November 2019. All participants were 60 years of age or older, scheduled for TKR/THR surgery under general anesthesia, American Society of Anesthesiologists (ASA) class I to III, and assessed to be of normal cognitive function preoperatively via a Mini-Mental State Examination. Participant exclusion criteria included preexisting delirium as assessed by the Confusion Assessment Method (CAM), prior diagnosed neurological diseases (eg, Parkinson’s disease), prior diagnosed mental disorders (eg, schizophrenia), or impaired vision or hearing that would influence cognitive assessments. All participants were randomly assigned to either sevoflurane or propofol anesthesia for their surgery via a computer-generated list. Of these, 103 received inhalational sevoflurane and 106 received intravenous propofol. All participants received standardized postoperative care.
Main outcome measures: All participants were interviewed by investigators, who were blinded to the anesthesia regimen, twice daily on postoperative days 1, 2, and 3 using the CAM and a CAM-based scoring system (CAM-S) to assess delirium severity. The CAM encompasses 4 criteria: acute onset and fluctuating course, inattention, disorganized thinking, and altered level of consciousness. To diagnose delirium, both the first and second criteria must be met, in addition to either the third or fourth criterion. The average of the scores across the 3 postoperative days indicated delirium severity, while the incidence and duration of delirium were assessed by the presence of delirium, as determined by the CAM, on any postoperative day.
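The CAM diagnostic algorithm described here reduces to a simple Boolean rule, which the sketch below makes explicit; the feature names are paraphrased from the description above rather than taken from the study's data dictionary.

# CAM diagnostic rule as described: features 1 and 2 are both required,
# plus at least one of features 3 or 4
def cam_positive(acute_onset_fluctuating, inattention, disorganized_thinking, altered_consciousness):
    return (acute_onset_fluctuating and inattention
            and (disorganized_thinking or altered_consciousness))

# Example: acute fluctuating course with inattention and disorganized thinking -> delirium
print(cam_positive(True, True, True, False))  # True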
Main results: All eligible participants (N = 209; mean [SD] age, 71.2 [6.7] years; 29.2% male) were included in the final analysis. The incidence of POD was not statistically different between the propofol and sevoflurane groups (33.0% vs 23.3%; P = .119, chi-square test). It was estimated that 316 participants in each arm would have been needed to detect a statistically significant difference. The number of days of POD per person was higher with propofol anesthesia than with sevoflurane (0.5 [0.8] vs 0.3 [0.5]; P = .049, Student's t-test).
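The between-group comparison of POD incidence is a standard chi-square test on a 2 x 2 table. Back-calculating approximate event counts from the reported percentages (35 of 106 propofol patients, 24 of 103 sevoflurane patients) reproduces a P value close to the reported .119, as shown below; the counts are inferred from the percentages, not taken from the publication.

from scipy.stats import chi2_contingency

# Rows: propofol, sevoflurane; columns: POD, no POD (counts inferred from the reported percentages)
table = [[35, 71],   # 35/106 = 33.0%
         [24, 79]]   # 24/103 = 23.3%

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(round(chi2, 2), round(p, 3))  # approximately chi2 = 2.4, P = .119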
Conclusion: This underpowered study showed a difference of 9.7 percentage points in the incidence of POD between older adults who received propofol (33.0%) and those who received sevoflurane (23.3%) after THR/TKR. Further studies with a larger sample size are needed to compare general anesthetics and their role in POD.
Commentary
Delirium is characterized by an acute state of confusion with fluctuating mental status, inattention, disorganized thinking, and altered level of consciousness. It is often caused by medications and/or their related adverse effects, infections, electrolyte imbalances, and other clinical etiologies. Delirium often manifests in post-surgical settings, disproportionately affecting older patients and leading to increased risk of morbidity, mortality, hospital length of stay, and health care costs.1 Intraoperative risk factors for POD are determined by the degree of operative stress (eg, lower-risk surgeries put the patient at reduced risk for POD as compared to higher-risk surgeries) and are additive to preexisting patient-specific risk factors, such as older age and functional impairment.1 Because operative stress is associated with risk for POD, limiting operative stress in controlled ways, such as through the choice of anesthetic agent administered, may be a pragmatic way to manage operative risks and optimize outcomes, especially when serving a surgically vulnerable population.
In Study 1, Chang et al sought to assess whether 2 commonly utilized general anesthetics, propofol and sevoflurane, differentially affected the incidence of POD in older patients undergoing spine surgery. In this retrospective, single-blinded observational study of 281 geriatric patients, the researchers found that sevoflurane was associated with a higher risk of POD as compared to propofol. However, these anesthetics were not associated with surgical outcomes such as postoperative 30-day complications or the length of postoperative hospital stay. While these findings added new knowledge to this field of research, several limitations should be kept in mind when interpreting this study’s results. For instance, the sample size was relatively small, with all cases drawn retrospectively from a single center. In addition, although a standardized nursing screening tool was used for delirium detection, hypoactive or less symptomatic delirium may have been missed, which would lead to an underestimation of POD incidence. The latter is a common limitation in delirium research.
In Study 2, Mei et al similarly explored the effects of general anesthetics on POD in older surgical patients. Specifically, using a randomized clinical trial design, the investigators compared the effects of propofol and sevoflurane on POD incidence, severity, and duration in older patients who underwent TKR/THR. Although the incidence of POD was higher in those who received propofol compared to sevoflurane, this trial was underpowered and the results did not reach statistical significance. In addition, while the duration of POD was slightly longer in the propofol group than in the sevoflurane group (0.5 vs 0.3 days), it was unclear whether this finding was clinically meaningful. Similar to many research studies in POD, limitations of Study 2 included a small sample size of 209 patients, with all participants enrolled from a single center. On the other hand, this study illustrated the feasibility of a method that allowed reproducible prospective assessment of the POD time course using the CAM and CAM-S.
Applications for Clinical Practice and System Implementation
The delineation of risk factors that contribute to delirium after surgery in older patients is key to mitigating risks for POD and improving clinical outcomes. An important step toward a better understanding of these modifiable risk factors is to clearly quantify the intraoperative risk of POD attributable to specific anesthetics. While preclinical studies have shown differential neurotoxic effects of propofol and sevoflurane, their impact on clinically important neurologic outcomes such as delirium and cognitive decline remains poorly understood. Although Studies 1 and 2 both provided head-to-head comparisons of propofol and sevoflurane as risk factors for POD in high-operative-stress surgeries in older patients, the results were inconsistent. That being said, this small incremental increase in knowledge was not unexpected in the course of discovery around a clinically complex research question. Importantly, these studies provided evidence regarding the methodological approaches that could be taken to further this line of research.
The factors mediating differences in neurologic outcomes between anesthetic agents are likely pharmacological, biological, and methodological. Pharmacologically, differences in target receptors, such as GABA-A (propofol, etomidate) or NMDA (ketamine), could be a defining feature in the difference in incidence of POD. Additionally, secondary actions of anesthetic agents on glycine, nicotinic, and acetylcholine receptors could play a role as well. Biologically, genes such as CYP2E1, CYP2B6, CYP2C9, GSTP1, UGT1A9, SULT1A1, and NQO1 have all been identified as genetic factors in the metabolism of anesthetics, and variations in such genes could result in different responses to anesthetics.2 Methodologically, routes of anesthetic administration (eg, inhalation vs intravenous), preexisting anatomical structures, or confounding medical conditions (eg, lower respiratory volume due to older age) may influence POD incidence, duration, or severity. Moreover, methodological differences between Studies 1 and 2, such as the surgeries performed (spinal vs TKR/THR), patient populations (South Korean vs Chinese), and the diagnosis and monitoring of delirium (retrospective screening and diagnosis vs prospective CAM/CAM-S), may impact delirium outcomes. Thus, these factors should be considered in the design of future clinical trials undertaken to investigate the effects of anesthetics on POD.
Given the high prevalence of delirium and its associated adverse outcomes in the immediate postoperative period in older patients, further research is warranted to determine how anesthetics affect POD in order to optimize perioperative care and mitigate risks in this vulnerable population. Moreover, parallel investigations into how anesthetics differentially impact the development of transient or longer-term cognitive impairment after a surgical procedure (ie, postoperative cognitive dysfunction) in older adults are urgently needed in order to improve their cognitive health.
Practice Points
- Intravenous propofol and inhalational sevoflurane may be differentially associated with incidence, duration, and severity of POD in geriatric surgical patients.
- Further larger-scale studies are warranted to clarify the role of anesthetic choice in POD in order to optimize surgical outcomes in older patients.
–Jared Doan, BS, and Fred Ko, MD
Icahn School of Medicine at Mount Sinai
1. Dasgupta M, Dumbrell AC. Preoperative risk assessment for delirium after noncardiac surgery: a systematic review. J Am Geriatr Soc. 2006;54(10):1578-1589. doi:10.1111/j.1532-5415.2006.00893.x
2. Mikstacki A, Skrzypczak-Zielinska M, Tamowicz B, et al. The impact of genetic factors on response to anaesthetics. Adv Med Sci. 2013;58(1):9-14. doi:10.2478/v10039-012-0065-z
Effectiveness of Colonoscopy for Colorectal Cancer Screening in Reducing Cancer-Related Mortality: Interpreting the Results From Two Ongoing Randomized Trials
Study 1 Overview (Bretthauer et al)
Objective: To evaluate the impact of screening colonoscopy on colon cancer–related death.
Design: Randomized trial conducted in 4 European countries.
Setting and participants: Presumptively healthy men and women between the ages of 55 and 64 years were selected from population registries in Poland, Norway, Sweden, and the Netherlands between 2009 and 2014. Eligible participants had not previously undergone screening. Patients with a diagnosis of colon cancer before trial entry were excluded.
Intervention: Participants were randomly assigned in a 1:2 ratio to undergo colonoscopy screening by invitation or to no invitation and no screening. Participants were randomized using a computer-generated allocation algorithm. Patients were stratified by age, sex, and municipality.
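As a rough illustration of how a computer-generated, stratified 1:2 allocation of this kind can be implemented (the trial's actual algorithm is not described in detail here, so this is an assumption-laden sketch), one common approach is permuted-block randomization within each stratum:

import random

def allocate_stratum(n_participants, seed=0):
    """Permuted-block allocation within one stratum, preserving a 1:2 invited:usual-care ratio."""
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n_participants:
        block = ["invited", "usual_care", "usual_care"]  # one block encodes the 1:2 ratio
        rng.shuffle(block)
        assignments.extend(block)
    return assignments[:n_participants]

# In practice, each age/sex/municipality stratum would receive its own allocation sequence.
print(allocate_stratum(9, seed=42))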
Main outcome measures: The primary endpoint of the study was risk of colorectal cancer and related death after a median follow-up of 10 to 15 years. The main secondary endpoint was death from any cause.
Main results: The study reported follow-up data from 84,585 participants (89.1% of all participants originally included in the trial). The remaining participants were either excluded or their data could not be included due to lack of follow-up data from the usual-care group. Men (50.1%) and women (49.9%) were equally represented. The median age at entry was 59 years, and the median follow-up was 10 years. Characteristics were otherwise balanced. Among participants who underwent colonoscopy, good bowel preparation was reported in 91% and cecal intubation was achieved in 96.8%. The percentage of invited participants who underwent screening was 42% overall, although screening rates varied by country (33%-60%). Colorectal cancer was diagnosed at screening in 62 participants (0.5% of the screened group). Adenomas were detected in 30.7% of screened participants; 15 patients had polypectomy-related major bleeding. There were no perforations.
The risk of colorectal cancer at 10 years was 0.98% in the invited-to-screen group and 1.2% in the usual-care group (risk ratio, 0.82; 95% CI, 0.70-0.93). The reported number needed to invite to prevent 1 case of colorectal cancer over a 10-year period was 455. The risk of colorectal cancer–related death at 10 years was 0.28% in the invited-to-screen group and 0.31% in the usual-care group (risk ratio, 0.90; 95% CI, 0.64-1.16). An adjusted per-protocol analysis was performed to estimate the effect of screening had all participants assigned to the screening group undergone screening. In this analysis, the risk of colorectal cancer at 10 years decreased from 1.22% to 0.84% (risk ratio, 0.69; 95% CI, 0.66-0.83).
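The headline numbers in this paragraph follow from two simple calculations: the risk ratio is the ratio of the 10-year risks, and the number needed to invite is the reciprocal of the absolute risk reduction. Using the reported risks:

# Risk ratio and number needed to invite from the reported 10-year risks
risk_invited = 0.0098   # 0.98% colorectal cancer risk, invited-to-screen group
risk_usual = 0.0120     # 1.20% risk, usual-care group

risk_ratio = risk_invited / risk_usual    # ~0.82, matching the reported estimate
arr = risk_usual - risk_invited           # absolute risk reduction, 0.22 percentage points
number_needed_to_invite = 1 / arr         # ~455, matching the reported value
print(round(risk_ratio, 2), round(number_needed_to_invite))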
Conclusion: Based on the results of this European randomized trial, the risk of colorectal cancer at 10 years was lower among those who were invited to undergo screening.
Study 2 Overview (Forsberg et al)
Objective: To investigate the effect of colorectal cancer screening with once-only colonoscopy or fecal immunochemical testing (FIT) on colorectal cancer mortality and incidence.
Design: Randomized controlled trial in Sweden utilizing a population registry.
Setting and participants: Patients aged 60 years at the time of entry were identified from a population-based registry from the Swedish Tax Agency.
Intervention: Individuals were assigned by an independent statistician to once-only colonoscopy, 2 rounds of FIT 2 years apart, or a control group in which no intervention was performed. Patients were assigned in a 1:6 ratio for colonoscopy vs control and a 1:2 ratio for FIT vs control.
Main outcome measures: The primary endpoint of the trial was colorectal cancer incidence and mortality.
Main results: A total of 278,280 participants were included in the study from March 1, 2014, through December 31, 2020 (31,140 in the colonoscopy group, 60,300 in the FIT group, and 186,840 in the control group). Of those in the colonoscopy group, 35% underwent colonoscopy, and 55% of those in the FIT group participated in testing. Colorectal cancer was detected in 0.16% (49) of people in the colonoscopy group and 0.2% (121) of people in the FIT test group (relative risk, 0.78; 95% CI, 0.56-1.09). The advanced adenoma detection rate was 2.05% in the colonoscopy group and 1.61% in the FIT group (relative risk, 1.27; 95% CI, 1.15-1.41). There were 2 perforations noted in the colonoscopy group and 15 major bleeding events. More right-sided adenomas were detected in the colonoscopy group.
Conclusion: The results of the current study highlight similar detection rates in the colonoscopy and FIT groups. Should further follow-up show a benefit in disease-specific mortality, such screening strategies could be translated into population-based screening programs.
Commentary
The first colonoscopy screening recommendations were established in the mid-1990s in the United States, and over the subsequent 2 decades colonoscopy became the main recommended modality for colorectal cancer screening in this country. The advantage of colonoscopy over other screening modalities (sigmoidoscopy and fecal-based testing) is that it can examine the entire large bowel and allow for removal of potentially precancerous lesions. However, data to support colonoscopy as a screening modality for colorectal cancer are largely based on cohort studies.1,2 These studies have reported a significant reduction in the incidence of colon cancer. Additionally, colorectal cancer mortality was notably lower in the screened populations. For example, one study among health professionals found a nearly 70% reduction in colorectal cancer mortality in those who underwent at least 1 screening colonoscopy.3
There has been a lack of randomized clinical data to validate the efficacy of colonoscopy screening for reducing colorectal cancer–related deaths. The current study by Bretthauer et al addresses an important need and enhances our understanding of the efficacy of colorectal cancer screening with colonoscopy. In this randomized trial involving more than 84,000 participants from Poland, Norway, Sweden, and the Netherlands, there was a noted 18% decrease in the risk of colorectal cancer over a 10-year period in the intention-to-screen population. The reduction in the risk of death from colorectal cancer was not statistically significant (risk ratio, 0.90; 95% CI, 0.64-1.16). These results are surprising and certainly raise the question as to whether previous studies overestimated the effectiveness of colonoscopy in reducing the risk of colorectal cancer–related deaths. There are several limitations to the Bretthauer et al study, however.
Perhaps the most important limitation is that only 42% of participants in the invited-to-screen cohort underwent screening colonoscopy. This raises the question of whether the modest observed effect simply reflects low participation in the screening protocol. In the adjusted per-protocol analysis, colonoscopy was estimated to reduce the risk of colorectal cancer by 31% and the risk of colorectal cancer–related death by around 50%. These findings are more in line with prior published studies regarding the efficacy of colorectal cancer screening. The authors plan to repeat this analysis at 15 years, and it is possible that significant reductions in colorectal cancer and colorectal cancer–related death will emerge with longer follow-up.
While the results of the Bretthauer et al trial are important, randomized trials that directly compare the effectiveness of different colorectal cancer screening strategies are lacking. The Forsberg et al trial, also ongoing, seeks to address this vitally important gap. The SCREESCO trial compares once-only colonoscopy with 2 rounds of FIT 2 years apart or with no screening. The currently reported data are preliminary but show a similarly low rate of participation among those invited to undergo colonoscopy (35%), a limitation similar to that noted in the Bretthauer et al study. Furthermore, there is some question regarding colonoscopy quality in this study, which had a very low reported adenoma detection rate.
While the current studies are important and provide quality randomized data on the effect of colorectal cancer screening, many questions remain unanswered. Should the results presented by Bretthauer et al reflect the real-world scenario, colonoscopy screening may not be viewed as an effective screening tool compared with simpler, less-invasive modalities (ie, FIT). Further follow-up from the SCREESCO trial will help shed light on this question. However, there are concerns with this study, including a very low participation rate, which could greatly underestimate the effectiveness of screening. Additional analysis and longer follow-up will be vital to fully understand the benefits of screening colonoscopy. In the meantime, screening remains an important tool for early detection of colorectal cancer and remains a category A recommendation of the United States Preventive Services Task Force.4
Applications for Clinical Practice and System Implementation
Current guidelines continue to strongly recommend screening for colorectal cancer for persons between 45 and 75 years of age (category B recommendation for those aged 45 to 49 years per the United States Preventive Services Task Force). Stool-based tests and direct visualization tests are both endorsed as screening options. Further follow-up from the presented studies is needed to help shed light on the magnitude of benefit of these modalities.
Practice Points
- Current guidelines continue to strongly recommend screening for colon cancer in those aged 45 to 75 years.
- The optimal modality for screening and the impact of screening on cancer-related mortality require longer-term follow-up from these ongoing studies.
–Daniel Isaac, DO, MS
1. Lin JS, Perdue LA, Henrikson NB, Bean SI, Blasi PR. Screening for Colorectal Cancer: An Evidence Update for the U.S. Preventive Services Task Force [Internet]. Rockville (MD): Agency for Healthcare Research and Quality (US); 2021 May. Report No.: 20-05271-EF-1.
2. Lin JS, Perdue LA, Henrikson NB, Bean SI, Blasi PR. Screening for colorectal cancer: updated evidence report and systematic review for the US Preventive Services Task Force. JAMA. 2021;325(19):1978-1998. doi:10.1001/jama.2021.4417
3. Nishihara R, Wu K, Lochhead P, et al. Long-term colorectal-cancer incidence and mortality after lower endoscopy. N Engl J Med. 2013;369(12):1095-1105. doi:10.1056/NEJMoa1301969
4. U.S. Preventive Services Task Force. Colorectal cancer: screening. Published May 18, 2021. Accessed November 8, 2022. https://uspreventiveservicestaskforce.org/uspstf/recommendation/colorectal-cancer-screening
Study 1 Overview (Bretthauer et al)
Objective: To evaluate the impact of screening colonoscopy on colon cancer–related death.
Design: Randomized trial conducted in 4 European countries.
Setting and participants: Presumptively healthy men and women between the ages of 55 and 64 years were selected from population registries in Poland, Norway, Sweden, and the Netherlands between 2009 and 2014. Eligible participants had not previously undergone screening. Patients with a diagnosis of colon cancer before trial entry were excluded.
Intervention: Participants were randomly assigned in a 1:2 ratio to undergo colonoscopy screening by invitation or to no invitation and no screening. Participants were randomized using a computer-generated allocation algorithm. Patients were stratified by age, sex, and municipality.
Main outcome measures: The primary endpoint of the study was risk of colorectal cancer and related death after a median follow-up of 10 to 15 years. The main secondary endpoint was death from any cause.
Main results: The study reported follow-up data from 84,585 participants (89.1% of all participants originally included in the trial). The remaining participants were either excluded or data could not be included due to lack of follow-up data from the usual-care group. Men (50.1%) and women (49.9%) were equally represented. The median age at entry was 59 years. The median follow-up was 10 years. Characteristics were otherwise balanced. Good bowel preparation was reported in 91% of all participants. Cecal intubation was achieved in 96.8% of all participants. The percentage of patients who underwent screening was 42% for the group, but screening rates varied by country (33%-60%). Colorectal cancer was diagnosed at screening in 62 participants (0.5% of screening group). Adenomas were detected in 30.7% of participants; 15 patients had polypectomy-related major bleeding. There were no perforations.
The risk of colorectal cancer at 10 years was 0.98% in the invited-to-screen group and 1.2% in the usual-care group (risk ratio, 0.82; 95% CI, 0.70-0.93). The reported number needed to invite to prevent 1 case of colorectal cancer over a 10-year period was 455. The risk of colorectal cancer–related death at 10 years was 0.28% in the invited-to-screen group and 0.31% in the usual-care group (risk ratio, 0.90; 95% CI, 0.64-1.16). An adjusted per-protocol analysis was performed to estimate the effect of screening if all participants assigned to the screening group had undergone screening. In this analysis, the risk of colorectal cancer at 10 years decreased from 1.22% to 0.84% (risk ratio, 0.69; 95% CI, 0.66-0.83).
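The number needed to invite follows directly from the absolute difference between these 10-year risks. As a worked illustration (a minimal sketch using the published point estimates; the variable names are ours, and small rounding differences from the paper are expected):

```python
# Recompute absolute risk reduction (ARR), number needed to invite (NNI), and
# risk ratio from the reported 10-year colorectal cancer risks.
risk_usual_care = 0.0120   # 1.2% 10-year risk, usual care
risk_invited = 0.0098      # 0.98% 10-year risk, invited to screen

arr = risk_usual_care - risk_invited   # absolute risk reduction
nni = 1 / arr                          # number needed to invite
rr = risk_invited / risk_usual_care    # risk ratio

print(f"ARR = {arr * 100:.2f} percentage points")  # 0.22
print(f"NNI ≈ {nni:.0f}")                          # ≈ 455
print(f"Risk ratio ≈ {rr:.2f}")                    # ≈ 0.82
```

The same arithmetic applied to the adjusted per-protocol risks (1.22% vs 0.84%) yields a correspondingly smaller number needed to screen.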
Conclusion: Based on the results of this European randomized trial, the risk of colorectal cancer at 10 years was lower among those who were invited to undergo screening.
Study 2 Overview (Forsberg et al)
Objective: To investigate the effect of colorectal cancer screening with once-only colonoscopy or fecal immunochemical testing (FIT) on colorectal cancer mortality and incidence.
Design: Randomized controlled trial in Sweden utilizing a population registry.
Setting and participants: Patients aged 60 years at the time of entry were identified from a population-based registry from the Swedish Tax Agency.
Intervention: Individuals were assigned by an independent statistician to once-only colonoscopy, 2 rounds of FIT 2 years apart, or a control group in which no intervention was performed. Patients were assigned in a 1:6 ratio for colonoscopy vs control and a 1:2 ratio for FIT vs control.
Main outcome measures: The primary endpoint of the trial was colorectal cancer incidence and mortality.
Main results: A total of 278,280 participants were included in the study from March 1, 2014, through December 31, 2020 (31,140 in the colonoscopy group, 60,300 in the FIT group, and 186,840 in the control group). Of those in the colonoscopy group, 35% underwent colonoscopy, and 55% of those in the FIT group participated in testing. Colorectal cancer was detected in 0.16% (49) of people in the colonoscopy group and 0.20% (121) of people in the FIT group (relative risk, 0.78; 95% CI, 0.56-1.09). The advanced adenoma detection rate was 2.05% in the colonoscopy group and 1.61% in the FIT group (relative risk, 1.27; 95% CI, 1.15-1.41). There were 2 perforations noted in the colonoscopy group and 15 major bleeding events. More right-sided adenomas were detected in the colonoscopy group.
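For readers who want to see how the reported relative risk and its confidence interval follow from these counts, the sketch below applies the standard log-transformed Wald interval to the published event counts; this is a textbook approximation, not necessarily the exact method used in the trial's analysis.

```python
import math

# Colorectal cancer detection: 49 of 31,140 (colonoscopy) vs 121 of 60,300 (FIT).
events_colo, n_colo = 49, 31_140
events_fit, n_fit = 121, 60_300

rr = (events_colo / n_colo) / (events_fit / n_fit)
se_log_rr = math.sqrt(1 / events_colo - 1 / n_colo + 1 / events_fit - 1 / n_fit)
lower = math.exp(math.log(rr) - 1.96 * se_log_rr)
upper = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.2f} (95% CI, {lower:.2f}-{upper:.2f})")  # ≈ 0.78 (0.56-1.09)
```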
Conclusion: The results of the current study highlight similar detection rates in the colonoscopy and FIT groups. Should further follow-up show a benefit in disease-specific mortality, such screening strategies could be translated into population-based screening programs.
Commentary
The first colonoscopy screening recommendations were established in the United States in the mid-1990s, and over the subsequent 2 decades colonoscopy became the main recommended modality for colorectal cancer screening in this country. The advantage of colonoscopy over other screening modalities (sigmoidoscopy and fecal-based testing) is that it examines the entire large bowel and allows for removal of potential precancerous lesions. However, data to support colonoscopy as a screening modality for colorectal cancer are largely based on cohort studies.1,2 These studies have reported a significant reduction in the incidence of colon cancer. Additionally, colorectal cancer mortality was notably lower in the screened populations. For example, one study among health professionals found a nearly 70% reduction in colorectal cancer mortality in those who underwent at least 1 screening colonoscopy.3
Randomized data validating the efficacy of colonoscopy screening in reducing colorectal cancer–related deaths have been lacking. The current study by Bretthauer et al addresses this need and enhances our understanding of the efficacy of colorectal cancer screening with colonoscopy. In this randomized trial involving more than 84,000 participants from Poland, Norway, Sweden, and the Netherlands, there was an 18% decrease in the risk of colorectal cancer over a 10-year period in the intention-to-screen population. The reduction in the risk of death from colorectal cancer was not statistically significant (risk ratio, 0.90; 95% CI, 0.64-1.16). These results are surprising and raise the question of whether previous studies overestimated the effectiveness of colonoscopy in reducing the risk of colorectal cancer–related deaths. There are, however, several limitations to the Bretthauer et al study.
Perhaps the most important limitation is that only 42% of participants in the invited-to-screen cohort underwent screening colonoscopy, which raises the question of whether the modest efficacy observed simply reflects low participation in the screening protocol. In the adjusted per-protocol analysis, colonoscopy was estimated to reduce the risk of colorectal cancer by 31% and the risk of colorectal cancer–related death by around 50%. These findings are more in line with prior published studies of the efficacy of colorectal cancer screening. The authors plan to repeat this analysis at 15 years, and it is possible that larger reductions in colorectal cancer and colorectal cancer–related death will emerge with longer follow-up.
While the results of the Bretthauer et al trial are important, randomized trials that directly compare the effectiveness of different colorectal cancer screening strategies are lacking. The Forsberg et al trial (SCREESCO), which is ongoing, seeks to address this important gap by comparing once-only colonoscopy, 2 rounds of FIT 2 years apart, and no screening. The currently reported data are preliminary but show a similarly low rate of colonoscopy screening among those invited (35%), a limitation shared with the Bretthauer et al study. Furthermore, there is some question regarding colonoscopy quality in this study, which had a very low reported adenoma detection rate.
While the current studies are important and provide quality randomized data on the effect of colorectal cancer screening, many questions remain unanswered. If the results presented by Bretthauer et al reflect the current real-world scenario, colonoscopy may not prove more effective than simpler, less-invasive modalities (ie, FIT). Further follow-up from the SCREESCO trial will help shed light on this question. However, there are concerns with this study, including a very low participation rate, which could greatly underestimate the effectiveness of screening. Additional analysis and longer follow-up will be vital to fully understand the benefits of screening colonoscopy. In the meantime, screening remains an important tool for early detection of colorectal cancer and carries a grade A recommendation from the United States Preventive Services Task Force.4
Applications for Clinical Practice and System Implementation
Current guidelines continue to strongly recommend screening for colorectal cancer for persons between 45 and 75 years of age (grade B recommendation for those aged 45 to 49 years per the United States Preventive Services Task Force). Stool-based tests and direct visualization tests are both endorsed as screening options. Further follow-up from the presented studies is needed to help shed light on the magnitude of benefit of these modalities.
Practice Points
- Current guidelines continue to strongly recommend screening for colorectal cancer in those aged 45 to 75 years.
- The optimal modality for screening and the impact of screening on cancer-related mortality require longer-term follow-up from these ongoing studies.
–Daniel Isaac, DO, MS
1. Lin JS, Perdue LA, Henrikson NB, Bean SI, Blasi PR. Screening for Colorectal Cancer: An Evidence Update for the U.S. Preventive Services Task Force [Internet]. Rockville (MD): Agency for Healthcare Research and Quality (US); 2021 May. Report No.: 20-05271-EF-1.
2. Lin JS, Perdue LA, Henrikson NB, Bean SI, Blasi PR. Screening for colorectal cancer: updated evidence report and systematic review for the US Preventive Services Task Force. JAMA. 2021;325(19):1978-1998. doi:10.1001/jama.2021.4417
3. Nishihara R, Wu K, Lochhead P, et al. Long-term colorectal-cancer incidence and mortality after lower endoscopy. N Engl J Med. 2013;369(12):1095-1105. doi:10.1056/NEJMoa1301969
4. U.S. Preventive Services Task Force. Colorectal cancer: screening. Published May 18, 2021. Accessed November 8, 2022. https://uspreventiveservicestaskforce.org/uspstf/recommendation/colorectal-cancer-screening
Safety and Efficacy of GLP-1 Receptor Agonists and SGLT2 Inhibitors Among Veterans With Type 2 Diabetes
Selecting the best medication regimen for a patient with type 2 diabetes mellitus (T2DM) depends on many factors, such as glycemic control, adherence, adverse effect (AE) profile, and comorbid conditions.1 Selected agents from 2 newer medication classes, glucagon-like peptide 1 receptor agonists (GLP-1 RA) and sodium-glucose cotransporter 2 inhibitors (SGLT2i), have demonstrated cardiovascular and renal protective properties, creating a new paradigm in management.
The American Diabetes Association recommends medications with proven benefit in cardiovascular disease (CVD), such as the GLP-1 RAs liraglutide, injectable semaglutide, or dulaglutide, or the SGLT2i empagliflozin or canagliflozin, as second-line after metformin in patients with established atherosclerotic CVD or indicators of high risk to reduce the risk of major adverse cardiovascular events (MACE).1 SGLT2i are preferred in patients with diabetic kidney disease, and GLP-1 RAs are next in line for selection of agents with proven nephroprotection (liraglutide, injectable semaglutide, dulaglutide). The mechanisms of these benefits are not fully understood but may be due to their extraglycemic effects. The classes likely induce these benefits by different mechanisms: SGLT2i by hemodynamic effects and GLP-1 RAs by anti-inflammatory mechanisms.2 Although there is much interest, evidence is limited regarding the cardiovascular and renal protection benefits of these agents used in combination.
The combined use of GLP-1 RA and SGLT2i agents has demonstrated greater benefit than separate use in trials with nonveteran populations.3-7 These studies evaluated effects on hemoglobin A1c (HbA1c) levels, weight loss, blood pressure (BP), and estimated glomerular filtration rate (eGFR). A meta-analysis of 7 trials found that the combination of GLP-1 RA and SGLT2i reduced HbA1c levels, body weight, and systolic blood pressure (SBP).8 All of these changes were statistically significant except for body weight with combination vs SGLT2i alone. Combination therapy was not associated with an increased risk of severe hypoglycemia compared with either therapy separately.
The purpose of our study was to evaluate the safety and efficacy of the combined use of GLP-1 RA and SGLT2i in a real-world, US Department of Veterans Affairs (VA) population with T2DM.
Methods
This study was a pre-post, retrospective, single-center chart review in which subjects served as their own controls. The project was reviewed and approved by the VA Ann Arbor Healthcare System Institutional Review Board. Records of subjects prescribed both a GLP-1 RA (semaglutide or liraglutide) and an SGLT2i (empagliflozin) between January 1, 2014, and November 10, 2019, were extracted from the Corporate Data Warehouse (CDW) for possible inclusion in the study.
Patients were excluded if they received < 12 weeks of combination GLP-1 RA and SGLT2i therapy or did not have a corresponding 12-week HbA1c level. Patients also were excluded if they had < 12 weeks of monotherapy before starting combination therapy, did not have a baseline HbA1c level, or did not have the start date of combination therapy recorded in the VA electronic health record (EHR). We reviewed data for each patient from 6 months before to 1 year after the second agent was started. The start of the first agent (GLP-1 RA or SGLT2i) was recorded as the date the prescription was picked up in person or 7 days after the release date if mailed to the patient. The start of the second agent (GLP-1 RA or SGLT2i) was defined as baseline and was the date the prescription was picked up in person or 7 days after the release date if mailed.
Baseline measures were taken anytime from 8 weeks after the start of the first agent through 2 weeks after the start of the second agent. Data collected included age, sex, race, height, weight, BP, HbA1c levels, serum creatinine (SCr), eGFR, classes of medications for the treatment of T2DM, and the number of prescribed antihypertensive medications. HbA1c levels, SCr, eGFR, weight, and BP also were collected at 12 weeks (within 8-21 weeks), 26 weeks (within 22-35 weeks), and 52 weeks (within 36-57 weeks) of combination therapy. We reviewed progress notes and laboratory results to identify AEs within the 26 weeks before initiating the second agent (baseline) and at 0 to 26 weeks and 26 to 52 weeks after initiating combination therapy.
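To illustrate how measurements might be assigned to these visit windows, the sketch below buckets a laboratory draw by its distance from the combination start date; the function and window names are illustrative only and are not the authors' actual data-collection code.

```python
from datetime import date
from typing import Optional

# Visit windows in days after the combination (second-agent) start date:
# 12 weeks = weeks 8-21, 26 weeks = weeks 22-35, 52 weeks = weeks 36-57.
WINDOWS = {
    "12 weeks": (8 * 7, 21 * 7),
    "26 weeks": (22 * 7, 35 * 7),
    "52 weeks": (36 * 7, 57 * 7),
}

def assign_window(combo_start: date, draw_date: date) -> Optional[str]:
    """Return the visit window a lab draw falls into, or None if outside all windows."""
    days_elapsed = (draw_date - combo_start).days
    for label, (low, high) in WINDOWS.items():
        if low <= days_elapsed <= high:
            return label
    return None

# A draw 100 days after starting the second agent falls in the 12-week window.
print(assign_window(date(2018, 1, 1), date(2018, 4, 11)))  # "12 weeks"
```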
The primary objective was to determine the effect on HbA1c levels at 12 weeks when using a GLP-1 RA and SGLT2i in combination vs separately. Secondary objectives were to determine change from baseline in mean body weight, BP, SCr, and eGFR at 12, 26, and 52 weeks; change in HbA1c levels at 26 and 52 weeks; and incidence of prespecified adverse drug reactions during combination therapy vs separately.
Statistical Analysis
Assuming an SD of 1, 80% power, a significance level of P < .05, a 2-sided test, and a correlation between baseline and follow-up of 0.5, we determined that a sample size of 34 subjects was required to detect a 0.5% change from baseline HbA1c level at 12 weeks. A t test (or Wilcoxon signed rank test if the outcome was not normally distributed) was conducted to examine whether the change from baseline differed from 0 for continuous outcomes. Median change from baseline was reported for SCr because the nonparametric Wilcoxon signed rank test was used.
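The stated sample size can be reproduced with a standard paired t-test power calculation: with an SD of 1 and a baseline–follow-up correlation of 0.5, the SD of the within-patient change is 1 × √(2 × (1 − 0.5)) = 1, so a detectable change of 0.5% corresponds to a standardized effect size of 0.5. The sketch below reconstructs this calculation with statsmodels; it assumes that library is available and is our reconstruction rather than the authors' code.

```python
import math
from statsmodels.stats.power import TTestPower

sd, correlation = 1.0, 0.5
delta = 0.5                                        # detectable change in HbA1c (%)
sd_change = sd * math.sqrt(2 * (1 - correlation))  # SD of paired differences = 1.0
effect_size = delta / sd_change                    # standardized effect = 0.5

n = TTestPower().solve_power(effect_size=effect_size, alpha=0.05, power=0.80,
                             alternative="two-sided")
print(math.ceil(n))  # 34, matching the reported sample size
```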
Results
We identified 110 patients for possible study inclusion; after record review, 39 met eligibility criteria. Thirty patients were excluded for receiving < 12 weeks of combination therapy or having no 12-week HbA1c level; 26 patients were excluded for receiving < 12 weeks of monotherapy before starting combination therapy or having no baseline HbA1c level; and 15 patients were excluded for lack of documentation in the VA EHR. Of the 39 patients included, 24 (62%) were prescribed empagliflozin first; of these, 8 subsequently started liraglutide and 16 started semaglutide.
HbA1c levels decreased by 1% after 12 weeks of combination therapy compared with baseline (P < .001), and this reduction was sustained through the duration of the study period (Table 2).
The most common AE during the trial was hypoglycemia, which was mostly mild (level 1) (Table 3).
Discussion
This study evaluated the safety and efficacy of combined use of semaglutide or liraglutide and empagliflozin in a veteran population with T2DM. The retrospective chart review captured real-world practice and outcomes. Combination therapy was associated with a significant reduction in HbA1c levels, body weight, and SBP compared with either agent alone. No significant change was seen in diastolic blood pressure (DBP), SCr, or eGFR. Overall, the combination of GLP-1 RA and SGLT2i medications demonstrated a good safety profile, with most patients reporting no AEs.
Several other studies have assessed the safety and efficacy of using a GLP-1 RA and SGLT2i in combination. The DURATION-8 trial is the only double-blind trial to randomize subjects to exenatide once weekly, dapagliflozin, or their combination for up to 52 weeks.3 Other controlled trials required stable background therapy with either an SGLT2i or a GLP-1 RA before randomization to the other class or placebo and had durations between 18 and 30 weeks.4-7 The AWARD-10 trial studied the combination of canagliflozin and dulaglutide, which both have proven CVD benefit.4 Other studies did not restrict SGLT2i or GLP-1 RA background therapy to agents with proven CVD benefit.5-7 The present study evaluated the combination of empagliflozin plus liraglutide or semaglutide, agents that all have proven CVD benefit.
A meta-analysis of 7 trials, including those previously mentioned, evaluated the combination of GLP-1 RA and SGLT2i.8 The combination significantly reduced HbA1c levels by 0.61% and 0.85% compared with GLP-1 RA or SGLT2i alone, respectively. Our study showed a greater HbA1c reduction of 1% with combination therapy compared with either agent separately. This may have been due in part to a higher baseline HbA1c level in our real-world veteran population. The meta-analysis found the combination decreased body weight by 2.6 kg and 1.5 kg compared with GLP-1 RA or SGLT2i alone, respectively, although this reached significance only in the comparison with GLP-1 RA alone.8 Our study demonstrated weight loss of up to about 5 kg after 26 and 52 weeks of combination therapy, equivalent to about 5% of baseline weight, which is clinically significant.9 Liraglutide and semaglutide are the GLP-1 RAs associated with the greatest weight loss, which may have contributed to the greater weight loss seen in the present study.1
SBP fell more in our study than in the meta-analysis, in which combination therapy significantly reduced SBP by 4.1 mm Hg and 2.7 mm Hg compared with GLP-1 RA or SGLT2i alone, respectively.8 We observed a significant 9 to 12 mm Hg reduction in SBP after 26 to 52 weeks of combination therapy compared with baseline. This reduction occurred despite relatively controlled SBP at baseline (135 mm Hg). Each 10 mm Hg reduction in SBP significantly reduces the risk of MACE, stroke, and heart failure, making our results clinically significant.10 Neither the meta-analysis nor the present study found a significant difference in DBP or eGFR with combination therapy.
AEs were similar in this study and in the meta-analysis. Combination treatment with GLP-1 RA and SGLT2i did not increase the incidence of severe hypoglycemia in either study.8 Hypoglycemia was the most common AE in this study, but its frequency was similar with combination and separate therapy. Both medication classes are associated with low or no risk of hypoglycemia on their own.1 Baseline medications likely contributed to the episodes of hypoglycemia seen in this study: about 80% of patients were prescribed basal insulin, 15% were prescribed a sulfonylurea, and 13% were prescribed prandial insulin. There is limited overlap between the known AEs of GLP-1 RA and SGLT2i, making combination therapy a safe option for use in patients with T2DM.
Our study confirms greater reductions in HbA1c levels, weight, and SBP with combined use of GLP-1 RA and SGLT2i medications than with separate use in a real-world veteran population. The magnitude of change seen in this population appears greater than in previous studies.
Limitations
There were several limitations to our study. Because of the retrospective design, many patients did not have bloodwork drawn during the specified time frames; as a result, many patients were excluded, and missing data on renal outcomes limited the power to detect differences. Data regarding AEs were limited to what was recorded in the EHR, which may underrepresent the AEs that patients experienced. Finally, our study size was small and consisted primarily of White and male patients, which may limit generalizability.
Further research is needed to validate these findings in this population and should include a larger study population. The impact of combining GLP-1 RA with SGLT2i on cardiorenal outcomes is an important area of ongoing research.
Conclusions
The combined use of GLP-1 RA and SGLT2i resulted in significant improvement in HbA1c levels, weight, and SBP compared with separate use in this real-world study of a VA population with T2DM. The combination was well tolerated overall. Awareness of these results can facilitate optimal care and outcomes in the VA population.
Acknowledgments
Serena Kelley, PharmD, and Michael Brenner, PharmD, assisted with study design and initial data collection. Julie Strominger, MS, provided statistical support.
1. American Diabetes Association. 9. Pharmacologic approaches to glycemic treatment: standards of medical care in diabetes-2021. Diabetes Care. 2021;44(suppl 1):S111-S124. doi:10.2337/dc21-S009
2. DeFronzo RA. Combination therapy with GLP-1 receptor agonist and SGLT2 inhibitor. Diabetes Obes Metab. 2017;19(10):1353-1362. doi:10.1111/dom.12982
3. Jabbour S, Frias J, Guja C, Hardy E, Ahmed A, Ohman P. Effects of exenatide once weekly plus dapagliflozin, exenatide once weekly, or dapagliflozin, added to metformin monotherapy, on body weight, systolic blood pressure, and triglycerides in patients with type 2 diabetes in the DURATION-8 study. Diabetes Obes Metab. 2018;20(6):1515-1519. doi:10.1111/dom.13206
4. Ludvik B, Frias J, Tinahones F, et al. Dulaglutide as add-on therapy to SGLT2 inhibitors in patients with inadequately controlled type 2 diabetes (AWARD-10): a 24-week, randomised, double-blind, placebo-controlled trial. Lancet Diabetes Endocrinol. 2018;6(5):370-381. doi:10.1016/S2213-8587(18)30023-8
5. Blonde L, Belousova L, Fainberg U, et al. Liraglutide as add-on to sodium-glucose co-transporter-2 inhibitors in patients with inadequately controlled type 2 diabetes: LIRA-ADD2SGLT2i, a 26-week, randomized, double-blind, placebo-controlled trial. Diabetes Obes Metab. 2020;22(6):929-937. doi:10.1111/dom.13978
6. Fulcher G, Matthews D, Perkovic V, et al; CANVAS trial collaborative group. Efficacy and safety of canagliflozin when used in conjunction with incretin-mimetic therapy in patients with type 2 diabetes. Diabetes Obes Metab. 2016;18(1):82-91. doi:10.1111/dom.12589
7. Zinman B, Bhosekar V, Busch R, et al. Semaglutide once weekly as add-on to SGLT-2 inhibitor therapy in type 2 diabetes (SUSTAIN 9): a randomised, placebo-controlled trial. Lancet Diabetes Endocrinol. 2019;7(5):356-367. doi:10.1016/S2213-8587(19)30066-X
8. Mantsiou C, Karagiannis T, Kakotrichi P, et al. Glucagon-like peptide-1 receptor agonists and sodium-glucose co-transporter-2 inhibitors as combination therapy for type 2 diabetes: a systematic review and meta-analysis. Diabetes Obes Metab. 2020;22(10):1857-1868. doi:10.1111/dom.14108
9. US Department of Veterans Affairs, Department of Defense. VA/DoD clinical practice guideline for the management of adult overweight and obesity. Version 3.0. Accessed August 18, 2022. www.healthquality.va.gov/guidelines/CD/obesity/VADoDObesityCPGFinal5087242020.pdf
10. Ettehad D, Emdin CA, Kiran A, et al. Blood pressure lowering for prevention of cardiovascular disease and death: a systematic review and meta-analysis. Lancet. 2015;387(10022):957-967. doi:10.1016/S0140-6736(15)01225-8
Selecting the best medication regimen for a patient with type 2 diabetes mellitus (T2DM) depends on many factors, such as glycemic control, adherence, adverse effect (AE) profile, and comorbid conditions.1 Selected agents from 2 newer medication classes, glucagon-like peptide 1 receptor agonists (GLP-1 RA) and sodium-glucose cotransporter 2 inhibitors (SGLT2i), have demonstrated cardiovascular and renal protective properties, creating a new paradigm in management.
The American Diabetes Association recommends medications with proven benefit in cardiovascular disease (CVD), such as the GLP-1 RAs liraglutide, injectable semaglutide, or dulaglutide, or the SGLT2i empagliflozin or canagliflozin, as second-line after metformin in patients with established atherosclerotic CVD or indicators of high risk to reduce the risk of major adverse cardiovascular events (MACE).1 SGLT2i are preferred in patients with diabetic kidney disease, and GLP-1 RAs are next in line for selection of agents with proven nephroprotection (liraglutide, injectable semaglutide, dulaglutide). The mechanisms of these benefits are not fully understood but may be due to their extraglycemic effects. The classes likely induce these benefits by different mechanisms: SGLT2i by hemodynamic effects and GLP-1 RAs by anti-inflammatory mechanisms.2 Although there is much interest, evidence is limited regarding the cardiovascular and renal protection benefits of these agents used in combination.
The combined use of GLP-1 RA and SGLT2i agents demonstrated greater benefit than separate use in trials with nonveteran populations.3-7 These studies evaluated effects on hemoglobin A1c (HbA1c) levels, weight loss, blood pressure (BP), and estimated glomerular filtration rate (eGFR).A meta-analysis of 7 trials found that the combination of GLP-1 RA and SGLT2i reduced HbA1c levels, body weight, and systolic blood pressure (SBP).8 All of the changes were statistically significant except for body weight with combination vs SGLT2i alone. Combination therapy was not associated with increased risk of severe hypoglycemia compared with either therapy separately.
The purpose of our study was to evaluate the safety and efficacy of the combined use of GLP-1 RA and SGLT2i in a real-world, US Department of Veterans Affairs (VA) population with T2DM.
Methods
This study was a pre-post, retrospective, single-center chart review. Subjects served as their own control. The project was reviewed and approved by the VA Ann Arbor Healthcare System Institutional Review Board. Subjects prescribed both a GLP-1 RA (semaglutide or liraglutide) and SGLT2i (empagliflozin) between January 1, 2014, and November 10, 2019, were extracted from the Corporate Data Warehouse (CDW) for possible inclusion in the study.
Patients were excluded if they received < 12 weeks of combination GLP-1 RA and SGLT2i therapy or did not have a corresponding 12-week HbA1c level. Patients also were excluded if they had < 12 weeks of monotherapy before starting combination therapy or did not have a baseline HbA1c level, or if the start date of combination therapy was not recorded in the VA electronic health record (EHR). We reviewed data for each patient from 6 months before to 1 year after the second agent was started. Start of the first agent (GLP-1 RA or SGLT2i) was recorded as the date the prescription was picked up in-person or 7 days after release date if mailed to the patient. Start of the second agent (GLP-1 RA or SGLT2i) was defined as baseline and was the date the prescription was picked up in person or 7 days after the release date if mailed.
Baseline measures were taken anytime from 8 weeks after the start of the first agent through 2 weeks after the start of the second agent. Data collected included age, sex, race, height, weight, BP, HbA1c levels, serum creatinine (SCr), eGFR, classes of medications for the treatment of T2DM, and the number of prescribed antihypertensive medications. HbA1c levels, SCr, eGFR, weight, and BP also were collected at 12 weeks (within 8-21 weeks); 26 weeks (within 22-35 weeks); and 52 weeks (within 36-57 weeks) of combination therapy. We reviewed progress notes and laboratory results to determine AEs within 26 weeks before initiating second agent (baseline) and 0 to 26 weeks and 26 to 52 weeks after initiating combination therapy.
The primary objective was to determine the effect on HbA1c levels at 12 weeks when using a GLP-1 RA and SGLT2i in combination vs separately. Secondary objectives were to determine change from baseline in mean body weight, BP, SCr, and eGFR at 12, 26, and 52 weeks; change in HbA1c levels at 26 and 52 weeks; and incidence of prespecified adverse drug reactions during combination therapy vs separately.
Statistical Analysis
Assuming a SD of 1, 80% power, significance level of P < .05, 2-sided test, and a correlation between baseline and follow-up of 0.5, we determined that a sample size of 34 subjects was required to detect a 0.5% change in baseline HbA1c level at 12 weeks. A t test (or Wilcoxon signed rank test if outcome not normally distributed) was conducted to examine whether the expected change from baseline was different from 0 for continuous outcomes. Median change from baseline was reported for SCr as a nonparametric t test (Wilcoxon signed rank test) was used.
Results
We identified 110 patients for possible study inclusion and 39 met eligibility criteria. After record review, 30 patients were excluded for receiving < 12 weeks of combination therapy or no 12 week HbA1c level; 26 patients were excluded for receiving < 12 weeks of monotherapy before starting combination therapy or no baseline HbA1c level; and 15 patients were excluded for lack of documentation in the VA EHR. Of the 39 patients included, 24 (62%) were prescribed empagliflozin first and then 8 started liraglutide and 16 started semaglutide.
HbA1c levels decreased by 1% after 12 weeks of combination therapy compared with baseline (P < .001), and this reduction was sustained through the duration of the study period (Table 2).
The most common AE during the trial was hypoglycemia, which was mostly mild (level 1) (Table 3).
Discussion
This study evaluated the safety and efficacy of combined use of semaglutide or liraglutide and empagliflozin in a veteran population with T2DM. The retrospective chart review captured real-world practice and outcomes. Combination therapy was associated with a significant reduction in HbA1c levels, body weight, and SBP compared with either agent alone. No significant change was seen in DBP, SCr, or eGFR. Overall, the combination of GLP-1 RA and SGLT2i medications demonstrated a good safety profile with most patients reporting no AEs.
Several other studies have assessed the safety and efficacy of using GLP-1 RA and SGLT2i in combination. The DURATION 8 trial is the only double-blind trial to randomize subjects to receive either exenatide once weekly, dapagliflozin, or the combination of both for up to 52 weeks.3 Other controlled trials required stable background therapy with either SGLT2i or GLP-1 RA before randomization to receive the other class or placebo and had durations between 18 and 30 weeks.4-7 The AWARD 10 trial studied the combination of canagliflozin and dulaglutide, which both have proven CVD benefit.4 Other studies did not restrict SGLT2i or GLP-1 RA background therapy to agents with proven CVD benefit.5-7 The present study evaluated the combination of empagliflozin plus liraglutide or semaglutide, agents that all have proven CVD benefit.
A meta-analysis of 7 trials, including those previously mentioned, was conducted to evaluate the combination of GLP-1 RA and SGLT2i.8 The combination significantly reduced HbA1c levels by 0.61% and 0.85% compared with GLP-1 RA or SGLT2i, respectively. Our trial showed greater HbA1c level reduction of 1% with combination therapy compared with either agent separately. This may have been due in part to a higher baseline HbA1c level in our real-world veteran population. The meta-analysis found the combination decreased body weight 2.6 kg and 1.5 kg compared with GLP-1 RA or SGLT2i, respectively.8 This only reached significance with comparison vs GLP-1 RA alone. Our study demonstrated impressive weight loss of up to about 5 kg after 26 and 52 weeks of combination therapy. This is equivalent to about 5% weight loss from baseline, which is clinically significant.9 Liraglutide and semaglutide are the GLP-1 RAs associated with the greatest weight loss, which may contribute to greater weight loss efficacy seen in the present trial.1
In our trial SBP fell lower compared with the meta-analysis. Combination therapy significantly reduced SBP by 4.1 mm Hg and 2.7 mm Hg compared with GLP-1 RA or SGLT2i, respectively, in the meta-analysis.8 We observed a significant 9 to 12 mm Hg reduction in SBP after 26 to 52 weeks of combination therapy compared with baseline. This reduction occurred despite relatively controlled SBP at baseline (135 mm Hg). Each reduction of 10 mm Hg in SBP significantly reduces the risk of MACE, stroke, and heart failure, making our results clinically significant.10 Neither the meta-analysis nor present study found a significant difference in DBP or eGFR with combination therapy.
AEs were similar in this trial compared with the meta-analysis. Combination treatment with GLP-1 RA and SGLT2i did not increase the incidence of severe hypoglycemia in either study.8 Hypoglycemia was the most common AE in this study, but frequency was similar with combination and separate therapy. Both medication classes are associated with low or no risk of hypoglycemia on their own.1 Baseline medications likely contributed to episodes of hypoglycemia seen in this study: About 80% of patients were prescribed basal insulin, 15% were prescribed a sulfonylurea, and 13% were prescribed prandial insulin. There is limited overlap between the known AEs of GLP-1 RA and SGLT2i, making combination therapy a safe option for use in patients with T2DM.
Our study confirms greater reduction in HbA1c levels, weight, and SBP in veterans taking GLP-1 RA and SGLT2i medications in combination compared with separate use in a real-world setting in a veteran population. The magnitude of change seen in this population appears greater compared with previous studies.
Limitations
There were several limitations to our study. Given the retrospective nature, many patients included in the study did not have bloodwork drawn during the specified time frames. Because of this, many patients were excluded and missing data on renal outcomes limited the power to detect differences. Data regarding AEs were limited to what was recorded in the EHR, which may underrepresent the AEs that patients experienced. Finally, our study size was small, consisting primarily of a White and male population, which may limit generalizability.
Further research is needed to validate these findings in this population and should include a larger study population. The impact of combining GLP-1 RA with SGLT2i on cardiorenal outcomes is an important area of ongoing research.
ConclusionS
The combined use of GLP-1 RA and SGLT2i resulted in significant improvement in HbA1c levels, weight, and SBP compared with separate use in this real-world study of a VA population with T2DM. The combination was well tolerated overall. Awareness of these results can facilitate optimal care and outcomes in the VA population.
Acknowledgments
Serena Kelley, PharmD, and Michael Brenner, PharmD, assisted with study design and initial data collection. Julie Strominger, MS, provided statistical support.
Selecting the best medication regimen for a patient with type 2 diabetes mellitus (T2DM) depends on many factors, such as glycemic control, adherence, adverse effect (AE) profile, and comorbid conditions.1 Selected agents from 2 newer medication classes, glucagon-like peptide 1 receptor agonists (GLP-1 RA) and sodium-glucose cotransporter 2 inhibitors (SGLT2i), have demonstrated cardiovascular and renal protective properties, creating a new paradigm in management.
The American Diabetes Association recommends medications with proven benefit in cardiovascular disease (CVD), such as the GLP-1 RAs liraglutide, injectable semaglutide, or dulaglutide, or the SGLT2i empagliflozin or canagliflozin, as second-line after metformin in patients with established atherosclerotic CVD or indicators of high risk to reduce the risk of major adverse cardiovascular events (MACE).1 SGLT2i are preferred in patients with diabetic kidney disease, and GLP-1 RAs are next in line for selection of agents with proven nephroprotection (liraglutide, injectable semaglutide, dulaglutide). The mechanisms of these benefits are not fully understood but may be due to their extraglycemic effects. The classes likely induce these benefits by different mechanisms: SGLT2i by hemodynamic effects and GLP-1 RAs by anti-inflammatory mechanisms.2 Although there is much interest, evidence is limited regarding the cardiovascular and renal protection benefits of these agents used in combination.
The combined use of GLP-1 RA and SGLT2i agents demonstrated greater benefit than separate use in trials with nonveteran populations.3-7 These studies evaluated effects on hemoglobin A1c (HbA1c) levels, weight loss, blood pressure (BP), and estimated glomerular filtration rate (eGFR).A meta-analysis of 7 trials found that the combination of GLP-1 RA and SGLT2i reduced HbA1c levels, body weight, and systolic blood pressure (SBP).8 All of the changes were statistically significant except for body weight with combination vs SGLT2i alone. Combination therapy was not associated with increased risk of severe hypoglycemia compared with either therapy separately.
The purpose of our study was to evaluate the safety and efficacy of the combined use of GLP-1 RA and SGLT2i in a real-world, US Department of Veterans Affairs (VA) population with T2DM.
Methods
This study was a pre-post, retrospective, single-center chart review. Subjects served as their own control. The project was reviewed and approved by the VA Ann Arbor Healthcare System Institutional Review Board. Subjects prescribed both a GLP-1 RA (semaglutide or liraglutide) and SGLT2i (empagliflozin) between January 1, 2014, and November 10, 2019, were extracted from the Corporate Data Warehouse (CDW) for possible inclusion in the study.
Patients were excluded if they received < 12 weeks of combination GLP-1 RA and SGLT2i therapy or did not have a corresponding 12-week HbA1c level. Patients also were excluded if they had < 12 weeks of monotherapy before starting combination therapy or did not have a baseline HbA1c level, or if the start date of combination therapy was not recorded in the VA electronic health record (EHR). We reviewed data for each patient from 6 months before to 1 year after the second agent was started. Start of the first agent (GLP-1 RA or SGLT2i) was recorded as the date the prescription was picked up in-person or 7 days after release date if mailed to the patient. Start of the second agent (GLP-1 RA or SGLT2i) was defined as baseline and was the date the prescription was picked up in person or 7 days after the release date if mailed.
Baseline measures were taken anytime from 8 weeks after the start of the first agent through 2 weeks after the start of the second agent. Data collected included age, sex, race, height, weight, BP, HbA1c levels, serum creatinine (SCr), eGFR, classes of medications for the treatment of T2DM, and the number of prescribed antihypertensive medications. HbA1c levels, SCr, eGFR, weight, and BP also were collected at 12 weeks (within 8-21 weeks); 26 weeks (within 22-35 weeks); and 52 weeks (within 36-57 weeks) of combination therapy. We reviewed progress notes and laboratory results to determine AEs within 26 weeks before initiating second agent (baseline) and 0 to 26 weeks and 26 to 52 weeks after initiating combination therapy.
The primary objective was to determine the effect on HbA1c levels at 12 weeks when using a GLP-1 RA and SGLT2i in combination vs separately. Secondary objectives were to determine change from baseline in mean body weight, BP, SCr, and eGFR at 12, 26, and 52 weeks; change in HbA1c levels at 26 and 52 weeks; and incidence of prespecified adverse drug reactions during combination therapy vs separately.
Statistical Analysis
Assuming a SD of 1, 80% power, significance level of P < .05, 2-sided test, and a correlation between baseline and follow-up of 0.5, we determined that a sample size of 34 subjects was required to detect a 0.5% change in baseline HbA1c level at 12 weeks. A t test (or Wilcoxon signed rank test if outcome not normally distributed) was conducted to examine whether the expected change from baseline was different from 0 for continuous outcomes. Median change from baseline was reported for SCr as a nonparametric t test (Wilcoxon signed rank test) was used.
Results
We identified 110 patients for possible study inclusion and 39 met eligibility criteria. After record review, 30 patients were excluded for receiving < 12 weeks of combination therapy or no 12 week HbA1c level; 26 patients were excluded for receiving < 12 weeks of monotherapy before starting combination therapy or no baseline HbA1c level; and 15 patients were excluded for lack of documentation in the VA EHR. Of the 39 patients included, 24 (62%) were prescribed empagliflozin first and then 8 started liraglutide and 16 started semaglutide.
HbA1c levels decreased by 1% after 12 weeks of combination therapy compared with baseline (P < .001), and this reduction was sustained through the duration of the study period (Table 2).
The most common AE during the trial was hypoglycemia, which was mostly mild (level 1) (Table 3).
Discussion
This study evaluated the safety and efficacy of combined use of semaglutide or liraglutide and empagliflozin in a veteran population with T2DM. The retrospective chart review captured real-world practice and outcomes. Combination therapy was associated with a significant reduction in HbA1c levels, body weight, and SBP compared with either agent alone. No significant change was seen in DBP, SCr, or eGFR. Overall, the combination of GLP-1 RA and SGLT2i medications demonstrated a good safety profile with most patients reporting no AEs.
Several other studies have assessed the safety and efficacy of using GLP-1 RA and SGLT2i in combination. The DURATION 8 trial is the only double-blind trial to randomize subjects to receive either exenatide once weekly, dapagliflozin, or the combination of both for up to 52 weeks.3 Other controlled trials required stable background therapy with either SGLT2i or GLP-1 RA before randomization to receive the other class or placebo and had durations between 18 and 30 weeks.4-7 The AWARD 10 trial studied the combination of canagliflozin and dulaglutide, which both have proven CVD benefit.4 Other studies did not restrict SGLT2i or GLP-1 RA background therapy to agents with proven CVD benefit.5-7 The present study evaluated the combination of empagliflozin plus liraglutide or semaglutide, agents that all have proven CVD benefit.
A meta-analysis of 7 trials, including those previously mentioned, was conducted to evaluate the combination of GLP-1 RA and SGLT2i.8 The combination significantly reduced HbA1c levels by 0.61% and 0.85% compared with GLP-1 RA or SGLT2i, respectively. Our trial showed greater HbA1c level reduction of 1% with combination therapy compared with either agent separately. This may have been due in part to a higher baseline HbA1c level in our real-world veteran population. The meta-analysis found the combination decreased body weight 2.6 kg and 1.5 kg compared with GLP-1 RA or SGLT2i, respectively.8 This only reached significance with comparison vs GLP-1 RA alone. Our study demonstrated impressive weight loss of up to about 5 kg after 26 and 52 weeks of combination therapy. This is equivalent to about 5% weight loss from baseline, which is clinically significant.9 Liraglutide and semaglutide are the GLP-1 RAs associated with the greatest weight loss, which may contribute to greater weight loss efficacy seen in the present trial.1
In our trial SBP fell lower compared with the meta-analysis. Combination therapy significantly reduced SBP by 4.1 mm Hg and 2.7 mm Hg compared with GLP-1 RA or SGLT2i, respectively, in the meta-analysis.8 We observed a significant 9 to 12 mm Hg reduction in SBP after 26 to 52 weeks of combination therapy compared with baseline. This reduction occurred despite relatively controlled SBP at baseline (135 mm Hg). Each reduction of 10 mm Hg in SBP significantly reduces the risk of MACE, stroke, and heart failure, making our results clinically significant.10 Neither the meta-analysis nor present study found a significant difference in DBP or eGFR with combination therapy.
AEs were similar in this trial compared with the meta-analysis. Combination treatment with GLP-1 RA and SGLT2i did not increase the incidence of severe hypoglycemia in either study.8 Hypoglycemia was the most common AE in this study, but frequency was similar with combination and separate therapy. Both medication classes are associated with low or no risk of hypoglycemia on their own.1 Baseline medications likely contributed to episodes of hypoglycemia seen in this study: About 80% of patients were prescribed basal insulin, 15% were prescribed a sulfonylurea, and 13% were prescribed prandial insulin. There is limited overlap between the known AEs of GLP-1 RA and SGLT2i, making combination therapy a safe option for use in patients with T2DM.
Our study confirms greater reduction in HbA1c levels, weight, and SBP in veterans taking GLP-1 RA and SGLT2i medications in combination compared with separate use in a real-world setting in a veteran population. The magnitude of change seen in this population appears greater compared with previous studies.
Limitations
There were several limitations to our study. Given the retrospective nature, many patients included in the study did not have bloodwork drawn during the specified time frames. Because of this, many patients were excluded and missing data on renal outcomes limited the power to detect differences. Data regarding AEs were limited to what was recorded in the EHR, which may underrepresent the AEs that patients experienced. Finally, our study size was small, consisting primarily of a White and male population, which may limit generalizability.
Further research is needed to validate these findings in this population and should include a larger study population. The impact of combining GLP-1 RA with SGLT2i on cardiorenal outcomes is an important area of ongoing research.
ConclusionS
The combined use of GLP-1 RA and SGLT2i resulted in significant improvement in HbA1c levels, weight, and SBP compared with separate use in this real-world study of a VA population with T2DM. The combination was well tolerated overall. Awareness of these results can facilitate optimal care and outcomes in the VA population.
Acknowledgments
Serena Kelley, PharmD, and Michael Brenner, PharmD, assisted with study design and initial data collection. Julie Strominger, MS, provided statistical support.
1. American Diabetes Association. 9. Pharmacologic approaches to glycemic treatment: standards of medical care in diabetes-2021. Diabetes Care. 2021;44(suppl 1):S111-S124. doi:10.2337/dc21-S009
2. DeFronzo RA. Combination therapy with GLP-1 receptor agonist and SGLT2 inhibitor. Diabetes Obes Metab. 2017;19(10):1353-1362. doi:10.1111/dom.12982
3. Jabbour S, Frias J, Guja C, Hardy E, Ahmed A, Ohman P. Effects of exenatide once weekly plus dapagliflozin, exenatide once weekly, or dapagliflozin, added to metformin monotherapy, on body weight, systolic blood pressure, and triglycerides in patients with type 2 diabetes in the DURATION-8 study. Diabetes Obes Metab. 2018;20(6):1515-1519. doi:10.1111/dom.13206
4. Ludvik B, Frias J, Tinahones F, et al. Dulaglutide as add-on therapy to SGLT2 inhibitors in patients with inadequately controlled type 2 diabetes (AWARD-10): a 24-week, randomised, double-blind, placebo-controlled trial. Lancet Diabetes Endocrinol. 2018;6(5):370-381. doi:10.1016/S2213-8587(18)30023-8
5. Blonde L, Belousova L, Fainberg U, et al. Liraglutide as add-on to sodium-glucose co-transporter-2 inhibitors in patients with inadequately controlled type 2 diabetes: LIRA-ADD2SGLT2i, a 26-week, randomized, double-blind, placebo-controlled trial. Diabetes Obes Metab. 2020;22(6):929-937. doi:10.1111/dom.13978
6. Fulcher G, Matthews D, Perkovic V, et al; CANVAS trial collaborative group. Efficacy and safety of canagliflozin when used in conjunction with incretin-mimetic therapy in patients with type 2 diabetes. Diabetes Obes Metab. 2016;18(1):82-91. doi:10.1111/dom.12589
7. Zinman B, Bhosekar V, Busch R, et al. Semaglutide once weekly as add-on to SGLT-2 inhibitor therapy in type 2 diabetes (SUSTAIN 9): a randomised, placebo-controlled trial. Lancet Diabetes Endocrinol. 2019;7(5):356-367. doi:10.1016/S2213-8587(19)30066-X
8. Mantsiou C, Karagiannis T, Kakotrichi P, et al. Glucagon-like peptide-1 receptor agonists and sodium-glucose co-transporter-2 inhibitors as combination therapy for type 2 diabetes: a systematic review and meta-analysis. Diabetes Obes Metab. 2020;22(10):1857-1868. doi:10.1111/dom.14108
9. US Department of Veterans Affairs, Department of Defense. VA/DoD clinical practice guideline for the management of adult overweight and obesity. Version 3.0. Accessed August 18, 2022. www.healthquality.va.gov/guidelines/CD/obesity/VADoDObesityCPGFinal5087242020.pdf
10. Ettehad D, Emdin CA, Kiran A, et al. Blood pressure lowering for prevention of cardiovascular disease and death: a systematic review and meta-analysis. Lancet. 2015;387(10022):957-967. doi:10.1016/S0140-6736(15)01225-8
Make room for continuous glucose monitoring in type 2 diabetes management
A1C has been used to estimate 3-month glycemic control in patients with diabetes. However, A1C monitoring alone does not provide insight into daily glycemic variation, which is valuable in clinical management because tight glycemic control (defined as A1C < 7.0%) has been shown to reduce the risk of microvascular complications. Prior to the approval of glucagon-like peptide-1 receptor agonists and sodium-glucose co-transporter 2 inhibitors by the US Food and Drug Administration for the treatment of type 2 diabetes (T2D), reduction in the risk of macrovascular complications (aside from nonfatal myocardial infarction) was more difficult to achieve than it is now; some patients had a worse outcome with overly aggressive glycemic control.1
Previously, the use of a continuous glucose monitor (CGM) was limited to patients with type 1 diabetes who required basal and bolus insulin. However, technological advances have led to more patient-friendly and affordable devices, making CGMs more widely available. As such, the American Diabetes Association (ADA), in its 2022 Standards of Medical Care in Diabetes, recommends that clinicians offer continuous glucose monitoring to adults with T2D who require multiple daily injections, based on a given patient’s ability, preferences, and needs.2
In this article, we discuss the intricacies of CGMs and what the evidence says about their use, so that physicians can confidently recommend CGMs and educate patients on their effective use to reach an individualized target of glycemic control.
Continuous glucose monitoring: A glossary
CGMs are characterized by who possesses the device and how data are recorded. This review is not about professional CGMs, which are owned by the health care provider and consist of a sensor that is applied in the clinic and returned to the clinic for downloading of data1; rather, we focus on the novel category of nonprofessional, or personal, CGMs.
Three words to remember. Every CGM has 3 common components:
- The reader (also known as a receiver) is a handheld device that allows a patient to scan a sensor (see definition below) and instantaneously collect a glucose reading. The patient can use a standalone reader; a smartphone or other smart device with an associated app that serves as a reader; or both.
- A sensor is inserted subcutaneously to measure interstitial glucose. The lifespan of a sensor is 10 to 14 days.
- A transmitter relays information from the sensor to the reader.
The technology behind a CGM
CGM sensors measure interstitial glucose by means of a chemical reaction involving glucose oxidase and an oxidation-reduction cofactor, measuring the generation of hydrogen peroxide.3 Interstitial glucose readings lag behind plasma blood glucose readings by 2 to 21 minutes.4,5 Although this lag time is often not clinically significant, situations such as aerobic exercise and a rapidly changing glucose level might warrant confirmation by means of fingerstick measurement.5 It is common for CGM readings to vary slightly from venipuncture or fingerstick glucose readings.
What CGMs are available to your patients?
Intermittently scanned CGMs (isCGMs) measure the glucose level continuously; the patient must scan a sensor to display and record the glucose level.6 Prolonged periods without scanning result in gaps in glycemic data.7,8
Two isCGM systems are available: the FreeStyle Libre 14 day and the FreeStyle Libre 2 (both from Abbott).a Both consist of a reader and a disposable sensor, applied to the back of the arm, that is worn for 14 days. If the patient has a compatible smartphone or other smart device, the reader can be replaced by the smart device with the downloaded FreeStyle Libre or FreeStyle Libre 2 app.
To activate a new sensor, the patient applies the sensor, then scans it. Once activated, scanning the sensor provides the current glucose reading and recalls the last 8 hours of data. In addition to providing an instantaneous glucose reading, the display also provides a trend arrow indicating the direction and degree to which the glucose level is changing (TABLE 110,14,15). This feature helps patients avoid hypoglycemic episodes by allowing them to preemptively correct if the arrow indicates a rapidly declining glucose level.
For the first 12 hours after a new sensor is activated, and whenever a glucose reading is < 70 mg/dL, patients should be instructed to avoid making treatment decisions based on the CGM and encouraged to use fingerstick glucose readings instead. FreeStyle Libre 14 day does not allow a glucose level alarm to be set; the system cannot detect hypoglycemic or hyperglycemic events unless the sensor is scanned.10 Bluetooth connectivity does allow FreeStyle Libre 2 users to set a glucose alarm if the reader or smart device is within 20 feet of the sensor. A default alarm is set to activate at 70 mg/dL (“low”) and 240 mg/dL (“high”); low and high alarm settings are also customizable. Because both FreeStyle Libre devices store 8 hours of data, patients must scan the sensor at least every 8 hours to obtain a comprehensive glycemic report.14
FreeStyle Libre CGMs allow patients to add therapy notes, including time and amount of insulin administered and carbohydrates ingested. Readers for both devices function as a glucometer that is compatible with Abbott FreeStyle Precision Neo test strips.
Real-time CGMs (rtCGMs) measure and display glucose levels continuously for the duration of the life of the sensor, without the need to scan. Three rtCGM systems are available: Dexcom G6, Medtronic Guardian 3, and Senseonics Eversense E3.
Dexcom G6 is the first Dexcom CGM that does not require fingerstick calibration and the only rtCGM available in the United States that does not require patient calibration. This system comprises a single-use sensor replaced every 10 days; a transmitter that is transferred to each new sensor and replaced every 3 months; and an optional receiver that can be omitted if the patient prefers to utilize a smart device.
Dexcom G6 never requires a patient to scan a sensor. Instead, the receiver (or smart device) utilizes Bluetooth technology to obtain blood glucose readings if it is positioned within 20 feet of the transmitter. Patients can set both hypoglycemic and hyperglycemic alarms to predict events within 20 minutes. Similar to the functionality of the FreeStyle Libre systems, Dexcom G6 provides the opportunity to log lifestyle events, including insulin dosing, carbohydrate ingestion, exercise, and sick days.15
Medtronic Guardian 3 comprises the multi-use Guardian Connect Transmitter that is replaced annually and a single-use Guardian Sensor that is replaced every 7 days. Guardian 3 requires twice-daily fingerstick glucose calibration, which reduces the convenience of a CGM.
Guardian 3 allows the user to set alarm levels, providing predictive alerts 10 to 60 minutes before set glucose levels are reached. Patients must utilize a smart device to connect through Bluetooth to the CareLink Connect app and remain within 20 feet of the transmitter to provide continuous glucose readings. The CareLink Connect app allows patients to document exercise, calibration of fingerstick readings, meals, and insulin administration.16
Senseonics Eversense E3 consists of a 3.5 mm × 18.3 mm sensor inserted subcutaneously in the upper arm once every 180 days; a removable transmitter that attaches to an adhesive patch placed over the sensor; and a smart device with the Eversense app. The transmitter has a 1-year rechargeable battery and provides the patient with on-body vibration alerts even when they are not near their smart device.
The Eversense E3 transmitter can be removed and reapplied without affecting the life of the sensor; however, no glucose data will be collected during this time. Once the transmitter is reapplied, it takes 10 minutes for the sensor to begin communicating with the transmitter. Eversense provides predictive alerts as long as 30 minutes before hyperglycemic or hypoglycemic events. The device requires twice-daily fingerstick calibrations.17
A comparison of the specifications and capabilities of the personal CGMs discussed here is provided in TABLE 2.10,14-17
The evidence, reviewed
Clinical outcomes evidence with CGMs in patients with T2D is sparse. Most studies that support improved clinical outcomes enrolled patients with type 1 diabetes who were treated with intensive insulin regimens. Many studies utilized rtCGMs that are capable of incorporating a hypoglycemic alarm, and results might not be generalizable to isCGMs.18,19 In this article, we review only the continuous glucose monitoring literature in which subjects had T2D.
Evidence for isCGMs
The REPLACE trial compared outcomes in patients with T2D who used an isCGM vs those who self-monitored blood glucose (SMBG); both groups were being treated with intensive insulin regimens. Both groups had similar glucose reductions, but the time in the hypoglycemic range (see “Clinical targets,” in the text that follows) was significantly shorter in the isCGM group.20
A randomized controlled trial (RCT) that compared intermittently scanned continuous glucose monitoring and SMBG in patients with T2D who received multiple doses of insulin daily demonstrated a significant A1C reduction of 0.82% with an isCGM and 0.33% with SMBG, with no difference in the rate of hypoglycemic events, over 10 weeks.21
Chart review. Data extracted from chart reviews in Austria, France, and Germany demonstrated a mean improvement in A1C of 0.9% after patients switched from SMBG to a CGM.22
In a retrospective review, patients with T2D who were not taking bolus insulin and who used a CGM had a reduction in A1C from 10.1% to 8.6% over 60 to 300 days.23
Evidence for rtCGMs
The DIAMOND study included a subset of patients with T2D who used an rtCGM and were compared to a subset who received usual care. The primary outcome was the change in A1C. A 0.3% greater reduction was observed in the CGM group at 24 weeks. There was no difference in hypoglycemic events between the 2 groups; there were few events in either group.24
An RCT demonstrated a similar reduction in A1C in rtCGM users and in nonusers over 1 year.25 However, patients who used the rtCGM by protocol demonstrated the greatest reduction in A1C. The CGM utilized in this trial required regular fingerstick calibration, which likely led to poorer adherence to protocol than would have been the case had the trial utilized a CGM that did not require calibration.
A prospective trial demonstrated that utilization of an rtCGM only 3 days per month for 3 consecutive months was associated with (1) significant improvement in A1C (a decrease of 1.1% in the CGM group, compared to a decrease of 0.4% in the SMBG group) and (2) numerous lifestyle modifications, including reduction in total caloric intake, weight loss, decreased body mass index, and an increase in total weekly exercise.26 This trial demonstrated that CGMs might be beneficial earlier in the course of disease by reinforcing lifestyle changes.
The MOBILE trial demonstrated that use of an rtCGM reduced baseline A1C from 9.1% to 8.0% in the CGM group, compared to 9.0% to 8.4% in the non-CGM group.27
Practical utilization of CGMs
Patient education
Detailed patient education resources—for initial setup, sensor application, methods to ensure appropriate sensor adhesion, and app and platform assistance—are available on each manufacturer’s website.
Clinical targets
In 2019, the Advanced Technologies & Treatments for Diabetes Congress determined that what is known as the time in range metric yields the most practical data to help clinicians manage glycemic control.28 The time in range metric comprises:
- time in the target glucose range (TIR)
- time below the target glucose range (TBR)
- time above the target glucose range (TAR).
TIR glucose ranges are modifiable and based on the A1C goal. For example, if the A1C goal is < 7.0%, the TIR glucose range is 70-180 mg/dL. If a patient maintains TIR > 70% for 3 months, the measured A1C will correlate well with the goal. Each 10% fluctuation in TIR from the goal of 70% corresponds to a difference of approximately 0.5% in A1C. Therefore, TIR of approximately 50% predicts an A1C of 8.0%.28
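As a rough illustration of this relationship, the rule of thumb above can be expressed as a short calculation (a minimal sketch of the approximation described in the text, not a validated clinical formula; the function name is illustrative):

```python
def estimate_a1c_from_tir(tir_percent: float) -> float:
    """Rough A1C (%) estimate from time in range (TIR, %).

    Assumes the approximation described above: TIR of 70% corresponds
    to an A1C of about 7.0%, and each 10-point change in TIR shifts
    A1C by roughly 0.5%. Illustrative only, not a clinical calculator.
    """
    return 7.0 + ((70.0 - tir_percent) / 10.0) * 0.5

# Example: a TIR of about 50% predicts an A1C near 8.0%
print(estimate_a1c_from_tir(50))  # 8.0
```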
A retrospective review of 1440 patients with CGM data demonstrated that progression of retinopathy and development of microalbuminuria increased 64% and 40%, respectively, over 10 years for each 10% reduction in TIR—highlighting the importance of TIR and consistent glycemic control.29 Importantly, the CGM sensor must be active ≥ 70% of the wear time to provide adequate TIR data.30
Concerns about accuracy
There is no universally accepted standard for determining the accuracy of a CGM; however, the mean absolute relative difference (MARD) is the most common statistic referenced. MARD is calculated as the average of the absolute error between all CGM values and matched reference values that are usually obtained from SMBG.31 The lower the MARD percentage, the better the accuracy of the CGM. A MARD of ≤ 10% is considered acceptable for making therapeutic decisions.30
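As an illustration, MARD can be computed from paired CGM and reference readings as follows (a minimal sketch assuming the readings are already matched in time; the function and variable names are illustrative):

```python
def mean_absolute_relative_difference(cgm_values, reference_values):
    """MARD (%): mean absolute error of CGM readings relative to
    matched reference (eg, SMBG) values. Lower values indicate
    better CGM accuracy."""
    pairs = list(zip(cgm_values, reference_values))
    return 100 * sum(abs(c - r) / r for c, r in pairs) / len(pairs)

# Example: paired glucose readings in mg/dL
cgm = [110, 95, 150, 200]
ref = [100, 100, 160, 210]
print(round(mean_absolute_relative_difference(cgm, ref), 1))  # ~6.5
```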
Package labeling for all CGMs recommends that patients have access to a fingerstick glucometer to verify CGM readings when concerns about accuracy exist. If a sensor becomes dislodged, it can malfunction or lose accuracy. Patients should not try to re-apply the sensor; instead, they should remove and discard the sensor and apply a new one. TABLE 210,14-17 compares MARD for CGMs and lists substances that might affect the accuracy of a CGM.
Patient–provider data-sharing platforms
FreeStyle Libre and Libre 2. Providers create a LibreView Practice ID at www.LibreView.com. Patient data-sharing depends on whether the patient is using a smart device, a reader, or both. Patients can use both the smart device and the reader but must upload data from the reader at regular intervals to produce a comprehensive report that merges data from the smart device (ie, data that have been uploaded automatically) and the reader.7
Dexcom G6. Clinicians create a Dexcom CLARITY account at https://clarity.dexcom.com and add patients to a practice list or gain access to a share code generated by the patient. Patients must download the Dexcom CLARITY app to create an account; once the account is established, readings will be transmitted to the clinic automatically.15 A patient who is utilizing a nonsmart-device reader must upload data manually to their web-based CLARITY account.
Family and caregiver access
Beyond sharing CGM data with clinic staff, an important feature available with FreeStyle Libre and Dexcom systems is the ability to share data with friends and caregivers. The relevant platforms and apps are listed in TABLE 2.10,14-17
Insurance coverage, cost, and accessibility
Medicare Part B has established criteria by which patients with T2D qualify for a CGM (TABLE 332). A Medicare patient who has been determined to be eligible is responsible for 20% of the out-of-pocket expense of the CGM and supplies once their deductible is met. Once Medicare covers a CGM, the patient is no longer able to obtain fingerstick glucose supplies through Medicare; they must pay the cash price for any fingerstick supplies that are determined to be necessary.32
Patients with private insurance can obtain CGM supplies through their preferred pharmacy when the order is written as a prescription (the same as for fingerstick glucometers). That is not the case for patients with Medicare because not all US distributors and pharmacies are contracted to bill Medicare Part B for CGM supplies. A list of distributors and eligible pharmacies can be found on each manufacturer’s website.
Risk–benefit analysis
CGMs are associated with few risks overall. The predominant adverse effect is contact dermatitis; the prevalence of CGM-associated contact dermatitis is difficult to quantify and differs from device to device.
FreeStyle Libre. In a retrospective review of records of patients with diabetes, researchers determined that a cutaneous adverse event occurred in approximately 5.5% of 1036 patients who utilized a FreeStyle Libre sensor.33 Of that percentage, 3.8% of dermatitis cases were determined to be allergic in nature and related to isobornyl acrylate (IBOA), a chemical constituent of the sensor’s adhesive that is not used in the FreeStyle Libre 2. Among patients who wore a sensor and developed allergic contact dermatitis, interventions such as a barrier film were of limited utility in alleviating or preventing further cutaneous eruption.33
Dexcom G6. The prevalence of Dexcom G6–associated allergic contact dermatitis is more difficult to ascertain (the IBOA adhesive was replaced in October 2019) but has been reported to be less common than with FreeStyle Libre,34 a finding that corroborates our anecdotal clinical experience. Although Dexcom sensors no longer contain IBOA, cases of allergic contact dermatitis are still reported.35 We propose that the lower incidence of cutaneous reactions associated with the Dexcom G6 sensor might be due to the absence of IBOA and shorter contact time with skin.
In general, patients should be counseled to rotate the location of the sensor and to use only specific barrier products that are recommended on each manufacturer’s website. The use of other barriers that are not specifically recommended might compromise the accuracy of the sensor.
Summing up
As CGM technology improves, it is likely that more and more of your patients will utilize one of these devices. The value of CGMs has been documented, but any endorsement of their use is qualified:
- Data from many older RCTs of patients with T2D who utilized a CGM did not demonstrate a significant reduction in A1C20,24,36; however, real-world observational data do show a greater reduction in A1C.
- From a safety standpoint, contact dermatitis is the primary drawback of CGMs.
- CGMs can provide patients and clinicians with a comprehensive picture of daily glucose trends, which can help patients make lifestyle changes and serve as positive reinforcement for the effects of diet and exercise. Analysis of glucose trends can also help clinicians confidently decide when to intensify or taper a medication regimen, based on data that are reported far more frequently than 90-day A1C changes.
Health insurance coverage will continue to dictate access to CGM technology for many patients. When a CGM is reimbursable by the patient’s insurance, consider offering it as an option—even for patients who do not require an intensive insulin regimen.
a The US Food and Drug Administration cleared a new Abbott CGM, FreeStyle Libre 3, earlier this year; however, the device is not yet available for purchase. With advances in monitoring technology, several other manufacturers also anticipate introducing novel CGMs. (See “Continuous glucose monitors: The next generation.” )
SIDEBAR
Continuous glucose monitors: The next generation9-13
Expect new continuous glucose monitoring devices to be introduced to US and European health care markets in the near future.
FreeStyle Libre 3 (Abbott) was cleared by the US Food and Drug Administration in May 2022, although it is not yet available for purchase. The manufacturer promotes the device as having the smallest sensor of any continuous glucose monitor (the diameter and thickness of 2 stacked pennies); improved mean absolute relative difference; the ability to provide real-time glucose level readings; and 50% greater range of Bluetooth connectivity (about 10 extra feet).9,10
Dexcom G7 (Dexcom) has a sensor that is 60% smaller than the Dexcom G6 sensor and a 30-minute warm-up time, compared to 120 minutes for the G6.11 The device has received European Union CE mark approval.
Guardian 4 Sensor (Medtronic) does not require fingerstick calibration. The device has also received European Union CE mark approval12 but is available only for investigational use in the United States.
Eversense XL technology is similar to that of the Eversense E3, including a 180-day sensor.13 The device, which has received European Union CE mark approval, includes a removable smart transmitter.
CORRESPONDENCE
Kevin Schleich, PharmD, BCACP, Departments of Pharmaceutical Care and Family Medicine, University of Iowa, 200 Hawkins Drive, 01102-D PFP, Iowa City, IA, 52242; [email protected]
1. Rodríguez-Gutiérrez R, Montori VM. Glycemic control for patients with type 2 diabetes mellitus: our evolving faith in the face of evidence. Circ Cardiovasc Qual Outcomes. 2016;9:504-512. doi: 10.1161/CIRCOUTCOMES.116.002901
2. Draznin B, Aroda VR, Bakris G, et al. 7. Diabetes technology: standards of medical care in diabetes—2022. Diabetes Care. 2022;45(suppl 1):S97-S112. doi: 10.2337/dc22-S007
3. Olczuk D, Priefer R. A history of continuous glucose monitors (CGMs) in self-monitoring of diabetes mellitus. Diabetes Metab Syndr. 2018;12:181-187. doi: 10.1016/j.dsx.2017.09.005
4. Alva S, Bailey T, Brazg R, et al. Accuracy of a 14-day factory-calibrated continuous glucose monitoring system with advanced algorithm in pediatric and adult population with diabetes. J Diabetes Sci Technol. 2022;16:70-77. doi: 10.1177/1932296820958754
5. Zaharieva DP, Turksoy K, McGaugh SM, et al. Lag time remains with newer real-time continuous glucose monitoring technology during aerobic exercise in adults living with type 1 diabetes. Diabetes Technol Ther. 2019;21:313-321. doi: 10.1089/dia.2018.0364
6. American Diabetes Association. 2. Classification and diagnosis of diabetes: standards of medical care in diabetes—2021. Diabetes Care. 2021;44(suppl 1):S15-S33. doi: 10.2337/dc21-S002
7. FreeStyle Libre systems: The #1 CGM used in the US. Abbott. Updated May 2022. Accessed October 22, 2022. www.freestyleprovider.abbott/us-en/home.html
8. Rowland K. Choosing Wisely: 10 practices to stop—or adopt—to reduce overuse in health care. J Fam Pract. 2020;69:396-400.
9. Tucker ME. FDA clears Abbott Freestyle Libre 3 glucose sensor. MDedge. June 1, 2022. Accessed October 21, 2022. www.mdedge.com/endocrinology/article/255095/diabetes/fda-clears-abbott-freestyle-libre-3-glucose-sensor
10. Manage your diabetes with more confidence. Abbott. Updated May 2022. Accessed October 22, 2022. www.freestyle.abbott/us-en/home.html
11. Whooley S. Dexcom CEO Kevin Sayer says G7 will be ‘wonderful’. Drug Delivery Business News. July 19, 2021. Accessed October 21, 2022. www.drugdeliverybusiness.com/dexcom-ceo-kevin-sayer-says-g7-will-be-wonderful
12. Medtronic secures two CE mark approvals for Guardian 4 Sensor & for InPen MDI Smart Insulin Pen. Medtronic. Press release. May 26, 2021. Accessed October 22, 2022. https://news.medtronic.com/2021-05-26-Medtronic-Secures-Two-CE-Mark-Approvals-for-Guardian-4-Sensor-for-InPen-MDI-Smart-Insulin-Pen
13. Eversense—up to 180 days of freedom [XL CGM System]. Senseonics. Accessed September 14, 2022. https://global.eversensediabetes.com
14. FreeStyle Libre 2 User’s Manual. Abbott. Revised August 24, 2022. Accessed October 2, 2022. https://freestyleserver.com/Payloads/IFU/2022/q3/ART46983-001_rev-A.pdf
15. Dexcom G6 Continuous Glucose Monitoring System user guide. Dexcom. Revised March 2022. Accessed October 21, 2022. https://s3-us-west-2.amazonaws.com/dexcompdf/G6-CGM-Users-Guide.pdf
16. Guardian Connect System user guide. Medtronic. 2020. Accessed October 21, 2022. www.medtronicdiabetes.com/sites/default/files/library/download-library/user-guides/Guardian-Connect-System-User-Guide.pdf
17. Eversense E3 user guides. Senseonics. 2022. Accessed October 22, 2022. www.ascensiadiabetes.com/eversense/user-guides/
18. Battelino T, Conget I, Olsen B, et al; SWITCH Study Group. The use and efficacy of continuous glucose monitoring in type 1 diabetes treated with insulin pump therapy: a randomised controlled trial. Diabetologia. 2012;55:3155-3162. doi: 10.1007/s00125-012-2708-9
19. Weinzimer S, Miller K, Beck R, et al. Effectiveness of continuous glucose monitoring in a clinical care environment: evidence from the Juvenile Diabetes Research Foundation continuous glucose monitoring (JDRF-CGM) trial. Diabetes Care. 2010;33:17-22. doi: 10.2337/dc09-1502
20. Haak T, Hanaire H, Ajjan R, et al. Flash glucose-sensing technology as a replacement for blood glucose monitoring for the management of insulin-treated type 2 diabetes: a multicenter, open-label randomized controlled trial. Diabetes Ther. 2017;8:55-73. doi: 10.1007/s13300-016-0223-6
21. Yaron M, Roitman E, Aharon-Hananel G, et al. Effect of flash glucose monitoring technology on glycemic control and treatment satisfaction in patients with type 2 diabetes. Diabetes Care. 2019;42:1178-1184. doi: 10.2337/dc18-0166
22. Kröger J, Fasching P, Hanaire H. Three European retrospective real-world chart review studies to determine the effectiveness of flash glucose monitoring on HbA1c in adults with type 2 diabetes. Diabetes Ther. 2020;11:279-291. doi: 10.1007/s13300-019-00741-9
23. Wright EE, Jr, Kerr MSD, Reyes IJ, et al. Use of flash continuous glucose monitoring is associated with A1C reduction in people with type 2 diabetes treated with basal insulin or noninsulin therapy. Diabetes Spectr. 2021;34:184-189. doi: 10.2337/ds20-0069
24. Beck RW, Riddlesworth TD, Ruedy K, et al; DIAMOND Study Group. Continuous glucose monitoring versus usual care in patients with type 2 diabetes receiving multiple daily insulin injections: a randomized trial. Ann Intern Med. 2017;167:365-374. doi: 10.7326/M16-2855
25. Vigersky RA, Fonda SJ, Chellappa M, et al. Short- and long-term effects of real-time continuous glucose monitoring in patients with type 2 diabetes. Diabetes Care. 2012;35:32-38. doi: 10.2337/dc11-1438
26. Yoo HJ, An HG, Park SY, et al. Use of a real time continuous glucose monitoring system as a motivational device for poorly controlled type 2 diabetes. Diabetes Res Clin Pract. 2008;82:73-79. doi: 10.1016/j.diabres.2008.06.015
27. Martens T, Beck RW, Bailey R, et al; MOBILE Study Group. Effect of continuous glucose monitoring on glycemic control in patients with type 2 diabetes treated with basal insulin: a randomized clinical trial. JAMA. 2021;325:2262-2272. doi: 10.1001/jama.2021.7444
28. Battelino T, Danne T, Bergenstal RM, et al. Clinical targets for continuous glucose monitoring data interpretation: recommendations from the international consensus on time in range. Diabetes Care. 2019;42:1593-1603. doi: 10.2337/dci19-0028
29. Beck RW, Bergenstal RM, Riddlesworth TD, et al. Validation of time in range as an outcome measure for diabetes clinical trials. Diabetes Care. 2019;42:400-405. doi: 10.2337/dc18-1444
30. Freckmann G. Basics and use of continuous glucose monitoring (CGM) in diabetes therapy. Journal of Laboratory Medicine. 2020;44:71-79. doi: 10.1515/labmed-2019-0189
31. Danne T, Nimri R, Battelino T, et al. International consensus on use of continuous glucose monitoring. Diabetes Care. 2017;40:1631-1640. doi: 10.2337/dc17-1600
32. Glucose monitors. Centers for Medicare & Medicaid Services. April 22, 2022. Accessed October 22, 2022. www.cms.gov/medicare-coverage-database/view/lcd.aspx?lcdid=33822
33. Pyl J, Dendooven E, Van Eekelen I, et al. Prevalence and prevention of contact dermatitis caused by FreeStyle Libre: a monocentric experience. Diabetes Care. 2020;43:918-920. doi: 10.2337/dc19-1354
34. Smith J, Bleiker T, Narang I. Cutaneous reactions to glucose sensors: a sticky problem [Abstract 677]. Arch Dis Child. 2021;106 (suppl 1):A80.
35. MAUDE Adverse event report: Dexcom, Inc G6 Sensor. U.S. Food & Drug Administration. Updated September 30, 2022. Accessed October 21, 2022. www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfmaude/detail.cfm?mdrfoi__id=11064819&pc=MDS
36. New JP, Ajjan R, Pfeiffer AFH, et al. Continuous glucose monitoring in people with diabetes: the randomized controlled Glucose Level Awareness in Diabetes Study (GLADIS). Diabet Med. 2015;32:609-617. doi: 10.1111/dme.12713
A1C has been used to estimate 3-month glycemic control in patients with diabetes. However, A1C monitoring alone does not provide insight into daily glycemic variation, which is valuable in clinical management because tight glycemic control (defined as A1C < 7.0%) has been shown to reduce the risk of microvascular complications. Prior to the approval of glucagon-like peptide-1 receptor agonists and sodium-glucose co-transporter 2 inhibitors by the US Food and Drug Administration for the treatment of type 2 diabetes (T2D), reduction in the risk of macrovascular complications (aside from nonfatal myocardial infarction) was more difficult to achieve than it is now; some patients had a worse outcome with overly aggressive glycemic control.1
Previously, the use of a continuous glucose monitor (CGM) was limited to patients with type 1 diabetes who required basal and bolus insulin. However, technological advances have led to more patient-friendly and affordable devices, making CGMs more available. As such, the American Diabetes Association (ADA), in its 2022 Standards of Medical Care in Diabetes, recommends that clinicians offer continuous glucose monitoring to adults with T2D who require multiple daily injections, and based on a given patient’s ability, preferences, and needs.2
In this article, we discuss, first, the intricacies of CGMs and, second, what the evidence says about their use so that physicians can confidently recommend, and educate patients on, effective utilization of CGMs to obtain an individualized target of glycemic control.
Continuous glucose monitoring: A glossary
CGMs are characterized by who possesses the device and how data are recorded. This review is not about professional CGMs, which are owned by the health care provider and consist of a sensor that is applied in the clinic and returned to clinic for downloading of data1; rather, we focus on the novel category of nonprofessional, or personal, CGMs.
Three words to remember. Every CGM has 3 common components:
- The reader (also known as a receiver) is a handheld device that allows a patient to scan a sensor (see definition below) and instantaneously collect a glucose reading. The patient can use a standalone reader; a smartphone or other smart device with an associated app that serves as a reader; or both.
- A sensor is inserted subcutaneously to measure interstitial glucose. The lifespan of a sensor is 10 to 14 days.
- A transmitter relays information from the sensor to the reader.
The technology behind a CGM
CGM sensors measure interstitial glucose by means of a chemical reaction involving glucose oxidase and an oxidation-reduction cofactor, measuring the generation of hydrogen peroxide.3 Interstitial glucose readings lag behind plasma blood glucose readings by 2 to 21 minutes.4,5 Although this lag time is often not clinically significant, situations such as aerobic exercise and a rapidly changing glucose level might warrant confirmation by means of fingerstick measurement.5 It is common for CGM readings to vary slightly from venipuncture or fingerstick glucose readings.
What CGMs are availableto your patients?
Intermittently scanned CGMs (isCGMs) measure the glucose level continuously; the patient must scan a sensor to display and record the glucose level.6 Prolonged periods without scanning result in gaps in glycemic data.7,8
Continue to: Two isCGM systems...
Two isCGM systems are available: the FreeStyle Libre 14 day and the FreeStyle Libre 2 (both from Abbott).a Both consist of a reader and a disposable sensor, applied to the back of the arm, that is worn for 14 days. If the patient has a compatible smartphone or other smart device, the reader can be replaced by the smart device with the downloaded FreeStyle Libre or FreeStyle Libre 2 app.
To activate a new sensor, the patient applies the sensor, then scans it. Once activated, scanning the sensor provides the current glucose reading and recalls the last 8 hours of data. In addition to providing an instantaneous glucose reading, the display also provides a trend arrow indicating the direction and degree to which the glucose level is changing (TABLE 110,14,15). This feature helps patients avoid hypoglycemic episodes by allowing them to preemptively correct if the arrow indicates a rapidly declining glucose level.
For the first 12 hours after a new sensor is activated, and when a glucose reading is < 70 mg/dL, patients should be instructed to avoid making treatment decisions and encouraged to utilize fingerstick glucose readings. FreeStyle Libre 14 day does not allow a glucose level alarm to be set; the system cannot detect these events without scanning the sensor.10 Bluetooth connectivity does allow FreeStyle Libre 2 users to set a glucose alarm if the reader or smart device is within 20 feet of the sensor. A default alarm is set to activate at 70 mg/dL (“low”) and 240 mg/dL (“high”); low and high alarm settings are also customizable. Because both FreeStyle Libre devices store 8 hours of data, patients must scan the sensor every 8 hours for a comprehensive glycemic report.14
FreeStyle Libre CGMs allow patients to add therapy notes, including time and amount of insulin administered and carbohydrates ingested. Readers for both devices function as a glucometer that is compatible with Abbott FreeStyle Precision Neo test strips.
Real-time CGMs (rtCGMs) measure and display glucose levels continuously for the duration of the life of the sensor, without the need to scan. Three rtCGM systems are available: Dexcom G6, Medtronic Guardian 3, and Senseonics Eversense E3.
Continue to: Dexcom G6...
Dexcom G6 is the first Dexcom CGM that does not require fingerstick calibration and the only rtCGM available in the United States that does not require patient calibration. This system comprises a single-use sensor replaced every 10 days; a transmitter that is transferred to each new sensor and replaced every 3 months; and an optional receiver that can be omitted if the patient prefers to utilize a smart device.
Dexcom G6 never requires a patient to scan a sensor. Instead, the receiver (or smart device) utilizes Bluetooth technology to obtain blood glucose readings if it is positioned within 20 feet of the transmitter. Patients can set both hypoglycemic and hyperglycemic alarms to predict events within 20 minutes. Similar to the functionality of the FreeStyle Libre systems, Dexcom G6 provides the opportunity to log lifestyle events, including insulin dosing, carbohydrate ingestion, exercise, and sick days.15
Medtronic Guardian 3 comprises the multi-use Guardian Connect Transmitter that is replaced annually and a single-use Guardian Sensor that is replaced every 7 days. Guardian 3 requires twice-daily fingerstick glucose calibration, which reduces the convenience of a CGM.
Guardian 3 allows the user to set alarm levels, providing predictive alerts 10 to 60 minutes before set glucose levels are reached. Patients must utilize a smart device to connect through Bluetooth to the CareLink Connect app and remain within 20 feet of the transmitter to provide continuous glucose readings. The CareLink Connect app allows patients to document exercise, calibration of fingerstick readings, meals, and insulin administration.16
Senseonics Eversense E3 consists of a 3.5 mm × 18.3 mm sensor inserted subcutaneously in the upper arm once every 180 days; a removable transmitter that attaches to an adhesive patch placed over the sensor; and a smart device with the Eversense app. The transmitter has a 1-year rechargeable battery and provides the patient with on-body vibration alerts even when they are not near their smart device.
Continue to: The Eversense E3 transmitter...
The Eversense E3 transmitter can be removed and reapplied without affecting the life of the sensor; however, no glucose data will be collected during this time. Once the transmitter is reapplied, it takes 10 minutes for the sensor to begin communicating with the transmitter. Eversense provides predictive alerts as long as 30 minutes before hyperglycemic or hypoglycemic events. The device requires twice-daily fingerstick calibrations.17
A comparison of the specifications and capabilities of the personal CGMs discussed here is provided in TABLE 2.10,14-17
The evidence, reviewed
Clinical outcomes evidence with CGMs in patients with T2D is sparse. Most studies that support improved clinical outcomes enrolled patients with type 1 diabetes who were treated with intensive insulin regimens. Many studies utilized rtCGMs that are capable of incorporating a hypoglycemic alarm, and results might not be generalizable to isCGMs.18,19 In this article, we review only the continuous glucose monitoring literature in which subjects had T2D.
Evidence for is CGMs
The REPLACE trial compared outcomes in patients with T2D who used an isCGM vs those who self-monitored blood glucose (SMBG); both groups were being treated with intensive insulin regimens. Both groups had similar glucose reductions, but the time in the hypoglycemic range (see “Clinical targets,” in the text that follows) was significantly shorter in the isCGM group.20
A randomized controlled trial (RCT) that compared intermittently scanned continuous glucose monitoring and SMBG in patients with T2D who received multiple doses of insulin daily demonstrated a significant A1C reduction of 0.82% with an isCGM and 0.33% with SMBG, with no difference in the rate of hypoglycemic events, over 10 weeks.21
Continue to: Chart review
Chart review. Data extracted from chart reviews in Austria, France, and Germany demonstrated a mean improvement in A1C of 0.9% among patients when using a CGM after using SMBG previously.22
A retrospective review of patients with T2D who were not taking bolus insulin and who used a CGM had a reduction in A1C from 10.1% to 8.6% over 60 to 300 days.23
Evidence for rtCGMs
The DIAMOND study included a subset of patients with T2D who used an rtCGM and were compared to a subset who received usual care. The primary outcome was the change in A1C. A 0.3% greater reduction was observed in the CGM group at 24 weeks. There was no difference in hypoglycemic events between the 2 groups; there were few events in either group.24
An RCT demonstrated a similar reduction in A1C in rtCGM users and in nonusers over 1 year.25 However, patients who used the rtCGM by protocol demonstrated the greatest reduction in A1C. The CGM utilized in this trial required regular fingerstick calibration, which likely led to poorer adherence to protocol than would have been the case had the trial utilized a CGM that did not require calibration.
A prospective trial demonstrated that utilization of an rtCGM only 3 days per month for 3 consecutive months was associated with (1) significant improvement in A1C (a decrease of 1.1% in the CGM group, compared to a decrease of 0.4% in the SMBG group) and (2) numerous lifestyle modifications, including reduction in total caloric intake, weight loss, decreased body mass index, and an increase in total weekly exercise.26 This trial demonstrated that CGMs might be beneficial earlier in the course of disease by reinforcing lifestyle changes.
Continue to: The MOBILE trial
The MOBILE trial demonstrated that use of an rtCGM reduced baseline A1C from 9.1% to 8.0% in the CGM group, compared to 9.0% to 8.4% in the non-CGM group.27
Practical utilization of CGMs
Patient education
Detailed patient education resources—for initial setup, sensor application, methods to ensure appropriate sensor adhesion, and app and platform assistance—are available on each manufacturer’s website.
Clinical targets
In 2019, the Advanced Technologies & Treatments for Diabetes Congress determined that what is known as the time in range metric yields the most practical data to help clinicians manage glycemic control.28 The time in range metric comprises:
- time in the target glucose range (TIR)
- time below the target glucose range (TBR)
- time above the target glucose range (TAR).
TIR glucose ranges are modifiable and based on the A1C goal. For example, if the A1C goal is < 7.0%, the TIR glucose range is 70-180 mg/dL. If a patient maintains TIR > 70% for 3 months, the measured A1C will correlate well with the goal. Each 10% fluctuation in TIR from the goal of 70% corresponds to a difference of approximately 0.5% in A1C. Therefore, TIR of approximately 50% predicts an A1C of 8.0%.28
A retrospective review of 1440 patients with CGM data demonstrated that progression of retinopathy and development of microalbuminuria increased 64% and 40%, respectively, over 10 years for each 10% reduction in TIR—highlighting the importance of TIR and consistent glycemic control.29 Importantly, the CGM sensor must be active ≥ 70% of the wearable time to provide adequate TIR data.30
Continue to: Concerns about accuracy
Concerns about accuracy
There is no universally accepted standard for determining the accuracy of a CGM; however, the mean absolute relative difference (MARD) is the most common statistic referenced. MARD is calculated as the average of the absolute error between all CGM values and matched reference values that are usually obtained from SMBG.31 The lower the MARD percentage, the better the accuracy of the CGM. A MARD of ≤ 10% is considered acceptable for making therapeutic decisions.30
Package labeling for all CGMs recommends that patients have access to a fingerstick glucometer to verify CGM readings when concerns about accuracy exist. If a sensor becomes dislodged, it can malfunction or lose accuracy. Patients should not try to re-apply the sensor; instead, they should remove and discard the sensor and apply a new one. TABLE 210,14-17 compares MARD for CGMs and lists substances that might affect the accuracy of a CGM.
Patient–provider data-sharing platforms
FreeStyle Libre and Libre 2. Providers create a LibreView Practice ID at www.Libre View.com. Patient data-sharing depends on whether they are using a smart device, a reader, or both. Patients can utilize both the smart device and the reader but must upload data from the reader at regular intervals to provide a comprehensive report that will merge data from the smart device (ie, data that have been uploaded automatically) and the reader.7
Dexcom G6. Clinicians create a Dexcom CLARITY account at https://clarity.dexcom.com and add patients to a practice list or gain access to a share code generated by the patient. Patients must download the Dexcom CLARITY app to create an account; once the account is established, readings will be transmitted to the clinic automatically.15 A patient who is utilizing a nonsmart-device reader must upload data manually to their web-based CLARITY account.
Family and caregiver access
Beyond sharing CGM data with clinic staff, an important feature available with FreeStyle Libre and Dexcom systems is the ability to share data with friends and caregivers. The relevant platforms and apps are listed in TABLE 2.10,14-17
Continue to: Insurance coverage, cost, and accessibility
Insurance coverage, cost, and accessibility
Medicare Part B has established criteria by which patients with T2D qualify for a CGM (TABLE 332). A Medicare patient who has been determined to be eligible is responsible for 20% of the out-of-pocket expense of the CGM and supplies once their deductible is met. Once Medicare covers a CGM, the patient is no longer able to obtain fingerstick glucose supplies through Medicare; they must pay the cash price for any fingerstick supplies that are determined to be necessary.32
Patients with private insurance can obtain CGM supplies through their preferred pharmacy when the order is written as a prescription (the same as for fingerstick glucometers). That is not the case for patients with Medicare because not all US distributors and pharmacies are contracted to bill Medicare Part B for CGM supplies. A list of distributors and eligible pharmacies can be found on each manufacturer’s website.
Risk–benefit analysis
CGMs are associated with few risks overall. The predominant adverse effect is contact dermatitis; the prevalence of CGM-associated contact dermatitis is difficult to quantify and differs from device to device.
FreeStyle Libre. In a retrospective review of records of patients with diabetes, researchers determined that a cutaneous adverse event occurred in approximately 5.5% of 1036 patients who utilized a FreeStyle Libre sensor.33 Of that percentage, 3.8% of dermatitis cases were determined to be allergic in nature and related to isobornyl acrylate (IBOA), a chemical constituent of the sensor’s adhesive that is not used in the FreeStyle Libre 2. Among patients who wore a sensor and developed allergic contact dermatitis, interventions such as a barrier film were of limited utility in alleviating or preventing further cutaneous eruption.33
Dexcom G6. The prevalence of Dexcom G6–associated allergic contact dermatitis is more difficult to ascertain (the IBOA adhesive was replaced in October 2019) but has been reported to be less common than with FreeStyle Libre,34 a finding that corroborates our anecdotal clinical experience. Although Dexcom sensors no longer contain IBOA, cases of allergic contact dermatitis are still reported.35 We propose that the lower incidence of cutaneous reactions associated with the Dexcom G6 sensor might be due to the absence of IBOA and shorter contact time with skin.
Continue to: In general, patients should be...
In general, patients should be counseled to rotate the location of the sensor and to use only specific barrier products that are recommended on each manufacturer’s website. The use of other barriers that are not specifically recommended might compromise the accuracy of the sensor.
Summing up
As CGM technology improves, it is likely that more and more of your patients will utilize one of these devices. The value of CGMs has been documented, but any endorsement of their use is qualified:
- Data from many older RCTs of patients with T2D who utilize a CGM did not demonstrate a significant reduction in A1C20,24,36; however, real-world observational data do show a greater reduction in A1C.
- From a safety standpoint, contact dermatitis is the primary drawback of CGMs.
- CGMs can provide patients and clinicians with a comprehensive picture of daily glucose trends, which can help patients make lifestyle changes and serve as a positive reinforcement for the effects of diet and exercise. Analysis of glucose trends can also help clinicians confidently make decisions about when to intensify or taper a medication regimen, based on data that is reported more often than 90-day A1C changes.
Health insurance coverage will continue to dictate access to CGM technology for many patients. When a CGM is reimbursable by the patient’s insurance, consider offering it as an option—even for patients who do not require an intensive insulin regimen.
a The US Food and Drug Administration cleared a new Abbott CGM, FreeStyle Libre 3, earlier this year; however, the device is not yet available for purchase. With advances in monitoring technology, several other manufacturers also anticipate introducing novel CGMs. (See “Continuous glucose monitors: The next generation.” )
SIDEBAR
Continuous glucose monitors: The next generation9-13
Expect new continuous glucose monitoring devices to be introduced to US and European health care markets in the near future.
FreeStyle Libre 3 (Abbott) was cleared by the US Food and Drug Administration in May 2022, although it is not yet available for purchase. The manufacturer promotes the device as having the smallest sensor of any continuous glucose monitor (the diameter and thickness of 2 stacked pennies); improved mean absolute relative difference; the ability to provide real-time glucose level readings; and 50% greater range of Bluetooth connectivity (about 10 extra feet).9,10
Dexcom G7 (Dexcom) has a sensor that is 60% smaller than the Dexcom G6 sensor and a 30-minute warm-up time, compared to 120 minutes for the G6.11 The device has received European Union CE mark approval.
Guardian 4 Sensor (Medtronic) does not require fingerstick calibration. The device has also received European Union CE mark approval12 but is available only for investigational use in the United States.
Eversense XL technology is similar to that of the Eversense E3, including a 180-day sensor.13 The device, which has received European Union CE mark approval, includes a removable smart transmitter.
CORRESPONDENCE
Kevin Schleich, PharmD, BCACP, Departments of Pharmaceutical Care and Family Medicine, University of Iowa, 200 Hawkins Drive, 01102-D PFP, Iowa City, IA, 52242; [email protected]
A1C has been used to estimate 3-month glycemic control in patients with diabetes. However, A1C monitoring alone does not provide insight into daily glycemic variation, which is valuable in clinical management because tight glycemic control (defined as A1C < 7.0%) has been shown to reduce the risk of microvascular complications. Prior to the approval of glucagon-like peptide-1 receptor agonists and sodium-glucose co-transporter 2 inhibitors by the US Food and Drug Administration for the treatment of type 2 diabetes (T2D), reduction in the risk of macrovascular complications (aside from nonfatal myocardial infarction) was more difficult to achieve than it is now; some patients had a worse outcome with overly aggressive glycemic control.1
Previously, the use of a continuous glucose monitor (CGM) was limited to patients with type 1 diabetes who required basal and bolus insulin. However, technological advances have led to more patient-friendly and affordable devices, making CGMs more available. As such, the American Diabetes Association (ADA), in its 2022 Standards of Medical Care in Diabetes, recommends that clinicians offer continuous glucose monitoring to adults with T2D who require multiple daily injections, and based on a given patient’s ability, preferences, and needs.2
In this article, we discuss, first, the intricacies of CGMs and, second, what the evidence says about their use so that physicians can confidently recommend, and educate patients on, effective utilization of CGMs to obtain an individualized target of glycemic control.
Continuous glucose monitoring: A glossary
CGMs are characterized by who possesses the device and how data are recorded. This review is not about professional CGMs, which are owned by the health care provider and consist of a sensor that is applied in the clinic and returned to clinic for downloading of data1; rather, we focus on the novel category of nonprofessional, or personal, CGMs.
Three words to remember. Every CGM has 3 common components:
- The reader (also known as a receiver) is a handheld device that allows a patient to scan a sensor (see definition below) and instantaneously collect a glucose reading. The patient can use a standalone reader; a smartphone or other smart device with an associated app that serves as a reader; or both.
- A sensor is inserted subcutaneously to measure interstitial glucose. The lifespan of a sensor is 10 to 14 days.
- A transmitter relays information from the sensor to the reader.
The technology behind a CGM
CGM sensors measure interstitial glucose by means of a chemical reaction involving glucose oxidase and an oxidation-reduction cofactor, measuring the generation of hydrogen peroxide.3 Interstitial glucose readings lag behind plasma blood glucose readings by 2 to 21 minutes.4,5 Although this lag time is often not clinically significant, situations such as aerobic exercise and a rapidly changing glucose level might warrant confirmation by means of fingerstick measurement.5 It is common for CGM readings to vary slightly from venipuncture or fingerstick glucose readings.
What CGMs are available to your patients?
Intermittently scanned CGMs (isCGMs) measure the glucose level continuously; the patient must scan a sensor to display and record the glucose level.6 Prolonged periods without scanning result in gaps in glycemic data.7,8
Two isCGM systems are available: the FreeStyle Libre 14 day and the FreeStyle Libre 2 (both from Abbott).a Both consist of a reader and a disposable sensor, applied to the back of the arm, that is worn for 14 days. If the patient has a compatible smartphone or other smart device, the reader can be replaced by the smart device with the downloaded FreeStyle Libre or FreeStyle Libre 2 app.
To activate a new sensor, the patient applies the sensor, then scans it. Once activated, scanning the sensor provides the current glucose reading and recalls the last 8 hours of data. In addition to providing an instantaneous glucose reading, the display also provides a trend arrow indicating the direction and degree to which the glucose level is changing (TABLE 110,14,15). This feature helps patients avoid hypoglycemic episodes by allowing them to preemptively correct if the arrow indicates a rapidly declining glucose level.
For the first 12 hours after a new sensor is activated, and whenever a glucose reading is < 70 mg/dL, patients should be instructed to avoid making treatment decisions based on the CGM and encouraged to confirm with fingerstick glucose readings. FreeStyle Libre 14 day does not allow a glucose level alarm to be set because the system cannot detect hypoglycemic or hyperglycemic events unless the sensor is scanned.10 Bluetooth connectivity does allow FreeStyle Libre 2 users to set a glucose alarm if the reader or smart device is within 20 feet of the sensor. A default alarm is set to activate at 70 mg/dL (“low”) and 240 mg/dL (“high”); the low and high alarm thresholds are also customizable. Because both FreeStyle Libre devices store only 8 hours of data, patients must scan the sensor at least every 8 hours to obtain a comprehensive glycemic report.14
FreeStyle Libre CGMs allow patients to add therapy notes, including time and amount of insulin administered and carbohydrates ingested. Readers for both devices function as a glucometer that is compatible with Abbott FreeStyle Precision Neo test strips.
Real-time CGMs (rtCGMs) measure and display glucose levels continuously for the duration of the life of the sensor, without the need to scan. Three rtCGM systems are available: Dexcom G6, Medtronic Guardian 3, and Senseonics Eversense E3.
Dexcom G6 is the first Dexcom CGM that does not require fingerstick calibration and the only rtCGM available in the United States that does not require patient calibration. This system comprises a single-use sensor replaced every 10 days; a transmitter that is transferred to each new sensor and replaced every 3 months; and an optional receiver that can be omitted if the patient prefers to utilize a smart device.
Dexcom G6 never requires the patient to scan a sensor. Instead, the receiver (or smart device) uses Bluetooth technology to obtain glucose readings as long as it is positioned within 20 feet of the transmitter. Patients can set both hypoglycemic and hyperglycemic alarms, which can predict events as much as 20 minutes in advance. Similar to the functionality of the FreeStyle Libre systems, Dexcom G6 provides the opportunity to log lifestyle events, including insulin dosing, carbohydrate ingestion, exercise, and sick days.15
Medtronic Guardian 3 comprises the multi-use Guardian Connect Transmitter that is replaced annually and a single-use Guardian Sensor that is replaced every 7 days. Guardian 3 requires twice-daily fingerstick glucose calibration, which reduces the convenience of a CGM.
Guardian 3 allows the user to set alarm levels, providing predictive alerts 10 to 60 minutes before set glucose levels are reached. Patients must utilize a smart device to connect through Bluetooth to the CareLink Connect app and remain within 20 feet of the transmitter to provide continuous glucose readings. The CareLink Connect app allows patients to document exercise, calibration of fingerstick readings, meals, and insulin administration.16
Senseonics Eversense E3 consists of a 3.5 mm × 18.3 mm sensor inserted subcutaneously in the upper arm once every 180 days; a removable transmitter that attaches to an adhesive patch placed over the sensor; and a smart device with the Eversense app. The transmitter has a 1-year rechargeable battery and provides the patient with on-body vibration alerts even when they are not near their smart device.
The Eversense E3 transmitter can be removed and reapplied without affecting the life of the sensor; however, no glucose data will be collected during this time. Once the transmitter is reapplied, it takes 10 minutes for the sensor to begin communicating with the transmitter. Eversense provides predictive alerts as long as 30 minutes before hyperglycemic or hypoglycemic events. The device requires twice-daily fingerstick calibrations.17
A comparison of the specifications and capabilities of the personal CGMs discussed here is provided in TABLE 2.10,14-17
The evidence, reviewed
Clinical outcomes evidence with CGMs in patients with T2D is sparse. Most studies that support improved clinical outcomes enrolled patients with type 1 diabetes who were treated with intensive insulin regimens. Many studies utilized rtCGMs that are capable of incorporating a hypoglycemic alarm, and results might not be generalizable to isCGMs.18,19 In this article, we review only the continuous glucose monitoring literature in which subjects had T2D.
Evidence for isCGMs
The REPLACE trial compared outcomes in patients with T2D who used an isCGM vs those who self-monitored blood glucose (SMBG); both groups were being treated with intensive insulin regimens. Both groups had similar glucose reductions, but the time in the hypoglycemic range (see “Clinical targets,” in the text that follows) was significantly shorter in the isCGM group.20
A randomized controlled trial (RCT) that compared intermittently scanned continuous glucose monitoring and SMBG in patients with T2D who received multiple doses of insulin daily demonstrated a significant A1C reduction of 0.82% with an isCGM and 0.33% with SMBG, with no difference in the rate of hypoglycemic events, over 10 weeks.21
Chart review. Data extracted from chart reviews in Austria, France, and Germany demonstrated a mean improvement in A1C of 0.9% among patients when using a CGM after using SMBG previously.22
In a retrospective review, patients with T2D who were not taking bolus insulin and who used a CGM had a reduction in A1C from 10.1% to 8.6% over 60 to 300 days.23
Evidence for rtCGMs
The DIAMOND study included a subset of patients with T2D who used an rtCGM and were compared to a subset who received usual care. The primary outcome was the change in A1C. A 0.3% greater reduction was observed in the CGM group at 24 weeks. There was no difference in hypoglycemic events between the 2 groups; there were few events in either group.24
An RCT demonstrated a similar reduction in A1C in rtCGM users and in nonusers over 1 year.25 However, patients who used the rtCGM by protocol demonstrated the greatest reduction in A1C. The CGM utilized in this trial required regular fingerstick calibration, which likely led to poorer adherence to protocol than would have been the case had the trial utilized a CGM that did not require calibration.
A prospective trial demonstrated that utilization of an rtCGM only 3 days per month for 3 consecutive months was associated with (1) significant improvement in A1C (a decrease of 1.1% in the CGM group, compared to a decrease of 0.4% in the SMBG group) and (2) numerous lifestyle modifications, including reduction in total caloric intake, weight loss, decreased body mass index, and an increase in total weekly exercise.26 This trial demonstrated that CGMs might be beneficial earlier in the course of disease by reinforcing lifestyle changes.
The MOBILE trial demonstrated that use of an rtCGM reduced baseline A1C from 9.1% to 8.0% in the CGM group, compared to 9.0% to 8.4% in the non-CGM group.27
Practical utilization of CGMs
Patient education
Detailed patient education resources—for initial setup, sensor application, methods to ensure appropriate sensor adhesion, and app and platform assistance—are available on each manufacturer’s website.
Clinical targets
In 2019, the Advanced Technologies & Treatments for Diabetes Congress determined that what is known as the time in range metric yields the most practical data to help clinicians manage glycemic control.28 The time in range metric comprises:
- time in the target glucose range (TIR)
- time below the target glucose range (TBR)
- time above the target glucose range (TAR).
TIR glucose ranges are modifiable and based on the A1C goal. For example, if the A1C goal is < 7.0%, the TIR glucose range is 70-180 mg/dL. If a patient maintains TIR > 70% for 3 months, the measured A1C will correlate well with the goal. Each 10% fluctuation in TIR from the goal of 70% corresponds to a difference of approximately 0.5% in A1C. Therefore, TIR of approximately 50% predicts an A1C of 8.0%.28
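To make this arithmetic concrete, here is a minimal sketch in Python that tallies TIR, TBR, and TAR from a series of CGM readings and applies the approximate 0.5%-per-10% rule described above to estimate A1C from TIR. The function names and sample values are illustrative, and the estimate is a rule of thumb rather than a validated clinical calculator.

```python
def time_in_range_summary(readings_mg_dl, low=70, high=180):
    """Summarize CGM readings as percentage of time in, below, and
    above the target range (70-180 mg/dL for an A1C goal < 7.0%).
    Assumes readings are evenly spaced in time."""
    n = len(readings_mg_dl)
    tir = 100 * sum(low <= g <= high for g in readings_mg_dl) / n
    tbr = 100 * sum(g < low for g in readings_mg_dl) / n
    tar = 100 * sum(g > high for g in readings_mg_dl) / n
    return tir, tbr, tar


def estimate_a1c_from_tir(tir_percent):
    """Rule-of-thumb estimate: a TIR of 70% corresponds to an A1C of
    about 7.0%, and each 10-point drop in TIR adds about 0.5%."""
    return 7.0 + 0.5 * (70 - tir_percent) / 10


# A TIR of approximately 50% predicts an A1C of about 8.0%.
print(round(estimate_a1c_from_tir(50), 1))  # 8.0
```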
A retrospective review of 1440 patients with CGM data demonstrated that progression of retinopathy and development of microalbuminuria increased 64% and 40%, respectively, over 10 years for each 10% reduction in TIR—highlighting the importance of TIR and consistent glycemic control.29 Importantly, the CGM sensor must be active ≥ 70% of the wearable time to provide adequate TIR data.30
Concerns about accuracy
There is no universally accepted standard for determining the accuracy of a CGM; however, the mean absolute relative difference (MARD) is the most commonly cited statistic. MARD is calculated by averaging the absolute difference between each CGM value and a matched reference value (usually obtained from SMBG), expressed as a percentage of the reference value.31 The lower the MARD, the more accurate the CGM. A MARD of ≤ 10% is considered acceptable for making therapeutic decisions.30
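As a concrete illustration of how MARD is computed, the following minimal Python sketch assumes paired CGM and reference readings; the function name and sample values are hypothetical.

```python
from statistics import mean


def mard_percent(cgm_values, reference_values):
    """Mean absolute relative difference (MARD), in percent.

    Each CGM reading is paired with a matched reference reading
    (usually a fingerstick value); the absolute difference is taken
    as a fraction of the reference value, then averaged."""
    return mean(
        abs(cgm - ref) / ref * 100
        for cgm, ref in zip(cgm_values, reference_values)
    )


# CGM reads 110 and 95 mg/dL when paired fingersticks read 100 mg/dL:
# errors of 10% and 5% give a MARD of 7.5%, within the <= 10% threshold.
print(mard_percent([110, 95], [100, 100]))  # 7.5
```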
Package labeling for all CGMs recommends that patients have access to a fingerstick glucometer to verify CGM readings when concerns about accuracy exist. If a sensor becomes dislodged, it can malfunction or lose accuracy. Patients should not try to re-apply the sensor; instead, they should remove and discard the sensor and apply a new one. TABLE 210,14-17 compares MARD for CGMs and lists substances that might affect the accuracy of a CGM.
Patient–provider data-sharing platforms
FreeStyle Libre and Libre 2. Providers create a LibreView Practice ID at www.LibreView.com. How patient data are shared depends on whether the patient uses a smart device, a reader, or both. Patients can use both the smart device and the reader but must upload data from the reader at regular intervals to produce a comprehensive report that merges data from the smart device (ie, data that are uploaded automatically) and the reader.7
Dexcom G6. Clinicians create a Dexcom CLARITY account at https://clarity.dexcom.com and add patients to a practice list or gain access to a share code generated by the patient. Patients must download the Dexcom CLARITY app to create an account; once the account is established, readings will be transmitted to the clinic automatically.15 A patient who is utilizing a nonsmart-device reader must upload data manually to their web-based CLARITY account.
Family and caregiver access
Beyond sharing CGM data with clinic staff, an important feature available with FreeStyle Libre and Dexcom systems is the ability to share data with friends and caregivers. The relevant platforms and apps are listed in TABLE 2.10,14-17
Insurance coverage, cost, and accessibility
Medicare Part B has established criteria by which patients with T2D qualify for a CGM (TABLE 332). A Medicare patient who has been determined to be eligible is responsible for 20% of the out-of-pocket expense of the CGM and supplies once their deductible is met. Once Medicare covers a CGM, the patient is no longer able to obtain fingerstick glucose supplies through Medicare; they must pay the cash price for any fingerstick supplies that are determined to be necessary.32
Patients with private insurance can obtain CGM supplies through their preferred pharmacy when the order is written as a prescription (the same as for fingerstick glucometers). That is not the case for patients with Medicare because not all US distributors and pharmacies are contracted to bill Medicare Part B for CGM supplies. A list of distributors and eligible pharmacies can be found on each manufacturer’s website.
Risk–benefit analysis
CGMs are associated with few risks overall. The predominant adverse effect is contact dermatitis; the prevalence of CGM-associated contact dermatitis is difficult to quantify and differs from device to device.
FreeStyle Libre. In a retrospective review of records of patients with diabetes, researchers determined that a cutaneous adverse event occurred in approximately 5.5% of 1036 patients who utilized a FreeStyle Libre sensor.33 Of that percentage, 3.8% of dermatitis cases were determined to be allergic in nature and related to isobornyl acrylate (IBOA), a chemical constituent of the sensor’s adhesive that is not used in the FreeStyle Libre 2. Among patients who wore a sensor and developed allergic contact dermatitis, interventions such as a barrier film were of limited utility in alleviating or preventing further cutaneous eruption.33
Dexcom G6. The prevalence of Dexcom G6–associated allergic contact dermatitis is more difficult to ascertain (the IBOA adhesive was replaced in October 2019) but has been reported to be less common than with FreeStyle Libre,34 a finding that corroborates our anecdotal clinical experience. Although Dexcom sensors no longer contain IBOA, cases of allergic contact dermatitis are still reported.35 We propose that the lower incidence of cutaneous reactions associated with the Dexcom G6 sensor might be due to the absence of IBOA and shorter contact time with skin.
In general, patients should be counseled to rotate the location of the sensor and to use only specific barrier products that are recommended on each manufacturer’s website. The use of other barriers that are not specifically recommended might compromise the accuracy of the sensor.
Summing up
As CGM technology improves, it is likely that more and more of your patients will utilize one of these devices. The value of CGMs has been documented, but any endorsement of their use is qualified:
- Data from many older RCTs of patients with T2D who utilize a CGM did not demonstrate a significant reduction in A1C20,24,36; however, real-world observational data do show a greater reduction in A1C.
- From a safety standpoint, contact dermatitis is the primary drawback of CGMs.
- CGMs can provide patients and clinicians with a comprehensive picture of daily glucose trends, which can help patients make lifestyle changes and serve as positive reinforcement for the effects of diet and exercise. Analysis of glucose trends can also help clinicians confidently decide when to intensify or taper a medication regimen, based on data that are reported far more frequently than a 90-day A1C value.
Health insurance coverage will continue to dictate access to CGM technology for many patients. When a CGM is reimbursable by the patient’s insurance, consider offering it as an option—even for patients who do not require an intensive insulin regimen.
a The US Food and Drug Administration cleared a new Abbott CGM, FreeStyle Libre 3, earlier this year; however, the device is not yet available for purchase. With advances in monitoring technology, several other manufacturers also anticipate introducing novel CGMs. (See “Continuous glucose monitors: The next generation.” )
SIDEBAR
Continuous glucose monitors: The next generation9-13
Expect new continuous glucose monitoring devices to be introduced to US and European health care markets in the near future.
FreeStyle Libre 3 (Abbott) was cleared by the US Food and Drug Administration in May 2022, although it is not yet available for purchase. The manufacturer promotes the device as having the smallest sensor of any continuous glucose monitor (the diameter and thickness of 2 stacked pennies); improved mean absolute relative difference; the ability to provide real-time glucose level readings; and 50% greater range of Bluetooth connectivity (about 10 extra feet).9,10
Dexcom G7 (Dexcom) has a sensor that is 60% smaller than the Dexcom G6 sensor and a 30-minute warm-up time, compared to 120 minutes for the G6.11 The device has received European Union CE mark approval.
Guardian 4 Sensor (Medtronic) does not require fingerstick calibration. The device has also received European Union CE mark approval12 but is available only for investigational use in the United States.
Eversense XL technology is similar to that of the Eversense E3, including a 180-day sensor.13 The device, which has received European Union CE mark approval, includes a removable smart transmitter.
CORRESPONDENCE
Kevin Schleich, PharmD, BCACP, Departments of Pharmaceutical Care and Family Medicine, University of Iowa, 200 Hawkins Drive, 01102-D PFP, Iowa City, IA, 52242; [email protected]
1. Rodríguez-Gutiérrez R, Montori VM. Glycemic control for patients with type 2 diabetes mellitus: our evolving faith in the face of evidence. Circ Cardiovasc Qual Outcomes. 2016;9:504-512. doi: 10.1161/CIRCOUTCOMES.116.002901
2. Draznin B, Aroda VR, Bakris G, et al. 7. Diabetes technology: standards of medical care in diabetes—2022. Diabetes Care. 2022;45(suppl 1):S97-S112. doi: 10.2337/dc22-S007
3. Olczuk D, Priefer R. A history of continuous glucose monitors (CGMs) in self-monitoring of diabetes mellitus. Diabetes Metab Syndr. 2018;12:181-187. doi: 10.1016/j.dsx.2017.09.005
4. Alva S, Bailey T, Brazg R, et al. Accuracy of a 14-day factory-calibrated continuous glucose monitoring system with advanced algorithm in pediatric and adult population with diabetes. J Diabetes Sci Technol. 2022;16:70-77. doi: 10.1177/1932296820958754
5. Zaharieva DP, Turksoy K, McGaugh SM, et al. Lag time remains with newer real-time continuous glucose monitoring technology during aerobic exercise in adults living with type 1 diabetes. Diabetes Technol Ther. 2019;21:313-321. doi: 10.1089/dia.2018.0364
6. American Diabetes Association. 2. Classification and diagnosis of diabetes: standards of medical care in diabetes—2021. Diabetes Care. 2021;44(suppl 1):S15-S33. doi: 10.2337/dc21-S002
7. FreeStyle Libre systems: The #1 CGM used in the US. Abbott. Updated May 2022. Accessed October 22, 2022. www.freestyleprovider.abbott/us-en/home.html
8. Rowland K. Choosing Wisely: 10 practices to stop—or adopt—to reduce overuse in health care. J Fam Pract. 2020;69:396-400.
9. Tucker ME. FDA clears Abbott Freestyle Libre 3 glucose sensor. MDedge. June 1, 2022. Accessed October 21, 2022. www.mdedge.com/endocrinology/article/255095/diabetes/fda-clears-abbott-freestyle-libre-3-glucose-sensor
10. Manage your diabetes with more confidence. Abbott. Updated May 2022. Accessed October 22, 2022. www.freestyle.abbott/us-en/home.html
11. Whooley S. Dexcom CEO Kevin Sayer says G7 will be ‘wonderful’. Drug Delivery Business News. July 19, 2021. Accessed October 21, 2022. www.drugdeliverybusiness.com/dexcom-ceo-kevin-sayer-says-g7-will-be-wonderful
12. Medtronic secures two CE mark approvals for Guardian 4 Sensor & for InPen MDI Smart Insulin Pen. Medtronic. Press release. May 26, 2021. Accessed October 22, 2022. https://news.medtronic.com/2021-05-26-Medtronic-Secures-Two-CE-Mark-Approvals-for-Guardian-4-Sensor-for-InPen-MDI-Smart-Insulin-Pen
13. Eversense—up to 180 days of freedom [XL CGM System]. Senseonics. Accessed September 14, 2022. https://global.eversensediabetes.com
14. FreeStyle Libre 2 User’s Manual. Abbott. Revised August 24, 2022. Accessed October 2, 2022. https://freestyleserver.com/Payloads/IFU/2022/q3/ART46983-001_rev-A.pdf
15. Dexcom G6 Continuous Glucose Monitoring System user guide. Dexcom. Revised March 2022. Accessed October 21, 2022. https://s3-us-west-2.amazonaws.com/dexcompdf/G6-CGM-Users-Guide.pdf
16. Guardian Connect System user guide. Medtronic. 2020. Accessed October 21, 2022. www.medtronicdiabetes.com/sites/default/files/library/download-library/user-guides/Guardian-Connect-System-User-Guide.pdf
17. Eversense E3 user guides. Senseonics. 2022. Accessed October 22, 2022. www.ascensiadiabetes.com/eversense/user-guides/
18. Battelino T, Conget I, Olsen B, et al; SWITCH Study Group. The use and efficacy of continuous glucose monitoring in type 1 diabetes treated with insulin pump therapy: a randomised controlled trial. Diabetologia. 2012;55:3155-3162. doi: 10.1007/s00125-012-2708-9
19. Weinzimer S, Miller K, Beck R, et al. Effectiveness of continuous glucose monitoring in a clinical care environment: evidence from the Juvenile Diabetes Research Foundation continuous glucose monitoring (JDRF-CGM) trial. Diabetes Care. 2010;33:17-22. doi: 10.2337/dc09-1502
20. Haak T, Hanaire H, Ajjan R, et al. Flash glucose-sensing technology as a replacement for blood glucose monitoring for the management of insulin-treated type 2 diabetes: a multicenter, open-label randomized controlled trial. Diabetes Ther. 2017;8:55-73. doi: 10.1007/s13300-016-0223-6
21. Yaron M, Roitman E, Aharon-Hananel G, et al. Effect of flash glucose monitoring technology on glycemic control and treatment satisfaction in patients with type 2 diabetes. Diabetes Care. 2019;42:1178-1184. doi: 10.2337/dc18-0166
22. Kröger J, Fasching P, Hanaire H. Three European retrospective real-world chart review studies to determine the effectiveness of flash glucose monitoring on HbA1c in adults with type 2 diabetes. Diabetes Ther. 2020;11:279-291. doi: 10.1007/s13300-019-00741-9
23. Wright EE, Jr, Kerr MSD, Reyes IJ, et al. Use of flash continuous glucose monitoring is associated with A1C reduction in people with type 2 diabetes treated with basal insulin or noninsulin therapy. Diabetes Spectr. 2021;34:184-189. doi: 10.2337/ds20-0069
24. Beck RW, Riddlesworth TD, Ruedy K, et al; DIAMOND Study Group. Continuous glucose monitoring versus usual care in patients with type 2 diabetes receiving multiple daily insulin injections: a randomized trial. Ann Intern Med. 2017;167:365-374. doi: 10.7326/M16-2855
25. Vigersky RA, Fonda SJ, Chellappa M, et al. Short- and long-term effects of real-time continuous glucose monitoring in patients with type 2 diabetes. Diabetes Care. 2012;35:32-38. doi: 10.2337/dc11-1438
26. Yoo HJ, An HG, Park SY, et al. Use of a real time continuous glucose monitoring system as a motivational device for poorly controlled type 2 diabetes. Diabetes Res Clin Pract. 2008;82:73-79. doi: 10.1016/j.diabres.2008.06.015
27. Martens T, Beck RW, Bailey R, et al; MOBILE Study Group. Effect of continuous glucose monitoring on glycemic control in patients with type 2 diabetes treated with basal insulin: a randomized clinical trial. JAMA. 2021;325:2262-2272. doi: 10.1001/jama.2021.7444
28. Battelino T, Danne T, Bergenstal RM, et al. Clinical targets for continuous glucose monitoring data interpretation: recommendations from the international consensus on time in range. Diabetes Care. 2019;42:1593-1603. doi: 10.2337/dci19-0028
29. Beck RW, Bergenstal RM, Riddlesworth TD, et al. Validation of time in range as an outcome measure for diabetes clinical trials. Diabetes Care. 2019;42:400-405. doi: 10.2337/dc18-1444
30. Freckmann G. Basics and use of continuous glucose monitoring (CGM) in diabetes therapy. Journal of Laboratory Medicine. 2020;44:71-79. doi: 10.1515/labmed-2019-0189
31. Danne T, Nimri R, Battelino T, et al. International consensus on use of continuous glucose monitoring. Diabetes Care. 2017;40:1631-1640. doi: 10.2337/dc17-1600
32. Glucose monitors. Centers for Medicare & Medicaid Services. April 22, 2022. Accessed October 22, 2022. www.cms.gov/medicare-coverage-database/view/lcd.aspx?lcdid=33822
33. Pyl J, Dendooven E, Van Eekelen I, et al. Prevalence and prevention of contact dermatitis caused by FreeStyle Libre: a monocentric experience. Diabetes Care. 2020;43:918-920. doi: 10.2337/dc19-1354
34. Smith J, Bleiker T, Narang I. Cutaneous reactions to glucose sensors: a sticky problem [Abstract 677]. Arch Dis Child. 2021;106 (suppl 1):A80.
35. MAUDE Adverse event report: Dexcom, Inc G6 Sensor. U.S. Food & Drug Administration. Updated September 30, 2022. Accessed October 21, 2022. www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfmaude/detail.cfm?mdrfoi__id=11064819&pc=MDS
36. New JP, Ajjan R, Pfeiffer AFH, et al. Continuous glucose monitoring in people with diabetes: the randomized controlled Glucose Level Awareness in Diabetes Study (GLADIS). Diabet Med. 2015;32:609-617. doi: 10.1111/dme.12713
PRACTICE RECOMMENDATIONS
› Initiate continuous glucose monitoring early in the disease process, based on a patient’s needs or preferences. C
› Interpret a continuous glucose monitor (CGM) report with the understanding that time within target range is the most important factor to evaluate. Minimizing or eliminating time below range is of paramount importance. B
› Advise patients who use a CGM to continue to have access to a glucometer and instruct them on appropriate times when such confirmation might be necessary. B
Strength of recommendation (SOR)
A Good-quality patient-oriented evidence
B Inconsistent or limited-quality patient-oriented evidence
C Consensus, usual practice, opinion, disease-oriented evidence, case series