Fewer groin infections with closed incision negative pressure therapy after vascular surgery
Closed incision negative pressure therapy (ciNPT) was found to reduce surgical site infections (SSI) in vascular surgery, according to the results of a prospective, randomized, industry-sponsored trial of patients who underwent vascular surgery for peripheral artery disease (PAD) published online in the European Journal of Vascular and Endovascular Surgery.
The investigator-initiated Reduction of Groin Wound Infections After Vascular Surgery by Using an Incision Management System trial (NCT02395159) included 204 patients who underwent vascular surgery involving longitudinal groin incision to treat the lower extremity or the iliac arteries between July 2015 and May 2017 at two study centers.
The primary endpoint was the occurrence of SSI assessed by the Szilagyi classification (grades I-III). The mean patient age was nearly 67 years and 70% were men. In terms of PAD staging, 52% were stage 2B, 28% were stage 3, and 19% were stage 4. Among the patients, 45% had a previous groin incision and 42% had diabetes.
All patients underwent similar preoperative treatment: hair shaving and skin preparation with Poly Alcohol (Antiseptica, Pulheim, Germany) and Braunoderm (Braun, Melsungen, Germany). Thirty minutes before incision, patients received intravenous antibiotic prophylaxis (1.5 g cefuroxime, or 600 mg clindamycin if allergic to penicillin). After closure, the incision and surrounding skin were cleaned and dried with sterile gauze. In the control group, a sterile adhesive wound dressing was applied to the wound and changed daily. In the treatment group, ciNPT was applied under sterile conditions in the operating room using the Prevena device, which exerts a continuous negative pressure of 125 mm Hg on the closed incision for the duration of application. The device was removed 5-7 days postoperatively, and no further wound dressings were used in the treatment group unless an SSI occurred.
The control group experienced more frequent SSIs (33.3%) than the intervention group (13.2%; P = .0015). This difference was driven by a higher rate of Szilagyi grade I SSI in the control group (24.6% vs. 8.1%; P = .0012), according to Alexander Gombert, MD, of the University Hospital Aachen (Germany), and his colleagues. The absolute risk difference based on the Szilagyi classification was –20.1 per 100 (95% confidence interval, –31.9 to –8.2).
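For readers who want to check the arithmetic, the absolute risk difference is simply the intervention-group event rate minus the control-group event rate; the confidence interval below is the one reported by the authors, not rederived:

\[
\text{ARD} = 13.2\% - 33.3\% = -20.1 \text{ per } 100 \quad (95\%\ \text{CI},\ -31.9\ \text{to}\ -8.2)
\]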
In addition, the SSI rate with ciNPT was significantly lower than that with standard dressing in the subgroups at greater risk of infection: PAD stage 3 or higher (P < .001), body mass index greater than 25 kg/m² (P < .001), and previous groin incision (P = .016).
There were no statistically significant differences between the two groups in Szilagyi grade II and III SSIs, which occurred after 5.8% of all procedures.
No potentially device-related complications were observed in the trial, and no device failures were seen.
“The use of ciNPT rather than standard wound dressing after groin incision as access for vascular surgery was associated with a reduced rate of superficial SSI classified by Szilagyi, suggesting that ciNPT may be useful for reducing the SSI rate among high-risk patients,” the researchers concluded.
The trial was funded by Acelity. Dr. Gombert received travel grants from Acelity.
SOURCE: Gombert A et al. Eur J Vasc Endovasc Surg. 2018 Jul 2. doi: 10.1016/j.ejvs.2018.05.018.
FROM THE EUROPEAN JOURNAL OF VASCULAR AND ENDOVASCULAR SURGERY
Key clinical point: Closed incision negative pressure therapy reduced the incidence of groin infection after vascular surgery.
Major finding: The control group experienced more frequent surgical site infections (33.3%) than the intervention group (13.2%; P = .0015).
Study details: A randomized, controlled trial of 204 patients with peripheral artery disease who underwent vascular surgery.
Disclosures: The trial was funded by Acelity. Dr. Gombert received travel grants from Acelity.
Source: Gombert A et al. Eur J Vasc Endovasc Surg. 2018 Jul 2. doi: 10.1016/j.ejvs.2018.05.018.
Nivolumab plus ipilimumab boosts response rate in refractory esophagogastric cancer
Nivolumab alone or in combination with ipilimumab met multiple endpoints against metastatic or locally advanced chemotherapy-refractory esophagogastric cancer in the recent phase 1/2 CheckMate-032 trial, paving the way for a phase 3 trial.
The agents demonstrated “clinically meaningful antitumor activity,” reported Yelena Y. Janjigian, MD, of Memorial Sloan Kettering Cancer Center, New York, and her coauthors.
After the 2017 ATTRACTION-2 trial demonstrated improved survival rates, “nivolumab was approved in Japan for the treatment of patients with chemotherapy-refractory gastric and gastroesophageal junction [GEJ] cancers regardless of programmed death-ligand 1 [PD-L1] status,” the authors wrote in the Journal of Clinical Oncology.
Nivolumab is a checkpoint inhibitor, like pembrolizumab, which “was approved for the treatment of patients with chemotherapy-refractory PD-L1–positive gastric/GEJ cancer on the basis of the promising clinical activity observed in the KEYNOTE-059 trial,” the authors noted. Testing nivolumab in a Western population would therefore build on these previous trials. Combining nivolumab, a PD-1 inhibitor, with ipilimumab, a monoclonal antibody targeting cytotoxic T-lymphocyte antigen 4, was based on “synergistic activity” reported in preclinical models, the authors wrote.
The analysis from the ongoing CheckMate-032 trial included 160 patients with metastatic or locally advanced chemotherapy-refractory esophageal, gastric, or gastroesophageal junction cancer treated at centers in Europe and the United States. Just under 80% of patients had received two or more prior therapies.
In the present trial, patients were given one of three treatment regimens: nivolumab 3 mg/kg every 2 weeks, nivolumab 1 mg/kg plus ipilimumab 3 mg/kg every 3 weeks for four cycles (NIVO1 + IPI3), or nivolumab 3 mg/kg plus ipilimumab 1 mg/kg every 3 weeks for four cycles (NIVO3 + IPI1). The primary endpoint was objective response rate (ORR). Secondary endpoints included 12-month progression-free survival and 12-month overall survival (OS).
Patients in the NIVO1 + IPI3 group achieved the best ORR (24%) and 12-month progression-free survival (17%) and also showed a promising 12-month OS (35%), second only to nivolumab monotherapy (39%). PD-L1 status was not predictive of treatment response.
Although NIVO1 + IPI3 was the most clinically effective regimen, almost half (47%) of these patients had grade 3 or higher adverse events, compared with 17% and 27% for nivolumab monotherapy and NIVO3 + IPI1, respectively.
Still, the authors concluded, “on the basis of the numerically higher overall response and landmark OS rates in the NIVO1 + IPI3 arm, this combination was considered more likely to offer clinical benefit relative to currently available treatment regimens for first-line metastatic esophagogastric cancer and was selected for further evaluation in the phase 3 CheckMate-649 study (NCT02872116).” That trial and another investigating nivolumab in the adjuvant setting (NCT02743494) are ongoing.
CheckMate-032 was supported by Bristol-Myers Squibb. The authors also reported funding from Merck, Incyte, Gilead Sciences, and others.
SOURCE: Janjigian YY et al. J Clin Oncol. 2018 Aug 15. doi: 10.1200/JCO.2017.76.6212.
FROM THE JOURNAL OF CLINICAL ONCOLOGY
Key clinical point: Both nivolumab and nivolumab plus ipilimumab were effective in patients with chemotherapy-refractory esophagogastric cancer.
Major finding: Treatment with nivolumab plus ipilimumab was associated with an objective response rate of 24%.
Study details: CheckMate-032 is an ongoing phase 1/2 trial involving 160 patients with metastatic or locally advanced chemotherapy-refractory esophageal, gastric, or gastroesophageal junction cancer from centers in Europe and the United States.
Disclosures: The study was supported by Bristol-Myers Squibb. The authors also reported funding from Merck, Incyte, Gilead Sciences, and others.
Source: Janjigian YY et al. J Clin Oncol. 2018 Aug 15. doi: 10.1200/JCO.2017.76.6212.
Replacing warfarin with a NOAC in patients on chronic anticoagulation therapy
Hospitalists must consider clinical factors and patient preferences
Case
A 70-year-old woman with hypertension, diabetes, prior ischemic stroke, moderate renal insufficiency (creatinine clearance [CrCl] 45 mL/min), heart failure, and nonvalvular atrial fibrillation (AF) on warfarin is admitted because of a markedly supratherapeutic INR. She reports labile INR values despite strict adherence to her medication regimen. Her cancer screening tests had previously been unremarkable. She inquires about the risks and benefits of switching to a novel oral anticoagulant (NOAC), as advertised on television. Should you consider the switch while she is still in the hospital?
Brief overview of the issue
Lifelong anticoagulation therapy is common among patients with AF or recurrent venous thromboembolism (VTE). Until the advent of NOACs, a great majority of patients were prescribed warfarin, the oral vitamin K antagonist that requires regular blood tests for monitoring of the INR. In contrast to warfarin, NOACs are direct-acting agents (hence also known as “direct oral anticoagulants” or DOACs) that are selective for one specific coagulation factor, either thrombin (e.g., dabigatran) or factor Xa (e.g., rivaroxaban, apixaban, and edoxaban, all with an “X” in their names).
NOACs have been studied and approved by the Food and Drug Administration for nonvalvular AF, i.e., in patients without rheumatic mitral stenosis, a mechanical or bioprosthetic heart valve, or prior mitral valve repair. Compared with warfarin, NOACs have fewer drug or food interactions, have more predictable pharmacokinetics, and may be associated with a reduced risk of major bleeding, depending on the agent. The latter is a particularly attractive feature of NOAC therapy, especially when its use is considered among older patients at risk of intracranial hemorrhage (ICH), such as those with previous strokes, ICH, or reduced renal function. Unfortunately, data on the efficacy and safety of NOACs in certain patient populations (e.g., those with severe renal insufficiency, active malignancy, the elderly, and patients with suboptimal medication adherence) are generally lacking.
Overview of the data
There are no randomized controlled trials (RCTs) addressing the clinical benefits of switching from warfarin to NOAC therapy. However, based on a number of RCTs comparing warfarin to individual NOACs and their related meta-analyses, the following conclusions may be made about their attributes:
1. Noninferiority to warfarin in reducing the risk of ischemic stroke in AF.
2. Association with a lower rate of major bleeds (statistically significant or trend) and a lower rate of ICH and hemorrhagic strokes compared to warfarin.
3. Association with a higher rate of gastrointestinal bleeding compared to warfarin (except for apixaban, low-dose dabigatran, and edoxaban1).
4. Association with a decreased rate of all stroke and thromboembolism events compared to warfarin.
5. Association with a slightly decreased all-cause mortality in AF compared to warfarin in many studies,2-8 but not all.1,9
6. Noninferiority to warfarin in all-cause mortality in patients with VTE and for its secondary prevention.1,4
NOACs should be used with caution or avoided altogether in patients with severe liver disease or renal insufficiency (see Table 1).
Potential advantages and disadvantages of NOAC therapy are listed in Table 2.
It should be emphasized that in patients with cancer or hypercoagulable state, no clear efficacy or safety data are currently available for the use of NOACs.
The 2016 CHEST guideline on antithrombotic therapy for VTE recommends NOACs over warfarin.10 The 2012 European Society of Cardiology AF guidelines also recommend NOACs over warfarin.11 However, the 2014 American College of Cardiology/American Heart Association/Heart Rhythm Society guidelines on AF state that it is not necessary to change to a NOAC when patients are “stable, easily controlled, and satisfied with warfarin therapy.”12
Data from a relatively small, short-term study examining the safety of switching patients from warfarin to a NOAC suggest that although bleeding events are relatively common (12%) following such a switch, major bleeding and cardiac or cerebrovascular events are rare.10
Application of the data to our original case
Given her high calculated CHA2DS2-VASc score of 8, our patient has a clear indication for anticoagulation for AF. Her history of labile INRs, ischemic stroke, and moderate renal insufficiency places her at high risk for ICH.
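As a reference for how that score is tallied, below is a minimal sketch of the standard CHA2DS2-VASc rubric; the function name and signature are illustrative rather than taken from the article, and the worked call uses a hypothetical patient, not the case patient:

```python
def cha2ds2_vasc(age, female, chf, htn, diabetes, stroke_tia, vascular_disease):
    """Standard CHA2DS2-VASc tally for stroke risk in AF (maximum score: 9)."""
    score = 0
    score += 1 if chf else 0                # C: congestive heart failure
    score += 1 if htn else 0                # H: hypertension
    if age >= 75:                           # A2: age >= 75 years (2 points)
        score += 2
    elif age >= 65:                         # A: age 65-74 years (1 point)
        score += 1
    score += 1 if diabetes else 0           # D: diabetes mellitus
    score += 2 if stroke_tia else 0         # S2: prior stroke/TIA/thromboembolism
    score += 1 if vascular_disease else 0   # V: vascular disease (e.g., prior MI, PAD)
    score += 1 if female else 0             # Sc: sex category (female)
    return score

# Hypothetical example: a 78-year-old woman with hypertension and diabetes
# scores 2 (age) + 1 (HTN) + 1 (DM) + 1 (sex) = 5.
print(cha2ds2_vasc(age=78, female=True, chf=False, htn=True,
                   diabetes=True, stroke_tia=False, vascular_disease=False))
```

Clinical scoring should, of course, follow the guideline definitions of each component rather than this sketch.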
A NOAC may reduce this risk, but possibly at the expense of an increased risk of gastrointestinal bleeding. More importantly, she may be a good candidate for a switch to a NOAC because of her labile INRs despite good medication adherence. Her warfarin can be held during the hospitalization, and a NOAC can be initiated once the INR falls below 2.
Prior to discharge, potential cost of the drug to the patient should be explored and discussed. It is also important to involve the primary care physician in the decision-making process. Ultimately, selection of an appropriate NOAC should be based on a careful review of its risks and benefits, clinical factors, patient preference, and shared decision making.
Bottom line
Hospitalists are in a great position to discuss a switch to a NOAC in selected patients with history of good medication adherence and labile INRs or ICH risk factors.
Dr. Geisler, Dr. Liao, and Dr. Manian are hospitalists at Massachusetts General Hospital in Boston.
References
1. Sharma M et al. Efficacy and harms of direct oral anticoagulants in the elderly for stroke prevention in atrial fibrillation and secondary prevention of venous thromboembolism: Systematic review and meta-analysis. Circulation. 2015;132(3):194-204.
2. Ruff CT et al. Comparison of the efficacy and safety of new oral anticoagulants with warfarin in patients with atrial fibrillation: A meta-analysis of randomised trials. Lancet. 2014;383(9921):955-62.
3. Dentali F et al. Efficacy and safety of the novel oral anticoagulants in atrial fibrillation: A systematic review and meta-analysis of the literature. Circulation. 2012;126(20):2381-91.
4. Adam SS et al. Comparative effectiveness of warfarin and new oral anticoagulants for the management of atrial fibrillation and venous thromboembolism: A systematic review. Ann Intern Med. 2012;157(11):796-807.
5. Bruins Slot KM and Berge E. Factor Xa inhibitors versus vitamin K antagonists for preventing cerebral or systemic embolism in patients with atrial fibrillation. Cochrane Database Syst Rev. 2013(8):CD008980.
6. Gomez-Outes A et al. Dabigatran, rivaroxaban, or apixaban versus warfarin in patients with nonvalvular atrial fibrillation: A systematic review and meta-analysis of subgroups. Thrombosis. 2013;2013:640723.
7. Miller CS et al. Meta-analysis of efficacy and safety of new oral anticoagulants (dabigatran, rivaroxaban, apixaban) versus warfarin in patients with atrial fibrillation. Am J Cardiol. 2012;110(3):453-60.
8. Baker WL and Phung OJ. Systematic review and adjusted indirect comparison meta-analysis of oral anticoagulants in atrial fibrillation. Circ Cardiovasc Qual Outcomes. 2012;5(5):711-19.
9. Ntaios G et al. Nonvitamin-K-antagonist oral anticoagulants in patients with atrial fibrillation and previous stroke or transient ischemic attack: A systematic review and meta-analysis of randomized controlled trials. Stroke. 2012;43(12):3298-304.
10. Kearon C et al. Antithrombotic therapy for VTE disease: CHEST guideline and expert panel report. Chest. 2016;149(2):315-52.
11. Camm AJ et al. 2012 focused update of the ESC guidelines for the management of atrial fibrillation: an update of the 2010 ESC guidelines for the management of atrial fibrillation – developed with the special contribution of the European Heart Rhythm Association. Europace. 2012;14(10):1385-413.
12. January CT et al. 2014 AHA/ACC/HRS guideline for the management of patients with atrial fibrillation: A report of the American College of Cardiology/American Heart Association task force on practice guidelines and the Heart Rhythm Society. Circulation. 2014;130(23):e199-267.
Quiz
When considering a switch from warfarin to a NOAC, all of the following are potential advantages except:
A. No need for routine lab monitoring.
B. Lower risk of gastrointestinal bleeding.
C. Fewer drug interactions.
D. Lower rates of intracranial bleed and hemorrhagic stroke.
The correct answer is B. NOACs have been associated with lower risks of intracranial bleeding and hemorrhagic stroke, but not of gastrointestinal bleeding. Routine lab monitoring is not necessary during their use, and they are associated with fewer drug interactions than warfarin.
Key Points
- NOACs represent a clear advancement in our anticoagulation armamentarium.
- Potential advantages of their use include lower rates of intracranial bleeding and hemorrhagic stroke, fewer drug or food interactions, and no need for routine lab monitoring.
- Potential disadvantages of their use include increased rates of gastrointestinal bleed with some agents, general lack of availability of reversal agents, higher drug cost, unsuitability in patients with poor medication compliance, and lack of efficacy data in certain patient populations.
- The decision to switch from warfarin to a NOAC should thoroughly weigh its pros and cons, clinical factors, and patient preferences.
Childhood change of residence raises psychoses risk in young adults
Children and adolescents who moved longer distances or more frequently before 16 years of age were significantly more likely to develop psychosis in early adulthood than were those with less residential mobility, according to data from about 1.4 million children and adolescents in Sweden.
Data from previous studies have supported a link between childhood residential mobility and subsequent nonaffective psychoses, but no research has addressed the effects in later adolescence and young adulthood until now, wrote Ceri Price of Cardiff (Wales) University and colleagues.
In a study published in JAMA Psychiatry, the researchers reviewed data from a population-based cohort of individuals who were born in Sweden between Jan. 1, 1982, and Dec. 31, 1995, and lived in Sweden at age 16 years. The participants were followed from their 16th birthdays until a diagnosis of a nonaffective psychotic disorder, death, censoring because of emigration, or Dec. 31, 2011 – whichever came first.
Overall, the most sensitive range for an association between moving and psychosis was ages 16-19 years; the adjusted hazard ratio for a nonaffective psychotic disorder was 1.99 for participants who moved each year between ages 16 and 19 years, compared with those who never moved. In addition, moving greater distances before 16 years of age was independently associated with an increased risk of nonaffective psychosis (HR, 1.11) and the data suggested a nonlinear threshold effect when the distance moved exceeded 30 km.
A total of 4,537 individuals were diagnosed with a nonaffective psychotic disorder, at a median age of 21 years, and a dose-response relationship emerged between more frequent moves and an increased risk of nonaffective psychosis after controlling for confounding variables.
By contrast, a single move in young adulthood was not associated with increased psychosis risk, but moving at least four times during young adulthood was associated with an increased risk (adjusted HR, 1.82).
The study findings were strengthened by the longitudinal design and large population, but they were limited by several factors, including the absence of data on other adverse childhood experiences (such as family discord), on peer relationships (such as friendships and bullying), and on school changes and the resulting disruption of peer networks, the researchers wrote.
However, the results support the theory that psychosis risk can be affected by the disruption of social networks, peer support, and identity formation that occurs when children and adolescents move, and these results have potential implications for child health services and social policy, they noted.
“It is important that health, social, and educational practitioners ensure that children and adolescents who are newly resident to their neighborhoods receive adequate support to minimize the risks of adverse outcomes during adulthood, and every effort should be made to ensure the effective transfer of care for highly mobile children who are already in contact with health and social services,” they said.
The researchers had no financial conflicts to disclose. The study was supported in part by the Wellcome Trust and the Royal Society.
SOURCE: Price C et al. JAMA Psychiatry. 2018 Aug 22. doi: 10.1001/jamapsychiatry.2018.2233.
FROM JAMA PSYCHIATRY
Key clinical point: Clinicians and teachers should ensure that children and adolescents who are new to communities receive support “to minimize the risks of adverse outcomes during adulthood.”
Major finding: Those who moved residentially each year between 16 and 19 years of age were significantly more likely to develop nonaffective psychoses, compared with those who never moved (hazard ratio, 1.99).
Study details: The data come from a prospective cohort study of 1,440,383 youth living in Sweden.
Disclosures: The researchers had no financial conflicts to disclose. The study was supported in part by the Wellcome Trust and the Royal Society.
Source: Price C et al. JAMA Psychiatry. 2018 Aug 22. doi: 10.1001/jamapsychiatry.2018.2233.
Mailing out fecal tests may improve CRC screening rates
Mailing fecal immunochemical tests (FITs) to overdue patients improved the rate of colorectal cancer screening in community health centers, results of a recent randomized trial show.
Outreach by mail led to a 3.4–percentage point increase in completion of FIT, compared with clinics that did not participate in the intervention, according to results of the randomized Strategies and Opportunities to STOP Colon Cancer in Priority Populations (STOP CRC) trial.
Although that difference was statistically significant, investigators said the improvement was less than expected based on previous experience, including a pilot study showing that the strategy of mailing fecal tests boosted completion rates by 38%.
Based on that discrepancy, additional strategies may be needed to support implementation of FIT mailing programs in low-resource health centers, reported Gloria D. Coronado, PhD, of the Kaiser Permanente Center for Health Research, Portland, Ore., and coinvestigators.
“This work demonstrates that mailed FIT outreach programs can have clinical impact when integrated into clinical work flows, but emphasizes the need to identify additional strategies to support program implementation in low-resource health centers,” Dr. Coronado and coauthors said in JAMA Internal Medicine.
The STOP CRC study included 26 federally qualified health center clinics serving low-income populations in Oregon and California. Investigators identified a total of 41,193 adults overdue for colorectal cancer screening between Feb. 4, 2014, and Feb. 3, 2015.
The core of the intervention was a set of electronic health record–embedded tools that identified adults due for screening and allowed staff to generate letters and mailing labels for a series of three mailings. The first mailing was an introductory letter, the second was a FIT kit packet that included wordless instructions, and the third was a reminder letter.
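As a rough sketch of that work flow (the patient records and annual-screening cutoff here are hypothetical illustrations, not the trial's actual EHR tooling):

from datetime import date, timedelta

patients = [
    {"id": 1, "last_fit": date(2012, 5, 1)},   # overdue
    {"id": 2, "last_fit": date(2014, 1, 10)},  # up to date
    {"id": 3, "last_fit": None},               # never screened
]

def overdue(patient, today=date(2014, 7, 1)):
    # Treat anyone never screened, or screened more than a year ago,
    # as due for FIT (illustrative annual cutoff).
    last = patient["last_fit"]
    return last is None or today - last > timedelta(days=365)

MAILINGS = ["introductory letter", "FIT kit with wordless instructions", "reminder letter"]

for patient in filter(overdue, patients):
    for step, mailing in enumerate(MAILINGS, start=1):
        print(f"patient {patient['id']}: mailing {step} – {mailing}")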
For clinics that participated in the intervention, the rate of FIT completion was 13.9%, versus 10.4% for usual care clinics, a difference that was statistically significant (95% confidence interval, 0.1%-6.8%; P = .047), according to investigators. Likewise, the proportion of participants completing any CRC screening was significantly higher in the intervention clinics (18.3% versus 14.5%; a 3.8–percentage point difference; 95% CI, 0.6%-7.0%; P = .024).
Somewhat larger effects were seen in an analysis that accounted for delays in implementation of the program. In that analysis, FIT completion rates were 17.6% for the intervention clinics and 12.8% for the usual care clinics (95% CI, 0.9%-8.6%; P = .020), with similar increases seen in the proportion of patients receiving any CRC screening.
These increases in screening occurred despite “relatively low” implementation of the program, Dr. Coronado and colleagues said.
In the pilot study, a concerted effort was made to ensure all eligible adults got the intervention; in this study, 6,925 out of 21,134 intervention participants (33%) got an introductory letter, and of those, 91% received the FIT and 59% got the reminder letter.
Implementation varied widely by health center, ranging from 6.5% to 68.2%, investigators said in their report.
One reason for low implementation may be that the program competed with other priorities in the clinics. In interviews, health center leaders said challenges in the clinic included time burden, limited organizational capacity, and challenges with the EHR and associated reporting tools.
“For most participating health centers, STOP CRC represented the first time EHR tools were used to deliver cancer screening services outside the clinic,” Dr. Coronado said. “Implementation might have increased with experience.”
The research reported by Dr. Coronado and coinvestigators was supported by the National Institutes of Health. Dr. Coronado reported serving as a coinvestigator on a study of an experimental blood test for colorectal cancer funded by EpiGenomics and as principal investigator on a study of an experimental FIT funded by Quidel Corporation. No other disclosures were reported.
SOURCE: Coronado GD et al. JAMA Intern Med. 2018 Aug 6. doi: 10.1001/jamainternmed.2018.3629.
FROM JAMA INTERNAL MEDICINE
Key clinical point: Mailing fecal immunochemical tests (FITs) to overdue patients improved the rate of colorectal cancer screening, though not to the extent that had been seen in a pilot study.
Major finding: Outreach by mail led to a 3.4–percentage point increase in FIT completion for participating clinics versus clinics that implemented usual care.
Study details: STOP CRC, a cluster-randomized pragmatic clinical trial including 26 federally qualified health center clinics and a total of 41,193 adults overdue for colorectal cancer screening.
Disclosures: The research was supported by the National Institutes of Health. One investigator reported disclosures related to EpiGenomics and Quidel Corporation.
Source: Coronado GD et al. JAMA Intern Med. 2018 Aug 6. doi: 10.1001/jamainternmed.2018.3629.
High autonomic dysfunction distinguishes persistent posttraumatic headache
SAN FRANCISCO – Symptoms of autonomic dysfunction are significantly greater in patients with persistent posttraumatic headache than in migraine, raising the welcome possibility that this characteristic might serve to reliably differentiate the two disorders, Levi Howard, MD, said at the annual meeting of the American Headache Society.
“Interestingly enough, in looking at other studies evaluating dysautonomia, the [autonomic dysfunction scores] in our persistent posttraumatic headache group were on a par with scores previously reported for patients with diseases such as small-fiber polyneuropathy and postural orthostatic tachycardia syndrome,” observed Dr. Howard, an active duty military physician assigned to obtain neurology training at the Mayo Clinic Arizona in Phoenix.
“This brings up two questions: Do autonomic symptoms contribute to accurate classification of persistent posttraumatic headache versus migraine? And if we treat this autonomic dysfunction, does the headache also improve? In our clinical observation, this appears to be the case,” he continued.
Posttraumatic headache (PTH) is currently defined solely on the basis of its temporal relationship to head injury; it has no defining clinical characteristics, although most often it has a phenotype that meets diagnostic criteria for migraine.
However, Dr. Howard and his coinvestigators have observed anecdotally in clinical practice that persistent PTH – defined as PTH lasting longer than 3 months – is often accompanied by orthostatic intolerance. This observation, coupled with reports in multiple prior studies that dysautonomia is common in patients with mild traumatic brain injury (TBI) and postconcussion syndrome, prompted Dr. Howard and his coworkers to conduct a cross-sectional cohort study. It included 56 patients with persistent PTH due to mild TBI, 30 patients with migraine, and 36 healthy controls. Most of the persistent PTH group were military veterans with mild TBI due to blast injuries.
All subjects were assessed for autonomic dysfunction using the well-validated COMPASS-31 questionnaire. This instrument assesses six domains of autonomic function: orthostatic intolerance, bladder, gastrointestinal, vasomotor, secretomotor, and pupillomotor.
Scores in each of the six domains were numerically higher in the persistent PTH group, with the differences achieving statistical significance in the orthostatic intolerance and bladder domains.
Of note, the migraine group had a greater headache burden, with a mean 23-year headache history, compared with 10.56 years in the persistent PTH group. The migraine patients also averaged 21.1 headache days per month, versus 16.2 in the persistent PTH group. Yet the investigators found no strong association between autonomic dysfunction and headache burden as reflected in headache duration or headache days per month.
The study was funded by the Department of Defense. Dr. Howard reported having no financial conflicts of interest.
SOURCE: Howard L et al. AHS 2018, Abstract FHM03.
REPORTING FROM THE AHS ANNUAL MEETING
Key clinical point: New evidence that patients with persistent posttraumatic headache have high levels of autonomic dysfunction could open the door to novel treatments.
Major finding: Scores on the COMPASS-31 questionnaire, a measure of autonomic dysfunction, averaged 37.22 in patients with persistent posttraumatic headache, indicative of significantly greater impairment than the 27.15 in migraine patients and 11.67 in healthy controls.
Study details: This cross-sectional cohort study included 56 patients with persistent posttraumatic headache, 30 with migraine, and 36 healthy controls.
Disclosures: The study was sponsored by the Department of Defense and presented by an active duty military physician.
Source: Howard L et al. AHS 2018, Abstract FHM03.
Earnings gap seen among Maryland physicians
Male physicians in Maryland reported higher earnings than did female physicians, even when they all worked 41 or more hours a week, according to a 2018 survey of physicians in the state.
The average pretax income for all 508 respondents was $299,000 in 2016: Male physicians (66.6% of the sample) averaged $335,000, while female physicians averaged $224,000 – 33% lower, MedChi (the Maryland State Medical Society) and Merritt Hawkins reported on July 31. Men did report working a longer week: Their average of 50.5 hours was 11% more than the 45.4-hour average for women.
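The reported percentages follow directly from those averages; a quick arithmetic check using only the figures above:

men_income, women_income = 335_000, 224_000
print(f"income gap: {(men_income - women_income) / men_income:.1%} lower for women")  # 33.1%

men_hours, women_hours = 50.5, 45.4
print(f"hours gap: {(men_hours - women_hours) / women_hours:.1%} more for men")  # 11.2%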
“The biggest disparities we see in compensation are between male and female physicians in Maryland,” Gene Ransom, MedChi’s chief executive officer, said in a written statement. “Though such disparities have been noted in other research, it is still surprising to see the extent to which they persist.”
Of the respondents who worked an average of 41 or more hours per week – an analysis conducted only for the three largest specialties in the survey – female internists earned 27% less than their male counterparts, female psychiatrists earned 24% less, and female family physicians earned 26% less, the survey results showed.
Earnings were structured somewhat differently for Maryland’s male and female physicians. Women were more likely than men to be compensated in the form of a straight salary (35.0% vs. 30.3%), and men were more likely to be paid based on production (22.7% vs. 16.9%) or in the form of an income guarantee (0.9% vs. 0.0%). Proportions receiving a salary with a production bonus were 42.7% for men and 42.5% for women, according to the survey.
The survey was commissioned by MedChi and conducted by Merritt Hawkins from Jan. 10 to Feb. 23, 2018. The margin of error was plus or minus 4.4%.
New MS subtype shows absence of cerebral white matter demyelination
A new subtype of multiple sclerosis called myelocortical multiple sclerosis is characterized by demyelination only in the spinal cord and cerebral cortex and not in the cerebral white matter.
A paper published online Aug. 21 in Lancet Neurology presents the results of a study of the brains and spinal cords of 100 patients who died of multiple sclerosis.
Bruce D. Trapp, PhD, of the Lerner Research Institute at the Cleveland Clinic in Ohio, and his coauthors wrote that while the demyelination of cerebral white matter is a pathologic hallmark of multiple sclerosis, previous research has found that only around half of cerebral T2-weighted hyperintense white-matter lesions are demyelinated, and these lesions account for less than a third of the variance in the rate of brain atrophy.
“In the absence of specific MRI metrics for demyelination, the relationship between cerebral white-matter demyelination and neurodegeneration remains speculative,” they wrote.
In this study, researchers scanned the brains with MRI before autopsy, then took centimeter-thick hemispheric slices to study the white-matter lesions. They identified 12 individuals as having what they describe as “myelocortical multiple sclerosis,” characterized by the absence of areas of cerebral white-matter discoloration indicative of demyelinated lesions.
The authors then compared these individuals with 12 individuals with typical multiple sclerosis matched by age, sex, MRI protocol, multiple sclerosis disease subtype, disease duration, and Expanded Disability Status Scale score.
They found that while individuals with myelocortical multiple sclerosis had no demyelinated lesions in the cerebral white matter, the extent of demyelinated lesions in their cerebral cortex was similar to that of individuals with typical multiple sclerosis (median 4.45% vs. 9.74%, respectively; P = .5512).
However, the individuals with myelocortical multiple sclerosis had a significantly smaller area of spinal cord demyelination (median 3.81% vs. 13.81%, P = .0083).
Individuals with myelocortical multiple sclerosis also had significantly lower mean cortical neuronal densities in layers III, V, and VI, compared with healthy control brains, whereas individuals with typical multiple sclerosis had lower cortical neuronal density only in layer V.
Researchers also saw that in typical multiple sclerosis, neuronal density decreased as the area of brain white-matter demyelination increased. However, this negative linear correlation was not seen in myelocortical multiple sclerosis.
On MRI, researchers were still able to see abnormalities in the cerebral white matter of individuals with myelocortical multiple sclerosis on T2-weighted, T1-weighted, and magnetization transfer ratio (MTR) images.
They also found similar total T2-weighted and T1-weighted lesion volumes in individuals with myelocortical and with typical multiple sclerosis, although individuals with typical multiple sclerosis had significantly greater MTR lesion volumes.
“We propose that myelocortical multiple sclerosis is characterized by spinal cord demyelination, subpial cortical demyelination, and an absence of cerebral white-matter demyelination,” the authors wrote. “Our findings indicate that abnormal cerebral white-matter T2-T1-MTR regions of interest are not always demyelinated, and this pathological evidence suggests that cerebral white-matter demyelination and cortical neuronal degeneration can be independent events in myelocortical multiple sclerosis.”
The authors noted that their study may have been affected by selection bias, as all the patients in the study had died from complications of advanced multiple sclerosis. They suggested that it was therefore not appropriate to conclude that the prevalence of myelocortical multiple sclerosis seen in their sample would be similar across the multiple sclerosis population, nor were the findings likely to apply to people with earlier stage disease.
The study was funded by the U.S. National Institutes of Health and National Multiple Sclerosis Society. One author was an employee of Renovo Neural, and three authors were employees of Biogen. One author declared a pending patent related to automated lesion segmentation from MRI images, and four authors declared funding, fees, and non-financial support from pharmaceutical companies.
SOURCE: Trapp B et al. Lancet Neurol. 2018 Aug 21. doi: 10.1016/S1474-4422(18)30245-X.
FROM LANCET NEUROLOGY
Key clinical point: Researchers have identified a new subtype of multiple sclerosis.
Major finding: Individuals with myelocortical multiple sclerosis show demyelination in the spinal cord and cortex only.
Study details: Post-mortem study of brains and spinal cords of 100 individuals with multiple sclerosis.
Disclosures: The study was funded by the U.S. National Institutes of Health and National Multiple Sclerosis Society. One author was an employee of Renovo Neural, and three authors were employees of Biogen. One author declared a pending patent related to automated lesion segmentation from MRI images, and four authors declared funding, fees, and non-financial support from pharmaceutical companies.
Source: Trapp B et al. Lancet Neurol. 2018 Aug 21. doi: 10.1016/S1474-4422(18)30245-X.
Model finds spontaneous HCV clearance higher than previous estimates
Up to 40% of hepatitis C virus (HCV)–infected individuals clear their infection spontaneously, based on the results of a new mathematical model of HCV transmission and clearance, according to a report published online in the International Journal of Infectious Diseases.
Houssein H. Ayoub, PhD, of Cornell University, New York, and his colleagues conducted a study on HCV clearance. Previous estimates using empirical data indicated that the HCV clearance rate was about 25% after an acute infection duration of 16.5 weeks, according to Dr. Ayoub and his colleagues.
They developed a model to describe HCV transmission and a virus clearance rate (fclearance), defined as the proportion of HCV-infected persons who spontaneously clear their infection after the acute stage; the rest of the infected population (1 – fclearance) becomes chronically infected and positive for both HCV antibodies and HCV RNA. The clearance rate was estimated by fitting the model to probability-based, nationally representative, population-based data from Egypt (2008 and 2015) and the U.S. National Health and Nutrition Examination Surveys (NHANES A and NHANES B). Their model showed that fclearance was related to the HCV viremic rate approximately as fclearance = 1.16 x (1 – HCV viremic rate). The HCV viremic rate was defined as the proportion of individuals positive for both HCV antibodies and HCV RNA out of all who were positive for HCV antibodies, regardless of RNA status, as measured in a cross-sectional survey.
Antibody prevalence in Egypt was estimated at 14.7% in 2008 and 10.0% in 2015, while the viremic rate was assessed as 67.1% and 70.2%, respectively. For the United States, the pooled antibody prevalence from the NHANES A data between 1999 and 2012 was an estimated 1.4% and the pooled viremic rate was estimated at around 74%. The NHANES B analysis included in the denominator of the HCV viremic rate both individuals confirmed as HCV antibody positive and those with an undetermined HCV antibody status. (NHANES laboratory procedures can provide this added information because undetermined HCV antibody results are subsequently tested for HCV RNA positivity.) This change to the denominator yielded a viremic rate of 64.6%.
They found that fclearance was estimated at 39.9% and 33.5% for Egypt in 2008 and 2015, respectively, and at 29.6% and 49.9% for NHANES A and NHANES B, respectively.
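As a quick check of that approximation (the estimates above come from the full transmission-model fit, so they depart somewhat from this shortcut, most visibly for NHANES B):

viremic_rates = {
    "Egypt 2008": 0.671,  # reported fclearance estimate: 39.9%
    "Egypt 2015": 0.702,  # 33.5%
    "NHANES A": 0.74,     # 29.6%
    "NHANES B": 0.646,    # 49.9%
}

for population, rate in viremic_rates.items():
    approx = 1.16 * (1 - rate)  # fclearance = 1.16 x (1 - HCV viremic rate)
    print(f"{population}: approximate fclearance = {approx:.1%}")
# Egypt 2008 = 38.2%, Egypt 2015 = 34.6%, NHANES A = 30.2%, NHANES B = 41.1%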
“Empirical measures from longitudinal cohort studies may have underestimated the ability of the host immune system to clear HCV infection. This finding may have also implications for our understanding of the biological determinants of HCV spontaneous clearance. It may hint that a strategy for HCV vaccine development could be a vaccine that does not necessarily prevent infection, but modulates immune response towards conditions that increase the capacity of the host immune system to clear HCV infection spontaneously,” the researchers concluded.
The study was funded by the Qatar National Research Fund and Cornell University. The authors reported no conflicts of interest.
SOURCE: Ayoub HH et al. Int J Infect Dis. 2018 Jul 18. doi: 10.1016/j.ijid.2018.07.013.
FROM THE INTERNATIONAL JOURNAL OF INFECTIOUS DISEASES
Key clinical point: The hepatitis C virus clearance rate was 39.9% and 33.5% for Egypt in 2008 and 2015, respectively, and 29.6% and 49.9% for two U.S. populations.
Major finding: A new model of HCV-infected persons indicates that up to 40% clear their infection spontaneously, higher than earlier estimates.
Study details: A mathematical model was developed to describe HCV transmission and clearance based on nationally representative population data for Egypt and the United States.
Disclosures: The study was funded by the Qatar National Research Fund and Cornell University. The authors reported no conflicts of interest.
SOURCE: Ayoub HH et al. Int J Infect Dis. 2018 Jul 18. doi: 10.1016/j.ijid.2018.07.013.
Prescription opioid epidemic hits migraine patients hard
SAN FRANCISCO – “Given the opioid epidemic in the U.S. and the high prevalence of opioid use in migraine patients shown in our study, especially in those with chronic migraine, our results suggest that improved management of treatment is needed to optimize care,” said Justin S. Yu, PharmD.
“We’re seeing a lot of opioid use in these migraine patients. It may not all be due to migraine – some had comorbid nonheadache pain conditions – but this still represents an opportunity to look at these patients more closely, maybe treat them better, because there are opportunities to improve their care, their outcomes, and their quality of life,” added Dr. Yu of Allergan in Irvine, Calif.
He presented an in-depth retrospective observational study of opioid use in 129 patients with chronic migraine, defined as more than 15 headache days per month (at least 8 of which fulfilled the diagnostic criteria for migraine), and 63 others with less frequent, episodic migraine. In the previous 12 months, according to claims data, 54% of the chronic migraine patients and 37% of the episodic migraine patients filled one or more prescriptions for an opioid.
More strikingly, fully one-third of the chronic migraine group and 16% of episodic migraineurs filled three or more opioid prescriptions within that 12-month interval. In fact, the mean number of filled opioid prescriptions over the year was 4.0 among all chronic migraine patients and 2.8 in the overall episodic migraine cohort.
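For readers who want the study's chronic-versus-episodic split made explicit, the stated definition reduces to a simple threshold rule. The sketch below is a hypothetical illustration of that rule, not code from the study; the function name and inputs are assumptions.

```python
def classify_migraine(headache_days_per_month, migraine_days_per_month):
    """Illustrative threshold rule from the study's stated definition:
    chronic migraine = more than 15 headache days per month, at least 8
    of which meet migraine diagnostic criteria; otherwise episodic."""
    if headache_days_per_month > 15 and migraine_days_per_month >= 8:
        return "chronic migraine"
    return "episodic migraine"

# Example: 20 headache days, 10 meeting migraine criteria -> chronic.
print(classify_migraine(20, 10))  # chronic migraine
print(classify_migraine(10, 6))   # episodic migraine
```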
“Opioids have been used for acute treatment of chronic migraine and episodic migraine but are not recommended for regular use due to the risk of medication overuse, tolerance, dependence, and opioid hyperalgesia,” Dr. Yu noted.
Separately, Richard B. Lipton, MD, presented an analysis of 3,930 migraine patients currently using acute oral prescription headache medications who were among the larger group of 15,133 migraineurs who participated in the massive MAST (Migraine in America Symptoms and Treatment) study, an Internet-based epidemiologic survey of a nationally representative sample of patients with the disorder.
Topping the list of the most frequently used oral prescription acute headache medications were the oral triptans, used by 46% of subjects, followed by prescription NSAIDs, taken by 36%, and oral prescription opioids, used by 33.1%. And that eyebrow-raising rate of prescription opioid use is apparently par for the course.
“In lots of survey data now we’re seeing that among people with migraine who take acute prescription drugs, this one-third number is not unusual, although at least from my perspective it’s certainly a problem. Also, of people using oral acute prescription agents, a full 66% were also using OTC headache medications,” said Dr. Lipton, professor and vice chair of the department of neurology at Albert Einstein College of Medicine and director of the Montefiore Headache Center in New York.
In addition, oral prescription barbiturates were used for acute treatment of migraine by 11.2% of the MAST participants.
Acute medication overuse, as defined by use of oral prescription opioids and/or barbiturates on an average of more than 10 days per month, was identified in 8.1% of the total group. For them, the appropriate course of action is withdrawal of the overused medication, addition of a preventive agent – such drugs weren’t being used by the great majority of patients on acute prescription medications for migraine – and replacement of the opioid or barbiturate with a less problematic class of acute therapy, Dr. Lipton advised.
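The overuse threshold described above is likewise a simple rule. The following is a minimal, hypothetical sketch of that definition, assuming per-month counts of medication days are available and that opioid and barbiturate days are recorded without overlap; none of this comes from the MAST analysis itself.

```python
def acute_medication_overuse(monthly_medication_days):
    """Illustrative check of the overuse definition used in the MAST analysis:
    oral prescription opioids and/or barbiturates used on an average of more
    than 10 days per month. `monthly_medication_days` is a list of per-month
    day counts (opioid plus barbiturate days, assumed non-overlapping)."""
    average = sum(monthly_medication_days) / len(monthly_medication_days)
    return average > 10

# Example: a patient averaging 12 medication days per month is flagged.
print(acute_medication_overuse([12, 14, 10, 12]))  # True
```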
The MAST analysis identified a bevy of major unmet needs in people with migraine who are using acute prescription medications to treat their headaches. Fifty-three percent of participants said their severe headache attacks come on very rapidly, 50% indicated their attacks reach peak intensity in less than 2 hours no matter what they do, 39% said their head pain returns less than 24 hours after initial pain relief, and 41% complained of severe headache upon awakening. Nausea interfering with daily activities was frequently cited. Seventy-six percent of the sample had at least one of these major unmet needs.
Dr. Lipton stressed that although some of his colleagues have reacted defensively to these data highlighting numerous major unmet treatment needs in migraine patients, the MAST findings certainly are not an indictment of headache specialists for doing a poor job.
“I am not saying that. Of course, the vast majority of these people aren’t treated by headache specialists, they’re treated by primary care physicians. What I am saying is there are lots of opportunities to use new and emerging tools to improve the lives of our patients,” the neurologist said.
Dr. Yu noted that in his study, 14.7% of patients with chronic migraine and 15.9% with episodic migraine had been diagnosed with an anxiety disorder within the previous 12 months. Also, 24% with chronic and 11.1% with episodic migraine had been diagnosed with depression. A diagnosis of a comorbid nonheadache pain disorder was present in 13% of the chronic migraineurs and 8% of those with episodic migraine.
The MAST study was sponsored by Promius Pharma, a subsidiary of Dr. Reddy’s Laboratories. Dr. Lipton reported receiving research funding and/or honoraria from that company and more than a dozen others.
Dr. Yu is employed by Allergan, which sponsored his study.
SOURCE: AHS annual meeting, Yu JS et al., Abstract PF11, and Lipton RB et al., Abstract OR02.
REPORTING FROM THE AHS ANNUAL MEETING