Psychological stress did not harm IVF outcomes


Self-reported psychological distress and biological markers of stress did not affect the chances of pregnancy after in vitro fertilization, according to a single-center prospective cohort study of 186 women.

 
Patients with infertility reported significantly worse depressive symptoms at baseline than did oocyte donors, with average Beck Depression Inventory scores of 11 vs. 2.3 (P less than .01). This difference persisted at the time of vaginal oocyte retrieval. First-time and third-time IVF recipients tended to have similar Beck Depression Inventory scores until pregnancy testing, when the average scores of third-time recipients rose by about five points.

Scores on the daily stress questionnaire echoed these findings. Patients who were receiving IVF reported significantly more stress than did oocyte donors, with the highest increase at the time of the pregnancy test among patients with previous IVF failures. However, both recipients and donors reported mild to moderate anxiety based on the State-Trait Anxiety Inventory, Dr. Costantini-Ferrando said.

All groups had low interleukin-6 levels when starting IVF, but donors had the highest levels of both ACTH and cortisol. Levels of these hormones dropped over time, and then rose at the time of pregnancy testing. That trend suggests that IVF was an acute “procedural” stressor for both recipients and donors, but that the magnitude of the stress response lessened somewhat over time with repeated exposure to the stressor, Dr. Costantini-Ferrando noted.

“The complexity of this interaction may explain why it is difficult to elucidate the true impact of stress on IVF outcome,” she said.

Ultimately, these findings support a shift in focus, she added. Instead of asking if stress decreases reproductive potential, clinicians and researchers can instead explore how to address psychological distress to maximize reproductive potential. “This is important to keep patients engaged in infertility treatment and facilitate the difficult process of decision making during treatment,” she added.

The paper won a prize from the ASRM Mental Health Professional Group.

The National Institutes of Health provided funding for the study. Dr. Costantini-Ferrando reported having no relevant financial disclosures.


Vitals

 

Key clinical point: Psychological distress does not appear to impact pregnancy rates after in vitro fertilization.

Major finding: Rates of pregnancy were 36% among first-time IVF recipients and 30% among third-time IVF recipients, and did not correlate with self-reported depressive symptoms, stress, anxiety, or serum levels of adrenocorticotropic hormone, cortisol, or interleukin-6.

Data source: A single-center prospective study of 186 IVF patients.

Disclosures: The National Institutes of Health provided funding for the study. Dr. Costantini-Ferrando reported having no financial disclosures.

Use of 2D bar coding with vaccines may be the future in pediatric practice


ATLANTA – Since the first bar coded consumer product, a pack of gum, was scanned in June 1974, bar code technology changed little until 2D bar codes arrived toward the end of the last century. Today, the increasing use of 2D bar code technology with vaccines offers practices the potential for greater accuracy and efficiency in vaccine administration and data entry – if they have the resources to take the plunge.

An overview of 2D bar code use with vaccines, presented at a conference sponsored by the Centers for Disease Control and Prevention, provided a glimpse into both the types of changes practices might see with adoption of the technology and the way some clinics have made the transition.

Ken Gerlach, MPH, of the Immunization Services Division at the CDC in Atlanta, outlined the history of bar code use in immunizations, starting with a November 1999 Institute of Medicine report that identified the contribution of human error to patient harm and led the Food and Drug Administration to begin requiring linear bar codes on pharmaceutical unit-of-use products to reduce errors.

Then, a meeting organized by the American Academy of Pediatrics in January 2009 with the FDA, CDC, vaccine manufacturers, and other stakeholders led to a bar code rule change by the FDA in August 2011 that allowed alternatives to the traditional linear bar codes on vaccine vials and syringes.

“They essentially indicated to the pharmaceutical companies that it’s okay to add 2D bar codes, and this is essentially the point where things began to take off,” Mr. Gerlach explained. Until then, there had been no 2D bar codes on vaccines, but today the majority of vaccine products have them, as do all Vaccine Information Statements. In addition to the standard information included on traditional bar codes – Global Trade Item Number (GTIN), lot and serial numbers, and the expiration date – 2D bar codes also can include most relevant patient information that would go into the EMR except the injection site and immunization route. But a practice cannot simply switch to scanning 2D bar codes without ensuring that its EMR system is configured to accept the scanned data.
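
For readers who want to see what that encoded information looks like in practice, the minimal sketch below (not part of Mr. Gerlach’s presentation) shows one way intake software could pull the GTIN, expiration date, and lot number out of a scanned 2D bar code. It assumes the scanner is set to emit a human-readable GS1 element string with parenthesized application identifiers; the product values shown are hypothetical.

```python
import re
from datetime import date

# GS1 application identifiers (AIs) commonly carried on vaccine 2D bar codes:
#   01 = GTIN, 17 = expiration date (YYMMDD), 10 = lot number, 21 = serial number
AI_NAMES = {"01": "gtin", "17": "expiration", "10": "lot", "21": "serial"}

def parse_gs1(element_string: str) -> dict:
    """Parse a human-readable GS1 element string, e.g. '(01)00312345678906(17)261031(10)A1234B'."""
    fields = {}
    # Each element is "(AI)value"; a value runs until the next "(" or the end of the string.
    for ai, value in re.findall(r"\((\d{2,4})\)([^(]+)", element_string):
        fields[AI_NAMES.get(ai, f"ai_{ai}")] = value.strip()
    # Turn the YYMMDD expiration into a real date for the EMR (day "00" handling omitted for brevity).
    if "expiration" in fields:
        yy, mm, dd = (int(fields["expiration"][i:i + 2]) for i in (0, 2, 4))
        fields["expiration"] = date(2000 + yy, mm, dd)
    return fields

# Hypothetical scan of a vaccine syringe label:
print(parse_gs1("(01)00312345678906(17)261031(10)A1234B"))
# -> {'gtin': '00312345678906', 'expiration': datetime.date(2026, 10, 31), 'lot': 'A1234B'}
```

An EMR interface receiving the raw scanner output would need a similar mapping step before the vaccine fields could populate the patient’s record.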

Mr. Gerlach described a three-part CDC project, running from 2011 through 2017, that assesses the impact of 2D bar coding on vaccination data quality and work flow, facilitates the adoption of 2D bar code scanning in health care practices, and then assesses the potential for expanding 2D bar code use in a large health care system. The first part of the project, which ran from 2011 to 2014, involved two vaccine manufacturers and 217 health care practices with more than 1.4 million de-identified vaccination records, 18.1% of which had been 2D bar coded.

Analysis of data quality from that pilot revealed an 8% increase in the correctness of lot numbers and an 11% increase for expiration dates, with a time savings of 3.4 seconds per vaccine administration. Among the 116 staff users who completed surveys, 86% agreed that 2D bar coding improves accuracy and completeness, and 60% agreed it was easy to integrate the bar coding into their usual data recording process.

The pilot revealed challenges as well, however: not all individual units of vaccine were 2D bar coded, users did not always scan the bar codes consistently, and some bar codes were difficult to read, such as one that was brown and wouldn’t scan. Another obstacle was that 10% of the vaccines had different lot numbers on the unit of use versus the unit of sale. Further, because inventory management typically involves the unit of sale, it does not always match well with scanning at the unit of use.
 

Clinicians’ beliefs and attitudes toward 2D bar coding

As more practices consider adopting the technology, buy-in will be important. At the conference, Sharon Humiston, MD, and Jill Hernandez, MPH, of Children’s Mercy Hospital in Kansas City, Mo., shared the findings of an online questionnaire about 2D bar coding and practices’ current systems for vaccine inventory and recording patient immunization information. The researchers distributed the questionnaire link to various AAP sections and committees via listservs and emails. Those eligible to complete the 15-minute survey were primary care personnel who used EMRs but not 2D bar code scanning for vaccines. They also needed to be key decision makers in the process of purchasing technology for the practice, and their practice needed to be enrolled in the Vaccines for Children program.

Among the 77 respondents who met all the inclusion criteria (61% of all who started the survey), 1 in 5 were private practices with one or two physicians, just over a third (36%) were private practices with more than two physicians, and a quarter were multispecialty group practices. Overall, respondents administered an average of 116 doses of DTaP and 50 doses of Tdap each month.

Protocols for immunization management varied considerably across the respondents. For recording vaccine information, 49% reported that an administrator pre-entered it into an EMR, but 43% reported that staff manually entered it into an EMR. About 55% of practices entered the information before vaccine administration, and 42% entered it afterward. Although 57% of respondents’ practices upload the vaccination information directly from the EMR to their state’s Immunization Information System (IIS), 30% must enter it both into the EMR and into the state IIS separately, and 11% don’t enter it into a state IIS.

More than half (56%) of the respondents were extremely interested in having a bar code scanner system, and 31% were moderately to strongly interested, rating a 6 to 9 on a scale of 1 to 10. If provided evidence that 2D bar codes reduced errors in vaccine documentation, 56% of respondents said it would greatly increase their interest, and 32% said it would somewhat increase it. Only 23% said their interest would greatly increase if the bar code technology allowed the vaccine information statement to be scanned into EMRs.

Nearly all the respondents agreed that 2D bar code scanning technology would improve efficiency and accuracy of entering vaccine information into medical records and tracking vaccine inventory. Further, 81% believed it would reduce medical malpractice liability, and 85% believed it would reduce risk of harm to patients. However, 23% thought bar code technology would disrupt office work flow, and a quarter believed the technology’s costs would exceed its benefits.

Despite the strong interest overall, respondents reported a number of barriers to adopting 2D bar code technology. The greatest barrier, reported by more than 70%, was the upfront cost of purchasing software for the EMR interface, followed by the cost of the bar code scanners. Other barriers, reported by 25%-45% of respondents, were the need for staff training, the need to service and maintain electronics for the technology, and the purchase of additional computers for scanner sites. If a bar code system cost less than $5,000, then 80% of the respondents would definitely or probably adopt such a system. Few would adopt it if the system cost $10,000 or more, but 42% probably would if it cost between $5,000 and $9,999. Even this small survey of self-selected volunteers, however, suggested strong interest in using 2D bar code technology for vaccines – although initial costs for a system presented a significant barrier to most practices.
 

 

 

One influenza vaccine clinic’s experience

Interest based on hypothetical questions is one thing. The process of actually implementing a 2D bar code scanning system into a health care center is another. In a separate presentation, Jane Glaser, MSN, RN, executive director of Campbell County Public Health in Gillette, Wyo., reviewed how such a system was implemented for mass influenza vaccination.

Campbell County, in the northeast corner of Wyoming, covers more than 4,800 square miles, has a population base of nearly 50,000 people, and also serves individuals from Montana, South Dakota, and North Dakota. Although the community works around the clock because of the county’s oil, mining, and farming industries, the mass flu clinic is open 7 a.m. to 7 p.m., during which it provides an estimated 700 to 1,500 flu vaccines daily. The staff comprises 13 public health nurses, 5 administrative assistants, and 3-4 community volunteers.

After 20 years of using an IIS, the clinic’s leadership decided to begin using 2D bar code scanners in October 2011 after seeing the technology at a state immunization conference. Their goals in changing systems were to improve clinic flow, decrease registration time, and decrease overtime due to data entry. The new work flow went as follows: Patients with Wyoming driver licenses or state ID cards have the linear bar code on their ID scanned into the immunization registry, which automatically populates the patient’s record. A staff member then enters the vaccine information directly into the IIS registry in real time after the client receives the vaccine.

Ms. Glaser described a number of improvements that resulted from use of the bar code scanning system, starting with reduced time for clinic registration and improved clinic flow. The clinic also found that using bar code scanning reduced manual entry errors and improved the efficiency of assessing vaccination status and needed vaccines. Entering data in real time at point of care reduced time spent on data entry later on, thereby leading to a decrease in overtime and subsequent cost savings.

For providers and practices interested in learning more about 2D bar coding, the CDC offers a current list of 2D bar coded vaccines, data from the pilot program, training materials, and AAP guidance about 2D bar code use.


Vitals

 

Key clinical point: 2D bar coding with vaccines offers benefits and challenges.

Major finding: 56% of pediatric practice personnel are very interested in 2D bar coding use with immunizations, but 70% named cost a major barrier.

Data source: A CDC study, an online questionnaire, and experience in a Wyoming flu clinic.

Disclosures: None of the three presentations noted external funding, and all researchers reported no financial relationships with companies that profit from bar code scanning technology. Deloitte Consulting was involved in the three-part project conducted by the CDC.

FDA grants accelerated approval to olaratumab for soft tissue sarcoma


The Food and Drug Administration has granted accelerated approval to olaratumab with doxorubicin for the treatment of adult patients with certain types of soft tissue sarcoma.

“This is the first new therapy approved by the FDA for the initial treatment of soft tissue sarcoma since doxorubicin’s approval more than 40 years ago,” Richard Pazdur, MD, director of the office of hematology and oncology products in the FDA Center for Drug Evaluation and Research and acting director of the FDA Oncology Center of Excellence, said in a statement.

Olaratumab, a platelet-derived growth factor (PDGF) receptor-alpha blocking antibody, is approved for use with doxorubicin for the treatment of patients with soft tissue sarcoma who cannot be cured with radiation or surgery and who have a type of soft tissue sarcoma for which an anthracycline is an appropriate treatment, according to the FDA announcement.

Approval was based on a statistically significant improvement in survival in a randomized trial involving 133 patients with more than 25 different subtypes of metastatic soft tissue sarcoma. Patients received olaratumab (Lartruvo) with doxorubicin or doxorubicin alone. Median survival was 26.5 months for patients who received both drugs, compared with 14.7 months for patients who received doxorubicin alone. Median progression-free survival was 8.2 months for patients who received both drugs and 4.4 months for patients who received doxorubicin alone.

The most common adverse reactions from olaratumab were nausea, fatigue, neutropenia, musculoskeletal pain, mucositis, alopecia, vomiting, diarrhea, decreased appetite, abdominal pain, neuropathy, and headache. There are serious risks of infusion-related reactions and embryo-fetal harm, the FDA warned.

Olaratumab was approved under the FDA’s accelerated approval program after receiving fast track designation, breakthrough therapy designation, and a priority review status. The drug also received an orphan drug designation. Drug maker Eli Lilly is currently conducting a larger study of olaratumab across multiple subtypes of soft tissue sarcoma.


Uninsured rate lowest in Massachusetts


Massachusetts had the nation’s lowest uninsured rate in 2015, and Texas had the highest, according to the personal finance website WalletHub.

Massachusetts’ uninsured rate of 2.8% was followed by Vermont at 3.8%, Hawaii at 4%, Minnesota at 4.5%, and Iowa at 5%, WalletHub reported.

The other end of the scale offered more proof that everything is bigger in Texas: the state’s uninsured rate of 17.1% was the country’s highest. Alaska was 49th at 14.9%, preceded by Oklahoma at 13.9%, Georgia at 13.6%, and Florida at 13.3%, according to the U.S. Census Bureau data used in the analysis.

Nevada, which was 44th overall in 2015, had the largest reduction (–10.3%) in its uninsured rate from 2010 to 2015. Oregon had the next-largest drop (–10.1%), and Massachusetts had the smallest decrease (–1.6%), meaning that no state saw an increase over the 5-year period, the WalletHub report showed.

A quick run through some subgroups shows that Vermont had the lowest percentage of uninsured children (1%) and Alaska had the highest (10.6%). Massachusetts was lowest for whites (2.2%) and Hispanics (5.3%), while Mississippi was highest for both (10.9% and 37.6%, respectively). Hawaii had the lowest rate for blacks (3.8%), and Montana had the highest (17.4%), WalletHub said.


Picking at a Problem


An 80-year-old woman bitterly complains of itching and discomfort on the left side of her face that began several weeks ago. Her primary care provider initially diagnosed contact dermatitis and prescribed a class 6 topical steroid cream. This helped a bit with the itching but had no effect on appearance.

When a friend suggested the itch might be the result of a bug bite, the patient went directly to the emergency department and was given a two-week taper of prednisone (40 mg/d for a week, then 20 mg/d for a week). This eased the redness, but the itching returned as soon as the course was finished. Finally, she was referred to dermatology.

The patient, who has been a widow for many years, was recently “forced out” of her home of more than 50 years and into an assisted living facility; her family seldom visits, so her friend accompanies her to your office today.

EXAMINATION
Extensive honey-colored crusting on an erythematous base is confined to the left side of the patient’s face. No nodes are palpable in the area. Her friend confirms that she has been picking at the skin.

What is the diagnosis?

 

 

DISCUSSION
This condition is impetigo—in this case, “non-bullous” impetigo, a rash that almost always begins with a breach in normal skin integrity. This opens the skin to superficial invasion by staph and strep organisms, mostly emanating from the nasal passages. In this case, the patient had been picking at a seborrheic keratosis (an extremely common lesion in someone her age) on her face.

The inclination to scratch and pick—and the inability to manage nasal secretions—make children the most likely candidates for impetigo. It is especially common in those with atopic dermatitis, who not only have poor barrier function (which manifests as eczema) but also constant nasal drainage from seasonal allergies.

The differential includes herpes simplex, herpes zoster, eczema, and contact dermatitis. But the location of the rash, the honey-colored crust on an erythematous base, and the history of skin breaches all point directly to impetigo. Lymph nodes are often palpable in the drainage area; their presence corroborates the diagnosis.

Luckily, this type of impetigo is relatively easy to treat: The patient was advised to wash the area with soap and water and apply mupirocin ointment three times a day. She was also prescribed cephalexin (500 mg tid for a week), which cleared the condition aside from a faint bit of postinflammatory pinkness.

TAKE-HOME LEARNING POINTS
• Impetigo is a superficial bacterial infection, usually on or near the face, caused by staph and strep organisms that seed the area from the nasal passages.
• These organisms generally require a break in the skin to gain entrance, often caused by picking or scratching.
• Honey-colored crust on an erythematous base, with or without enlarged local nodes, helps to confirm the diagnosis.
• Symptoms of impetigo include itching and mild discomfort but not pain.
• Treatment can be as simple as cleaning with soap and water and applying topical mupirocin ointment. A short course of oral antibiotics may be needed to speed the clearance process.

Publications
Topics
Sections

An 80-year-old woman bitterly complains of itching and discomfort on the left side of her face that began several weeks ago. Her primary care provider initially diagnosed contact dermatitis and prescribed a class 6 topical steroid cream. This helped a bit with the itching but had no effect on appearance.

When a friend suggested the itch might be the result of a bug bite, the patient went directly to the emergency department and was given a two-week taper of prednisone (40 mg/d for a week, then 20 mg/d for a week). This eased the redness, but the itching returned as soon as the course was finished. Finally, she was referred to dermatology.

The patient, who has been a widow for many years, was recently “forced out” of her home of more than 50 years and into an assisted living facility; her family seldom visits, so her friend accompanies her to your office today.

EXAMINATION
Extensive honey-colored crusting on an erythematous base is confined to the left side of the patient’s face. No nodes are palpable in the area. Her friend confirms that she has been picking at the skin.

What is the diagnosis?

 

 

DISCUSSION
This condition is impetigo—in this case “non-bullous” impetigo, a rash that almost always begins with a breach in normal skin integrity. This opens the skin to superficial invasion by staph and strep organisms, mostly emitting from the nasal passages. In this case, the patient had been picking at seborrheic keratosis (an extremely common phenomenon in someone her age) on her face.

The inclination to scratch and pick—and the inability to manage nasal secretions—make children the most likely candidates for impetigo. It is especially common in those with atopic dermatitis, who not only have poor barrier function (which manifests as eczema) but also constant nasal drainage from seasonal allergies.

The differential includes herpes simplex or zoster, eczema and contact dermatitis. But the location of the rash, the honey-colored crust on an erythematous base, and the history of skin breaches all point directly to impetigo. Lymph nodes are often palpable in the drainage area; their presence corroborates the diagnosis.

Luckily, this type of impetigo is relatively easy to treat: The patient was advised to wash the area with soap and water and apply mupirocin ointment three times a day. She was also prescribed cephalexin (500 mg tid for a week), which cleared the condition aside from a faint bit of postinflammatory pinkness.

TAKE-HOME LEARNING POINTS
Impetigo is a superficial bacterial infection, usually on or near the face, caused by staph and strep organisms that seed the area from the nasal passages.
• These organisms generally require a break in the skin to gain entrance, often caused by picking or scratching.
• Honey-colored crust on an erythematous base, plus or minus enlarged local nodes, help to confirm the diagnosis.
• Symptoms of impetigo include itching and mild discomfort but not pain.
• Treatment can be as simple as cleaning with soap and water and applying topical mupirocin ointment. A short course of oral antibiotics may be needed to speed the clearance process.

An 80-year-old woman bitterly complains of itching and discomfort on the left side of her face that began several weeks ago. Her primary care provider initially diagnosed contact dermatitis and prescribed a class 6 topical steroid cream. This helped a bit with the itching but had no effect on appearance.

When a friend suggested the itch might be the result of a bug bite, the patient went directly to the emergency department and was given a two-week taper of prednisone (40 mg/d for a week, then 20 mg/d for a week). This eased the redness, but the itching returned as soon as the course was finished. Finally, she was referred to dermatology.

The patient, who has been a widow for many years, was recently “forced out” of her home of more than 50 years and into an assisted living facility; her family seldom visits, so her friend accompanies her to your office today.

EXAMINATION
Extensive honey-colored crusting on an erythematous base is confined to the left side of the patient’s face. No nodes are palpable in the area. Her friend confirms that she has been picking at the skin.

What is the diagnosis?

 

 

DISCUSSION
This condition is impetigo—in this case “non-bullous” impetigo, a rash that almost always begins with a breach in normal skin integrity. This opens the skin to superficial invasion by staph and strep organisms, mostly emitting from the nasal passages. In this case, the patient had been picking at seborrheic keratosis (an extremely common phenomenon in someone her age) on her face.

The inclination to scratch and pick—and the inability to manage nasal secretions—make children the most likely candidates for impetigo. It is especially common in those with atopic dermatitis, who not only have poor barrier function (which manifests as eczema) but also constant nasal drainage from seasonal allergies.

The differential includes herpes simplex or zoster, eczema, and contact dermatitis. But the location of the rash, the honey-colored crust on an erythematous base, and the history of skin breaches all point directly to impetigo. Lymph nodes are often palpable in the drainage area; their presence corroborates the diagnosis.

Luckily, this type of impetigo is relatively easy to treat: The patient was advised to wash the area with soap and water and apply mupirocin ointment three times a day. She was also prescribed cephalexin (500 mg tid for a week), which cleared the condition aside from a faint bit of postinflammatory pinkness.

TAKE-HOME LEARNING POINTS
• Impetigo is a superficial bacterial infection, usually on or near the face, caused by staph and strep organisms that seed the area from the nasal passages.
• These organisms generally require a break in the skin to gain entrance, often caused by picking or scratching.
• Honey-colored crust on an erythematous base, with or without enlarged local nodes, helps to confirm the diagnosis.
• Symptoms of impetigo include itching and mild discomfort but not pain.
• Treatment can be as simple as cleaning with soap and water and applying topical mupirocin ointment. A short course of oral antibiotics may be needed to speed the clearance process.


Ozanimod has lasting effect on ulcerative colitis


 

– Ozanimod, an oral agent that selectively modulates sphingosine 1-phosphate (S1P) receptor subtypes 1 and 5, has a lasting effect on symptoms of ulcerative colitis, according to results from an open-label extension study.

The study extends the phase II TOUCHSTONE trial, in which patients with ulcerative colitis showed significant clinical improvement out to 32 weeks. The current study showed those improvements lasting out to at least 1 year, with about 80% of patients staying on the drug at the end of the study.

With other drug regimens, “the loss rates are at least 50% of the patients, so this is a remarkable level of durability over time,” William Sandborn, MD, chief of gastroenterology at the University of California at San Diego, said at the annual meeting of the American College of Gastroenterology. “With biologics, if you follow patients out for a year or 2, you see a fair amount of loss of response and some of that probably has to do with the formation of antidrug antibodies and immunogenicity,” Dr. Sandborn added.



In the original TOUCHSTONE study, 197 patients were randomized to placebo, ozanimod 0.5 mg, or ozanimod 1.0 mg, and followed out to week 32. Twenty-one percent of those in the 1.0-mg group achieved clinical remission, compared with 26% in the 0.5-mg group and 6% of those receiving placebo. Clinical response rates were 51%, 35%, and 20%, respectively.

In the open-label extension, patients from all arms who had not responded to treatment after the induction phase, had relapsed during the maintenance phase, or had completed the maintenance phase (170 patients in total) received ozanimod 1.0 mg. At the time of the cut-off, patients in the extension study had been taking ozanimod for at least 1 year.

At the start of the extension study, the partial Mayo scores (pMS) for patients previously on placebo, ozanimod 0.5 mg, and ozanimod 1.0 mg were 4.6, 4.5, and 3.3, respectively. All groups showed improvement in pMS by week 44 (1.7, 1.7, and 1.9, respectively).

At week 44, 90.9% of patients had little or no active disease (physician global assessment 0 or 1), 98.4% had little or no blood in their stools, and 84.7% had no blood in their stools.

Adverse events with a frequency higher than 2% included ulcerative colitis flare (5.9%), anemia (3.5%), upper respiratory tract infection (4.1%), nasopharyngitis (3.5%), back pain (2.9%), arthralgia (2.4%), and headache (2.4%). The researchers noted some transient elevations of alanine aminotransferase (ALT) or aspartate aminotransferase (AST) that reversed during ongoing treatment; 2.4% of patients had ALT and AST levels higher than three times the upper limit of normal, and all were asymptomatic.

Serious adverse events that occurred in two or more patients included anemia (1.2%) and ulcerative colitis flare (2.4%).

“This is a promising oral product with a similar mechanism of action to other lymphocyte trafficking agents,” said Stephen Hanauer, MD, medical director of the digestive health center at Northwestern University, Chicago, who attended the session.

Ozanimod and other lymphocyte trafficking agents may offer a slightly different profile than some of the other drug classes, such as the anti–tumor necrosis factor agents, according to Dr. Hanauer, because the agents don’t affect lymphocytes already in the tissues. On the other hand, once the drug has acted, its effect may linger. “The time to effect may be a little slower, but the long-term effect seems to be as good or better as other mechanisms of action.”

The drug’s real place could be in early disease, Dr. Hanauer said. “If this is truly an effective and safe agent, the real positioning should be earlier in the disease, before patients are exposed to steroids and other immune suppressants, or biologics that have an infection risk.”

Celgene funded the study. Dr. Sandborn has received funding from Receptos and Celgene and consulted for both companies. Dr. Hanauer has consulted with Receptos, Celgene, Pfizer, Janssen, and AbbVie.


 

Key clinical point: Open-label study shows that short-term ozanimod gains are maintained.

Major finding: Ozanimod maintains efficacy in ulcerative colitis out to 1 year, with 90% of patients having little or no evidence of active disease.

Data source: Open-label extension study following a phase II clinical trial.

Disclosures: Celgene funded the study. Dr. Sandborn has received funding from Receptos and Celgene and consulted for both companies. Dr. Hanauer has consulted with Receptos, Celgene, Pfizer, Janssen, and AbbVie.

Experts: Fewer opioids, more treatment laws mean nothing without better access to care


 

– Pressure on physicians to prescribe fewer opioids could have unintended consequences in the absence of adequate access to treatment, according to experts.

“There is mixed evidence that, when medication-assisted treatment is lacking, there are higher rates of transition from prescription opioids to heroin,” Gary Tsai, MD, said during a presentation at the American Psychiatric Association’s Institute on Psychiatric Services.



“As we constrict our prescribing, we want to make sure that there is ready access to these interventions, so that those who are already dependent on opioids can transition to something safer,” said Dr. Tsai, medical director and science officer of Substance Abuse Prevention and Control, a division of Los Angeles County’s public health department.

Medication-assisted treatment (MAT) uses methadone, buprenorphine, or naltrexone in combination with appropriate behavioral and other psychosocial therapies to help achieve opioid abstinence. Despite MAT’s well-established superiority to either pharmacotherapy or psychosocial interventions alone, the use of MAT has, in some cases, declined. According to the Substance Abuse and Mental Health Services Administration (SAMHSA), MAT was used in 35% of heroin-related treatment admissions in 2002, compared with 28% in 2010.

Reasons for MAT’s difficult path to acceptance are manifold, ranging from lack of certified facilities to administer the medications to misunderstanding about how the medications work, Dr. Tsai said.

A law passed earlier this year and a final federal rule that increases the number of patients certified MAT providers can legally treat each year were both designed to expand access to MAT. These, however, are only partial solutions, according to Margaret Chaplin, MD, a psychiatrist and program director of Community Mental Health Affiliates in New Britain, Conn.

“Can you imagine if endocrinologists were the only doctors who were certified to prescribe insulin and that each of them was only limited to prescribing to 100 patients?” Dr. Chaplin said in an interview. The final rule raised the number of patients a certified addiction specialist can treat from 100 to 275 per year. This might expand access to care, but “it sends a message that either the people with [addiction] don’t deserve treatment or that they don’t have a legitimate illness,” said Dr. Chaplin, who also was a presenter at the meeting.

Viewing people with opioid addiction through a lens of moral failing only compounds the nation’s addiction crisis, Dr. Chaplin believes. “Not to say that a person with a substance use disorder doesn’t have a responsibility to take care of their illness, [but] our [leaders] haven’t been well educated on the scientific evidence that addiction is a brain disease.”

It is true that, until the Comprehensive Addiction and Recovery Act was signed into law over the summer, nurse practitioners and physician assistants could prescribe controlled substances such as oxycodone/acetaminophen but not the far less dangerous – and potentially life-saving – partial opioid agonist buprenorphine. Under the new law, those health care professionals now have the same buprenorphine prescribing rights as physicians.

New legislation does not guarantee access to treatment, however. “Funding for MAT programs varies throughout the states, and the availability of these medications on formularies often determines how readily accessible MAT interventions are,” said Dr. Tsai, who emphasized the role of collaboration in ensuring the laws take hold.

“Addiction specialists comprise a minority of the work force. To scale MAT up, we need to engage other prescribers from other systems, including those in primary care and mental health,” Dr. Tsai said. To wit, the three primary MAT facilities in Los Angeles County offer learning collaboratives with primary care clinicians who want to incorporate these services into their practice, even if they are not certified addiction specialists themselves. This helps increase referrals to the treatment facilities, he explained.

Overcoming resistance to offering MAT ultimately will depend on educating leaders about the costs of not doing so, Dr. Tsai and Dr. Chaplin said.

“Our system has been slow to adopt a disease model of addiction,” Dr. Chaplin said. “Buprenorphine and methadone are life-saving medical treatments that are regulated in ways that you don’t see for any other medical condition.”

SAMHSA currently is requesting comments through Nov. 1, 2016, on what should be required of MAT providers under the new law.

Neither Dr. Tsai nor Dr. Chaplin had any relevant disclosures.


SAVR for radiation-induced aortic stenosis has high late mortality


 

ROME – Radiation-induced aortic stenosis is associated with markedly worse long-term outcome after surgical aortic valve replacement than when the operation is performed in patients without a history of radiotherapy, Milind Y. Desai, MD, reported at the annual congress of the European Society of Cardiology.

Moreover, the Society of Thoracic Surgeons (STS) score isn’t good at risk-stratifying patients with radiation-induced aortic stenosis who are under consideration for surgical aortic valve replacement (SAVR).

“We probably need to develop a new score for these patients,” said Dr. Desai, a cardiologist at the Cleveland Clinic.

Radiation-induced heart disease is a late complication of thoracic radiotherapy. It’s particularly common in patients who got radiation for lymphomas or breast cancer. It can affect any cardiac structure, including the myocardium, pericardium, valves, coronary arteries, and the conduction system.

Aortic stenosis is the most common valvular manifestation, present in roughly 80% of patients with radiation-induced heart disease. At the Cleveland Clinic, the average time from radiotherapy to development of radiation-induced aortic stenosis (RIAS) is about 20 years. The condition is characterized by thickening of the junction between the base of the anterior mitral leaflet and aortic root, known as the aortomitral curtain, Dr. Desai explained.

He presented a retrospective observational cohort study involving 172 patients who underwent SAVR for RIAS and an equal number of SAVR patients with no such history. The groups were matched by age, sex, aortic valve area, and type and timing of SAVR. Of note, the group with RIAS had a mean preoperative STS score of 11, and the control group averaged a similar score of 10.

The key finding: During a mean follow-up of 6 years, the all-cause mortality rate was a hefty 48% in patients with RIAS, compared with just 7% in matched controls. Only about 5% of deaths in the group with RIAS were from recurrent malignancy. The low figure is not surprising given the average 20-year lag between radiotherapy and development of radiation-induced heart disease.

“In our experience, most of these patients develop a recurrent pleural effusion and nasty cardiopulmonary issues that result in their death,” according to Dr. Desai.

In a multivariate Cox proportional hazards analysis, a history of chest radiation therapy was by far the strongest predictor of all-cause mortality, conferring an 8.5-fold increase in risk.
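For readers who want to see how this kind of multivariable survival analysis is typically specified, here is a minimal sketch using the open-source Python package lifelines. The column names and all of the data below are invented for illustration; they are not the Cleveland Clinic dataset, and the study authors did not necessarily use this software.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical follow-up data, one row per SAVR patient (illustration only).
df = pd.DataFrame({
    "years_followed": [2.1, 3.4, 1.2, 5.6, 4.0, 6.1, 6.3, 5.9, 4.8, 6.0, 5.5, 3.9],
    "died":           [1,   1,   1,   0,   1,   0,   0,   0,   1,   0,   0,   1],
    "prior_chest_rt": [1,   1,   1,   1,   1,   1,   0,   0,   0,   0,   0,   0],
    "sts_score":      [11.2, 6.5, 12.0, 10.1, 7.4, 9.6, 10.9, 5.0, 8.5, 11.4, 7.9, 9.3],
})

# Fit all-cause mortality against prior chest radiotherapy and STS score together.
cph = CoxPHFitter()
cph.fit(df, duration_col="years_followed", event_col="died")

# exp(coef) is the hazard ratio for each covariate.
print(cph.summary["exp(coef)"])
```

In a model of this form, the exponentiated coefficient on the radiotherapy term plays the same role as the 8.5-fold risk increase reported for the actual cohort.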

The only other statistically significant predictor of mortality during follow-up in multivariate analysis was a high STS score, with an associated weak albeit statistically significant 1.15-fold increased risk. A total of 30 of 78 (39%) RIAS patients with an STS score below 4 died during follow-up, compared with none of 91 controls.

Thirty-four of 92 (37%) RIAS patients under age 65 died during follow-up, whereas none of 83 control SAVR patients did so.

Having coronary artery bypass surgery or other cardiac surgery at the time of SAVR was not associated with significantly increased risk of mortality compared with solo SAVR.

In-hospital outcomes were consistently worse after SAVR in the RIAS group. Half of the RIAS patients experienced in-hospital atrial fibrillation and 29% developed persistent atrial fibrillation, compared with 30% and 24% of controls, respectively. About 22% of RIAS patients were readmitted within 3 months after surgery, compared with only 8% of controls. In-hospital mortality occurred in 2% of SAVR patients with RIAS and in none of the matched controls.

Dr. Desai reported having no financial interests relative to this study.
 



 

Key clinical point: Mortality is high following surgical aortic valve replacement in patients with radiation-induced severe aortic stenosis.

Major finding: All-cause mortality occurred in 48% of 172 patients with radiation-induced severe aortic stenosis during a mean follow-up of 6 years after surgical aortic valve replacement, compared with just 7% of matched controls.

Data source: This was a retrospective observational study involving 172 closely matched pairs of surgical aortic valve replacement patients.

Disclosures: The presenter reported having no financial conflicts of interest regarding this study.

Frailty stratifies pediatric liver disease severity


 

– A newly devised measurement of frailty in children effectively determined the severity of liver disease in pediatric patients and might serve as a useful, independent predictor of outcomes following liver transplantations in children and adolescents.

The adapted pediatric frailty assessment formula is a “very valid, feasible, and valuable tool” for assessing children with chronic liver disease, Eberhard Lurz, MD, said at the World Congress of Pediatric Gastroenterology, Hepatology and Nutrition. “Frailty captures an additional marker of ill health that is independent of the MELD-Na [Model for End-Stage Liver Disease–Na] and PELD [Pediatric End-Stage Liver Disease],” said Dr. Lurz, a pediatric gastroenterologist at the Hospital for Sick Children in Toronto.

“Frailty may be an additional marker [of suitability for liver transplantation], and every additional, objective marker is needed” when evaluating children for liver disease, but this new pediatric frailty score now needs validation, he said.

The idea of frailty assessment of children with liver disease sprang from a 2014 report that showed a five-item frailty index could predict mortality in adults with liver disease who were listed for liver transplantation and that this predictive power was independent of the patients’ MELD scores (Am J Transplant. 2014 Aug;14[8]:1870-9). That study used a five-item frailty index developed for adults (J Gerontol A Biol Sci Med Sci. 2001;56[3]:M146-57).

Dr. Lurz came up with a pediatric version of this frailty score using pediatric-oriented measures for each of the five items. To measure exhaustion he used the PedsQL (Pediatric Quality of Life Inventory) Multidimensional Fatigue Scale; for slowness he used a 6-minute walk test; for weakness he measured grip strength; for shrinkage he measured triceps skinfold thickness; and for diminished activity he used an age-appropriate physical activity questionnaire. He prespecified that a patient’s score for each of these five measures is calculated by comparing the test result against age-specific norms: a value that falls more than one standard deviation below the normal range scores one point for that item, and a value more than two standard deviations below the normal range scores two points. Hence the maximum total across the five items is 10.
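Expressed procedurally, the scoring rule above is simple. The sketch below assumes each of the five measures has already been converted to an age-adjusted z-score in which negative values mean worse than the age norm; the function and item names are illustrative and are not part of the published instrument.

```python
def item_points(z_score: float) -> int:
    """Score one frailty item from its age-adjusted z-score.

    More than 1 SD below the age norm scores 1 point; more than 2 SD
    below scores 2 points; anything else scores 0.
    """
    if z_score < -2:
        return 2
    if z_score < -1:
        return 1
    return 0


def pediatric_frailty_score(z_scores: dict) -> int:
    """Sum the points for the five items (maximum possible score: 10)."""
    items = ("exhaustion", "slowness", "weakness", "shrinkage", "activity")
    return sum(item_points(z_scores[item]) for item in items)


# Hypothetical patient: walk distance and activity level more than 2 SD
# below age norms, grip strength between 1 and 2 SD below, other items normal.
example = {
    "exhaustion": -0.5,
    "slowness": -2.3,
    "weakness": -1.4,
    "shrinkage": -0.8,
    "activity": -2.1,
}
print(pediatric_frailty_score(example))  # prints 5
```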

Researchers at the collaborating centers completed full assessments for 71 of 85 pediatric patients with chronic liver disease in their clinics, and each full assessment took a median of 60 minutes. The patients ranged from 8 to 16 years old, with an average age of 13. The cohort included 36 patients with compensated chronic liver disease (CCLD) and 35 with end-stage liver disease (ESLD) who were listed for liver transplantation.

The median frailty score of the CCLD patients was 3 and the median score for those with ESLD was 5, a statistically significant difference that was largely driven by between-group differences in fatigue scores and physical activity scores. In a receiver operating characteristic curve analysis, the frailty score distinguished patients with CCLD from those with ESLD with an area under the curve of 83%, comparable to the discriminating power of the MELD-Na score. A cutoff score of 6 or greater identified patients with ESLD with 47% sensitivity and 98% specificity, and this diagnostic capability was independent of a patient’s MELD-Na or PELD score.
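To make the cutoff statistics concrete, the short sketch below computes sensitivity and specificity for a rule of “frailty score of 6 or greater,” using invented scores and diagnoses; these values are for illustration only and are not the study data.

```python
cutoff = 6
# Invented example data: frailty score and diagnosis (1 = ESLD, 0 = CCLD).
scores =   [2, 3, 7, 5, 6, 9, 4, 8, 3, 6]
has_esld = [0, 0, 1, 1, 1, 1, 0, 1, 0, 0]

flagged = [s >= cutoff for s in scores]

true_pos  = sum(f and y == 1 for f, y in zip(flagged, has_esld))
false_neg = sum(not f and y == 1 for f, y in zip(flagged, has_esld))
true_neg  = sum(not f and y == 0 for f, y in zip(flagged, has_esld))
false_pos = sum(f and y == 0 for f, y in zip(flagged, has_esld))

sensitivity = true_pos / (true_pos + false_neg)   # share of ESLD patients flagged
specificity = true_neg / (true_neg + false_pos)   # share of CCLD patients not flagged
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```

The same arithmetic, applied to the 71 real patients, is what yields the 47% sensitivity and 98% specificity reported above.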

The five elements that contribute to this pediatric frailty score could be the focus for targeted interventions to improve the outcomes of patients scheduled to undergo liver transplantation, Dr. Lurz said.

Dr. Lurz had no relevant financial disclosures.



 

Key clinical point: A new measure of pediatric frailty distinguished patients with compensated and end-stage liver disease independent of existing methods for assessing liver disease patients.

Major finding: The pediatric frailty score identified patients with end-stage liver disease with sensitivity of 47% and specificity of 98%.

Data source: A series of 71 pediatric patients with liver disease compiled from 17 U.S. and Canadian centers.

Disclosures: Dr. Lurz had no relevant financial disclosures.

High resting heart rate may signal exacerbation risk in COPD patients



– Higher resting heart rate may predict future risk of exacerbation in patients with recent chronic obstructive pulmonary disease (COPD) exacerbation, results from a multicenter study suggest.

“Resting heart [rate] is often a readily available clinical data,” lead study author Ahmad Ismail, MD, said in an interview in advance of the annual meeting of the American College of Chest Physicians. “Its significance is often overlooked in daily clinical practice until tachycardia or bradycardia happens. In COPD patients, it has been shown that the resting heart rate can predict mortality. However, there is a lack of data showing its association with the rates of exacerbations, the major player in determining overall outcome in patients with COPD.”

In an effort to identify the association between resting heart rate and risk of exacerbation, Dr. Ismail of Universiti Teknologi MARA, Malaysia, and his associates at nine other centers evaluated 147 COPD patients who were recruited during acute exacerbation of COPD that required hospitalization between April 2012 and September 2015. The researchers recorded each patient’s sociodemographic data, anthropometric indices, and medication history during their acute exacerbation at the hospital. Next, they followed up with the patients in clinic at 3 months after the recruitment (month 0), and collected resting heart rate, spirometry, and COPD Assessment Test (CAT) scores. Subsequently, patients were followed up in clinic at 6 and 12 months, and followed up in between via telephone interviews to collect data on exacerbation history.

The mean age of the study population was 67 years, and 77% of patients had a higher resting heart rate, defined as one that exceeded 80 beats per minute (BPM). The mean resting heart rate in the higher resting heart rate group was 92 BPM, compared with a mean of 70 BPM in the lower resting heart rate group. Dr. Ismail reported that at month 3, patients with higher resting heart rates had a significantly higher proportion of exacerbations than those with a lower resting heart rate (54% vs. 27%; P = .013). This trend persisted through month 9. There was also a statistically significant, moderate-strength linear correlation between resting heart rate and exacerbation frequency at 3, 6, and 9 months (r = 0.400, P less than .001; r = 0.440, P less than .001; and r = 0.416, P = .004, respectively). The mean exacerbation frequency was also significantly higher in the higher resting heart rate group at month 3 and month 6 (2.00 vs. 0.48, P less than .001; and 3.42 vs. 1.14, P = .004).
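The moderate linear correlations reported here are of the kind produced by a Pearson correlation coefficient. A minimal sketch of such a calculation with scipy is shown below; the values are invented for illustration and are not the study data.

```python
from scipy import stats

# Invented illustration data: resting heart rate (BPM) at the month 3 visit
# and the number of exacerbations recorded over follow-up for ten patients.
resting_hr = [95, 70, 88, 102, 65, 78, 91, 84, 73, 99]
exacerbations = [2, 0, 1, 3, 0, 1, 2, 1, 0, 2]

r, p_value = stats.pearsonr(resting_hr, exacerbations)
print(f"r = {r:.3f}, P = {p_value:.4f}")
```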

“Higher resting heart rate may predict future risk of exacerbation in patients with recent COPD exacerbation,” Dr. Ismail concluded. “Further study however is required to determine the effect of lowering resting heart rate on the future risk of exacerbation.” He acknowledged certain limitations of the study, including the exclusion of patients taking beta-blockers or other rate-modifying drugs and of those with a history of cardiac failure or ischemic heart disease, as well as the lack of a baseline echocardiogram to rule out ischemic heart disease and other possible causes of the higher resting heart rates. “We also had slightly higher than expected dropouts giving a nonsignificant result at 12 months follow-up, though the trend follows the overall results of the study,” he said.

The study was funded by a grant from the Malaysian Thoracic Society. Dr. Ismail reported having no financial disclosures.


AT CHEST 2016

Vitals

 

Key clinical point: Knowing the resting heart rate of COPD patients may help predict their risk of future exacerbation.

Major finding: At month 3, patients with higher resting heart rates had a significantly higher proportion of exacerbations than those with lower resting heart rates (54% vs. 27%; P = .013).

Data source: An evaluation of 147 COPD patients at 10 centers who were hospitalized for acute exacerbation of COPD between April 2012 and September 2015.

Disclosures: The study was funded by a grant from the Malaysian Thoracic Society. Dr. Ismail reported having no financial disclosures.