Family dinners are good medicine

Updated: Thu, 07/18/2019 - 09:28

Intuitively, we have come to believe that adding more to each family member’s schedule – a lesson, an activity, more homework time – is more enriching or meaningful than a family dinner, which appears to have less direct impact. However, a growing body of evidence shows that, when an entire family eats dinner together five or more nights weekly, the emotional health and well-being of all family members improve. Not only is their physical health improved, because they are more likely to eat nutritious food, but so are a child’s school performance and emotional well-being. As the frequency of eating dinner with parents goes up, the rates of mood and anxiety disorders and high-risk behaviors in teenagers go down.

Wavebreakmedia/Thinkstock

But fewer than 60% of children eat five or more meals with their parents each week (National Center on Addiction and Substance Abuse [CASA], 2012). Few people would suggest that encouraging families to eat dinner together is a bad idea, but time is the ultimate scarce resource. Preparing food and eating together take time, and parents and children have many demands on that time that feel nonnegotiable, such as homework, exercise, team practice, or work obligations. When you meet with your patients and explain the tremendous health benefits of eating dinner together, you help your patients and their parents make informed decisions about how to rebalance their time to prioritize family dinners, which have real but less obvious impacts than does a piano lesson or dance class.

Of course, children who eat regular family dinners eat more fruits and vegetables and fewer fried foods and soft drinks than do their peers who eat dinner with their families less often. They are less likely to become obese in youth and more likely to eat healthily and maintain a healthy weight once they live on their own as adults.


Scientific evidence of the mental health benefits to children of eating meals with their families first emerged in the 1990s when the National Center on Addiction and Substance Abuse at Columbia University, New York, began surveying various family behaviors and correlating them with the risk of adolescent substance use and misuse. They found strong evidence that when families ate dinner together five or more times weekly (we’ll call this “frequent family dinners”), their adolescents were far less likely to initiate alcohol and cigarette use and less likely to regularly abuse alcohol and drugs. Subsequent studies have demonstrated that the protective effect may be greater for girls than boys and may be greater for alcohol, cigarettes, and marijuana than for other drugs. But earlier age of first use of substances substantially raises the risk of later addiction, so the health benefits of any delay in first use are significant.

Since CASA’s first studies in the 1990s, researchers have paid closer attention to family meals and a variety of psychiatric problems in youth. They have demonstrated that frequent family dinners lower the risk of other externalizing behaviors in youth, including risky sexual behaviors, threats of physical harm, aggression, fights leading to injury, and carrying or using a weapon.1,2 Frequent family dinners are associated with lower rates of disordered eating behaviors and disordered body image in adolescent girls.3,4 Multiple studies have found a powerful association between frequent family dinners and lower rates of depressive symptoms and suicide attempts in both male and female adolescents.1 Frequent family dinners even have been shown to mitigate the risks of multiple poor health and academic outcomes in children with high adverse childhood experience (ACE) scores.5

Beyond protecting against problems, frequent family meals are associated with improved well-being and performance. Studies have demonstrated positive associations between frequent family meals and higher levels of self-esteem, self-efficacy, and well-being in adolescents, both male and female. They have consistently found significant associations between frequent family meals and higher grade point averages, commitment to learning, and rich vocabularies in children and adolescents, even after adjustment for demographic and other familial factors.6 And children are not the only ones who benefit. Frequent family meals even have been shown to be associated with higher self-reported levels of well-being and self-esteem, and lower levels of stress among parents.7,8 While investing the time in preparing meals and eating them together may sound stressful, it’s clear the benefits outweigh the risks for parents as well as for their children.

Dr. Susan D. Swick

It is important to set the framework for what really matters in a family dinner so that your patients can enjoy all of these benefits. Parents may assume that the meal must be prepared from scratch with only fresh, local, or organic ingredients. But what matters most is that the food is delicious and nutritious, and that the time spent eating (and preparing it) is fun, and promotes conversation and connection. Homemade food usually is more nutritious and will bring more of the physical health benefits, but many store-bought ingredients or even take-out options can be healthy and can promote time for the family to sit together and connect. If parents enjoy preparing food, then it’s worthwhile! And they should not worry about having every member of the family together at every meal. Even if only one parent and child are present for a dinner, they each will enjoy the benefits.

Parents can use this time to help promote good habits in their children. Talking about why manners matter while practicing them at the table is powerful for young children. Let them know manners are how we show people that we care about them, whether by taking turns talking or chewing with our mouths closed! Older children and adolescents can learn about how effort is an essential ingredient in every important area in life, from school to meals. Tell them that sometimes the work or effort will be uncomfortable, and pitching in to share the effort lightens everyone’s load. When parents ask for help, they show their children how to do the same and that they have confidence in their child’s ability to be helpful.

Parents should share the joy of the effort, too! They can invite their young children to help with the meal preparation in age-appropriate ways: pulling herbs off their stems, rinsing vegetables, sprinkling spices, or emptying a box of spaghetti into a pot of water. Older children feel honored to be given bigger responsibilities, such as carrying plates to the table or cutting vegetables (with supervision, when appropriate). And adolescents, exploring their interests and enjoying their independence, may enjoy building their own menus for the family, doing the shopping, or leading the preparation of a dish or full meal themselves.

While there is a role for supporting good manners and helpful habits, help parents avoid getting into power struggles with their children over what they will eat or how they conduct themselves at the table. There should be reasonable rules and expectations around mealtime, and predictable, reasonable consequences. If children try a food and don’t like it, they can have a bowl of (nutritious) cereal and stay at the table with the family. Phones should not be allowed at the table, and televisions should be off during the meal (although music may enhance the sense of pleasure or celebration). Mealtime should be time for relaxing, listening, and connecting.

Dr. Michael S. Jellinek

Offer some ideas about how to facilitate conversations. Asking how a child’s day went may spark conversations sometimes, but usually people benefit from specific questions: What made you really laugh today? What did you have for lunch? Whom did you sit next to on the bus? If a parent starts by telling a story about his or her day, even better! This is especially potent if a parent talks about something embarrassing or challenging, or mentions a failure. Young children will have plenty of these stories, and adolescents build resilience by internalizing the idea that setbacks and difficulties are a normal, healthy part of every day. This also is a great time to talk about current events, whether in the news, entertainment, or sports. And telling stories about when the children were younger, when the parents were children, or even about grandparents or more distant ancestors draws children into their larger family narrative, and is always engaging and memorable.

At a deeper level, the family dinner is a time that recognizes each person’s contribution to a discussion and facilitates a calm conversation about the family’s history and values. There is connection, communication, and building of trust. Families that cannot schedule a minimum number of dinners, or whose dinners are filled with tension and conflict, are very likely to have children at risk. For those conflicted and often unhappy families, a pediatrician’s early recognition and intervention could make a meaningful difference.

Dr. Swick is physician in chief at Ohana, Center for Child and Adolescent Behavioral Health, Community Hospital of the Monterey (Calif.) Peninsula. Dr. Jellinek is professor emeritus of psychiatry and pediatrics, Harvard Medical School, Boston. Email them at [email protected].

References

1. J Adolesc Health. 2006;39(3):337-45.

2. J Adolesc. 2010;33(1):187-96.

3. J Adolesc Health. 2009;44(5):431-6.

4. Health Psychol. 2008;27(Suppl 2):S109-17.

5. J Adolesc Health. 2009;45(4):389-95.

6. Pediatrics. 2019 Jul 8. doi: 10.1542/peds.2018-945.

7. Arch Pediatr Adolesc Med. 2004;158(8):792-6.

8. Prev Med. 2018;113:7-12.



LAIs still underused for patients with psychosis

Updated: Wed, 07/31/2019 - 15:12

Long-acting injectables (LAIs) continue to be underused for patients with chronic illnesses such as schizophrenia and bipolar disorder. However, in my practice, I have found these medications useful for promoting adherence, and I wonder why they are not used more often, in light of their effectiveness. Specifically, among individuals with schizophrenia, LAIs can lead to significant improvements in symptom control, quality of life, and overall functioning.1

Dr. Gurprit S. Lamba

The following three cases illustrate the power of LAIs:

Case 1: A male patient with a schizoaffective diagnosis had been admitted several times to the inpatient psychiatric unit and adhered poorly to oral medications. He had multiple emergency department (ED) visits despite receiving community behavioral health support. After trials of various oral medications, he responded to an LAI. He was able to function in the community for longer periods of time and required far fewer ED visits. He followed up with his outpatient psychiatric support regularly.

Case 2: A female patient with schizoaffective disorder had persecutory psychosis and paranoia. She was unable to function in the community and struggled with delusional thoughts that led to anger outbursts. She continually refused oral medicines in the outpatient setting. During court-ordered involuntary inpatient treatment, she responded to an LAI. Her insight improved, and she subsequently displayed better judgment.

Case 3: A female patient with bipolar I disorder was impulsive and promiscuous, and routinely entered into high-risk situations. She was not able to negotiate safely in the community and was shuttling from shelter to shelter, repeatedly losing her medications along the way. She responded well to an LAI, however, and was able to stay out of the inpatient hospital for longer periods of time. She said she felt relieved about not depending on daily oral medications. She also reported no longer self-medicating with street substances.

A recent retrospective study of more than 3,600 patients showed that those who initiate LAIs versus oral antipsychotics have greater reductions in the number of hospitalizations.2 Furthermore, treatment with LAIs might be more cost-effective than oral medications, and might reduce the risk of suicide and the propensity to violence in at least a subset of individuals with psychotic illnesses and comorbid substance use disorders.3,4

 

 


Introduction of LAI intervention within the treatment plan also might provide additional benefits and potentially reduce the burden on health care resources.5 Psychiatrists seem to use LAIs conservatively and tend to be too slow to introduce this intervention even after patients experience several acute episodes. Psychiatrists should inform patients about different forms of treatment, including LAIs, during the early stages of the illness.6

Improving medication adherence in physical and mental health care is of paramount importance for the effective care of patients. Psychiatrists and primary care physicians should be made aware of the anticipated benefits of this intervention.

References

1. Kaplan G et al. Impact of long-acting injectable antipsychotics on medication adherence and clinical, functional, and economic outcomes of schizophrenia. Patient Prefer Adherence. 2013;13:1171-80.

2. Brissos S et al. The role of long-acting injectable antipsychotics in schizophrenia: a critical appraisal. Therapeutic advances in psychopharmacology. 2014 Oct;4(5):198-219.

3. Ravasio R et al. Analisi di costo-efficacia dello switch da un antipsicotico orale a risperidone a rilascio prolungato nel trattamento dei pazienti affetti da schizofrenia. Giorn Ital Health Technol Ass. 2019;2:1-8.

4. Reichhart T and W Kissling. Societal costs of nonadherence in schizophrenia: homicide/suicide. Mind & Brain, J Psychiatry. 2010 Aug 1(2):29-32.

5. Offord S et al. Health care resource usage of schizophrenia patients initiating long-acting injectable antipsychotics vs oral. J Med Econ. 2013;16:231-9.

6. Matthias J and W Rossler. Attitudes toward long-acting depot antipsychotics: a survey of patients, relatives and psychiatrists. Psychiatry Res. 2010 Jan 30;175(1-2):58-62.

Dr. Lamba, a psychiatrist and medical director at BayRidge Hospital in Lynn, Mass., has no disclosures. He changed key facts about the patients discussed to protect their confidentiality.

Publications
Topics
Sections

Long-acting injectables (LAIs) remain underused in patients with chronic illnesses such as schizophrenia and bipolar disorder. In my practice, however, I have found these medications valuable for promoting adherence, and in light of their effectiveness, I wonder why they are not used more often. Specifically, among individuals with schizophrenia, LAIs can lead to significant improvements in symptom control, quality of life, and overall functioning.1

Dr. Gurprit S. Lamba

The following three cases illustrate the power of LAIs:

Case 1: A male patient with schizoaffective disorder had been admitted several times to the inpatient psychiatric unit and showed poor adherence to oral medications. Despite receiving community behavioral health support, he had multiple emergency department (ED) visits. After various oral medication trials, he responded to LAIs. He was able to function in the community for longer periods, required far fewer ED visits, and followed up regularly with his outpatient psychiatric providers.

Case 2: A female patient with schizoaffective disorder experienced persecutory psychosis and paranoia. She was unable to function in the community, where her delusional thoughts led to angry outbursts, and she continually refused oral medications in the outpatient setting. During court-ordered involuntary inpatient treatment, she responded to LAIs; her insight improved, and she subsequently displayed better judgment.

Case 3: A female patient with bipolar I disorder was impulsive and promiscuous, and routinely put herself in high-risk situations. Unable to navigate the community safely, she shuttled from shelter to shelter and repeatedly lost her medications in transition. She responded well to LAIs, however, and stayed out of the inpatient hospital for longer periods. She said she felt relieved not to depend on daily oral medications, and she reported no longer self-medicating with street substances.

A recent retrospective study of more than 3,600 patients showed that those who initiated LAIs, rather than oral antipsychotics, had greater reductions in the number of hospitalizations.2 Furthermore, treatment with LAIs might be more cost-effective than oral medications, and it might reduce the risk of suicide and the propensity to violence in at least a subset of individuals with psychotic illnesses and comorbid substance use disorders.3,4

Introducing LAIs into the treatment plan also might provide additional benefits and potentially reduce the burden on health care resources.5 Psychiatrists tend to use LAIs conservatively and are often slow to offer this intervention, even after patients have experienced several acute episodes. Psychiatrists should inform patients about the different forms of treatment, including LAIs, during the early stages of the illness.6

Improving medication adherence in physical and mental health care is of paramount importance for the effective care of patients. Psychiatrists and primary care physicians should be made aware of the anticipated benefits of this intervention.

References

1. Kaplan G et al. Impact of long-acting injectable antipsychotics on medication adherence and clinical, functional, and economic outcomes of schizophrenia. Patient Prefer Adherence. 2013;7:1171-80.

2. Brissos S et al. The role of long-acting injectable antipsychotics in schizophrenia: a critical appraisal. Ther Adv Psychopharmacol. 2014;4(5):198-219.

3. Ravasio R et al. Analisi di costo-efficacia dello switch da un antipsicotico orale a risperidone a rilascio prolungato nel trattamento dei pazienti affetti da schizofrenia [Cost-effectiveness analysis of switching from an oral antipsychotic to prolonged-release risperidone in the treatment of patients with schizophrenia]. Giorn Ital Health Technol Ass. 2019;2:1-8.

4. Reichhart T, Kissling W. Societal costs of nonadherence in schizophrenia: homicide/suicide. Mind & Brain, J Psychiatry. 2010;1(2):29-32.

5. Offord S et al. Health care resource usage of schizophrenia patients initiating long-acting injectable antipsychotics vs oral. J Med Econ. 2013;16:231-9.

6. Jaeger M, Rossler W. Attitudes toward long-acting depot antipsychotics: a survey of patients, relatives and psychiatrists. Psychiatry Res. 2010;175(1-2):58-62.

Dr. Lamba, a psychiatrist and medical director at BayRidge Hospital in Lynn, Mass., has no disclosures. He changed key facts about the patients discussed to protect their confidentiality.

Statin use linked to less depression, anxiety in ACOS patients

Article Type
Changed
Tue, 07/16/2019 - 15:59

Adults with asthma–chronic obstructive pulmonary disease overlap syndrome who took statins had lower rates of anxiety and depression than did those not on statins, based on data from approximately 9,000 patients.

Although asthma–COPD overlap syndrome (ACOS) has been associated with depression, the effects of oral and inhaled corticosteroids on anxiety and depression in these patients have not been well investigated, wrote Jun-Jun Yeh, MD, of Ditmanson Medical Foundation Chia-Yi (Taiwan) Christian Hospital, and colleagues.

In a study published in the Journal of Affective Disorders, the researchers analyzed 9,139 ACOS patients including 1,252 statin users and 7,887 nonstatin users; 62% were male.

The statin users had significantly lower risk of both anxiety and depression than did the nonstatin users (adjusted hazard ratio 0.34 for anxiety and 0.36 for depression) after researchers controlled for factors including age, sex, comorbidities, and medications. Statin users experienced a total of 109 anxiety or depression events over an average of 8 years’ follow-up, while nonstatin users experienced a total of 1,333 anxiety or depression events over an average of 5 years’ follow-up.

The incidence density rate of anxiety was 11/1,000 person-years for statin users and 33/1,000 person-years for nonstatin users. The incidence density rate of depression was 3/1,000 person-years for statin users and 9/1,000 person-years for nonstatin users.
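
As a back-of-envelope consistency check, these rates can be approximated from the event counts and average follow-up reported above. The sketch below (the helper name `incidence_density` is ours) assumes each group's average follow-up applied uniformly, which is only approximately true, and note that the 109 and 1,333 event counts combine anxiety and depression:

```python
def incidence_density(events, n_patients, avg_followup_years, per=1000):
    """Crude event rate per `per` person-years, assuming uniform follow-up."""
    return events * per / (n_patients * avg_followup_years)

# Combined anxiety-or-depression events reported for each group
statin_rate = incidence_density(109, 1_252, 8)       # ~10.9 per 1,000 person-years
nonstatin_rate = incidence_density(1_333, 7_887, 5)  # ~33.8 per 1,000 person-years

# Crude rate ratio, broadly consistent with the adjusted HRs of 0.34-0.36
rate_ratio = statin_rate / nonstatin_rate            # ~0.32
```

That the crude rate ratio lands near the adjusted hazard ratios suggests the reported adjustment did not drastically change the unadjusted picture.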

Significantly lower risks of anxiety and depression also were observed in statin users, compared with nonstatin users, in subgroups of men, women, patients younger than 50 years, and patients aged 50 years and older. The risks of anxiety and depression were lower in statin users versus nonstatin users across all subgroups, with or without inhaled or oral corticosteroids.

Overall, the statin users were significantly younger, had more comorbidities, and were more likely to use inhaled or oral corticosteroids than were the nonstatin users.

The findings were limited by several factors, including the retrospective nature of the study and a lack of information on prescribed daily doses of medication, the researchers noted. However, the results support those from previous studies and suggest that “the anti-inflammatory effect of statins may attenuate anxiety and depression in ACOS patients, even in the late stages of the disease,” although the exact mechanism of action remains unknown and larger, randomized, controlled trials are needed, they said.

The study was supported by grants from a variety of organizations in Taiwan, China, and Japan. The researchers had no financial conflicts to disclose.

SOURCE: Yeh JJ et al. J Affect Disord. 2019 Jun 15; 253:277-84.


Atezolizumab combo in first-line NSCLC misses cost-effectiveness mark

Article Type
Changed
Tue, 07/16/2019 - 10:11

 

Adding the immune checkpoint inhibitor atezolizumab (Tecentriq) to doublet or triplet regimens as first-line treatment for nonsquamous non–small cell lung cancer (NSCLC) is not cost effective – not by a long shot – a Markov modeling study concluded.

Positive results of the IMpower150 trial led the Food and Drug Administration to approve and the National Comprehensive Cancer Network to recommend the combination of atezolizumab, bevacizumab, carboplatin, and paclitaxel (ABCP) as an option for selected patients in this setting, noted the investigators, led by XiaoMin Wan, PhD, of the department of pharmacy at the Second Xiangya Hospital and the Institute of Clinical Pharmacy, both at Central South University, Changsha, China.

“Although adding atezolizumab to the combination of bevacizumab and chemotherapy results in significantly higher survival in patients with metastatic NSCLC, the question of whether its price reflects its potential benefit remains unclear from a value standpoint,” they wrote.

Dr. Wan and colleagues developed a Markov model to compare the lifetime cost and effectiveness of various combinations – the quadruplet ABCP regimen, the triplet BCP regimen (bevacizumab, carboplatin, and paclitaxel), and the doublet CP regimen (carboplatin and paclitaxel) – when used as first‐line treatment for metastatic nonsquamous NSCLC.

ABCP yielded an additional 0.413 quality-adjusted life-years (QALYs) and 0.460 life-years, compared with BCP, and an additional 0.738 QALYs and 0.956 life-years, compared with CP. Respective incremental costs were $234,998 and $381,116, the investigators reported in Cancer.

Ultimately, ABCP had an incremental cost-effectiveness ratio (ICER) of $568,967 per QALY, compared with BCP, and $516,114 per QALY, compared with CP – both far exceeding the conventional willingness-to-pay threshold of $100,000 per QALY.
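
These ratios follow directly from the incremental costs and QALYs reported above. A minimal sketch of the arithmetic (the helper name `icer` is ours; the published figures were computed from unrounded model outputs, so the recomputed values agree only to within rounding):

```python
def icer(incremental_cost, incremental_qalys):
    """Incremental cost-effectiveness ratio, in dollars per QALY gained."""
    return incremental_cost / incremental_qalys

# Incremental values reported for ABCP vs. the comparator regimens
abcp_vs_bcp = icer(234_998, 0.413)  # ~$569,000/QALY (reported: $568,967)
abcp_vs_cp = icer(381_116, 0.738)   # ~$516,400/QALY (reported: $516,114)

WTP = 100_000  # conventional U.S. willingness-to-pay threshold, $ per QALY
assert abcp_vs_bcp > WTP and abcp_vs_cp > WTP  # ABCP exceeds the threshold
```

Because the QALY gains are fractions of a year while the incremental costs run to hundreds of thousands of dollars, the ratio far exceeds the threshold unless drug prices fall sharply.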

Although atezolizumab targets programmed death–ligand 1 (PD-L1), the ICER improved only modestly to $464,703 per QALY when treatment was given only to patients having PD‐L1 expression of at least 50% on tumor cells or at least 10% on immune cells. Findings were similar when the duration of atezolizumab therapy was restricted to 2 years.

However, steep reductions in the costs of the two targeted agents altered results. Specifically, ABCP had an ICER of $99,786 and $162,441 per QALY, compared with BCP and CP, respectively, when the costs of atezolizumab and bevacizumab were reduced by 70%, and ABCP fell below the $100,000 willingness-to-pay threshold, compared with both regimens, when those costs were reduced by 83%.

“To our knowledge, the current study is the first cost-effectiveness analysis of ABCP, compared with BCP, in the first-line setting for patients with metastatic NSCLC,” Dr. Wan and colleagues noted. “From the perspective of the U.S. payer, ABCP is estimated not to be cost effective, compared with BCP or CP, in the first-line setting for patients with metastatic, nonsquamous NSCLC at a [willingness-to-pay] threshold of $100,000 per QALY.”

“Although ABCP is not considered to be cost effective, this does not mean that patients should receive the less-effective treatment strategy of BCP,” they cautioned, noting that recent cost-effectiveness data appear to favor the first-line combination of another immune checkpoint inhibitor, pembrolizumab (Keytruda), with chemotherapy instead. “A price reduction is warranted to make ABCP cost effective and affordable.”

Dr. Wan did not report any relevant conflicts of interest. The study was supported by grants from the National Natural Science Foundation of China and the research project of the Health and Family Planning Commission of Hunan province.

SOURCE: Wan X et al. Cancer. 2019 Jul 9. doi: 10.1002/cncr.32368.


NOACs benefit early stage chronic kidney disease patients

Consider NOACs for early chronic kidney disease
Article Type
Changed
Thu, 07/25/2019 - 12:29

Non–vitamin K oral anticoagulants (NOACs) significantly reduced the risk of stroke or systemic embolism, compared with vitamin K antagonists (VKAs), for patients in the early stages of chronic kidney disease and comorbid atrial fibrillation, based on data from a meta-analysis of roughly 34,000 patients.

Chronic kidney disease increases the risk of complications including stroke, congestive heart failure, and death in patients who also have atrial fibrillation, but most trials of anticoagulant therapy to reduce the risk of such events have excluded these patients, wrote Jeffrey T. Ha, MBBS, of the George Institute for Global Health, Newtown, Australia, and colleagues.

To assess the benefits and harms of oral anticoagulants for multiple indications in patients with chronic kidney disease, the researchers conducted a meta-analysis of 45 studies including 34,082 individuals. The findings were published in the Annals of Internal Medicine. The analysis included eight trials of patients with end-stage kidney disease on dialysis; the remaining trials excluded patients with a creatinine clearance of less than 20 mL/min or an estimated glomerular filtration rate of less than 15 mL/min per 1.73 m2. The interventional agents were rivaroxaban, dabigatran, apixaban, edoxaban, betrixaban, warfarin, and acenocoumarol.

A notable finding was the significant reduction in relative risk of stroke or systemic embolism (21%), hemorrhagic stroke (52%), and intracranial hemorrhage (51%) for early-stage chronic kidney disease patients with atrial fibrillation given NOACs, compared with those given VKAs.

The evidence for the superiority of NOACs over VKAs for reducing risk of venous thromboembolism (VTE) or VTE-related death was uncertain, as was the evidence to draw any conclusions about benefits and harms of either NOACs or VKAs for patients with advanced or end-stage kidney disease.

Across all trials, NOACs appeared to reduce the relative risk of major bleeding by roughly 25%, compared with VKAs, but the difference was not statistically significant, the researchers noted.

The findings were limited by the lack of evidence for oral anticoagulant use in patients with advanced chronic or end-stage kidney disease, as well as the inability to assess differences among NOACs, the researchers noted. However, the results suggest that NOACs may be recommended over VKAs for the subgroup of early-stage chronic kidney disease patients with atrial fibrillation, they said.

Several additional trials are in progress, and future trials “should include not only participants with dialysis-dependent ESKD [end-stage kidney disease] but also those with CrCl [creatinine clearance of] less than 25 mL/min,” and compare NOACs with placebo as well, they noted.

Lead author Dr. Ha is supported by a University Postgraduate Award from University of New South Wales, Sydney, but had no financial conflicts to disclose; coauthors disclosed support from various organizations as well as pharmaceutical companies including Baxter, Amgen, Eli Lilly, Boehringer Ingelheim, Vifor Pharma, Janssen, Pfizer, Bristol-Myers Squibb, and GlaxoSmithKline.
 

SOURCE: Ha JT et al. Ann Intern Med. 2019 July 15. doi: 10.7326/M19-0087.


The significant reduction in risk of hemorrhagic stroke, recurrent venous thromboembolism, and VTE-related deaths in patients with early-stage chronic kidney disease given a NOAC [non–vitamin K oral anticoagulant] in a meta-analysis supports clinical application, but is there a level of renal dysfunction for which clinicians should apply greater caution in extrapolating these findings? As the evidence supporting the safety and effectiveness of NOACs in the general population increases, there is a renewed interest in defining the role of anticoagulant therapy to prevent stroke and VTE in patients with chronic kidney disease and end-stage kidney disease. This interest is driven in part by uncertainty as to the benefits vs. harms of warfarin for patients with chronic kidney disease. The data in the meta-analysis by Ha and colleagues do not support any benefits for patients with end-stage disease, but the results of two ongoing clinical trials of patients with atrial fibrillation and end-stage kidney disease may offer insights.

Until the results of these trials become available, the decision to use anticoagulant therapy in patients with end-stage kidney disease will continue to require an individualized approach that balances potential benefits and harms.
 

Ainslie Hildebrand, MD, of University of Alberta, Edmonton; Christine Ribic, MD, of McMaster University, Hamilton, Ont.; and Deborah Zimmerman, MD, of the University of Ottawa, made these comments in an accompanying editorial (Ann Intern Med. 2019 July 15. doi:10.7326/M19-1504). Dr. Ribic disclosed grants from Pfizer, Leo Pharma, and Astellas Pharma. Dr. Hildebrand and Dr. Zimmerman had no financial conflicts to disclose.


Continuous anticoagulation plus cold snare colon polypectomy decreases bleeding, procedure time, hospital stay

Small polyps may be safely removed with cold snare polypectomy
Article Type
Changed
Tue, 07/16/2019 - 10:00

In patients with colon polyps who are taking anticoagulants, cold snare polypectomy with continuous administration of anticoagulants results in less bleeding, a shorter procedure time, and a shorter hospital stay than does hot snare polypectomy with periprocedural heparin bridging, according to recent research published in the Annals of Internal Medicine.

“Guidelines on peripolypectomy management of anticoagulants vary greatly, and the current updated guidelines do not recommend heparin bridging (HB) for all patients; however, direct comparison of HB with continuous administration of oral anticoagulants (CA) has provided little evidence,” Yoji Takeuchi, MD, from the Department of Gastrointestinal Oncology at Osaka International Cancer Institute in Osaka, Japan, and colleagues wrote.

While cold snare polypectomy (CSP) has been recommended by the European Society of Gastrointestinal Endoscopy for subcentimeter polyps, anticoagulant management has not been compared between these two polyp removal methods. “Cold snare polypectomy with CA may be performed safely, without the complications of HB, while theoretically maintaining an anticoagulant effect,” the researchers said.

Dr. Takeuchi and colleagues performed a randomized controlled trial of 182 patients with subcentimeter colorectal polyps who underwent either CA with CSP (CA+CSP; 92 patients) or hot snare polypectomy (HSP) with HB (HB+HSP; 90 patients) at one of 30 different Japanese centers. Patients were between 20 and 80 years old and had preserved organ function, an Eastern Cooperative Oncology Group Performance Status score of 1 or less, and were taking warfarin or a direct oral anticoagulant (DOAC) such as dabigatran, rivaroxaban, apixaban, or edoxaban. Researchers assessed the level of bleeding at 28-day follow-up, and also measured procedure time per polyp and length of hospital stay for each group.

Overall, 611 polyps were removed in 168 patients. The rate of major bleeding was 4.7% (95% confidence interval [CI], 0.2%-9.2%) in the CA+CSP group, compared with 12.0% (95% CI, 5.0%-19.1%) in the HB+HSP group, an intergroup difference of 7.3% (95% CI, 1.0%-15.7%).

“[T]he Japanese guidelines consider all patients receiving anticoagulants to be at high risk for thromboembolism associated with antithrombotic withdrawal,” Dr. Takeuchi and colleagues said. “Our results suggest that discontinuing anticoagulant therapy before polypectomy for subcentimeter polyps may be unnecessary and support the Japanese guidelines, which recommend not withholding anticoagulants for procedures with low bleeding risk.”

The researchers declared CA+CSP to be noninferior, based on a lower limit of 0.4% for the two-sided 90% CI. “[W]e noted a higher number of total and right-sided polyps in the CA+CSP group, both of which may result in more frequent bleeding episodes, which suggests that CA+CSP may be a relatively safe approach,” the researchers said. “Therefore, we think that CSP may be the least risky polypectomy procedure.”

The mean procedure time per polyp was 59.6 seconds (54.0-65.2 seconds) in the CA+CSP group, compared with 94.4 seconds (87.1-101.7 seconds) in the HB+HSP group (P < .001). The mean hospital stay was shorter for patients in the CA+CSP group, at 2.9 days (1.8-4.0 days), compared with 5.1 days (4.2-6.1 days) in the HB+HSP group (P = .003).

The study compared groups that differed in both anticoagulant management and polyp removal technique, which made it difficult to determine which intervention contributed to the results, the researchers said. In addition, the study was not blinded and was limited to subcentimeter polyps.

“Although CA+CSP is considered standard treatment for subcentimeter colorectal polyps in patients receiving anticoagulants, a larger trial is needed to identify a better management strategy for patients receiving DOACs,” the researchers said.

This study was supported by a grant from the Japanese Gastroenterological Association. The authors report no relevant conflicts of interest.

SOURCE: Takeuchi Y et al. Ann Intern Med. 2019. doi: 10.7326/M19-0026.


The safest method of removing colon polyps in patients taking continuous anticoagulants (CA) is still an open question, but the study by Takeuchi et al. shows that cold snare polypectomy (CSP) has promise, Jeffrey L. Tokar, MD, and Michael J. Bartel, MD, wrote in a related editorial.

“[T]his study adds to emerging evidence that small colorectal polyps may be resected safely with CSP while oral anticoagulation continues and provides the first comparative evidence that this strategy may be safer than [heparin bridging with hot snare polypectomy] HB+HSP,” they said.

Another consideration with CA+CSP is the risk of intraprocedural postpolypectomy bleeding; there were no cases of this kind of bleeding in the results by Takeuchi et al., which may give some clinicians reassurance about the method. However, the study did not account for the risk in patients taking warfarin or direct oral anticoagulants who had incomplete polyp resection, and neither the difference in CA therapy between CSP and HSP nor the effect of omitting heparin bridging in either technique was studied.

“The results warrant confirmatory studies, preferably with blinding to the use of anticoagulation and assessment of several additional factors: incomplete polyp resection, the effect of prophylactic hemostatic actions (such as clipping), and the applicability of CA+CSP to the removal of larger polyps and to the use of other classes of antithrombotic medications (such as thienopyridines),” Dr. Tokar and Dr. Bartel concluded.

Dr. Tokar and Dr. Bartel are from the Fox Chase Cancer Center in Philadelphia. They report no relevant conflicts of interest.

Publications
Topics
Sections
Body

It is still an open question what the safest method to remove colon polyps is in patients taking continuous anticoagulants (CA), but the study by Takeuchi et al. shows cold snare polypectomy (CSP) has promise, Jeffrey L. Tokar, MD; and Michael J. Bartel, MD, wrote in a related editorial.

“[T]his study adds to emerging evidence that small colorectal polyps may be resected safely with CSP while oral anticoagulation continues and provides the first comparative evidence that this strategy may be safer than [heparin bridging with hot snare polypectomy] HB+HSP,” they said.

Another consideration of CA+CSP is the risk of intraprocedural postpolypectomy bleeding, but there were no cases of this kind of bleeding in the results by Takeuchi et al., which may give some clinicians reassurance about the method. However, the study did not take into account the risk in patients taking warfarin or direct oral anticoagulants who had incomplete polyp resection, and the difference in CA therapy between CSP and HSP, or the effect of not using heparin bridging in CSP or HSP was not studied.

“The results warrant confirmatory studies, preferably with blinding to the use of anticoagulation and assessment of several additional factors: incomplete polyp resection, the effect of prophylactic hemostatic actions (such as clipping), and the applicability of CA+CSP to the removal of larger polyps and to the use of other classes of antithrombotic medications (such as thienopyridines),” Dr. Tokar and Dr. Bartel concluded.

Dr. Tokar and Dr. Bartel are from the Fox Chase Cancer Center in Philadelphia. They report no relevant conflicts of interest.

Body

It is still an open question what the safest method to remove colon polyps is in patients taking continuous anticoagulants (CA), but the study by Takeuchi et al. shows cold snare polypectomy (CSP) has promise, Jeffrey L. Tokar, MD; and Michael J. Bartel, MD, wrote in a related editorial.

“[T]his study adds to emerging evidence that small colorectal polyps may be resected safely with CSP while oral anticoagulation continues and provides the first comparative evidence that this strategy may be safer than [heparin bridging with hot snare polypectomy] HB+HSP,” they said.

Another consideration of CA+CSP is the risk of intraprocedural postpolypectomy bleeding, but there were no cases of this kind of bleeding in the results by Takeuchi et al., which may give some clinicians reassurance about the method. However, the study did not take into account the risk in patients taking warfarin or direct oral anticoagulants who had incomplete polyp resection, and the difference in CA therapy between CSP and HSP, or the effect of not using heparin bridging in CSP or HSP was not studied.

“The results warrant confirmatory studies, preferably with blinding to the use of anticoagulation and assessment of several additional factors: incomplete polyp resection, the effect of prophylactic hemostatic actions (such as clipping), and the applicability of CA+CSP to the removal of larger polyps and to the use of other classes of antithrombotic medications (such as thienopyridines),” Dr. Tokar and Dr. Bartel concluded.

Dr. Tokar and Dr. Bartel are from the Fox Chase Cancer Center in Philadelphia. They report no relevant conflicts of interest.

Title
Small polyps may be safely removed with cold snare polypectomy
Small polyps may be safely removed with cold snare polypectomy

Cold snare polypectomy with continuous administration of anticoagulants results in less bleeding, shorter procedure time, and shorter time in hospital in patients with colon polyps taking anticoagulants compared with hot snare polypectomy with periprocedural heparin bridging, according to recent research published in the Annals of Internal Medicine.

“Guidelines on peripolypectomy management of anticoagulants vary greatly, and the current updated guidelines do not recommend heparin bridging (HB) for all patients; however, direct comparison of HB with continuous administration of oral anticoagulants (CA) has provided little evidence,” Yoji Takeuchi, MD, from the Department of Gastrointestinal Oncology at Osaka International Cancer Institute in Osaka, Japan, and colleagues wrote.

While cold snare polypectomy (CSP) has been recommended by the European Society of Gastrointestinal Endoscopy for subcentimeter polyps, anticoagulant delivery method has not been studied between these two poly removal methods. “Cold snare polypectomy with CA may be performed safely, without the complications of HB, while theoretically maintaining an anticoagulant effect,” the researchers said.

Dr. Takeuchi and colleagues performed a randomized controlled trial of 182 patients with subcentimeter colorectal polyps who underwent either CA with CSP (CA+CSP; 92 patients) or hot snare polypectomy (HSP) with HB (HB+HSP; 90 patients) at one of 30 different Japanese centers. Patients were between 20 and 80 years old and had preserved organ function, an Eastern Cooperative Oncology Group Performance Status score of 1 or less, and were taking warfarin or a direct oral anticoagulant (DOAC) such as dabigatran, rivaroxaban, apixaban, or edoxaban. Researchers assessed the level of bleeding at 28-day follow-up, and also measured procedure time per polyp and length of hospital stay for each group.

Overall, there were 611 polyps removed in 168 patients. The rate of major bleeding in the CA+CSP group was 4.7% (95% confidence interval [CI], 0.2%-9.2%) compared with 12.0% (95% CI, 5.0%-19.1%) in the HB+HSP group with an intergroup difference of 7.3% (95% CI, 1.0%-15.7%).

“[T]he Japanese guidelines consider all patients receiving anticoagulants to be at high risk for thromboembolism associated with antithrombotic withdrawal,” Dr. Takeuchi and colleagues said. “Our results suggest that discontinuing anticoagulant therapy before polypectomy for subcentimeter polyps may be unnecessary and support the Japanese guidelines, which recommend not withholding anticoagulants for procedures with low bleeding risk.”

The researchers declared CA+CSP to be non-inferior with a 0.4% lower limit of 2-sided 90% CI. “[W]e noted a higher number of total and right-sided polyps in the CA+CSP group, both of which may result in more frequent bleeding episodes, which suggests that CA+CSP may be a relatively safe approach,” the researchers said. “Therefore, we think that CSP may be the least risky polypectomy procedure.”

The mean procedure time per polyp was 59.6 seconds in the CA+CSP group (54.0-65.2 seconds) compared with 94.4 seconds in the HB+HSP group (87.1-101.7 seconds; P less than .001). Mean hospital stay for patients in the CA+CSP group was shorter at 2.9 days (1.8-4.0 days) compared with 5.1 days in the HB+HSP group (4.2-6.1 days; P equals .003).

The study examined patients receiving two different anticoagulant delivery methods and polyp removal procedures, which made it difficult to determine which intervention contributed to the results, the researchers said. In addition, the study was not blinded and polyp type was limited to only subcentimeter polyps.

“Although CA+CSP is considered standard treatment for subcentimeter colorectal polyps in patients receiving anticoagulants, a larger trial is needed to identify a better management strategy for patients receiving DOACs,” the researchers said.

This study was supported by a grant from the Japanese Gastroenterological Association. The authors report no relevant conflicts of interest.

SOURCE: Takeuchi Y et al. Ann Intern Med. 2019;doi: 10.7326/M19-0026 .

Cold snare polypectomy with continuous administration of anticoagulants results in less bleeding, shorter procedure time, and shorter time in hospital in patients with colon polyps taking anticoagulants compared with hot snare polypectomy with periprocedural heparin bridging, according to recent research published in the Annals of Internal Medicine.

“Guidelines on peripolypectomy management of anticoagulants vary greatly, and the current updated guidelines do not recommend heparin bridging (HB) for all patients; however, direct comparison of HB with continuous administration of oral anticoagulants (CA) has provided little evidence,” Yoji Takeuchi, MD, from the Department of Gastrointestinal Oncology at Osaka International Cancer Institute in Osaka, Japan, and colleagues wrote.

While cold snare polypectomy (CSP) has been recommended by the European Society of Gastrointestinal Endoscopy for subcentimeter polyps, anticoagulant delivery method has not been studied between these two poly removal methods. “Cold snare polypectomy with CA may be performed safely, without the complications of HB, while theoretically maintaining an anticoagulant effect,” the researchers said.

Dr. Takeuchi and colleagues performed a randomized controlled trial of 182 patients with subcentimeter colorectal polyps who underwent either CA with CSP (CA+CSP; 92 patients) or hot snare polypectomy (HSP) with HB (HB+HSP; 90 patients) at one of 30 different Japanese centers. Patients were between 20 and 80 years old and had preserved organ function, an Eastern Cooperative Oncology Group Performance Status score of 1 or less, and were taking warfarin or a direct oral anticoagulant (DOAC) such as dabigatran, rivaroxaban, apixaban, or edoxaban. Researchers assessed the level of bleeding at 28-day follow-up, and also measured procedure time per polyp and length of hospital stay for each group.

Overall, there were 611 polyps removed in 168 patients. The rate of major bleeding in the CA+CSP group was 4.7% (95% confidence interval [CI], 0.2%-9.2%) compared with 12.0% (95% CI, 5.0%-19.1%) in the HB+HSP group with an intergroup difference of 7.3% (95% CI, 1.0%-15.7%).

“[T]he Japanese guidelines consider all patients receiving anticoagulants to be at high risk for thromboembolism associated with antithrombotic withdrawal,” Dr. Takeuchi and colleagues said. “Our results suggest that discontinuing anticoagulant therapy before polypectomy for subcentimeter polyps may be unnecessary and support the Japanese guidelines, which recommend not withholding anticoagulants for procedures with low bleeding risk.”

The researchers declared CA+CSP to be non-inferior with a 0.4% lower limit of 2-sided 90% CI. “[W]e noted a higher number of total and right-sided polyps in the CA+CSP group, both of which may result in more frequent bleeding episodes, which suggests that CA+CSP may be a relatively safe approach,” the researchers said. “Therefore, we think that CSP may be the least risky polypectomy procedure.”

The mean procedure time per polyp was 59.6 seconds in the CA+CSP group (54.0-65.2 seconds) compared with 94.4 seconds in the HB+HSP group (87.1-101.7 seconds; P less than .001). Mean hospital stay for patients in the CA+CSP group was shorter at 2.9 days (1.8-4.0 days) compared with 5.1 days in the HB+HSP group (4.2-6.1 days; P equals .003).

The study examined patients receiving two different anticoagulant delivery methods and polyp removal procedures, which made it difficult to determine which intervention contributed to the results, the researchers said. In addition, the study was not blinded and polyp type was limited to only subcentimeter polyps.

“Although CA+CSP is considered standard treatment for subcentimeter colorectal polyps in patients receiving anticoagulants, a larger trial is needed to identify a better management strategy for patients receiving DOACs,” the researchers said.

This study was supported by a grant from the Japanese Gastroenterological Association. The authors report no relevant conflicts of interest.

SOURCE: Takeuchi Y et al. Ann Intern Med. 2019. doi: 10.7326/M19-0026.

Vitals

Key clinical point: Cold snare polypectomy (CSP) with continuous administration of anticoagulants (CA) for removal of colon polyps appears to result in less bleeding, shorter procedure time, and shorter hospital stay than heparin bridging (HB) with hot snare polypectomy (HSP).

Major finding: The rate of major bleeding in the CA+CSP group was 4.7% compared with 12.0% in the HB+HSP group.

Study details: A prospective, open-label, parallel, multicenter randomized controlled trial of 182 patients who underwent CA+CSP or HB+HSP at 30 Japanese institutions between June 2016 and April 2018.

Disclosures: This study was supported by a grant from the Japanese Gastroenterological Association. The authors report no relevant conflicts of interest.

Source: Takeuchi Y et al. Ann Intern Med. 2019. doi: 10.7326/M19-0026.


Acne and Rosacea - July 2019 Supplement


The 2019 Acne & Rosacea supplement features a selection of articles on these two topics published in Dermatology News during the previous year, with commentaries by dermatologists Hilary E. Baldwin, MD, and Julie C. Harper, MD, both past presidents of the American Acne & Rosacea Society. They were also both members of the American Academy of Dermatology work group that developed the AAD’s updated guidelines on the management of acne vulgaris, published in 2016.

Highlights include:

  • The global incidence of rosacea
  • Rosacea likely undertreated in skin of color
  • The impact of isotretinoin on the dermal microbiome
  • Laser treatments of acne and rosacea

 

Click here to view the supplement.


COPD eosinophil counts predict steroid responders


Triple therapy with an inhaled corticosteroid is particularly helpful for patients with chronic obstructive pulmonary disease (COPD) who have high baseline eosinophil counts, a trial involving more than 10,000 patients found.

Former smokers received greater benefit from inhaled corticosteroids (ICS) than did current smokers, reported lead author Steven Pascoe, MBBS, of GlaxoSmithKline, and colleagues. The investigators noted that these findings can help personalize therapy for patients with COPD, which can be challenging to treat because of its heterogeneity. The study was published in Lancet Respiratory Medicine.

The phase 3 IMPACT trial compared single-inhaler fluticasone furoate–umeclidinium–vilanterol with umeclidinium-vilanterol and fluticasone furoate–vilanterol in patients with moderate to very severe COPD at high risk of exacerbation. Of the 10,333 patients involved, approximately one-quarter (26%) had one or more severe exacerbations in the previous year and nearly half (47%) had two or more moderate exacerbations in the same time period. All patients were symptomatic and were aged 40 years or older. A variety of baseline and demographic patient characteristics were recorded, including blood eosinophil count and smoking status. Responses to therapy were measured with trough forced expiratory volume in 1 second (FEV1), symptom scoring, and a quality of life questionnaire.

After 52 weeks, results showed that higher baseline eosinophil counts were associated with progressively greater benefits in favor of triple therapy. For patients with baseline blood eosinophil counts of at least 310 cells per mcL, triple therapy was associated with about half as many moderate and severe exacerbations as treatment with umeclidinium-vilanterol (rate ratio = 0.56; 95% confidence interval, 0.47-0.66). For patients with less than 90 cells per mcL at baseline, the rate ratio for the same two regimens was 0.88, but with a confidence interval crossing 1 (0.74-1.04).

For fluticasone furoate–vilanterol vs. umeclidinium-vilanterol, high baseline eosinophil count again predicted ICS efficacy, with a rate ratio of 0.56 (0.47-0.66), compared with 1.09 (0.91-1.29) for patients below the lower threshold. Symptom scoring, quality of life, and FEV1 followed a similar trend, although the investigators noted that this was “less marked” for FEV1. Although the trend held regardless of smoking status, benefits were more pronounced among former smokers than current smokers.
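
A rate ratio reads directly as a relative change in event frequency: the reported ratio of 0.56 corresponds to roughly 44% fewer exacerbations, which is why the text describes it as “about half as many.” A one-line illustration:

```python
def relative_reduction(rate_ratio):
    """Relative reduction in event rate implied by a rate ratio."""
    return 1.0 - rate_ratio

# 0.56 is the reported rate ratio for triple therapy vs.
# umeclidinium-vilanterol at >= 310 eosinophils per mcL.
print(f"{relative_reduction(0.56):.0%} fewer exacerbations")  # prints: 44% fewer exacerbations
```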

“In former smokers, ICS benefits were observed at all blood eosinophil counts when comparing triple therapy with umeclidinium-vilanterol, whereas in current smokers no ICS benefit was observed at lower eosinophil counts, less than approximately 200 eosinophils per [mcL],” the investigators wrote.

“Overall, these results show the potential use of blood eosinophil counts in conjunction with smoking status to predict the magnitude of ICS response within a dual or triple-combination therapy,” the investigators concluded. “Future approaches to the pharmacological management of COPD should move beyond the simple dichotomization of each clinical or biomarker variable, toward more complex algorithms that integrate the interactions between important variables including exacerbation history, smoking status, and blood eosinophil counts.”

The study was funded by GlaxoSmithKline. The investigators disclosed additional relationships with AstraZeneca, Boehringer Ingelheim, Chiesi, CSA Medical, and others.

SOURCE: Pascoe S et al. Lancet Resp Med. 2019 Jul 4. doi: 10.1016/S2213-2600(19)30190-0.



Cathepsin Z identified as a potential biomarker for osteoporosis

 

The presence of cathepsin Z messenger RNA in peripheral blood mononuclear cells could serve as a biomarker to help diagnose osteoporosis, based on its elevated expression in people with osteopenia or osteoporosis, including women older than 50 years with osteoporosis, according to a recent study published in Scientific Reports.

Dong L. Barraclough, PhD, of the Institute of Ageing and Chronic Disease at the University of Liverpool, England, and colleagues studied the expression of cathepsin Z messenger RNA (mRNA) in peripheral blood mononuclear cells (PBMCs) of 88 participants (71 women, 17 men). The participants were grouped according to their bone mineral density and T score, where a T score of −1.0 or higher was considered nonosteoporotic, a score between −1.0 and −2.5 was classified as osteopenia, and −2.5 or less was classified as osteoporosis.
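
The grouping rule above can be written out directly. This sketch (mine, not the paper's) simply encodes the stated T score cutoffs:

```python
def classify_t_score(t_score):
    """Bone-density group from a T score, per the study's cutoffs."""
    if t_score >= -1.0:
        return "nonosteoporotic"
    elif t_score <= -2.5:
        return "osteoporosis"
    else:
        return "osteopenia"

print(classify_t_score(-1.8))  # prints: osteopenia
```

In practice, classification uses the lowest T score across measured sites (e.g., lumbar spine and femoral neck).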

Overall, there were 48 participants with osteopenia (38 women, 10 men; 55% of total participants; average age, 65 years), 23 participants with osteoporosis (19 women, 4 men; 26%; 69 years), and 17 participants in the nonosteoporotic control group (14 women, 3 men; 19%; 56 years), with 88% of the total number of participants aged 50 years and older (82% women, 18% men).

The researchers found significantly higher differential expression of cathepsin Z mRNA in PBMCs when comparing the nonosteoporotic control group and participants with osteopenia (95% confidence interval, −0.32 to −0.053; P = .0067), the control group with participants with osteoporosis (95% CI, −0.543 to −0.24; P less than .0001), and participants with osteopenia and those with osteoporosis (95% CI, −0.325 to −0.084; P = .0011).

That association also was seen in women with osteoporosis who were older than 50 years (P = .0016) and did not change when participants were excluded for receiving treatment for osteoporosis, the authors wrote.

Cathepsin Z mRNA levels also were inversely associated with bone mineral density (P = .0149), with lumbar spine L2-L4 and femoral neck T scores (P = .0002 and P = .0139, respectively), and with fragility fracture (P = .0018) in participants with osteopenia or osteoporosis and in women with osteoporosis older than 50 years.

Patients with chronic inflammatory disease sometimes have “osteoporosis-like conditions,” the authors noted. “However, there was no significant difference in cathepsin Z mRNA levels between osteopenia and osteoporosis patients who were also suffering from chronic inflammatory disorders and those [who] were not,” either when all osteopenia and osteoporosis participants were included (P = .774), or when only women participants with osteopenia or osteoporosis and older than 50 years were included (P = .666).


“The observation that [participants] with osteopenia also showed a significant increase in cathepsin Z mRNA, compared [with] nonosteoporotic controls, strongly suggests that, if replicated in a larger study, the cathepsin Z mRNA in patients’ PBMC preparations could form the basis of a test for osteoporosis, which could aid in the detection of osteoporosis before a critical and expensive fragility fracture occurs,” the authors wrote.

The authors reported no relevant conflicts of interest.
 

SOURCE: Dera AA et al. Sci Rep. 2019 Jul 5. doi: 10.1038/s41598-019-46068-0.



Stillbirth linked to nearly fivefold increase in maternal morbidity risk


Severe maternal morbidity is almost five times more common in women who have stillbirth deliveries than in women who have live births, according to research in Obstetrics & Gynecology.

JazzIRT/Getty Images

Citing major increases in risk for a host of serious complications, the authors of the large population-based study urge those caring for women experiencing stillbirth to be vigilant for trouble.

Severe maternal morbidity among mothers experiencing stillbirth occurred in 578 cases per 10,000 deliveries, compared with 99 cases per 10,000 live deliveries, wrote Elizabeth Wall-Wieler, PhD, and coauthors. After statistical adjustment, the relative risk (RR) for severe maternal morbidity in a stillbirth compared with a live delivery was 4.77 (95% confidence interval, 4.53-5.02).
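
The crude ratio of the two reported rates can be computed directly; it comes out somewhat higher than the published RR of 4.77, which was adjusted for covariates:

```python
# Reported severe-morbidity rates, per 10,000 deliveries.
stillbirth_rate = 578 / 10_000   # stillbirth deliveries
live_birth_rate = 99 / 10_000    # live birth deliveries

crude_rr = stillbirth_rate / live_birth_rate
print(f"crude RR = {crude_rr:.2f}")  # prints: crude RR = 5.84
```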

“Our findings indicate that nearly 1 in 17 women who deliver a stillbirth in California experience severe maternal morbidity. Furthermore, the risk of severe maternal morbidity was more than fourfold higher for women undergoing stillbirth delivery than live birth delivery,” the investigators wrote.

Major maternal organ dysfunction or failure – including acute renal failure, adult respiratory distress syndrome, disseminated intravascular coagulation, sepsis, or shock – all were more common in stillbirth deliveries, noted Dr. Wall-Wieler and colleagues. Hysterectomy, likely performed to control major loss of blood, also was more likely in stillbirth deliveries.

“Minimal attention has been given to maternal outcomes and acute complications experienced by women who have a stillbirth,” wrote Dr. Wall-Wieler, a postdoctoral research fellow in developmental and neonatal medicine, and colleagues at Stanford (Calif.) University. That is largely because many analyses of maternal morbidity exclude stillbirth deliveries or lump them in with term deliveries, she and coauthors explained.

Using data from the Office of Statewide Health Planning and Development in California, Dr. Wall-Wieler and colleagues examined a total of 6,459,842 deliveries occurring in the state during 1999-2011; of these, 25,997 (0.4%) were stillbirths. For the cross-sectional study, the investigators included only deliveries for which fetal or neonatal vital records could be linked with the maternal hospital record.

Stillbirth was defined in the study as a fetal death delivered at or after 20 weeks’ gestation, so deliveries at less than 20 weeks’ gestation were excluded, as were any deliveries recorded as being at or after 45 weeks’ gestation, because the latter set were considered likely to be data entry errors.

Deliveries were considered to have severe maternal morbidity if any of the 18 indicators identified by the Centers for Disease Control and Prevention were coded in the medical record. The most common severe morbidities seen in stillbirth were blood transfusion, disseminated intravascular coagulation, and acute renal failure (adjusted RRs, 5.38, 8.78, and 13.22, respectively). Although absolute occurrences were less frequent, the relative risks for sepsis and shock were more than 14 times higher for stillbirths than for live birth deliveries.

“Taken together, these findings suggest the morbidity associated with obstetric hemorrhage and preeclampsia among women hospitalized for stillbirth delivery is a serious concern,” wrote Dr. Wall-Wieler and coauthors. They called for prospective studies to clarify cause and effect between stillbirth and these morbidities and to look into whether women carrying a nonviable fetus or with known fetal demise are managed differently than those with a viable fetus.

Overall, stillbirth deliveries were more likely for women who were older, for non-Hispanic black women, for those who did not have a college education, and those who did not have private insurance. Preexisting diabetes and hypertension, as well as a vaginal delivery, also upped the risk for stillbirth.

For reasons that are not completely clear, the risk for severe maternal morbidity with stillbirth climbed after 30 weeks’ gestation. Dr. Wall-Wieler and collaborators conducted an exploratory analysis that dichotomized deliveries for both stillbirth and live births into those occurring at fewer than 30 weeks’ gestation, or at or after 30 weeks’. They found no increased risk for severe maternal morbidity earlier than 30 weeks, but an RR of 5.4 for stillbirth at or after 30 weeks.

A reported cause of fetal demise was available for 71% of deliveries, with umbilical cord anomalies, obstetric complications, and placental conditions collectively accounting for almost half (46%) of the identified causes of demise. Severe maternal morbidity was most common in deaths related to hypertensive disorders, at 24/100, and least common in deaths from major fetal structural or genetic problems, at 1/100.

The size of the study strengthens the findings, said the investigators, but the large amount of missing data in recording fetal deaths does introduce some limitations. These include the inability to distinguish between intrapartum and antepartum fetal death, as well as the fact that cause of fetal death was not recorded for over one in four stillbirths.

“Given the recent calls to reduce the national rate of severe maternal morbidity, new public health initiatives and practice guidelines are needed to highlight and address the morbidity risk associated with stillbirth identified in this study,” wrote Dr. Wall-Wieler and colleagues.

The study was funded by the National Institutes of Health and by Stanford University. Ronald S. Gibbs, MD, reported receiving money from Novavax/ACI. Alexander J. Butwick, MD, reported receiving money from Cerus Corp. and Instrumentation Laboratory. The other coauthors reported no relevant financial conflicts of interest.

SOURCE: Wall-Wieler E et al. Obstet Gynecol. 2019;134(2):310-7.

Publications
Topics
Sections

 

Severe maternal morbidity is almost five times more common in women who have stillbirth deliveries than in women who have live births, according to research in Obstetrics & Gynecology.

JazzIRT/Getty Images

Citing major increases in risk for a host of serious complications, the authors of the large population-based study urge those caring for women experiencing stillbirth to be vigilant for trouble.

Severe maternal morbidity among mothers experiencing stillbirth occurred in 578 cases per 10,000 deliveries, compared with 99 cases per 10,000 live deliveries, wrote Elizabeth Wall-Wieler, PhD, and coauthors. After statistical adjustment, the relative risk (RR) for severe maternal morbidity in a stillbirth compared with a live delivery was 4.77 (95% confidence interval, 4.53-5.02).
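The adjusted RR of 4.77 reflects the authors' statistical model; as a rough sanity check, a crude (unadjusted) relative risk can be recovered directly from the two reported rates. A minimal sketch in Python — the rates are from the article, but the helper function is purely illustrative:

```python
def crude_relative_risk(rate_exposed, rate_unexposed):
    """Unadjusted relative risk: the ratio of two incidence rates."""
    return rate_exposed / rate_unexposed

# Rates of severe maternal morbidity reported in the study, per 10,000 deliveries
smm_stillbirth = 578 / 10_000   # stillbirth deliveries
smm_livebirth = 99 / 10_000     # live deliveries

rr = crude_relative_risk(smm_stillbirth, smm_livebirth)
print(round(rr, 2))  # 5.84 crude, vs. the adjusted RR of 4.77
```

The crude ratio is higher than the adjusted one, consistent with the adjustment accounting for risk factors (age, comorbidities) that are more common in the stillbirth group.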

“Our findings indicate that nearly 1 in 17 women who deliver a stillbirth in California experience severe maternal morbidity. Furthermore, the risk of severe maternal morbidity was more than fourfold higher for women undergoing stillbirth delivery than live birth delivery,” the investigators wrote.

Major maternal organ dysfunction or failure – including acute renal failure, adult respiratory distress syndrome, disseminated intravascular coagulation, sepsis, or shock – all were more common in stillbirth deliveries, noted Dr. Wall-Wieler and colleagues. Hysterectomy, likely performed to control major loss of blood, also was more likely in stillbirth deliveries.

“Minimal attention has been given to maternal outcomes and acute complications experienced by women who have a stillbirth,” wrote Dr. Wall-Wieler, a postdoctoral research fellow in developmental and neonatal medicine, and colleagues at Stanford (Calif.) University. This is because many analyses of maternal morbidity exclude stillbirth deliveries or lump them together with term deliveries, she and coauthors explained.

Using data from the Office of Statewide Health Planning and Development in California, Dr. Wall-Wieler and colleagues examined a total of 6,459,842 deliveries occurring in the state during 1999-2011; of these, 25,997 (0.4%) were stillbirths. For the cross-sectional study, the investigators included only deliveries for which fetal or neonatal vital records could be linked with the maternal hospital record.

Stillbirth was defined in the study as a fetal death delivered at or after 20 weeks’ gestation, so deliveries at less than 20 weeks’ gestation were excluded, as were any deliveries recorded as being at or after 45 weeks’ gestation, because the latter set were considered likely to be data entry errors.
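The inclusion rule described above amounts to a simple gestational-age filter. A sketch of that logic (the record field name is hypothetical; the 20- and 45-week cutoffs are from the study):

```python
MIN_WEEKS = 20   # deliveries before 20 weeks' gestation were excluded
MAX_WEEKS = 45   # 45 weeks or later treated as likely data-entry errors

def eligible(delivery):
    """Keep deliveries with gestational age in [20, 45) weeks."""
    return MIN_WEEKS <= delivery["gestational_age_weeks"] < MAX_WEEKS

deliveries = [
    {"id": 1, "gestational_age_weeks": 19},  # excluded: under 20 weeks
    {"id": 2, "gestational_age_weeks": 38},  # included
    {"id": 3, "gestational_age_weeks": 46},  # excluded: presumed entry error
]
kept = [d["id"] for d in deliveries if eligible(d)]
print(kept)  # [2]
```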

Deliveries were considered to have severe maternal morbidity if any of the 18 indicators identified by the Centers for Disease Control and Prevention were coded in the medical record. The most common severe morbidities seen in stillbirth were blood transfusion, disseminated intravascular coagulation, and acute renal failure (adjusted RRs 5.38, 8.78, and 13.22, respectively). Although absolute occurrences were less frequent, relative risks for sepsis and shock were more than 14 times higher for stillbirths than for live birth deliveries.
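The case definition — any of the CDC's 18 indicators coded in the record — is essentially a set-intersection check. A sketch of that logic; the indicator labels below are stand-ins, since the actual CDC definition uses specific ICD diagnosis and procedure codes:

```python
# Stand-in labels for the CDC severe-maternal-morbidity indicators;
# the real definition maps each indicator to specific ICD codes.
SMM_INDICATORS = {
    "blood_transfusion",
    "disseminated_intravascular_coagulation",
    "acute_renal_failure",
    "sepsis",
    "shock",
    # ...remaining indicators omitted for brevity
}

def has_severe_morbidity(record_codes):
    """True if any severe-morbidity indicator appears among a record's codes."""
    return not SMM_INDICATORS.isdisjoint(record_codes)

print(has_severe_morbidity({"blood_transfusion", "anemia"}))  # True
print(has_severe_morbidity({"anemia"}))                       # False
```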

“Taken together, these findings suggest the morbidity associated with obstetric hemorrhage and preeclampsia among women hospitalized for stillbirth delivery is a serious concern,” wrote Dr. Wall-Wieler and coauthors. They called for prospective studies to clarify cause and effect between stillbirth and these morbidities and to look into whether women carrying a nonviable fetus or with known fetal demise are managed differently than those with a viable fetus.

Overall, stillbirth deliveries were more likely for women who were older, for non-Hispanic black women, for those who did not have a college education, and those who did not have private insurance. Preexisting diabetes and hypertension, as well as a vaginal delivery, also upped the risk for stillbirth.

For reasons that are not completely clear, the risk for severe maternal morbidity with stillbirth climbed after 30 weeks’ gestation. Dr. Wall-Wieler and collaborators conducted an exploratory analysis that dichotomized both stillbirth and live birth deliveries into those occurring before 30 weeks’ gestation and those at or after 30 weeks’. They found no increased risk for severe maternal morbidity before 30 weeks, but an RR of 5.4 for stillbirth deliveries at or after 30 weeks.

A reported cause of fetal demise was available for 71% of deliveries, with umbilical cord anomalies, obstetric complications, and placental conditions collectively accounting for almost half (46%) of the identified causes of demise. Severe maternal morbidity was most common in deaths related to hypertensive disorders, at 24/100, and least common in deaths from major fetal structural or genetic problems, at 1/100.

The size of the study strengthens the findings, said the investigators, but the large amount of missing data in recording fetal deaths does introduce some limitations. These include the inability to distinguish between intrapartum and antepartum fetal death, as well as the fact that cause of fetal death was not recorded for over one in four stillbirths.

“Given the recent calls to reduce the national rate of severe maternal morbidity, new public health initiatives and practice guidelines are needed to highlight and address the morbidity risk associated with stillbirth identified in this study,” wrote Dr. Wall-Wieler and colleagues.

The study was funded by the National Institutes of Health and by Stanford University. Ronald S. Gibbs, MD, reported receiving money from Novavax/ACI. Alexander J. Butwick, MD, reported receiving money from Cerus Corp. and Instrumentation Laboratory. The other coauthors reported no relevant financial conflicts of interest.

SOURCE: Wall-Wieler E et al. Obstet Gynecol. 2019 Aug;134(2):310-7.

 


FROM OBSTETRICS & GYNECOLOGY
