50 years of child psychiatry, developmental-behavioral pediatrics

The 50th anniversary of Pediatric News prompts us to look back on the past 50 years in child psychiatry and developmental-behavioral pediatrics and to reflect on the evolution of the field: the approach to diagnosis, the thinking about development and the family, and the approach to treatment – and access to it – during this dynamic period.

While some historians identify the establishment of the first juvenile court in Chicago in 1899, and the work to help judges evaluate juvenile delinquency, as the origin of child psychiatry in the United States, it was not until after World War II that the field really began to take root here, largely through psychiatrists fleeing Europe and the seminal work of Anna Freud. Some of the earliest connections between pediatrics and child psychiatry were based on the work in England of Donald W. Winnicott, a practicing pediatrician and child psychiatrist; the work of Albert J. Solnit, MD, at the Yale Child Study Center; and the psychologically informed work of pediatrician Benjamin M. Spock, MD.

The first Diagnostic and Statistical Manual (DSM) was published in 1952, based on a codification of mental disorders developed by the U.S. military during WWII. The American Academy of Child & Adolescent Psychiatry (AACAP) was established in 1953, the same year that the first “tranquilizer,” chlorpromazine (Thorazine), was introduced (in France), marking the start of a revolution in psychiatric care. In 1959, the first candidates sat for a board certification examination in child psychiatry. The Section on Developmental and Behavioral Pediatrics was established as part of the American Academy of Pediatrics in 1960 to support training in this area. The AACAP established a journal in 1961. Child guidance clinics started affiliating with hospitals and universities in the 1960’s, after the Community Mental Health Act of 1963. Then, in 1965, Julius B. Richmond, MD (a pediatrician), and Uri Bronfenbrenner, PhD (a developmental psychologist), recognizing the importance of ecological systems to child development, were involved in the creation of Head Start, and the first Joint Commission on Mental Health for Children was established by federal legislation that same year. The field was truly coalescing into a distinct discipline of medicine, one that bridged pediatrics, psychiatry, and neurology with nonmedical disciplines such as justice and education.

The decade between 1967 and 1977 was a period of transition from the focus on psychoanalytic concepts typical of the first half of the century to a more systematic approach to diagnosis. Children in psychiatric treatment commonly had been seen for extended individual treatments, and those with more disruptive disorders often were hospitalized for long periods. Psychoanalysis focused on the unconscious (theoretical drives and conflicts) to guide treatment. Treatment often focused on the presumed causal role of parents, and family treatment was common, even on inpatient units. The second edition of the DSM (DSM-II) was published in 1968, with its first distinct section for disorders of childhood and adolescence and an overarching focus on psychodynamics. In 1974, the decision was made to publish a new edition of the DSM that would establish a multiaxial assessment system (separating “biological” mental health problems from personality disorders, medical illnesses, and psychosocial stressors) and research-oriented diagnostic criteria intended to facilitate reliable diagnoses based on common clusters of symptoms. Field trials sponsored by the National Institute of Mental Health (NIMH) began in 1977 to establish the reliability of the new diagnoses.

The year 1977 saw the launch of the Apple II computer, the New York City blackout, the release of the first “Star Wars” movie, and the start of a momentous decade in general and child psychiatry. The third edition of the DSM (DSM-III), published in 1980, began a revolution in psychiatric diagnosis and treatment. It created reliable, reproducible diagnostic constructs to serve as the basis for studies of epidemiology and treatment. Implications of causality were replaced by description; for example, hyperkinetic reaction of childhood was redefined and relabeled attention-deficit disorder. Recognizing the importance of research and training in this rapidly changing field, the W.T. Grant Foundation funded 11 fellowship programs in 1977, and the Society for Developmental and Behavioral Pediatrics was founded in 1982 by the leaders of those programs.

In 1983, the AACAP published “Child Psychiatry: A Plan for the Coming Decades,” the result of 5 years’ work by 100 child psychiatrists, general psychiatrists, pediatricians, epidemiologists, nurses, leaders of the NIMH, and various child advocates. This report laid out a challenge for child psychiatry to develop research strategies that would allow evidence-based understanding and treatment of the mental illnesses of children. The established focus on individual experience and anecdotal data, particularly about social and psychodynamic influences, would shift towards a more scientific approach to diagnosis and treatment. This decade started an explosion in epidemiologic research, medication trials, and controlled studies of nonbiological treatments in child psychiatry. At the same time, the political landscape changed, and an ascendant conservatism began the process of closing publicly funded residential treatment centers that had offered care to children with chronic mental illness or profound developmental disorders. This would accelerate the shift towards outpatient psychiatric care of children. Ironically, even as research in child psychiatry accelerated, access to effective treatments would become more difficult.

The decade from 1987 to 1997 was a period of dramatic growth in medication use in child psychiatry. Fluoxetine (Prozac) was approved by the Food and Drug Administration for use in the United States in 1988 and was soon followed by other selective serotonin reuptake inhibitors (sertraline [Zoloft] in 1991 and paroxetine [Paxil] in 1992). The journal of the AACAP began to publish more randomized controlled trials of medication treatments in children with DSM-codified diagnoses, and clinicians became more comfortable using stimulants, antidepressants, and even antipsychotic medications in the outpatient setting. This trend was reinforced by the emergence of managed care and the denial of coverage for alleged “nonbiological” diagnoses and for many psychiatric treatments. Loss of reimbursement led to a significant decline in resources, particularly inpatient child psychiatry beds and specialized clinics. This, in turn, contributed to the growing emphasis on medication treatments for children’s mental health problems. For-profit managed care companies underbid each other to provide mental health coverage and incentivized medication visits. Of note, the medical budgets, not the mental health carve-outs, were billed for the medications prescribed.

The Americans with Disabilities Act was passed in 1990, increasing the funding for school-based mental health resources for children, and in 1996, Congress passed the Mental Health Parity Act, the first of several legislative attempts to ensure parity between insurance coverage for medical and psychiatric illnesses – legislation that to this day has not achieved parity of access to care. As pediatricians took on more mental health care, a multidisciplinary team created a primary care version of the DSM-IV, the DSM-IV-PC, in 1995 to help define symptom levels below the threshold for disorder and thereby facilitate earlier intervention. A formal subspecialty of developmental-behavioral pediatrics was established in 1999 to educate leaders in the field. Pediatric residents have had required training in developmental-behavioral pediatrics since 2008.

The year 1997 saw the first nationwide survey of parents about attention-deficit/hyperactivity disorder, kicking off what could be called the decade of ADHD, in which prevalence rates steadily climbed, from 5.7% in 1997 to 9.5% in 2007. The prevalence of stimulant treatment in children skyrocketed in this period. According to the NIMH, stimulants were prescribed to 4.2% of 6- to 12-year-olds in 1996, and that number grew to 5.1% in 2008. For 13- to 18-year-olds, the rate more than doubled during this time, from 2.3% in 1996 to 4.9% in 2008. The prevalence of autism also grew dramatically during this time, from 1.9 per 1,000 children in 1997-1999 to 7.4 per 1,000 in 2006-2008, probably reflecting an evolving understanding of the disorder and the special access to school resources that the diagnosis provides.

Research during this decade became increasingly focused on imaging studies of children (and adults), as leaders in the field were trying to move from symptom clusters to anatomic and physiologic correlates of psychiatric illness. The great increase in medication use in children hit a speed bump in October 2004, when the Food and Drug Administration issued a controversial public warning about an increased risk of suicidal thoughts or behaviors in youth being treated with SSRI antidepressants. As access to child psychiatric treatment had become more difficult over the preceding decades, pediatricians had assumed much of the medication treatment of common psychiatric problems. The FDA’s black box warning complicated pediatricians’ efforts to fill this void.

The last decade has been the decade of genetics and of efforts to improve access to care. It started in 2007 with the FDA expanding its SSRI warning to acknowledge that depression itself increases the risk for suicide, in an effort not to discourage needed depression treatment in young people. But studies demonstrated that the rates of diagnosing and treating depression dropped dramatically in the years following the warning: Diagnoses of depression declined by as much as 42% in children, and the rate of antidepressant treatment in adolescents dropped by as much as 32% in the 2 years following the warning (N Engl J Med. 2014 Oct 30;371[18]:1666-8). There was no compensatory increase in utilization of other kinds of treatments. While suicide rates in young people had been stubbornly steady from the mid-1970’s to the mid-1990’s, they began to decline in 1996, according to the Centers for Disease Control and Prevention. But that trend was broken in 2004, with a jump in attempted and completed suicides in young people. The rate stabilized later in the decade but has never returned to the lows achieved prior to the warning.

This decade was marked by the passage of the Affordable Care Act, including – again – an unfulfilled mandate for mental health parity for insurance plans in the marketplace. Although diagnosis is still symptom based, the effort to define psychiatric disorders based on brain anatomy, neurotransmitters, and genomics continues to intensify. There is growing evidence that psychiatric disorders are a matter not of nature or nurture, but of nature and nurture. Epigenetic findings show that environment affects gene expression and brain functioning. These findings promise to deepen our understanding of the critical role of early experiences (consider Adverse Childhood Experiences [ACE] scores) and of the protective power of relationships, in schools and in parenting.

And what will come next? We believe that silos – medical, psychiatric, parenting, school, environment – will be bridged to understand the many factors that affect behavior and treatment, but the need to advocate for policies that fund the education and mental health care of children, and the training of professionals to provide that care, is never ending. As our knowledge of the genome marches forward, we may discover effective strategies for preventing the emergence of mental illness in children or create individualized treatments. We may learn more about the role of nutrition and the microbiome in health and disease, and about autoimmunity and mental illness. Our focus may return to parents, not as culprits, but as the mediators of health from the prenatal period on. Technology may enable us to improve access to effective treatments, with teens monitoring their sleep and mood and accessing therapy on their smartphones. And our understanding of development and vulnerability may help us stem the rise in autism or collaborate with educators so that education can better put every child on their healthiest possible path. We look forward to experiencing it – and writing about it – with you!

Dr. Swick is an attending psychiatrist in the division of child psychiatry at Massachusetts General Hospital, Boston, and director of the Parenting at a Challenging Time (PACT) Program at the Vernon Cancer Center at Newton Wellesley Hospital, also in Boston. Dr. Jellinek is professor emeritus of psychiatry and pediatrics, Harvard Medical School, Boston. They said they had no relevant financial disclosures. Dr. Howard is assistant professor of pediatrics at Johns Hopkins University, Baltimore, and creator of CHADIS (www.CHADIS.com). She had no other relevant disclosures. Dr. Howard’s contribution to this publication was as a paid expert to Frontline Medical News. Email them at [email protected].


Ublituximab was safe, highly active in rituximab-pretreated B-cell NHL, CLL

The investigational anti-CD20 monoclonal antibody ublituximab is safe and has good antitumor activity in patients with B-cell non-Hodgkin lymphoma (B-NHL) or chronic lymphocytic leukemia (CLL) who have previously received the anti-CD20 antibody rituximab, results from a phase I/II trial suggest.

Ublituximab is engineered to have a low fucose content. This feature gives it enhanced antibody-dependent cellular cytotoxicity relative to other anti-CD20 antibodies, especially against tumors having low expression of that protein, as may occur in the development of rituximab resistance.

In the trial, nearly half of the 35 patients studied had a complete or partial response to ublituximab, including nearly one-third of those whose disease was refractory to rituximab (Rituxan) (Br J Haematol. 2017 Apr;177[2]:243-53). The main adverse events were infusion-related reactions, fatigue, pyrexia, and diarrhea, but almost all were lower grade.

“Ublituximab was well tolerated and efficacious in a heterogeneous and highly rituximab–pretreated patient population,” said the investigators, who were led by Ahmed Sawas, MD, at the Center for Lymphoid Malignancies, Columbia University Medical Center, N.Y.

The observed response rate is much the same as the rates seen with two other anti-CD20 antibodies – obinutuzumab (Gazyva) and ofatumumab (Arzerra) – in similar patient populations, and ublituximab may have advantages in terms of fewer higher-grade infusion-related reactions and a shorter infusion time.

“Enhanced anti-CD20 [monoclonal antibodies] that are well tolerated and active in rituximab-resistant disease can provide meaningful clinical benefit to patients with limited treatment options,” the investigators noted.

The trial enrolled 27 patients with B-NHL and 8 patients with CLL (or small lymphocytic lymphoma) who had rituximab-refractory disease (defined by progression on or within 6 months of receiving that agent) or rituximab-relapsed disease (defined by progression more than 6 months after receiving it). They had received a median of three prior therapies.

The patients were treated on an open-label basis with ublituximab at various doses as induction therapy (3-4 weekly infusions during cycles 1 and 2) and then as maintenance therapy (monthly during cycles 3-5, then once every 3 months for up to 2 years). All patients received an oral antihistamine and steroids before infusions.

By the end of the trial, 60% of patients had discontinued treatment because of progression; 23% had discontinued because of adverse events, physician decision, or other reasons; and the remaining 17% had received all planned treatment, Dr. Sawas and his coinvestigators reported.

None of the patients experienced dose-limiting toxicities or unexpected adverse events. The rate of any-grade adverse events was 100%, and the rate specifically of grade 3/4 adverse events was 49%. The rate of serious adverse events (most commonly pneumonia) was 37%.

The leading nonhematologic adverse events were infusion-related reactions (40%; grade 3/4, 0%), fatigue (37%; grade 3/4, 3%), pyrexia (29%; grade 3/4, 0%), and diarrhea (26%; grade 3/4, 0%).

The leading hematologic adverse events were neutropenia (14%; grade 3/4, 14%), with no associated infections; anemia (11%; grade 3/4, 6%); and thrombocytopenia (6%; grade 3/4, 6%), with no associated bleeding.

The overall response rate was 45% (44% in the B-NHL cohort and 50% in the CLL cohort); the majority of responses were partial responses. Notably, the rate was 31% among the subset of patients who had rituximab-refractory disease.

The median duration of response to ublituximab was 9.2 months, and the median progression-free survival was 7.7 months.

“Anti-CD20 therapy has demonstrated the greatest benefit in combination, traditionally with multidrug chemotherapy–based regimens,” the investigators noted. “While the introduction of novel targeted therapies has shifted the treatment paradigm of CLL and indolent lymphoma, the activity of these agents is likely to be potentiated by the addition of an anti-CD20 [monoclonal antibody], given their different mechanisms of action.”

In fact, several multidrug, non–chemotherapy-based regimens are showing promising efficacy and milder toxicity in early trials, they pointed out. “In similar fashion, ublituximab is being evaluated for the treatment of NHL or CLL in combination with other agents,” such as the immunomodulator lenalidomide (Revlimid) and the Bruton tyrosine kinase inhibitor ibrutinib (Imbruvica).

TG Therapeutics funded the trial. Dr. Sawas disclosed that he receives research funds from TG Therapeutics.

Vitals

Key clinical point: Ublituximab had good safety and antitumor activity in rituximab-relapsed or -refractory B-NHL or CLL.

Major finding: The overall response rate was 45%. There were no dose-limiting toxicities; main adverse events of any grade were infusion-related reactions (40%), fatigue (37%), pyrexia (29%), and diarrhea (26%).

Data source: A phase I/II trial among 35 patients with B-NHL or CLL who had previously received rituximab.

Disclosures: TG Therapeutics funded the trial. Dr. Sawas disclosed that he receives research funds from TG Therapeutics.

Sleep-Disordered Breathing in the Active-Duty Military Population and Its Civilian Care Cost

A 5-year review of an active-duty service member population found increased costs, prevalence, and incidence of sleep-disordered breathing.

Sleep-disordered breathing (SDB) is a continuum of symptoms that range from primary snoring with upper airway resistance to frank obstruction seen in obstructive sleep apnea (OSA). This disease spectrum has been reported to affect 10% to 17% of men and 3% to 9% of women in the general population.1 The specific incidence of OSA has been estimated to be about 2% to 4% of the general adult population.2,3 Sleep-disordered breathing often leads to poor sleep quality, which has been associated with many medical comorbidities, including vascular disease, hypertension, major cardiac events, cardiomyopathies, impaired concentration, reduced psychomotor vigilance and cognition, and daytime somnolence.1,2,4-6 Furthermore, there is evidence that the prevalence of SDB continues to grow among the general population.1 However, the prevalence of SDB in various populations (eg, pediatric vs adult, varying body mass index, country of origin) varies widely due to the multifactorial nature of the risk factors and the difficulty in diagnosing SDB.

Some of the more intuitive medical sequelae of SDB are daytime somnolence and subsequent impaired concentration for those with disrupted sleep patterns. Medical literature has paid specific attention to cohorts of personnel who may be at heightened risk from impaired concentration or inability to focus. These populations include but are not limited to sleep-deprived resident physicians, firefighters, truck drivers, and heavy-machine operators.7,8

Military service members represent a distinct cohort that often is relied on to maintain vigilance even in austere environments. Concentration is paramount for performing combat operations or operating heavy machinery, such as nuclear submarines, aircraft, or tanks. Given the myriad unique operational demands on service members, SDB can have detrimental consequences for an individual’s health and his or her military readiness and training. Ultimately, SDB may degrade a unit’s effectiveness and perhaps the country’s military capability.

Active-duty military service members seem to be more susceptible to clinically relevant sleep conditions. In the military, the causes of disrupted sleep are multifactorial; the medical literature focuses on circadian disruptions from shift work and frequent travel, alternating use of caffeine and sedatives, exposure to combat and trauma, and chronic sleep deprivation.9-11 Published studies have focused on service members returning from combat deployment.10,12,13 However, these studies do not explore the overall burden of disease, and no specific data exist on its prevalence, annual incidence, or associated costs across the force.

To quantify this disease burden in the military, this study focused on the subset of sleep disorders that affect respiration during sleep and determined their prevalence and annual incidence for the entire active-duty population. Additionally, the authors fill a void in the literature by quantifying the burden of SDB on civilian care expenditures.

Methods

This study was a retrospective review of administrative military health care data spanning fiscal years (FYs) 2009 to 2013 (October 1, 2008 to September 30, 2013). The study protocol was approved by the Naval Medical Center Portsmouth Institutional Review Board, which also waived the requirement for informed consent. The Health Analysis Department at the Navy and Marine Corps Public Health Center (NMCPHC) obtained and analyzed data from the Military Health System (MHS) Management Analysis and Reporting Tool (M2). The M2 system is an ad hoc query tool for viewing population, clinical, and financial MHS data, including care received within military treatment facilities (MTFs) and care purchased through TRICARE at civilian facilities. Both inpatient and outpatient health care records were included.

The population included all active-duty service members and guard/reserve members on active duty within all military services, including air force, army, coast guard, and navy branches, between FY 2009 and FY 2013. The authors identified service members with SDB as those with at least 1 ICD-9 diagnosis code related to SDB: obstructive sleep apnea (327.23); sleep-related hypoventilation/hypoxemia (327.26); and other organic sleep disorder (327.80).

Because the military population is transient, the overall number of service members eligible for care was calculated as a monthly average over the 5 study years (1,717,227 service members).

Data Analysis

Prevalence of diagnosed SDB per FY was calculated as the number of service members who received at least 1 SDB diagnostic code during that FY, divided by the average total active-duty population. Incidence was calculated as the number of new cases per FY, using FY 2009 as the baseline. Data were stratified by demographic and enrollment information for diagnosed service members and analyzed with SAS 9.4 (SAS Institute, Cary, NC).
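To make these rate definitions concrete, the sketch below shows one way the per-FY prevalence and incidence calculations could be implemented. It is a minimal illustration under stated assumptions, not the authors’ SAS code: the flat (member_id, fiscal_year, icd9) record layout, the field names, and the annual_rates helper are all hypothetical.

```python
from collections import defaultdict

# Hypothetical illustration of the study's rate definitions; the record
# layout and names are assumed, not drawn from the M2 system itself.
SDB_CODES = {"327.23", "327.26", "327.80"}  # OSA; hypoventilation/hypoxemia; other organic sleep disorder
AVG_POPULATION = 1_717_227                  # 5-year monthly average of eligible service members

def annual_rates(claims):
    """Map each fiscal year to (prevalence, incidence) from (member_id, fy, icd9) records."""
    diagnosed = defaultdict(set)            # fy -> members with >=1 SDB code that year
    for member_id, fy, icd9 in claims:
        if icd9 in SDB_CODES:
            diagnosed[fy].add(member_id)

    rates, ever_diagnosed = {}, set()
    for fy in sorted(diagnosed):            # the first year (FY 2009) is the baseline,
        members = diagnosed[fy]             # so its "incidence" would not be reported
        new_cases = members - ever_diagnosed
        ever_diagnosed |= members
        rates[fy] = (len(members) / AVG_POPULATION,
                     len(new_cases) / AVG_POPULATION)
    return rates
```

Under this definition, a member counts toward prevalence in every FY in which an SDB code appears but toward incidence only in the first such year, mirroring the study’s use of FY 2009 as the baseline.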

Direct costs associated with SDB treatment fall into 2 categories: (1) care delivered by civilian providers, calculated from insurance claim data as the amount TRICARE paid for each service; and (2) care received at MTFs from military providers. MTF costs could not be calculated, because the total cost recorded for an encounter is not attributable to the SDB diagnosis specifically.
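A companion sketch for the cost calculation, again hypothetical: it assumes a flat (fiscal_year, icd9, paid_amount) layout for TRICARE purchased-care claim lines and simply sums what TRICARE paid for SDB-coded care. MTF encounters are omitted for the reason given above: their records carry no diagnosis-attributable cost.

```python
from collections import defaultdict

SDB_CODES = {"327.23", "327.26", "327.80"}  # same assumed code set as above

def civilian_costs_by_fy(tricare_claims):
    """Sum TRICARE-paid amounts per fiscal year for SDB-coded purchased care.

    Assumed claim-line layout: (fiscal_year, icd9, paid_amount); any claim
    line carrying an SDB code is attributed entirely to SDB, matching the
    study's stated cost assumption.
    """
    totals = defaultdict(float)
    for fy, icd9, paid in tricare_claims:
        if icd9 in SDB_CODES:
            totals[fy] += paid
    return dict(totals)
```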

 

 

Results

A total of 197,183 service members were diagnosed with SDB from FY 2009 to FY 2013. Both the annual incidence and prevalence of SDB for the active-duty military population showed upward trends for each of the years evaluated (Figure 1).

Annual prevalence of SDB diagnoses increased from 2.4% in FY 2009 to 4.9% in FY 2013. Annual incidence increased from 2.0% to 2.7% between FY 2010 and FY 2013.

Notably, 72% of service members seen for SDB ranged in age from 25 to 44 years (Table).

Even though the military is about 15% female, only 8% of the patients diagnosed with SDB were female. Nearly three-quarters (73%) of diagnosed service members had previously deployed in overseas contingency operations, suggesting a possible impact on military readiness and capability. A study using these specific demographic distributions is being conducted to assess the significance of possible predictive factors.

The rising trend in SDB civilian care costs from FY 2009 through FY 2012 plateaued in FY 2013. The highest annual cost was $99,954,780 in FY 2012, compared with $51,911,146 in FY 2009 (Figure 2). Civilian care costs thus rose about 93% from FY 2009 to the FY 2012 peak, an 89% increase through the FY 2013 total of $98,259,519. As expected in the care of SDB, outpatient treatment accounted for most of the cost.
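As a check, the percent changes follow directly from the reported totals:

$$\frac{99{,}954{,}780 - 51{,}911{,}146}{51{,}911{,}146} \approx 0.925 \quad (\text{FY 2009 to the FY 2012 peak}), \qquad \frac{98{,}259{,}519 - 51{,}911{,}146}{51{,}911{,}146} \approx 0.893 \quad (\text{FY 2009 to FY 2013}).$$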

Discussion

This study shows that the prevalence and incidence of SDB in the active-duty population are lower than those reported for the civilian populace as a whole but still greater than expected for an otherwise young and healthy population. Furthermore, the burden of disease and the cost of diagnosis and treatment increased steadily across each of the 5 fiscal years assessed.

The data show an upward trend in the incidence and prevalence of SDB in the military from FY 2009 to FY 2013 for reasons that are not clear but likely reflect many confounding contributions. As the spectrum of SDB has become better defined and its detrimental sequelae better understood, both service members and health care providers are likely more aware of the symptoms and, more important, of the potential for interventions that improve quality of life. It is also worth noting that the U.S. military is a transient organization with nearly constant turnover between new enlistees/officers and those leaving the service or retiring after 20 years. Thus, despite an annual incidence of nearly 3% throughout the years evaluated, the annual increase in prevalence is not necessarily commensurate.

The FY 2013 prevalence (4.9%) and civilian care costs ($98,259,519) are traditional indicators of disease burden, and both point to a sizable and increasing burden for the military. These figures reflect only the short-term expenses of initial diagnosis and therapy; they do not capture care for the long-term medical sequelae recently linked to uncontrolled SDB/OSA, such as heart and vascular disease, hypertension, and increased stroke risk. Those additional costs can be expected to grow.

Perhaps the most validated predictive factor for a diagnosis of SDB or OSA is body habitus as measured by body mass index (BMI); an estimated 60% to 90% of patients with OSA are obese.2 Weight gain seems to increase OSA severity, whereas weight loss decreases it.14-16 Although the U.S. military employs height and weight standards that preclude those with persistently overweight or obese BMIs from continued service, these standards often are not rigidly enforced, and there are overweight and even obese active-duty members. Interestingly, despite a population that essentially controls for the most predictive risk factor, the prevalence of SDB was still approximately 1 in 20 (4.9%) in FY 2013.

Given the significant burden of disease represented by the incidence, prevalence, and cost data in this study, and the growing recognition of long-term complications from poorly controlled SDB, more efficacious interventions are clearly needed. Modern treatments for SDB can be classified as surgical or nonsurgical, but no single modality fits all patients, owing to poor adherence and/or limited efficacy.17-20 To mitigate the impact on military readiness and taxpayer-funded health care costs, it may be appropriate to explore therapeutic options beyond the current standard of care. For example, an invasive and costly one-time surgical intervention, an implantable device that stimulates the hypoglossal nerve to open the airway during inspiration, is being investigated in a younger, nonobese cohort of patients.21 Further research into this and other therapeutic models is warranted for service members.

 

 

Limitations

Limitations of this study include possible reporting errors from improper or insufficient medical coding, as well as data entry errors that may exist within medical billing databases. The results of this analysis may therefore over- or underestimate the true burden. The increase in incidence and prevalence also may not reflect an increasing number of people who have the disease; it could result from better SDB detection practices or from incentives to be diagnosed with SDB (eg, VA disability claims upon retirement). Finally, the analysis assumes that procedures coded with SDB diagnoses are directly related to SDB and that any costs incurred from those procedures are due to SDB.

It is also important to note variability across services and institutions within the DoD in the diagnosis and treatment of SDB. Some institutions use ambulatory (home) polysomnography and autotitrating continuous positive airway pressure, whereas others require more costly hospital-based studies and laboratory titration. Another confounder in the cost data is the volume of diagnosis and treatment deferred to the civilian network because of the relatively small number of sleep-trained physicians within the military.

Conclusion

As the sleep medicine literature continues to develop, it is becoming clearer that the detrimental sequelae of SDB are varied and pose significant short- and long-term risks. Active-duty service members represent a subset of the population for whom the consequences are potentially graver than for civilians, especially when they are expected to operate complicated machinery or make rapid, critical decisions in battle.

The prevalence and incidence of SDB increased each year over the 5-year review period, and SDB now affects about 1 in 20 service members. Furthermore, the cost of civilian care for this disease was nearly $100 million in each of FY 2012 and FY 2013, suggesting a growing financial burden for taxpayers. Further research is warranted to fully appreciate the impact of SDB on both service members and the U.S. military.

Acknowledgments
The authors thank the U.S. Navy and specifically the support within the Department of Otolaryngology at the Naval Medical Center Portsmouth for the time and effort allotted for completion of this study. This research was supported in part by an appointment to the Postgraduate Research Participation Program at the Navy and Marine Corps Public Health Center (NMCPHC) administered by the Oak Ridge Institute for Science and Education through an interagency agreement between the U.S. Department of Energy and NMCPHC.

References

1. Peppard PE, Young T, Barnet JH, Palta M, Hagen EW, Hla KM. Increased prevalence of sleep-disordered breathing in adults. Am J Epidemiol. 2013;177(9):1006-1014.

2. Young T, Peppard PE, Gottlieb DJ. Epidemiology of obstructive sleep apnea: a population health perspective. Am J Respir Crit Care Med. 2002;165(9):1217-1239.

3. Ram S, Seirawan H, Kumar SK, Clark GT. Prevalence and impact of sleep disorders and sleep habits in the United States. Sleep Breath. 2010;14(1):63-70.

4. Kim H, Dinges DF, Young T. Sleep-disordered breathing and psychomotor vigilance in a community-based sample. Sleep. 2007;30(10):1309-1316.

5. Yaffe K, Falvey CM, Hoang T. Connections between sleep and cognition in older adults. Lancet Neurol. 2014;13(10):1017-1028.

6. Gilat H, Vinker S, Buda I, Soudry E, Shani M, Bachar G. Obstructive sleep apnea and cardiovascular comorbidities: a large epidemiologic study. Medicine (Baltimore). 2014;93(9):e45.

7. Li X, Sundquist K, Sundquist J. Socioeconomic status and occupation as risk factors for obstructive sleep apnea in Sweden: a population-based study. Sleep Med. 2008;9(2):129-136.

8. Barger LK, Rajaratnam SM, Wang W, et al. Common sleep disorders increase risk of motor vehicle crashes and adverse health outcomes in firefighters. J Clin Sleep Med. 2015;11(3):233-240.

9. Mysliwiec V, Gill J, Lee H, et al. Sleep disorders in US military personnel: a high rate of comorbid insomnia and obstructive sleep apnea. Chest. 2013;144(2):549-557.

10. Mysliwiec V, McGraw L, Pierce R, Smith P, Trapp B, Roth BJ. Sleep disorders and associated medical comorbidities in active duty military personnel. Sleep. 2013;36(2):167-174.

11. Capaldi VF 2nd, Guerrero ML, Killgore WD. Sleep disruptions among returning combat veterans from Iraq and Afghanistan. Mil Med. 2011;176(8):879-888.

12. Collen J, Orr N, Lettieri CJ, Carter K, Holley AB. Sleep disturbances among soldiers with combat-related traumatic brain injury. Chest. 2012;142(3):622-630.

13. Peterson AL, Goodie JL, Satterfield WA, Brim WL. Sleep disturbance during military deployment. Mil Med. 2008;173(3):230-235.

14. Dixon JB, Schachter LM, O’Brien PE. Polysomnography before and after weight loss in obese patients with severe sleep apnea. Int J Obes (Lond). 2005;29(9):1048-1054.

15. Loube DI, Loube AA, Erman MK. Continuous positive airway pressure treatment results in weight loss in obese and overweight patients with obstructive sleep apnea. J Am Diet Assoc. 1997;97(8):896-897.

16. Loube DI, Loube AA, Mitler MM. Weight loss for obstructive sleep apnea: the optimal therapy for obese patients. J Am Diet Assoc. 1994;94(11):1291-1295.

17. Malhotra A, Orr JE, Owens RL. On the cutting edge of obstructive sleep apnoea: where next? Lancet Respir Med. 2015;3(5):397-403.

18. Mysliwiec V, Capaldi VF 2nd, Gill J, et al. Adherence to positive airway pressure therapy in U.S. military personnel with sleep apnea improves sleepiness, sleep quality, and depressive symptoms. Mil Med. 2015;180(4):475-482.

19. Salepci B, Caglayan B, Kiral N, et al. CPAP adherence of patients with obstructive sleep apnea. Respir Care. 2013;58(9):1467-1473.

20. Weaver TE, Grunstein RR. Adherence to continuous positive airway pressure therapy: the challenge to effective treatment. Proc Am Thorac Soc. 2008;5(2):173-178.

21. Pietzsch JB, Liu S, Garner AM, Kezirian EJ, Strollo PJ. Long-term cost-effectiveness of upper airway stimulation for the treatment of obstructive sleep apnea: a model-based projection based on the STAR trial. Sleep. 2015;38(5):735-744.

Author and Disclosure Information

Dr. Eliason is a resident physician; Dr. Jardine and Dr. McIntyre are staff physicians; and Dr. Meyer is an intern physician; all in the department of otolaryngology at Naval Medical Center Portsmouth in Virginia. Ms. Pelchy is an epidemiologist in the health analysis department of the Navy and Marine Corps Public Health Center in Portsmouth.

Author disclosures
The authors report no actual or potential conflicts of interest with regard to this article.

Disclaimer
The opinions expressed herein are those of the authors and do not necessarily reflect those of Federal Practitioner, Frontline Medical Communications Inc., the U.S. Government, or any of its agencies. This article may discuss unlabeled or investigational use of certain drugs. Please review the complete prescribing information for specific drugs or drug combinations—including indications, contraindications, warnings, and adverse effects—before administering pharmacologic therapy to patients.

Issue
Federal Practitioner - 34(2), pages 32-36

Walking can benefit advanced cancer patients

Article Type
Changed
Sun, 02/19/2017 - 14:02

[Photo courtesy of Walking for Health/Paul Glendell: group walk in Epsom, England]

Walking for 30 minutes 3 times a week can improve quality of life for patients with advanced cancer, according to research published in BMJ Open.

The study indicated that some patients with advanced cancer may not be able to commit to weekly walks with a group of fellow patients.

However, some patients enjoyed walking in groups, and most reported benefits from regular walks, whether taken alone or with others.

“Findings from this important study show that exercise is valued by, suitable for, and beneficial to people with advanced cancer,” said study author Emma Ream, RN, PhD, of the University of Surrey in the UK.

“Rather than shying away from exercise, people with advanced disease should be encouraged to be more active and incorporate exercise into their daily lives where possible.”

One hundred and ten patients with advanced cancer were eligible to participate in this study, but 49 (47%) declined, primarily because of work commitments that kept them from committing to a weekly walking group.

The 42 patients who did participate in this study were divided into 2 groups.

Group 1 (n=21) received coaching, which included a short motivational interview, as well as the recommendation to walk for at least 30 minutes on alternate days and attend a volunteer-led group walk weekly.

Patients in group 2 (n=21) were encouraged to maintain their current level of activity.

Nineteen participants (45%) withdrew from the study—11 in group 1 and 8 in group 2. In general, patients did not provide reasons for withdrawal. However, 2 patients were too unwell to participate, and 2 patients died during the study.

At 6, 12, and 24 weeks, scores on quality of life questionnaires were not significantly different between groups 1 and 2.

However, in interviews, patients in group 1 said they felt walking provided physical, emotional, and psychological benefits, as well as improvements in social well-being and lifestyle.

At 24 weeks, 8 of 9 participants in group 1 said they found the walking intervention useful, and 7 participants said they were satisfied with it.

Some patients said walking improved their attitude toward their illness and spoke of the social benefits of participating in group walks.

But other patients were dissatisfied with the walking groups. They reported accessibility issues and a dislike of group activities. One younger individual felt the group was more appropriate for older patients.

“This study is a first step towards exploring how walking can help people living with advanced cancer,” said study author Jo Armes, RGN, PhD, of King’s College London in the UK.

“Walking is a free and accessible form of physical activity, and patients reported that it made a real difference to their quality of life. Further research is needed with a larger number of people to provide definitive evidence that walking improves both health outcomes and social and emotional wellbeing in this group of people.”

Understanding a rare hemoglobin mutation

Photo by Tiffany Dawn Nicholson
Woman smoking

Smoking can prevent anemia in individuals with a rare hemoglobin mutation, according to research published in the Journal of Biological Chemistry.

The so-called Kirklareli mutation was found to be the cause of mild anemia in a young woman in Germany.

But a smoking habit protected the young woman’s father, who also carried the mutation, from developing anemia.

The Kirklareli mutation is one of more than 1000 discovered so far in adult human hemoglobin.

Most of these mutations appear to have no effect on people, but when medical problems occur, the disease is called a hemoglobinopathy and often named after the city or hospital where it was discovered. In this case, the family was living in Mannheim, Germany, but the father was born in the Turkish city of Kirklareli.

The Kirklareli mutation did not affect the iron content of the father’s blood, but it did appear to be the root cause of the young woman’s chronic anemia, according to researchers.

Further investigation revealed that absorbing carbon monoxide from cigarette smoke is therapeutic for individuals with this rare genetic disorder.

The Kirklareli mutation, a histidine-to-leucine substitution in the alpha subunit of human hemoglobin (H58L), causes the protein to auto-oxidize rapidly, fall apart, lose heme, and precipitate. As a result, the protein loses its ability to carry oxygen. Eventually, red blood cells become deformed and are destroyed.

The mutation also gives the protein an 80,000-fold higher affinity for carbon monoxide than for oxygen, so carbon monoxide from a cigarette is selectively taken up by the mutant hemoglobin, preventing it from oxidizing and denaturing.

This high affinity for carbon monoxide explained why the father showed no signs of anemia, the researchers said.

“He may never be an athlete because his blood can’t carry as much oxygen, but smoking has prevented him from being anemic,” said study author John Olson, PhD, of Rice University in Houston, Texas.

“And there’s a side benefit. People with this trait are more resistant to carbon monoxide poisoning.”

Dr Olson said he doesn’t know whether or how doctors treated the young woman, but he suspects her iron-deficiency anemia was more an annoyance than a threat to her life, and he would not recommend she start smoking to relieve it.

“She shouldn’t smoke,” Dr Olson said. “But she could take antioxidants, such as a lot of vitamin C, which would help prevent oxidation of her mutant hemoglobin. Her anemia is not that severe. At the same time, she shouldn’t worry too much about secondhand smoke, which might have a positive effect.”

After ruling out common causes of anemia—such as blood loss, gastritis, or congenital defects—the woman’s doctors were curious enough about her ailment to call upon Emmanuel Bissé, MD, PhD, a researcher at Universitätsklinikum Freiburg in Freiburg, Germany, who discovered the Kirklareli mutation after sequencing the woman’s DNA.

Dr Bissé, in turn, recruited Dr Olson and his team to help determine why the histidine-to-leucine change caused anemia in the daughter but not the father.

Coincidentally, Ivan Birukou, a graduate student in Dr Olson’s lab, had already generated the Kirklareli mutation in human hemoglobin to study how the protein rapidly and selectively binds oxygen.

“Emmanuel wrote to me and said, ‘I know you’ve been making all these mutants in hemoglobin, and you’ve probably done the H58L mutation in [alpha] chains. Does this phenotype make sense?’” Dr Olson recalled.

“I said, ‘We can do a really neat study here, because we’ve already made the mutant hemoglobin in a recombinant system.’ We actually had a crystal structure [matching Kirklareli] that Ivan and [staff scientist] Jayashree Soman never published but had deposited in the Protein Data Bank. We had made this mutation to try to understand what the distal histidine was doing in alpha subunits.”

The researchers found in a 2010 study that replacing the histidine, which forms a strong hydrogen bond to oxygen, with leucine caused a dramatic decrease in oxygen affinity and an increase in carbon monoxide binding.

Dr Olson and Birukou realized back then that histidine played a key role in discriminating between oxygen and carbon monoxide in hemoglobin.

“When Emmanuel wrote to me about his discovery, I already ‘knew’ what was happening with respect to carbon monoxide binding,” Dr Olson said.

He said the normal hydrogen bond causes bound oxygen to stick more tightly to hemoglobin in the same way hydrogen bonds cause spilled soda to feel sticky.

“When you touch it, the sugar oxygens and hydrogens make hydrogen bonds with the polysaccharides on your finger,” Dr Olson said. “That stickiness helps hold onto oxygen. But leucine is more like an oil, like butane or hexane, and oxygen does not stick well inside hemoglobin. In contrast, bound carbon monoxide is more like methane or ethane and can’t form hydrogen bonds.”

Andres Benitez Cardenas, PhD, a researcher in Dr Olson’s lab, did the experiment in which he put carbon monoxide on the mutant alpha subunit of hemoglobin Kirklareli. The bound carbon monoxide slowed down oxidation of the protein and prevented loss of heme and precipitation.

“In effect, Andres did the ‘smoking experiment’ to show why the father’s hemoglobin didn’t denature and cause anemia,” Dr Olson said.

He noted that the effect caused by Kirklareli, though unusual, is not unique. Patients with hemoglobin Zurich also have an abnormal form of hemoglobin that more readily binds to carbon monoxide.

Mitigating Stress Levels May Impact Seizures

While RCTs have not proven stress management works, many patients say it does.

Clinicians would be wise to recommend stress reduction techniques to patients with epilepsy, even though randomized controlled trials have yet to demonstrate that stress management reduces seizure frequency. One survey suggested that most patients who report stress-triggered seizures use some form of stress reduction, and most say it is effective. McKee et al also point out that stress management has been shown to improve quality of life in this patient population. The investigators further recommended that stressed patients with epilepsy be screened for depression, anxiety, and other treatable mood disorders, since these are more common in this population.

McKee HR, Privitera MD. Stress as a seizure precipitant: Identification, associated factors, and treatment options. Seizure. 2017;44:21-26.

Restless Leg Syndrome More Common in Temporal Lobe Epilepsy

Right-sided TLE patients are more likely to have restless legs.

Patients with temporal lobe epilepsy (TLE) are more likely to experience restless legs syndrome (RLS) than the general population, according to a recent outpatient clinic analysis that compared 98 TLE patients with 50 controls who had no history of epilepsy and no family members with the disorder. The investigators also found that the odds of developing RLS were 4.6 times greater in patients with right-sided TLE than in those with left-sided TLE. They suggested that worsening RLS may serve as an early warning of an impending seizure in some patients.

Geyer JD, Geyer EE, Fetterman Z, Carney PR. Epilepsy and restless legs syndrome. Epilepsy Behav. 2017;68:41-44.

Half of Patients With Epilepsy Do Not Receive Medication Soon Enough

An analysis of Medicare records found that 79.6% of new patients receive monotherapy, but it is delayed in 50%.

A recent analysis of Medicare records found that, among 3706 new cases of epilepsy, 79.6% of patients had received one antiepileptic drug within 1 year of follow-up. However, only 50% had received prompt therapy, defined as receiving the first medication within 30 days of diagnosis. The delay in initiating monotherapy was detected in retrospective analyses of 2008–2010 Medicare administrative claims from a 5% random sample of beneficiaries. The investigators called for additional research to determine the reasons for the delays and urged the development of new paradigms to improve patient care.

Martin RC, Faught E, Szaflarski JP, et al. What does the U.S. Medicare administrative claims database tell us about initial antiepileptic drug treatment for older adults with new-onset epilepsy? Epilepsia. 2017 [Epub ahead of print].

Urgent colonoscopy for LGIB: Consider case by case

Colonoscopy performed within 24 hours of lower gastrointestinal bleeding appears safe and well tolerated but does not appear to improve a number of important clinical outcomes when compared with elective colonoscopy, according to the findings of a systematic review and meta-analysis.

Such “urgent colonoscopy” may, however, reduce hospital length of stay and cost, Abdul M. Kouanda, MD, of the University of California, San Francisco, and his colleagues reported online in Gastrointestinal Endoscopy.

In a pooled analysis of data from 12 studies with a total of 10,172 patients who underwent urgent colonoscopy, and 14,224 patients who underwent elective colonoscopy, the former was associated with increased use of endoscopic therapeutic interventions, compared with elective colonoscopy (relative risk, 1.70), but not with improved bleeding source localization (RR, 1.08), adverse event rates (RR, 1.05), rebleeding rates (RR, 1.14), transfusion requirements (RR, 1.02), or mortality (RR, 1.17), the investigators found (Gastrointest Endosc. 2017 Feb 4. doi: 10.1016/j.gie.2017.01.035).

The findings are based on nine studies from the United States, two from Japan, and one from Spain. Nine were retrospective cohort studies, two were randomized controlled trials, and one was a prospective cohort study.

With respect to the 70% greater use of therapeutic interventions with urgent colonoscopy, a subanalysis showed that the difference between urgent and elective colonoscopy was evident only in the randomized trials; no difference was seen in the prospective trials. With further stratification of urgent colonoscopy into procedures performed within 12 hours, the observation of increased therapeutic interventions was no longer statistically significant (RR, 3.46), they said.

As for bleeding source localization, the outcomes remained similar when retrospective studies were analyzed separately, and with colonoscopy performed within 12 hours. Blood transfusions decreased with urgent colonoscopy when only retrospective studies were analyzed (RR, 0.84).

The investigators noted that there was a trend toward decreased length of hospital stay among those undergoing urgent colonoscopy (mean of 4.8 days vs. 6.4 days with elective colonoscopy). Only two studies looked at cost: One showed a decrease in hospital costs with urgent vs. elective colonoscopy, while one showed no difference.

“In our pooled analysis, the mean hospital costs in the urgent colonoscopy group were $24,866, compared with $27,691 in the elective group; however, the difference between the two was not statistically significant,” they wrote.

The annual incidence of lower gastrointestinal bleeding (LGIB) in the United States is 20.5-35.7 per 100,000 patients, and the incidence increases with age, rising 200-fold from the third to the ninth decade of life, the investigators said. They added that the incidence is rising as the population ages.

“Such a trend has important implications for both the quality of care for treating LGIB and the associated costs to the overall U.S. health care system,” they wrote, noting that while colonoscopy is appropriate for evaluating LGIB in most cases, no clear consensus exists with respect to timing of colonoscopy.

Even a recent American Society for Gastrointestinal Endoscopy guideline recommending that initial colonoscopy for severe and hemodynamically stable hematochezia be performed within 8-24 hours of admission is based only on moderate-quality evidence that is “fraught with a number of limitations,” they wrote.

The current study was designed to “further clarify the utility of urgent versus elective colonoscopy in evaluating patients hospitalized with a lower GI bleed,” they added.

The lack of clinical benefit seen in this study “may be secondary to the benign, often self-resolving natural history in the majority of LGIB cases. However, there may be a subset of patients who could benefit from early intervention (such as severe blood loss, hemodynamically unstable patients), and thus the decision to pursue urgent colonoscopy should be made on a case-by-case basis,” they said.

Further, although several critical patient outcomes did not appear to be affected by urgent vs. elective colonoscopy in this study, the trend toward a shorter hospital stay suggests that earlier colonoscopy may allow earlier and better identification of low-risk and high-risk stigmata, so that patients with low-risk lesions can be discharged sooner.

“Additionally, earlier discharge of patients could also reduce their risk of health care–associated infections and adverse events,” the investigators noted.

The findings with respect to length of stay and cost “align perfectly with the new focus in health care on providing high quality and safe care to patients while at the same time containing medical costs,” they wrote, adding that clinicians should carefully consider all factors when deciding to pursue urgent colonoscopy.

The study is limited by heterogeneity and publication bias, and by factors inherent in meta-analyses, but it also has several strengths, including a large number of studies and patients. Also, it is the first of its kind to examine “all of the available literature to elucidate the time frame for performing colonoscopy in patients with hematochezia,” the investigators said, concluding that further research is needed to identify subsets of patients who will benefit from early intervention, to evaluate the cost-effectiveness of urgent colonoscopy, and to examine, in larger randomized controlled trials, the overall benefit of urgent colonoscopy.

The authors reported having no disclosures.

FROM GASTROINTESTINAL ENDOSCOPY

Vitals

Key clinical point: Urgent vs. elective colonoscopy for lower GI bleeding does not appear to improve a number of important clinical outcomes.

Major finding: Urgent vs. elective colonoscopy was not associated with improved bleeding source localization (RR, 1.08), adverse event rates (RR, 1.05), rebleeding rates (RR, 1.14), transfusion requirements (RR, 1.02), or mortality (RR, 1.17).

Data source: A systematic review and meta-analysis of 12 studies including more than 24,000 patients.

Disclosures: The authors reported having no disclosures.

No AR in CTCs linked with better survival in advanced prostate cancer

The presence and amount of full-length androgen receptor biomarker detected in the circulating tumor cells of people with metastatic castration-resistant prostate cancer can inform prognosis, a prospective study reveals.

Investigators report significant differences in PSA50 response rates (a decline of at least 50% in prostate-specific antigen), PSA progression-free survival, clinical and/or radiologic progression-free survival, and overall survival, based on baseline levels of full-length androgen receptor (AR-FL) transcripts. The findings suggest quantification of AR-FL could serve as a clinically useful molecular biomarker in addition to AR-V7 status.

Prognosis differed among the 48% of patients with no detectable AR-FL marker, the 26% with amplification values below a median, and the remaining 26% with values above the median. The study included 202 men tested before starting hormonal treatment with either abiraterone or enzalutamide.

“Despite androgen deprivation, the androgen receptor continues to play a crucial role in prostate cancer,” Emmanuel S. Antonarakis, MBBCh, of Johns Hopkins University in Baltimore, said in a press briefing held at the 2017 genitourinary cancers symposium sponsored by the American Society of Clinical Oncology, ASTRO, and the Society of Urologic Oncology. Dr. Antonarakis presented the findings on behalf of lead author John Silberstein, MD, and their coinvestigators.

Researchers found an inverse association between AR-FL levels and PSA50 response: men who did not achieve a PSA50 response had a mean of 55.4 transcripts, compared with 6.7 transcripts for those who did. Analyzed another way, AR-FL–negative patients had a 62% PSA response rate, compared with 54% among AR-FL–positive patients with amplification below the median and 28% among those with values above the median.

In a multivariate analysis, controlling for AR-V7 and clinical variables, AR-FL remained prognostic for inferior PSA progression-free survival (hazard ratio, 1.06, P = .04). “A similar picture was seen with radiographic progression-free survival,” Dr. Antonarakis said. The best prognosis was for patients with undetectable AR-FL and the worst was for patients with detectable values above the median (HR, 1.04). However, AR-FL only trended toward significance (P = .13).

Similarly, for overall survival, AR-FL–negative patients had the best prognosis and patients with AR-FL above median had the worst in the multivariate analysis (HR, 1.07). “AR-FL reached borderline clinical significance,” he said (P = .06).

The presence of AR-V7 was independently prognostic in the multivariate analysis as well. “In conjunction with AR-V7, AR-FL quantification could serve as an additional biomarker to detect abiraterone or enzalutamide sensitivity or resistance,” Dr. Antonarakis said.

The current research builds on previous findings in this patient population. For example, genetic aberrations in circulating tumor DNA were associated with treatment resistance and inferior outcomes, including a worse progression-free survival, Dr. Antonarakis said (Clin. Cancer Res. 2015;21:2315-24). Other researchers demonstrated similar outcomes, both worse progression-free survival and overall survival among patients who had amplification or mutation of AR, compared with wild type, Dr. Antonarakis said.

Those investigators used cell-free DNA to quantify AR, whereas the current study assessed circulating tumor cell–derived AR.

“Our vision is, very shortly in the future, we will have a liquid biopsy in patients to fully characterize their full complement of AR – patients with copy number gains, mutations in their genes, and splicing variance in the clinic,” Dr. Antonarakis said. It’s important to consider all three factors, he added.

“Did you see any patients who were AR-V7 positive but AR-FL negative?” asked study discussant Angelo Demarzo, MD, PhD, of Johns Hopkins University in Baltimore. “We have yet to find a patient like this. AR full length so far is always present when AR-V7 is positive,” Dr. Antonarakis replied. He added, however, “There is a subset of patients who are AR-V7 negative who have a high burden of AR full length, and they will still have a high risk.”
 

Meeting/Event
Publications
Topics
Sections
Meeting/Event
Meeting/Event

 

The presence and amount of full-length androgen receptor biomarker detected in the circulating tumor cells of people with metastatic castration-resistant prostate cancer can inform prognosis, a prospective study reveals.

Investigators report significant differences in prostate-specific antigen 50 (PSA50) values, PSA progression-free survival, clinical and/or radiologic progression-free survival, as well as overall survival, based on baseline levels of the amplified androgen receptor full-length (AR-FL) marker. The findings suggest quantification of AR-FL could serve as a clinically useful molecular biomarker in addition to AR-V7 status.

Prognosis differed among the 48% of patients with no detectable AR-FL marker, the 26% with amplification values below a median, and the remaining 26% with values above the median. The study included 202 men tested before starting hormonal treatment with either abiraterone or enzalutamide.

“Despite androgen deprivation, the androgen receptor continues to play a crucial role in prostate cancer,” Emmanuel S. Antonarakis, MBBCh, of Johns Hopkins University in Baltimore, said at in a press briefing held at the 2017 genitourinary cancers symposium sponsored by the American Society of Clinical Oncology, ASTRO, and the Society of Urologic Oncology. Dr. Antonarakis presented the findings on behalf of lead author John Silberstein, MD, and their coinvestigators.

Researchers found an inverse association with higher level of AR-FL and PSA50 responses. Also, men who did not achieve a PSA50 response had a mean of 55.4 transcripts, compared with 6.7 transcripts for those who did. Analyzed another way, the AR-FL–negative patients had a 62% PSA response rate, compared with 54% among the AR-FL–positive patients with amplification below the median and 28% for AR-FL–positive patients with values above the median.

In a multivariate analysis, controlling for AR-V7 and clinical variables, AR-FL remained prognostic for inferior PSA progression-free survival (hazard ratio, 1.06, P = .04). “A similar picture was seen with radiographic progression-free survival,” Dr. Antonarakis said. The best prognosis was for patients with undetectable AR-FL and the worst was for patients with detectable values above the median (HR, 1.04). However, AR-FL only trended toward significance (P = .13).

Similarly, for overall survival, AR-FL–negative patients had the best prognosis and patients with AR-FL above median had the worst in the multivariate analysis (HR, 1.07). “AR-FL reached borderline clinical significance,” he said (P = .06).

The presence of AR-V7 was independently prognostic in the multivariate analysis as well. “In conjunction with AR-V7, AR-FL quantification could serve as an additional biomarker to detect abiraterone or enzalutamide sensitivity or resistance,” Dr. Antonarakis said.

The current research builds on previous findings in this patient population. For example, genetic aberrations in circulating tumor DNA were associated with treatment resistance and inferior outcomes, including a worse progression-free survival, Dr. Antonarakis said (Clin. Cancer Res. 2015;21:2315-24). Other researchers demonstrated similar outcomes, both worse progression-free survival and overall survival among patients who had amplification or mutation of AR, compared with wild type, Dr. Antonarakis said.

These investigators used cell-free DNA to quantify AR, and the current study assessed circulating tumor cell–derived AR.

“Our vision is, very shortly in the future, we will have a liquid biopsy in patients to fully characterize their full complement of AR – patients with copy number gains, mutations in their genes, and splicing variance in the clinic,” Dr. Antonarakis said. It’s important to consider all three factors, he added.

Did you see any patients who were AR-V7 positive but AR-FL negative? study discussant Angelo Demarzo, MD, PhD, of Johns Hopkins University in Baltimore asked. “We have yet to find a patient like this. AR full length so far is always present when AR-V7 is positive,” Dr. Antonarakis replied. He added, however, “There is a subset of patients who are AR-V7 negative who have a high burden of AR full length, and they will still have a high risk.”
 

 

The presence and amount of full-length androgen receptor biomarker detected in the circulating tumor cells of people with metastatic castration-resistant prostate cancer can inform prognosis, a prospective study reveals.

Investigators report significant differences in prostate-specific antigen 50 (PSA50) values, PSA progression-free survival, clinical and/or radiologic progression-free survival, as well as overall survival, based on baseline levels of the amplified androgen receptor full-length (AR-FL) marker. The findings suggest quantification of AR-FL could serve as a clinically useful molecular biomarker in addition to AR-V7 status.

Prognosis differed among the 48% of patients with no detectable AR-FL marker, the 26% with amplification values below the median, and the remaining 26% with values above the median. The study included 202 men tested before starting hormonal treatment with either abiraterone or enzalutamide.

“Despite androgen deprivation, the androgen receptor continues to play a crucial role in prostate cancer,” Emmanuel S. Antonarakis, MBBCh, of Johns Hopkins University in Baltimore, said in a press briefing held at the 2017 genitourinary cancers symposium sponsored by the American Society of Clinical Oncology, ASTRO, and the Society of Urologic Oncology. Dr. Antonarakis presented the findings on behalf of lead author John Silberstein, MD, and their coinvestigators.

Researchers found an inverse association between AR-FL level and PSA50 response: men who did not achieve a PSA50 response had a mean of 55.4 AR-FL transcripts, compared with 6.7 transcripts for those who did. Analyzed another way, AR-FL–negative patients had a 62% PSA response rate, compared with 54% among AR-FL–positive patients with amplification below the median and 28% among those with values above the median.
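
To make the arithmetic concrete, here is a minimal sketch, in Python, of how a PSA50 response and the three AR-FL strata described above might be computed. This is an illustration only, not the study's analysis code; the data frame, column names, and values are hypothetical.

```python
# Hypothetical sketch: classifying PSA50 response and stratifying patients
# by AR-FL transcript level (negative / below median / above median).
import pandas as pd

df = pd.DataFrame({
    "psa_baseline": [40.0, 12.0, 80.0, 25.0],   # ng/mL at start of therapy
    "psa_nadir":    [15.0, 10.0, 75.0,  5.0],   # lowest PSA on therapy
    "ar_fl_copies": [ 0.0,  6.7, 55.4, 20.0],   # AR-FL transcripts per sample
})

# PSA50 response: a decline in PSA of at least 50% from baseline.
df["psa50"] = df["psa_nadir"] <= 0.5 * df["psa_baseline"]

# Stratify AR-FL into the study's three groups: undetectable,
# detectable below the median, and detectable above the median.
detectable = df["ar_fl_copies"] > 0
median_pos = df.loc[detectable, "ar_fl_copies"].median()
df["ar_fl_group"] = "negative"
df.loc[detectable & (df["ar_fl_copies"] <= median_pos), "ar_fl_group"] = "below_median"
df.loc[detectable & (df["ar_fl_copies"] > median_pos), "ar_fl_group"] = "above_median"

# PSA50 response rate within each AR-FL group (cf. the 62%, 54%, and 28%
# rates reported in the study).
print(df.groupby("ar_fl_group")["psa50"].mean())
```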

In a multivariate analysis controlling for AR-V7 status and clinical variables, AR-FL level remained prognostic for inferior PSA progression-free survival (hazard ratio, 1.06; P = .04). “A similar picture was seen with radiographic progression-free survival,” Dr. Antonarakis said. For that endpoint, the best prognosis was seen in patients with undetectable AR-FL and the worst in patients with detectable values above the median, although AR-FL only trended toward significance (HR, 1.04; P = .13).
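
The multivariate analysis described above is a Cox proportional-hazards model. As a rough illustration of how such a model can be fit, the sketch below uses the open-source lifelines Python library; the data, column names, and covariates are hypothetical, and the study adjusted for additional clinical variables not shown here.

```python
# Hypothetical sketch: a multivariable Cox model relating AR-FL level and
# AR-V7 status to PSA progression-free survival. Not the investigators' code.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "pfs_months":   [3.1, 11.2, 5.4, 14.8, 2.0, 9.9, 7.3, 12.5],
    "progressed":   [1,   0,    1,   0,    1,   1,   0,   0],  # 1 = PSA progression
    "ar_fl_copies": [55.4, 0.0, 20.0, 6.7, 80.0, 3.2, 40.0, 1.0],
    "ar_v7_pos":    [1,   0,    0,   0,    1,   0,   1,   0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="pfs_months", event_col="progressed")

# exp(coef) is the hazard ratio per unit increase in each covariate
# (the study reported HR 1.06, P = .04, for AR-FL after adjustment).
print(cph.summary[["exp(coef)", "p"]])
```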

Similarly, for overall survival, AR-FL–negative patients had the best prognosis and patients with AR-FL above the median had the worst in the multivariate analysis (HR, 1.07). “AR-FL reached borderline clinical significance,” he said (P = .06).

The presence of AR-V7 was independently prognostic in the multivariate analysis as well. “In conjunction with AR-V7, AR-FL quantification could serve as an additional biomarker to detect abiraterone or enzalutamide sensitivity or resistance,” Dr. Antonarakis said.

The current research builds on previous findings in this patient population. For example, genetic aberrations in circulating tumor DNA were associated with treatment resistance and inferior outcomes, including worse progression-free survival, Dr. Antonarakis said (Clin. Cancer Res. 2015;21:2315-24). Other researchers demonstrated similar outcomes, with worse progression-free and overall survival among patients who had amplification or mutation of AR, compared with wild type, he noted.

Those investigators used cell-free DNA to quantify AR, whereas the current study assessed circulating tumor cell–derived AR.

“Our vision is, very shortly in the future, we will have a liquid biopsy in patients to fully characterize their full complement of AR – copy number gains, gene mutations, and splice variants – in the clinic,” Dr. Antonarakis said. It’s important to consider all three factors, he added.

“Did you see any patients who were AR-V7 positive but AR-FL negative?” asked study discussant Angelo Demarzo, MD, PhD, of Johns Hopkins University in Baltimore. “We have yet to find a patient like this. AR full length so far is always present when AR-V7 is positive,” Dr. Antonarakis replied. He added, however, “There is a subset of patients who are AR-V7 negative who have a high burden of AR full length, and they will still have a high risk.”
 

Key clinical point: A full-length androgen receptor biomarker can classify patients with metastatic castration-resistant prostate cancer and inform prognosis.

Major finding: Biomarker-negative patients had the best prognosis for overall survival, compared with those with AR-FL levels above the median (HR, 1.07; P = .06).

Data source: A prospective study of 202 men with metastatic castration-resistant prostate cancer treated with abiraterone or enzalutamide.

Disclosures: The study was funded with support from the Prostate Cancer Foundation, the Department of Defense Prostate Cancer Research Program, and the Patrick C. Walsh Fund. Dr. Antonarakis is a consultant/advisor to Sanofi, Dendreon, Medivation, Janssen Biotech, ESSA, and Astellas Pharma; receives honoraria from Sanofi, Dendreon, Medivation, Janssen Biotech, ESSA, and Astellas Pharma; and receives travel and accommodation expense support from Sanofi, Dendreon, and Medivation.