ED visits spike among California Medicaid patients

Medicaid patients in California increased their use of emergency departments by 35.6% over a 5-year period while use by privately insured patients barely increased.

"Increasing ED use by Medicaid beneficiaries could reflect decreasing access to primary care, which is supported by our findings of high and increasing rates of ED use for ambulatory care sensitive conditions by Medicaid patients," wrote Dr. Renee Hsia. The report was published in the Sept. 18 issue of JAMA.

Dr. Hsia of the University of California, San Francisco, and her coauthors examined rates of ED utilization in California from 2005 to 2010 and broke down the data by insurance status (JAMA 2013;310:1181-3). They looked at adult patients younger than 65 years, because these patients "have experienced the greatest changes in insurance coverage in recent years, and are likely to see the biggest shifts as a result of health care reform."

The investigators based their retrospective analysis on the California Office of Statewide Health Planning and Development’s Emergency Discharge Data and Patient Discharge Data. The study grouped patients by insurance status: Medicaid, private, uninsured or self-pay, or other (workers’ compensation, CHAMPUS/TRICARE, Title V, Veterans Affairs, and similar coverage).

Overall, ED visits jumped from 5.4 million to 6.1 million over the 5-year period – a 13% increase. While visits for patients with private insurance barely increased, with a 5-year difference of just 1%, visits by Medicaid patients increased by 35.6%, and visits by uninsured patients rose 25.4%. Visits by patients with coverage in the "other" category decreased by more than 10%, the authors noted.

Medicaid patients also maintained the highest rate of visits for conditions that are potentially preventable with primary care, such as hypertension. Among Medicaid recipients, the average yearly rate of ED visits for these problems was 54.76/1,000, compared with 10.93/1,000 for those with private insurance and 16.6/1,000 for those who were uninsured.

These visits showed the same kinds of coverage-dependent increases over the study period: a 6.8% hike among Medicaid beneficiaries and 6.2% among the uninsured, but a decline of 0.7% among privately insured patients.

Neither Dr. Hsia nor her coauthors reported any financial disclosures.

[email protected]


Article Source

FROM JAMA

Vitals

Major finding: From 2005 to 2010, ED visits among California Medicaid patients increased 35.6% while visits for privately insured patients remained almost the same.

Data source: Claims data from several state sources.

Disclosures: The investigators reported no financial conflicts of interest.

A third of older adults may have biomarkers of preclinical Alzheimer’s

Study supports criteria, but data are confusing

A combination of cerebrospinal fluid biomarkers and simple cognitive testing identified stages of preclinical Alzheimer’s that were associated with cognitive decline and death over a decade of follow-up in a prospective, longitudinal study.

Preclinical disease was present in 31% of adults aged 65 years or older who were living independently in the community and was a reliable predictor of progression. The findings suggest that preclinical staging is not only possible, but could be a useful adjunct for stratifying research populations in therapeutic trials, according to Dr. Stephanie J. Vos, the lead investigator (Lancet Neurol. 2013 Sept. 4 [doi:10.1016/S1474-4422(13)70194-7]).

The study aimed to identify the prevalence and long-term outcomes of preclinical Alzheimer’s disease in elderly subjects who were cognitively normal at baseline. Dr. Vos, of the Alzheimer’s Center Limburg at Maastricht (the Netherlands) University, and her colleagues used the combination of biomarkers and cognitive testing to define preclinical stages similar to those recently proposed by the Preclinical Working Group of the National Institute on Aging and the Alzheimer’s Association. These criteria propose three progressive stages for cognitively normal subjects:

• Stage 1, cognitively normal individuals with abnormal amyloid markers.

• Stage 2, abnormal amyloid and neuronal injury markers.

• Stage 3, abnormal amyloid and neuronal injury markers with subtle cognitive changes.

Dr. Vos’ study involved 311 subjects who were cognitively normal at baseline. They underwent lumbar puncture to ascertain cerebrospinal fluid levels of beta-amyloid-42 (Abeta-42), total tau, and phosphorylated tau. They also completed cognitive testing with the Clinical Dementia Rating (CDR) scale and the Mini-Mental State Examination (MMSE). Each year thereafter, subjects had additional cognitive testing. The primary outcome was the proportion of patients in each preclinical stage at baseline. Secondary outcomes were progression of cognitive decline and mortality.

CSF samples were dichotomized as normal or abnormal based on cutoffs that the investigators determined. The cutoff values for abnormal biomarker measurements that best differentiated subjects with no baseline memory deficits from those in a separate cohort with symptomatic Alzheimer’s were Abeta-42 levels less than 459 pg/mL, total tau levels greater than 339 pg/mL, and phosphorylated tau levels greater than 67 pg/mL.

A CDR sum of boxes score of 0 was considered normal memory; scores of 0.5 or higher during follow-up indicated progression to symptomatic Alzheimer’s.

Subjects were stratified according to a combination of memory scores and biomarkers. Normal subjects had no memory impairment and normal biomarkers. Subjects were classified as stage 1 if only their Abeta-42 was abnormal. Stage 2 patients had abnormal Abeta-42 and abnormal total or phosphorylated tau levels. Stage 3 subjects had abnormal biomarkers plus memory impairment equal to 0.5 on the CDR. Those in stage 1, 2, or 3 were considered to have preclinical Alzheimer’s.

Those who had normal Abeta-42 but abnormal tau – a marker of neuronal injury – were considered to have suspected non-Alzheimer’s pathophysiology (SNAP), regardless of their baseline memory score.

Patients were a mean of 73 years old at baseline. Their mean MMSE score was 28.9, and their mean CDR sum of boxes score was 0.03. One-third (34%) were positive for the high-risk apolipoprotein E (APOE) epsilon-4 allele.

At baseline, 129 (41%) were classified as normal; 47 (15%) as stage 1; 36 (12%) as stage 2; 13 (4%) as stage 3; and 72 (23%) as being in the SNAP group. The remaining 14 (5%) were unclassified.

Preclinical Alzheimer’s (stages 1-3) was significantly more prevalent among those older than 72 years than in those younger (37% vs. 26%), and in APOE epsilon-4 carriers than in noncarriers (47% vs. 23%).

At 5 years, 110 subjects were available for follow-up; 14 were available at 10 years. By the end of the study, 20 subjects had died.

After a median follow-up of 4 years, progression had occurred in 2% of the normal group; 13% of those in stage 1; 25% of those in stage 2; 54% of those in stage 3; 6% of those in the SNAP group, and 29% of those in the unclassified group, for a total of 32 subjects.

Of those who progressed, symptomatic Alzheimer’s with a CDR sum of boxes score of 0.5 occurred in 22 (69%), CDR 1 symptomatic disease developed in 6 (19%), and CDR 2 symptomatic disease arose in 4 (13%).

Interestingly, the authors noted, "neither age (younger than 72 years vs. 72 years or older) nor APOE genotype predicted the rate of decline," although these subanalyses had limited statistical power because of the small sample sizes. "Although APOE epsilon-4 is often a good predictor of cognitive decline in unselected populations, the absence of its prognostic utility in individuals with AD pathological abnormalities is consistent with findings from previous studies."

After adjustment for multiple covariates, subjects with baseline stage 1 disease were not significantly more likely to progress than was the normal group. However, those in baseline stages 2 and 3 were more likely to progress (hazard ratios of 14.3 and 33.8, respectively). Those in the SNAP group were not at a significantly increased risk of progression, compared with the normal group.

After adjustment for covariates, the risk of death was significantly greater in those with baseline preclinical disease (HR 6.2). When the stages were individually assessed, the risk increased as the stages did: HR 3.7 for stage 1, 6.0 for stage 2, and 31.5 for stage 3. Those in the SNAP group were 5.2 times more likely to die by the end of follow-up than were those in the normal group.

Nine subjects with baseline preclinical disease who died received a postmortem autopsy diagnosis. Of these, one had low-level neuropathological changes consistent with Alzheimer’s and the rest had intermediate to high changes.

Four subjects in the SNAP group who died underwent autopsy. Of these, three had low-level neuropathological changes, including vascular comorbidities. All had a neuritic plaque score of 0. This finding suggests that the cognitive changes were linked to other disorders, the authors said.

Dr. Vos received support from the Center for Translational Molecular Medicine, project LeARN and the EU/EFPIA Innovative Medicines Initiative Joint Undertaking, and Internationale Stichting Alzheimer Onderzoek. Several coauthors were investigators for industry-sponsored studies testing anti-dementia drugs or had ties with pharmaceutical companies developing Alzheimer’s diagnostic tests or therapies.

[email protected]

On Twitter @Alz_Gal


The study by Dr. Stephanie Vos and her colleagues supports the emerging understanding of preclinical staging of Alzheimer’s disease – a concept useful not only for identifying at-risk populations, but for stratifying patient groups in therapeutic trials, Dr. Ronald Petersen wrote in an accompanying editorial (Lancet Neurol. 2013 Sept. 4 [doi:10.1016/S1474-4422(13)70217-5]).

However, he noted, the authors’ data on disease progression is "a bit more difficult to interpret than the frequencies of participants in each stage of preclinical AD."

For example, he noted, the researchers "chose to label the category for the next phase of progression in the AD spectrum as symptomatic AD rather than the more conventional mild cognitive impairment due to AD, amnestic MCI, or prodromal AD. The reason for this terminology is unclear and is likely to add to confusion in the published work."

Dr. Vos and her colleagues’ dichotomization of subjects as normal or abnormal at baseline made no allowance for a symptomatic predementia, he noted – a position that "is inconsistent with the published work from other groups in the specialty." This elimination of mild cognitive impairment from the preclinical staging equation complicates any comparison to prior work.

"The investigators [also] contend that the Clinical Dementia Rating scale is sufficient to verify symptomatic AD irrespective of cognitive test results. The CDR is a staging and not a diagnostic instrument; thus, why all normal participants in this study who progressed were classified as having a diagnosis of symptomatic AD is not clear."

"Notwithstanding these concerns, this study is an important contribution to the published work and supports the NIA-AA criteria for preclinical AD," Dr. Petersen concluded. "Importantly, despite the differences in methodology and types of participants, the findings from this study, which used CSF, and the Mayo Clinic Study of Aging, which used neuroimaging, converge convincingly."

Dr. Ronald Petersen is director of the Mayo Clinic Alzheimer’s Disease Research Center in Rochester, Minn. He had no financial disclosures.

Author and Disclosure Information

Publications
Topics
Legacy Keywords
cerebrospinal fluid, csf biomarkers, cognitive testing, Alzheimer’s, cognitive decline,
Click for Credit Link
Click for Credit Link
Author and Disclosure Information

Author and Disclosure Information

Body

The study by Dr. Stephanie Vos and her colleagues supports the emerging understanding of preclinical staging of Alzheimer’s disease – a concept useful not only for identifying at-risk populations, but for stratifying patient groups in therapeutic trials, Dr. Ronald Petersen wrote in an accompanying editorial (Lancet Neurol. 2013 Sept. 4 [doi:10.1016/S1474-4422(13)70217-5]).

However, he noted, the authors’ data on disease progression is "a bit more difficult to interpret than the frequencies of participants in each stage of preclinical AD."

For example, he noted, the researchers "chose to label the category for the next phase of progression in the AD spectrum as symptomatic AD rather than the more conventional mild cognitive impairment due to AD, amnestic MCI, or prodromal AD. The reason for this terminology is unclear and is likely to add to confusion in the published work."

Dr. Vos and her colleagues’ dichotomization of subjects as normal or abnormal at baseline made no allowance for a symptomatic predementia, he noted – a position that "is inconsistent with the published work from other groups in the specialty." This elimination of mild cognitive impairment from the preclinical staging equation complicates any comparison to prior work.

"The investigators [also] contend that the Clinical Dementia Rating scale is sufficient to verify symptomatic AD irrespective of cognitive test results. The CDR is a staging and not a diagnostic instrument; thus, why all normal participants in this study who progressed were classified as having a diagnosis of symptomatic AD is not clear."

"Notwithstanding these concerns, this study is an important contribution to the published work and supports the NIA-AA criteria for preclinical AD," Dr. Petersen concluded. "Importantly, despite the differences in methodology and types of participants, the findings from this study, which used CSF, and the Mayo Clinic Study of Aging, which used neuroimaging, converge convincingly."

Dr. Ronald Petersen is director of the Mayo Clinic Alzheimer’s Disease Research Center in Rochester, Minn. He had no financial disclosures.

Body

The study by Dr. Stephanie Vos and her colleagues supports the emerging understanding of preclinical staging of Alzheimer’s disease – a concept useful not only for identifying at-risk populations, but for stratifying patient groups in therapeutic trials, Dr. Ronald Petersen wrote in an accompanying editorial (Lancet Neurol. 2013 Sept. 4 [doi:10.1016/S1474-4422(13)70217-5]).

However, he noted, the authors’ data on disease progression is "a bit more difficult to interpret than the frequencies of participants in each stage of preclinical AD."

For example, he noted, the researchers "chose to label the category for the next phase of progression in the AD spectrum as symptomatic AD rather than the more conventional mild cognitive impairment due to AD, amnestic MCI, or prodromal AD. The reason for this terminology is unclear and is likely to add to confusion in the published work."

Dr. Vos and her colleagues’ dichotomization of subjects as normal or abnormal at baseline made no allowance for a symptomatic predementia, he noted – a position that "is inconsistent with the published work from other groups in the specialty." This elimination of mild cognitive impairment from the preclinical staging equation complicates any comparison to prior work.

"The investigators [also] contend that the Clinical Dementia Rating scale is sufficient to verify symptomatic AD irrespective of cognitive test results. The CDR is a staging and not a diagnostic instrument; thus, why all normal participants in this study who progressed were classified as having a diagnosis of symptomatic AD is not clear."

"Notwithstanding these concerns, this study is an important contribution to the published work and supports the NIA-AA criteria for preclinical AD," Dr. Petersen concluded. "Importantly, despite the differences in methodology and types of participants, the findings from this study, which used CSF, and the Mayo Clinic Study of Aging, which used neuroimaging, converge convincingly."

Dr. Ronald Petersen is director of the Mayo Clinic Alzheimer’s Disease Research Center in Rochester, Minn. He had no financial disclosures.

Title
Study supports criteria, but data are confusing
Study supports criteria, but data are confusing

A combination of cerebrospinal fluid biomarkers and simple cognitive testing identified stages of preclinical Alzheimer’s that were associated with cognitive decline and death over a decade of follow-up in a prospective, longitudinal study.

Preclinical disease was present in 31% of adults aged 65 years or older who were living independently in the community and was a reliable predictor of progression. The findings suggest that preclinical staging is not only possible, but could be a useful adjunct for stratifying research populations in therapeutic trials, according Dr. Stephanie J. Vos, the lead investigator (Lancet Neurol. 2013 Sept. 4 [doi:10.1016/S1474-4422(13)70194-7]).

The study aimed to identify the prevalence and long-term outcomes of preclinical Alzheimer’s disease in elderly subjects who were cognitively normal at baseline. Dr. Vos, of the Alzheimer’s Center Limburg at Maastricht (the Netherlands) University, and her colleagues used the combination of biomarkers and cognitive testing to define preclinical stages similar to those recently proposed by the Preclinical Working Group of the National Institute on Aging and the Alzheimer’s Association. These criteria propose three progressive stages for cognitively normal subjects:

• Stage 1, cognitively normal individuals with abnormal amyloid markers.

• Stage 2, abnormal amyloid and neuronal injury markers.

• Stage 3, abnormal amyloid and neuronal injury markers with subtle cognitive changes.

Dr. Vos’ study involved 311 subjects who were cognitively normal at baseline. They underwent lumbar puncture to ascertain cerebrospinal fluid levels of beta-amyloid-42 (Abeta-42), total tau, and phosphorylated tau. They also completed cognitive testing with the Clinical Dementia Rating (CDR) scale and Mini Mental State Exam (MMSE). Each year thereafter, subjects had additional cognitive testing. The primary outcome was the proportion of patients in each preclinical stage at baseline. Secondary outcomes were progression of cognitive decline and mortality.

CSF samples were dichotomized as normal or abnormal based on a level that the investigators determined. The cutoff values for abnormal biomarker measurements that best differentiated subjects with no baseline memory deficits from those in a separate cohort with symptomatic Alzheimer’s were Abeta-42 levels less than 459 pg/mL, total tau levels greater than 339 pg/mL, and phosphorylated tau levels greater than 67 pg/mL.

A CDR sum of boxes score of 0 was considered normal memory; scores of 0.5 or higher during follow-up indicated progression to symptomatic Alzheimer’s.

Subjects were stratified according to a combination of memory scores and biomarkers. Normal subjects had no memory impairment and normal biomarkers. Subjects were classified as stage 1 if only their Abeta-42 was abnormal. Stage 2 patients had abnormal Abeta-42and abnormal total or phosphorylated tau levels. Stage 3 subjects had abnormal biomarkers plus memory impairment equal to 0.5 on the CDR. Those in stage 1, 2, or 3 were considered to have preclinical Alzheimer’s.

Those who had normal Abeta-42 but abnormal tau – a marker of neuronal injury – were considered to have suspected non-Alzheimer’s pathophysiology (SNAP), regardless of their baseline memory score.

Patients were a mean of 73 years old at baseline. They had a mean MMSE score was 28.9 and a mean CDR sum of boxes score of 0.03. One-third (34%) were positive for the high-risk apolipoprotein (APOE) epsilon-4 allele.

At baseline, 129 (41%) were classified as normal; 47 (15%) as stage 1; 36 (12%) as stage 2; 13 (4%) as stage 3; and 72 (23%) as being in the SNAP group. The remaining 14 (5%) were unclassified.

Preclinical Alzheimer’s (stages 1-3) was significantly more prevalent among those older than 72 years than in those younger (37% vs. 26%), and in APOE epsilon-4 carriers than in noncarriers (47% vs. 23%).

At 5 years, 110 subjects were available for follow-up; 14 were available at 10 years. By the end of the study, 20 subjects had died.

After a median follow-up of 4 years, progression had occurred in 2% of the normal group; 13% of those in stage 1; 25% of those in stage 2; 54% of those in stage 3; 6% of those in the SNAP group, and 29% of those in the unclassified group, for a total of 32 subjects.

Of those who progressed, symptomatic Alzheimer’s with a CDR sum of boxes score of 0.5 occurred in 22 (69%), CDR 1 symptomatic disease developed in 6 (19%), and CDR 2 symptomatic disease arose in 4 (13%).

Interestingly, the authors noted, "neither age (younger than 72 years vs. 72 years or older) nor APOE genotype predicted the rate of decline," although these subanalyses had limited statistical power because of the small sample sizes. "Although APOE epsilon-4 is often a good predictor of cognitive decline in unselected populations, the absence of its prognostic utility in individuals with AD pathological abnormalities is consistent with findings from previous studies."

 

 

After adjustment for multiple covariates, subjects with baseline stage 1 disease were not significantly more likely to progress than was the normal group. However, those in baseline stages 2 and 3 were more likely to progress (hazard ratio 14.3 and 33.8, respectively). Those in the SNAP group were not at a significantly increased risk of progression, compared with the normal group.

After adjustment for covariates, the risk of death was significantly greater in those with baseline preclinical disease (HR 6.2). When the stages were individually assessed, the risk increased as the stages did: HR 3.7 for stage 1, 6.0 for stage 2, and 31.5 for stage 3. Those in the SNAP group were 5.2 times more likely to die by the end of follow-up than were those in the normal group.

Nine subjects with baseline preclinical disease who died received a postpartum autopsy diagnosis. Of these, one had low neuropathological changes consistent with Alzheimer’s and the rest had intermediate-high changes.

Four subjects in the SNAP group who died underwent autopsy. Of these, three had low-level neuropathological changes, including vascular comorbidities. All had a neuritic plaque score of 0. This finding suggests that the cognitive changes were linked to other disorders, the authors said.

Dr. Vos received support from the Center for Translational Molecular Medicine, project LeARN and the EU/EFPIA Innovative Medicines Initiative Joint Undertaking, and Internationale Stichting Alzheimer Onderzoek. Several coauthors were investigators for industry-sponsored studies testing anti-dementia drugs or had ties with pharmaceutical companies developing Alzheimer’s diagnostic tests or therapies.

[email protected]

On Twitter @Alz_Gal

A combination of cerebrospinal fluid biomarkers and simple cognitive testing identified stages of preclinical Alzheimer’s that were associated with cognitive decline and death over a decade of follow-up in a prospective, longitudinal study.

Preclinical disease was present in 31% of adults aged 65 years or older who were living independently in the community and was a reliable predictor of progression. The findings suggest that preclinical staging is not only possible, but could be a useful adjunct for stratifying research populations in therapeutic trials, according Dr. Stephanie J. Vos, the lead investigator (Lancet Neurol. 2013 Sept. 4 [doi:10.1016/S1474-4422(13)70194-7]).

The study aimed to identify the prevalence and long-term outcomes of preclinical Alzheimer’s disease in elderly subjects who were cognitively normal at baseline. Dr. Vos, of the Alzheimer’s Center Limburg at Maastricht (the Netherlands) University, and her colleagues used the combination of biomarkers and cognitive testing to define preclinical stages similar to those recently proposed by the Preclinical Working Group of the National Institute on Aging and the Alzheimer’s Association. These criteria propose three progressive stages for cognitively normal subjects:

• Stage 1, cognitively normal individuals with abnormal amyloid markers.

• Stage 2, abnormal amyloid and neuronal injury markers.

• Stage 3, abnormal amyloid and neuronal injury markers with subtle cognitive changes.

Dr. Vos’ study involved 311 subjects who were cognitively normal at baseline. They underwent lumbar puncture to ascertain cerebrospinal fluid levels of beta-amyloid-42 (Abeta-42), total tau, and phosphorylated tau. They also completed cognitive testing with the Clinical Dementia Rating (CDR) scale and Mini Mental State Exam (MMSE). Each year thereafter, subjects had additional cognitive testing. The primary outcome was the proportion of patients in each preclinical stage at baseline. Secondary outcomes were progression of cognitive decline and mortality.

CSF samples were dichotomized as normal or abnormal based on a level that the investigators determined. The cutoff values for abnormal biomarker measurements that best differentiated subjects with no baseline memory deficits from those in a separate cohort with symptomatic Alzheimer’s were Abeta-42 levels less than 459 pg/mL, total tau levels greater than 339 pg/mL, and phosphorylated tau levels greater than 67 pg/mL.

A CDR sum of boxes score of 0 was considered normal memory; scores of 0.5 or higher during follow-up indicated progression to symptomatic Alzheimer’s.

Subjects were stratified according to a combination of memory scores and biomarkers. Normal subjects had no memory impairment and normal biomarkers. Subjects were classified as stage 1 if only their Abeta-42 was abnormal. Stage 2 patients had abnormal Abeta-42and abnormal total or phosphorylated tau levels. Stage 3 subjects had abnormal biomarkers plus memory impairment equal to 0.5 on the CDR. Those in stage 1, 2, or 3 were considered to have preclinical Alzheimer’s.

Those who had normal Abeta-42 but abnormal tau – a marker of neuronal injury – were considered to have suspected non-Alzheimer’s pathophysiology (SNAP), regardless of their baseline memory score.

Subjects had a mean age of 73 years at baseline, a mean MMSE score of 28.9, and a mean CDR sum of boxes score of 0.03. One-third (34%) were positive for the high-risk apolipoprotein E (APOE) epsilon-4 allele.

At baseline, 129 (41%) were classified as normal; 47 (15%) as stage 1; 36 (12%) as stage 2; 13 (4%) as stage 3; and 72 (23%) as being in the SNAP group. The remaining 14 (5%) were unclassified.

Preclinical Alzheimer’s (stages 1-3) was significantly more prevalent among those older than 72 years than in those younger (37% vs. 26%), and in APOE epsilon-4 carriers than in noncarriers (47% vs. 23%).

At 5 years, 110 subjects were available for follow-up; 14 were available at 10 years. By the end of the study, 20 subjects had died.

After a median follow-up of 4 years, progression had occurred in 2% of the normal group; 13% of those in stage 1; 25% of those in stage 2; 54% of those in stage 3; 6% of those in the SNAP group; and 29% of those in the unclassified group, for a total of 32 subjects.

Of those who progressed, symptomatic Alzheimer’s with a CDR sum of boxes score of 0.5 occurred in 22 (69%), CDR 1 symptomatic disease developed in 6 (19%), and CDR 2 symptomatic disease arose in 4 (13%).

Interestingly, the authors noted, "neither age (younger than 72 years vs. 72 years or older) nor APOE genotype predicted the rate of decline," although these subanalyses had limited statistical power because of the small sample sizes. "Although APOE epsilon-4 is often a good predictor of cognitive decline in unselected populations, the absence of its prognostic utility in individuals with AD pathological abnormalities is consistent with findings from previous studies."


After adjustment for multiple covariates, subjects with baseline stage 1 disease were not significantly more likely to progress than was the normal group. However, those in baseline stages 2 and 3 were more likely to progress (hazard ratios, 14.3 and 33.8, respectively). Those in the SNAP group were not at a significantly increased risk of progression, compared with the normal group.

After adjustment for covariates, the risk of death was significantly greater in those with baseline preclinical disease (HR 6.2). When the stages were individually assessed, the risk increased as the stages did: HR 3.7 for stage 1, 6.0 for stage 2, and 31.5 for stage 3. Those in the SNAP group were 5.2 times more likely to die by the end of follow-up than were those in the normal group.

Nine subjects with baseline preclinical disease who died received a postmortem autopsy diagnosis. Of these, one had low-level neuropathological changes consistent with Alzheimer’s and the rest had intermediate-to-high changes.

Four subjects in the SNAP group who died underwent autopsy. Of these, three had low-level neuropathological changes, including vascular comorbidities. All had a neuritic plaque score of 0. This finding suggests that the cognitive changes were linked to other disorders, the authors said.

Dr. Vos received support from the Center for Translational Molecular Medicine, project LeARN and the EU/EFPIA Innovative Medicines Initiative Joint Undertaking, and Internationale Stichting Alzheimer Onderzoek. Several coauthors were investigators for industry-sponsored studies testing anti-dementia drugs or had ties with pharmaceutical companies developing Alzheimer’s diagnostic tests or therapies.

[email protected]

On Twitter @Alz_Gal

Display Headline
A third of older adults may have biomarkers of preclinical Alzheimer’s
Legacy Keywords
cerebrospinal fluid, csf biomarkers, cognitive testing, Alzheimer’s, cognitive decline
Article Source

FROM THE LANCET NEUROLOGY


Vitals

Major finding: Cognitively normal elders who had abnormal levels of beta amyloid or tau in CSF had up to 33-fold greater risk for developing symptomatic Alzheimer’s disease compared with those who had normal biomarker levels.

Data source: A prospective, longitudinal study of 311 adults followed for up to 15 years.

Disclosures: Dr. Vos received support from the Center for Translational Molecular Medicine, project LeARN and the EU/EFPIA Innovative Medicines Initiative Joint Undertaking, and Internationale Stichting Alzheimer Onderzoek. Several coauthors were investigators for industry-sponsored studies testing anti-dementia drugs or had ties with pharmaceutical companies developing Alzheimer’s diagnostic tests or therapies.

Dexamethasone improves outcomes for infants with bronchiolitis, atopy history

Article Type
Changed
Display Headline
Dexamethasone improves outcomes for infants with bronchiolitis, atopy history

A 5-day course of dexamethasone significantly shortened hospital stays for infants with bronchiolitis who had eczema or close relatives with asthma.

The randomized, placebo-controlled study suggests that a family history of atopy could identify a subset of babies who would benefit from the addition of a corticosteroid to the usual salbutamol therapy for acute bronchiolitis, according to Dr. Khalid Alansari and colleagues. The report was published in the Sept. 16 issue of Pediatrics.

The researchers examined 7-day outcomes in 200 infants with acute bronchiolitis who were at a high risk of asthma, as determined by having at least one first-degree relative with either asthma or eczema. All of the children (mean age 3.5 months) were admitted to a pediatric hospital for treatment, wrote Dr. Alansari of Weill Cornell Medical College, Doha, Qatar, and coauthors. Infants who received dexamethasone were discharged 8 hours earlier than were those receiving standard treatment. The mean duration of symptoms was 4.5 days (Pediatrics 2013 Sept. 13 [doi: 10.1542/peds.2012-3746]).

The study’s primary outcome was time until discharge. Secondary outcomes included the number of patients who needed epinephrine treatment, readmission for a short infirmary stay, and revisits to the emergency department or another clinic for the same illness. A study nurse called daily to assess the patients after discharge.

Infants in the dexamethasone group were discharged at a mean of 18.6 hours – significantly sooner than those in the control group (27 hours). Epinephrine was necessary for 19 infants in the dexamethasone group and 31 in the placebo group – again a significant difference.

Similar numbers in each group needed readmission and additional outpatient visits in the week after discharge. During the follow-up week, 22% of the dexamethasone group needed infirmary care and the mean stay was 17 hours, compared with 21% of the placebo group with a mean stay of 18 hours.

Nineteen in the dexamethasone group and 11 in the placebo group made a clinic visit (18.6% vs. 11%); this difference was not significant.

The chest radiograph was normal in about 37% of the infants studied. About half showed minor infiltrates; 15% had lobar collapse or consolidation.

More than 70% had a full sibling with asthma. About 20% had a parent with the disease; in 5%, both parents had it. About 20% of patients had both eczema and a first-degree relative with asthma.

All of the infants received 2.5 mg salbutamol nebulization at baseline and at 30, 60, and 120 minutes, and then every 2 hours until discharge. Nebulized epinephrine (0.5 mL/kg with a maximum dose of 5 mL) was available if needed. In addition, they were randomized to either placebo or to a 5-day course of dexamethasone 1 mg/mL, at a rate of 1 mL/kg on day 1, reduced to 0.6 mL/kg for days 2-5.
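Because the study drug was a 1 mg/mL solution, the regimen works out to 1 mg/kg on day 1 and 0.6 mg/kg on each of days 2-5. A purely illustrative arithmetic sketch (my own, not from the paper, and not dosing guidance):

```python
# Illustrative arithmetic only, not dosing guidance: total drug delivered
# by the study regimen (dexamethasone 1 mg/mL; 1 mL/kg on day 1,
# 0.6 mL/kg on each of days 2-5).

CONCENTRATION_MG_PER_ML = 1.0

def total_dexamethasone_mg(weight_kg):
    day1_mg = 1.0 * weight_kg * CONCENTRATION_MG_PER_ML       # 1 mL/kg on day 1
    days2_to_5_mg = 4 * 0.6 * weight_kg * CONCENTRATION_MG_PER_ML  # 0.6 mL/kg x 4 days
    return day1_mg + days2_to_5_mg
```

A 5-kg infant, for example, would receive about 17 mg over the 5-day course (5 mg on day 1 plus four doses of 3 mg).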

The study was sponsored by Hamad Medical Corporation. The authors reported no financial conflicts.

[email protected]

Author and Disclosure Information

Michele G. Sullivan, Family Practice News Digital Network

Legacy Keywords
dexamethasone, bronchiolitis, eczema, atopy, salbutamol therapy, Dr. Khalid Alansari



Diagnosing Alzheimer’s: The eyes may have it

Article Type
Changed
Display Headline
Diagnosing Alzheimer’s: The eyes may have it

You never know what will happen when you look into a mouse’s eyes.

Twelve years ago, Dr. Lee Goldstein was investigating the effect of reactive oxygen species on the brain of a young Alzheimer’s model mouse. Holding the tiny creature in his hand, he carefully inserted a minuscule microdialysis probe through its skull. As he did, he happened to look right into the mouse’s face. And since he was doing some unrelated work on cataracts at the time, something unusual caught his very practiced gaze.

Creative Commons Attribution License; PLoS ONE 2010;5:e10659
This stereo image of a lens from a 64-year-old male subject with Down syndrome shows a characteristic circumferential supranuclear cataract presenting as an annular half-toroid band of opacification in the deep cortical and supranuclear subregions of the lens.

"The mouse had a cataract. I looked at the other eye, and there was a cataract there, too. That’s very unusual – really not ever seen – in a mouse this age."

Then he looked at all the other mice he was using in that experiment, all of which were older. They all had bilateral cataracts. His first thought, he recalled, was, "This can’t be related to Alzheimer’s disease."

But in fact, he said, it appeared to be. He and his lab soon showed that the cataracts contained a large concentration of aggregated beta-amyloid (Abeta) in the same fraction that’s measured in today’s cerebrospinal fluid Alzheimer’s biomarker tests.

That first observation has birthed two investigational noninvasive amyloid eye tests, which Dr. Goldstein envisions could someday be part of everyone’s annual physical exam.

In people destined to develop Alzheimer’s, some research suggests that Abeta proteins may begin to accumulate in the lens long before they build up to dangerous levels in the brain. If this turns out to be a reliable marker of risk, it could be a sign that would trigger early, presymptomatic Alzheimer’s treatment.

That’s in the future, though; no such treatment exists today. Even now, Dr. Goldstein said, an amyloid eye test could prove invaluable in reaching that goal. One reason that symptomatic patients don’t respond to investigational drugs could be that by the time they are treated, irreversible brain damage has already occurred.

"Once you have cognitive symptoms, the horse is not only out of the barn, it’s run out of the state," said Dr. Goldstein, director of the molecular aging and development laboratory at Boston University. "I hate the term ‘mild cognitive impairment,’ because by the time you have that, there’s nothing mild about it."

Researchers now almost universally agree that the best way to get a true picture of any drug’s potential effectiveness in Alzheimer’s will be to implement treatment before symptoms set in. In addition, Dr. Goldstein said, "Research pools are polluted. Control groups contain subjects who would develop Alzheimer’s if they live long enough," which could be skewing study results. Lens amyloid measurements might help stratify groups in drug studies, and even be a way to track a drug’s very early effects on amyloid.

But that is a future yet to be determined. In the meantime, researchers still need definitive proof that supranuclear amyloid cataracts are inextricably linked to the amyloid brain plaques of Alzheimer’s.

Initial findings

In 2003, Dr. Goldstein, then at Harvard Medical School, published his original proof of concept study. It comprised postmortem eye and brain specimens from nine subjects with Alzheimer’s and eight controls without the disease, and samples of primary aqueous humor from three people without the disorder who were undergoing cataract surgery (Lancet 2003;361:1258-65).

Abeta-40 and Abeta-42 were present in all of the lenses, in amounts similar to those seen in the corresponding brains. But in patients with Alzheimer’s, the protein aggregated into clumps within the lens fiber cells, forming unusual supranuclear cataracts at the equatorial periphery that appeared to be different from common age-related cataracts and that weren’t present in the control subjects.

The cataract location is an important clue to how long the Abeta has been accumulating, Dr. Goldstein said. Lens fiber cells are particularly long lived, remaining alive for as long as a person lives or until the lens is removed during cataract surgery. The lens starts to form in very early fetal life, with more and more lens cells forming in an outward direction, creating a virtual map of a person’s lifetime, "like the rings of a tree," he said.

Dr. Goldstein and his team discovered that these distinctive cataracts develop in some patients with Alzheimer’s. They appear toward the outer edge of the lens and are composed of the same toxic Abeta protein that builds up in the brain. "The history of amyloid in the body is time stamped in the lens," he said.


As lens fiber cells age, they lose most of their organelles and become transparent – just the right state for a device meant to focus light. "They also make tons of Abeta," Dr. Goldstein said, and it appears to have a very specific function in the eye, one Dr. Rudolph Tanzi and his colleagues at Massachusetts General Hospital, Boston, identified in collaboration with Dr. Goldstein’s team (PLoS ONE 2010;5:e9505).

Dr. Lee Goldstein

"It turns out to be a very potent antimicrobial peptide," one of several the eye and brain produce to defend themselves. Amyloid’s sticky nature causes foreign invaders to clump together, so they’re more easily destroyed. This finding also suggests that Abeta could have a similar function in the brain, supporting some theories that Alzheimer’s might be at least partially triggered by a hyperinflammatory response toward an invading pathogen or another immunoreactive incident.

Testing lenses in Down syndrome patients

Interesting as all of that is, it doesn’t prove the theory that the lens amyloid record somehow tracks Alzheimer’s development. But other studies do explore that concept, including one Dr. Goldstein published in 2010. In this study, Dr. Goldstein and his colleagues examined lens amyloid in people with Down syndrome, a group predestined to develop Alzheimer’s (PLoS ONE 2010;5:e10659). The chromosomal abnormality that causes the syndrome (trisomy 21) also increases production of the amyloid precursor protein (APP), Abeta’s antecedent.

The lenses from subjects with Down syndrome, aged 2-69 years, were compared with lenses from control subjects and people with both familial and late-onset Alzheimer’s. "The 2-year-old with Down syndrome in this study actually had more lens amyloid than the adults with familial Alzheimer’s," Dr. Goldstein said. In unpublished data, he added, the protein has even been observed in Down syndrome fetal lenses.

He expanded on this work in a poster presented at the 2013 Alzheimer’s Association International Conference. Dr. Goldstein and his team have developed and validated a laser eye scanning instrument that noninvasively measures how light is reflected from the tiniest particles – in this case clumps of Abeta protein – within the lens of living human subjects.

"We hypothesize that due to the trisomy of chromosome 21 in Down syndrome (and triplication of the APP gene), which results in increased expression of Abeta in the lens, the intensity of scattered light in Down syndrome patients will be higher than [in] age-matched controls," he noted in the poster.

Not everyone agrees, however. The idea has stirred controversy since Dr. Goldstein first introduced it, when, he said, "mainstream Alzheimer’s research simply didn’t believe it." In fact, at least two other researchers’ studies have come to quite different conclusions.

Dr. Charles Eberhart, a pathologist at Johns Hopkins University, Baltimore, published his data in the journal Brain Pathology (2013 June 28 [doi:10.1111/bpa.12070]). The study examined retinas, lenses, and brains from 11 patients with Alzheimer’s, 6 with Parkinson’s, and 6 age-matched controls. Eight eyes (five from Alzheimer’s patients and three from controls) did have cataracts. Dr. Eberhart and his colleagues used immunohistochemistry and Congo red staining to look for amyloid, phosphorylated tau, and alpha-synuclein.

"The short answer is – we didn’t find any amyloid deposits in the lens, or any abnormal tau accumulations," he said in an interview.

The study has two possible interpretations, he said: Either Abeta, tau, and synuclein don’t accumulate in Alzheimer’s eyes as they do in Alzheimer’s brains, or they are there, but simply not detected by his methods. "It certainly might be there. All we can say is that with this method, which is the accepted way of determining amyloid in brain tissue, we didn’t see it in eyes," he said.

The second study, conducted by Dr. Ralph Michael of the Universitat Autònoma de Barcelona and his colleagues, came to a similar conclusion (Exp. Eye Res. 2013;106:5-13). It involved 39 lenses and brains from 21 Alzheimer’s patients, and 15 lenses from age-matched controls. Six of the Alzheimer’s lenses and seven control lenses had cataracts. These investigators used staining methods similar to those in the Hopkins study.

"Beta-amyloid immunohistochemistry was positive in the brain tissues but not in the cornea sample," they wrote. "Lenses from control and AD [Alzheimer’s disease] donors were, without exception, negative after Congo red, thioflavin, and beta-amyloid immunohistochemical staining. ... The absence of staining in AD and control lenses with the techniques employed lead us to conclude that there is no beta-amyloid in lenses from donors with AD or in control cortical cataracts."

Dr. Goldstein said he doesn’t doubt these findings. Congo red staining yields results that can be difficult to interpret, he said. Amyloid appears red under standard light microscopy but takes on a very characteristic shade, called apple green, under polarized light. "This is an old staining method that’s not very sensitive nor is it specific for Abeta – it’s also highly variable."

Technique is critical, he added. "It took us years to perfect our technique for the lens. It’s very difficult to work with lens, harder to work with old lens, and extremely hard to work with old, sick lens."

Instead of relying solely on Congo red or other staining techniques, Dr. Goldstein’s team confirmed their findings using a combination of biochemical analyses, immunogold electron microscopy, and two different types of mass spectrometry – methods he said are irrefutably accurate. "You can’t argue with this unless you are willing to argue with the very concept of mass spectrometry. It’s the gold standard," he said.

Confirmation in transgenic mice and Down syndrome patients strengthens the hypothesis, he said, as do the conclusions of his most recent paper. That study looked at data from 1,249 people included in the Framingham Eye Study, and found a genetic link between a specific type of midlife cataract (consistent with those previously found in Alzheimer’s) and later cognitive and brain structural changes associated with Alzheimer’s (PLoS ONE 2012;7:e43728).

The culprit appeared to be a mutation of a gene that codes for delta-catenin, which Dr. Goldstein postulated may normally help suppress Abeta production. The altered form, however, appears to affect neuronal structure and is instead associated with an increase in Abeta-42 production in cell culture. The malformed delta-catenin protein was also found throughout the lenses of study subjects with Alzheimer’s, but not in control lenses.

Screening patients in the future?

Dr. Goldstein said he envisions a future in which annual lens exams might guide risk assessment and treatment initiation. But physicians who might someday screen patients certainly won’t have a mass spectrometer in the back room.

He has invented two devices, he said, that will fill that need. The most recent is a laser scanning ophthalmoscope that uses dynamic light scattering to detect the tiniest amyloid particles in the lens – particles less than 30 nm. This is the device he’s using in the ongoing Boston University/Boston Children’s Hospital study of lens amyloid in children with Down syndrome.

The second device combines optical imaging with aftobetin, a fluorescent amyloid ligand. Dr. Goldstein holds a patent on this device, which he invented in partnership with Cognoptix (formerly Neuroptix), a company he cofounded in 2001, although he is no longer operationally affiliated with it.

Cognoptix has developed the SAPPHIRE II system, a combination drug/device that detects amyloid in the lens using aftobetin. The company licensed aftobetin from the University of California, San Diego. It’s formulated into an ophthalmic ointment administered prior to scanning with the SAPPHIRE II system. The procedure uses fluorescent ligand scanning to detect amyloid aggregates in the lens, said Paul Hartung, president and chief executive officer of the Acton, Mass., company.

"We use an eye-safe laser tuned to pick up the fluorescence. It doesn’t require dilation of the pupil, and it has the capability of actually registering itself in the correct location in the eye," he said in an interview.

SAPPHIRE II has had a busy year, including a proof of concept study published in May and reported at the Alzheimer’s Association International Conference. In this study, the system successfully differentiated five Alzheimer’s patients from five controls (Front. Neurol. 2013 May 27 [doi:10.3389/fneur.2013.00062]).

Cognoptix has begun a second study testing the system against PET amyloid brain imaging in 20 patients with probable Alzheimer’s and 20 controls, Mr. Hartung said.

A third planned study is a pivotal phase III trial that will enroll 400 subjects, all of whom will undergo both the eye exam and PET amyloid imaging. It’s designed to support premarket approval, Mr. Hartung said. Currently, SAPPHIRE II has an investigational device exemption from the Food and Drug Administration’s Center for Devices and Radiological Health.

"Our end goal is to get this into the general practitioner’s office, where about 40% of Alzheimer’s drug prescriptions are written by general practitioners who really have no data on hand. Right now, based on cognitive assessments, they have only a 50-50 chance of getting the right diagnosis," Mr. Hartung said.

Dr. Goldstein and Mr. Hartung hold financial interests in devices to measure lens amyloid. Dr. Ralph Michael listed no financial disclosures. Dr. Charles Eberhart said he had no relevant financial disclosures.

[email protected]

On Twitter @Alz_Gal

Legacy Keywords
Alzheimers, cataracts, eyes


 

 

As lens fiber cells age, they lose most of their organelles and become transparent – just the right state for a device meant to focus light. "They also make tons of Abeta," Dr. Goldstein said, and it appears to have a very specific function in the eye, one Dr. Rudolph Tanzi and his colleagues at Massachusetts General Hospital, Boston, identified in collaboration with Dr. Goldstein’s team (PLoS ONE 2010;5:e9505).

Dr. Lee Goldstein

"It turns out to be a very potent antimicrobial peptide," one of several the eye and brain produce to defend themselves. Amyloid’s sticky nature causes foreign invaders to clump together, so they’re more easily destroyed. This finding also suggests that Abeta could have a similar function in the brain, supporting some theories that Alzheimer’s might be at least partially triggered by a hyperinflammatory response toward an invading pathogen or another immunoreactive incident.

Testing lenses in Down syndrome patients

Interesting as all of that is, it doesn’t prove the theory that the lens amyloid record somehow tracks Alzheimer’s development. But other studies do explore that concept, including one Dr. Goldstein published in 2010. In this study, Dr. Goldstein and his colleagues examined lens amyloid in people with Down syndrome, a group predestined to develop Alzheimer’s (PLoS ONE 2010;5:e10659). The genetic mutation that causes the syndrome also increases production of the amyloid precursor protein (APP), Abeta’s antecedent.

The lenses from subjects with Down syndrome, aged 2-69 years, were compared with lenses from control subjects and people with both familial and late-onset Alzheimer’s. "The 2-year-old with Down syndrome in this study actually had more lens amyloid than the adults with familial Alzheimer’s," Dr. Goldstein said. In unpublished data, he added, the protein has even been observed in Down syndrome fetal lenses.

He expanded on this work in a poster presented at the 2013 Alzheimer’s Association International Conference. Dr. Goldstein and his team have developed and validated a laser eye scanning instrument that noninvasively measures how light is reflected from the tiniest particles – in this case clumps of Abeta protein – within the lens of living human subjects.

"We hypothesize that due to the trisomy of chromosome 21 in Down syndrome (and triplication of the APP gene), which results in increased expression of Abeta in the lens, the intensity of scattered light in Down syndrome patients will be higher than [in] age-matched controls," he noted in the poster.

Not everyone agrees with this idea, however. It has stirred controversy since he first introduced the idea, when, he said, "mainstream Alzheimer’s research simply didn’t believe it." In fact, at least two other researchers’ studies have come to quite different conclusions.

Dr. Charles Eberhart, a pathologist at Johns Hopkins University, Baltimore, published his data in the journal Brain Pathology (2013 June 28 [doi:10.1111/bpa.12070]). The study examined retinas, lenses, and brains from 11 patients with Alzheimer’s, 6 with Parkinson’s, and 6 age-matched controls. Eight eyes (five from Alzheimer’s patients and three from controls) did have cataracts. Dr. Eberhart and his colleagues used immunohistochemistry and Congo red staining to look for amyloid, phosphorylated tau, and alpha-synuclein.

"The short answer is – we didn’t find any amyloid deposits in the lens, or any abnormal tau accumulations," he said in an interview.

The study has two possible interpretations, he said: Either Abeta, tau, and synuclein don’t accumulate in Alzheimer’s eyes as they do in Alzheimer’s brains, or they are there, but simply not detected by his methods. "It certainly might be there. All we can say is that with this method, which is the accepted way of determining amyloid in brain tissue, we didn’t see it in eyes," he said.

The second study, conducted by Dr. Ralph Michael of the Universitat Autònoma de Barcelona and his colleagues, came to a similar conclusion (Exp. Eye Res. 2013;106:5-13). It involved 39 lenses and brains from 21 Alzheimer’s patients, and 15 lenses from age-matched controls. Six of the Alzheimer’s lenses and seven control lenses had cataracts. These investigators used staining methods similar to those in the Hopkins study.

"Beta-amyloid immunohistochemistry was positive in the brain tissues but not in the cornea sample," they wrote. "Lenses from control and AD [Alzheimer’s disease] donors were, without exception, negative after Congo red, thioflavin, and beta-amyloid immunohistochemical staining. ... The absence of staining in AD and control lenses with the techniques employed lead us to conclude that there is no beta-amyloid in lenses from donors with AD or in control cortical cataracts."

Dr. Goldstein said he doesn’t doubt these findings. Congo red staining yields a difficult-to-interpret sign, he said. Amyloid appears red under standard light spectroscopy, but takes on a very characteristic shade, called apple green under polarized light. "This is an old staining method that’s not very sensitive nor is it specific for Abeta – it’s also highly variable."

 

 

Technique is critical, he added. "It took us years to perfect our technique for the lens. It’s very difficult to work with lens, harder to work with old lens, and extremely hard to work with old, sick lens."

Instead of relying solely on Congo red or other staining techniques, Dr. Goldstein’s team confirmed their findings using a combination of biochemical analyses, immunogold electron microscopy, and two different types of mass spectrometry – methods he said are irrefutably accurate. "You can’t argue with this unless you are willing to argue with the very concept of mass spectrometry. It’s the gold standard," he said.

Confirmation in transgenic mice and Down syndrome patients strengthens the hypothesis, he said, as do the conclusions of his most recent paper. It looked at data from 1,249 people included in the Framingham Eye Study, and found a genetic link between a specific type of midlife cataracts (consistent with those previously found in Alzheimer’s) and later cognitive and brain structural changes associated with Alzheimer’s (PLoS ONE 2012;7:e43728) .

The culprit appeared to be a mutation of a gene that codes for delta-catenin, which Dr. Goldstein postulated may normally help suppress Abeta production. The altered form, however, appears to affect neuronal structure and is instead associated with an increase in Abeta-42 production in cell culture. The malformed delta-catenin protein was also found throughout the lenses of study subjects with Alzheimer’s, but not in control lenses.

Screening patients in the future?

Dr. Goldstein said he envisions a future in which annual lens exams might guide risk assessment and treatment initiation. But physicians who might someday screen patients certainly won’t have a mass spectrometer in the back room.

He has invented two devices, he said, that will fill that need. The most recent is a laser scanning ophthalmoscope that uses dynamic light scattering to detect the tiniest amyloid particles in the lens – particles less than 30 nm. This is the device he’s using in the ongoing Boston University/Boston Children’s Hospital study of lens amyloid in children with Down syndrome.

The second device combines optical imaging with aftobetin, a fluorescent amyloid ligand. Dr. Goldstein holds a patent on this device, which he invented in partnership with Cognoptix (formerly Neuroptix), a company he cofounded in 2001, although he is no longer operationally affiliated with it.

Cognoptix has developed the SAPPHIRE II system, a combination drug/device that detects amyloid in the lens using aftobetin. The company licensed aftobetin from the University of California, San Diego. It’s formulated into an ophthalmic ointment administered prior to scanning with the SAPPHIRE II system. The procedure uses fluorescent ligand scanning to detect amyloid aggregates in the lens, said Paul Hartung, president and chief executive officer of the Acton, Mass., company.

"We use an eye-safe laser tuned to pick up the fluorescence. It doesn’t require dilation of the pupil, and it has the capability of actually registering itself in the correct location in the eye," he said in an interview.

SAPPHIRE II has had a busy year, including a proof of concept study published in May and reported at the Alzheimer’s Association International Conference. In this study, the system successfully differentiated five Alzheimer’s patients from five controls (Front. Neurol. 2013 May 27 [doi:10.3389/fneur.2013.00062]).

Cognoptix has begun a second study testing the system against PET amyloid brain imaging in 20 patients with probable Alzheimer’s and 20 controls, Mr. Hartung said.

A third planned study is a pivotal phase III trial that will enroll 400 subjects, all of whom will undergo both the eye exam and PET amyloid imaging. It’s designed to support premarketing approval, Mr. Hartung said. Currently SAPPHIRE II has an investigational device exemption from the Food and Drug Administration’s Center for Devices and Radiological Health.

"Our end goal is to get this into the general practitioner’s office, where about 40% of Alzheimer’s drug prescriptions are written by general practitioners who really have no data on hand. Right now, based on cognitive assessments, they have only a 50-50 chance of getting the right diagnosis," Mr. Hartung said.

Dr. Goldstein and Mr. Hartung hold financial interests in devices to measure lens amyloid. Dr. Ralph Michael listed no financial disclosures. Dr. Charles Eberhart said he had no relevant financial disclosures.

[email protected]

On Twitter @Alz_Gal

You never know what will happen when you look into a mouse’s eyes.

Twelve years ago, Dr. Lee Goldstein was investigating reactive oxygen species’ effects on the brain of a young Alzheimer’s model mouse. Holding the tiny creature in his hand, he carefully inserted a minuscule microdialysis probe through its skull. As he did, he happened to look right into the mouse’s face. And because he was doing some unrelated work on cataracts at the time, something unusual caught his practiced gaze.

Creative Commons Attribution License; PLoS ONE 2010;5:e10659
This stereo image of a lens from a 64-year-old male subject with Down syndrome shows a characteristic circumferential supranuclear cataract presenting as an annular half-toroid band of opacification in the deep cortical and supranuclear subregions of the lens.

"The mouse had a cataract. I looked at the other eye, and there was a cataract there, too. That’s very unusual – really not ever seen – in a mouse this age."

Then he looked at all the other mice he was using in that experiment, all of which were older. They all had bilateral cataracts. His first thought, he recalled, was, "This can’t be related to Alzheimer’s disease."

But in fact, he said, it appeared to be. He and his lab soon showed that the cataracts contained a large concentration of aggregated beta-amyloid (Abeta) in the same fraction that’s measured in today’s cerebrospinal fluid Alzheimer’s biomarker tests.

That first observation has given rise to two investigational noninvasive amyloid eye tests, which Dr. Goldstein envisions could someday be part of everyone’s annual physical exam.

In people destined to develop Alzheimer’s, some research suggests that Abeta proteins may begin to accumulate in the lens long before they build up to dangerous levels in the brain. If this turns out to be a reliable marker of risk, it could be a sign that would trigger early, presymptomatic Alzheimer’s treatment.

That remains in the future, though, because no such treatment currently exists. Even now, however, Dr. Goldstein said, an amyloid eye test could prove invaluable in reaching that goal. One reason that symptomatic patients don’t respond to investigational drugs could be that by the time they are treated, irreversible brain damage has already occurred.

"Once you have cognitive symptoms, the horse is not only out of the barn, it’s run out of the state," said Dr. Goldstein, director of the molecular aging and development laboratory at Boston University. "I hate the term ‘mild cognitive impairment,’ because by the time you have that, there’s nothing mild about it."

Researchers now almost universally agree that the best way to get a true picture of any drug’s potential effectiveness in Alzheimer’s will be to begin treatment before symptoms set in. In addition, Dr. Goldstein said, "Research pools are polluted. Control groups contain subjects who would develop Alzheimer’s if they live long enough," which could be skewing study results. Lens amyloid measurements might help stratify groups in drug studies, and even offer a way to track a drug’s very early effects on amyloid.

But that is a future yet to be determined. In the meantime, researchers still need definitive proof that supranuclear amyloid cataracts are inextricably linked to the amyloid brain plaques of Alzheimer’s.

Initial findings

In 2003, Dr. Goldstein, then at Harvard Medical School, published his original proof of concept study. It comprised postmortem eye and brain specimens from nine subjects with Alzheimer’s and eight controls without the disease, and samples of primary aqueous humor from three people without the disorder who were undergoing cataract surgery (Lancet 2003;361:1258-65).

Abeta-40 and Abeta-42 were present in all of the lenses, in amounts similar to those seen in the corresponding brains. But in patients with Alzheimer’s, the protein aggregated into clumps within the lens fiber cells, forming unusual supranuclear cataracts at the equatorial periphery that appeared to be different from common age-related cataracts and that weren’t present in the control subjects.

The cataract location is an important clue to how long the Abeta has been accumulating, Dr. Goldstein said. Lens fiber cells are particularly long lived, remaining alive for as long as a person lives or until the lens is removed during cataract surgery. The lens starts to form in very early fetal life, with more and more lens cells forming in an outward direction, creating a virtual map of a person’s lifetime, "like the rings of a tree," he said.

Dr. Goldstein and his team discovered that these distinctive cataracts develop in some patients with Alzheimer’s. They appear toward the outer edge of the lens and are composed of the same toxic Abeta protein that builds up in the brain. "The history of amyloid in the body is time stamped in the lens," he said.


As lens fiber cells age, they lose most of their organelles and become transparent – just the right state for a device meant to focus light. "They also make tons of Abeta," Dr. Goldstein said, and it appears to have a very specific function in the eye, one Dr. Rudolph Tanzi and his colleagues at Massachusetts General Hospital, Boston, identified in collaboration with Dr. Goldstein’s team (PLoS ONE 2010;5:e9505).

Dr. Lee Goldstein

"It turns out to be a very potent antimicrobial peptide," one of several the eye and brain produce to defend themselves. Amyloid’s sticky nature causes foreign invaders to clump together, so they’re more easily destroyed. This finding also suggests that Abeta could have a similar function in the brain, supporting some theories that Alzheimer’s might be at least partially triggered by a hyperinflammatory response toward an invading pathogen or another immunoreactive incident.

Testing lenses in Down syndrome patients

Interesting as all of that is, it doesn’t prove the theory that the lens amyloid record somehow tracks Alzheimer’s development. But other studies do explore that concept, including one Dr. Goldstein published in 2010. In this study, Dr. Goldstein and his colleagues examined lens amyloid in people with Down syndrome, a group predestined to develop Alzheimer’s (PLoS ONE 2010;5:e10659). The genetic mutation that causes the syndrome also increases production of the amyloid precursor protein (APP), Abeta’s antecedent.

The lenses from subjects with Down syndrome, aged 2-69 years, were compared with lenses from control subjects and people with both familial and late-onset Alzheimer’s. "The 2-year-old with Down syndrome in this study actually had more lens amyloid than the adults with familial Alzheimer’s," Dr. Goldstein said. In unpublished data, he added, the protein has even been observed in Down syndrome fetal lenses.

He expanded on this work in a poster presented at the 2013 Alzheimer’s Association International Conference. Dr. Goldstein and his team have developed and validated a laser eye scanning instrument that noninvasively measures how light is reflected from the tiniest particles – in this case clumps of Abeta protein – within the lens of living human subjects.

"We hypothesize that due to the trisomy of chromosome 21 in Down syndrome (and triplication of the APP gene), which results in increased expression of Abeta in the lens, the intensity of scattered light in Down syndrome patients will be higher than [in] age-matched controls," he noted in the poster.

Not everyone agrees, however. The hypothesis has stirred controversy since Dr. Goldstein first introduced it, when, he said, "mainstream Alzheimer’s research simply didn’t believe it." In fact, at least two other groups’ studies have come to quite different conclusions.

Dr. Charles Eberhart, a pathologist at Johns Hopkins University, Baltimore, published his data in the journal Brain Pathology (2013 June 28 [doi:10.1111/bpa.12070]). The study examined retinas, lenses, and brains from 11 patients with Alzheimer’s, 6 with Parkinson’s, and 6 age-matched controls. Eight eyes (five from Alzheimer’s patients and three from controls) did have cataracts. Dr. Eberhart and his colleagues used immunohistochemistry and Congo red staining to look for amyloid, phosphorylated tau, and alpha-synuclein.

"The short answer is – we didn’t find any amyloid deposits in the lens, or any abnormal tau accumulations," he said in an interview.

The study has two possible interpretations, he said: Either Abeta, tau, and synuclein don’t accumulate in Alzheimer’s eyes as they do in Alzheimer’s brains, or they are there, but simply not detected by his methods. "It certainly might be there. All we can say is that with this method, which is the accepted way of determining amyloid in brain tissue, we didn’t see it in eyes," he said.

The second study, conducted by Dr. Ralph Michael of the Universitat Autònoma de Barcelona and his colleagues, came to a similar conclusion (Exp. Eye Res. 2013;106:5-13). It involved 39 lenses and brains from 21 Alzheimer’s patients, and 15 lenses from age-matched controls. Six of the Alzheimer’s lenses and seven control lenses had cataracts. These investigators used staining methods similar to those in the Hopkins study.

"Beta-amyloid immunohistochemistry was positive in the brain tissues but not in the cornea sample," they wrote. "Lenses from control and AD [Alzheimer’s disease] donors were, without exception, negative after Congo red, thioflavin, and beta-amyloid immunohistochemical staining. ... The absence of staining in AD and control lenses with the techniques employed lead us to conclude that there is no beta-amyloid in lenses from donors with AD or in control cortical cataracts."

Dr. Goldstein said he doesn’t doubt these findings. Congo red staining yields a difficult-to-interpret signal, he said. Amyloid appears red under standard light microscopy, but takes on a very characteristic shade, called apple green, under polarized light. "This is an old staining method that’s not very sensitive nor is it specific for Abeta – it’s also highly variable."


Technique is critical, he added. "It took us years to perfect our technique for the lens. It’s very difficult to work with lens, harder to work with old lens, and extremely hard to work with old, sick lens."

Instead of relying solely on Congo red or other staining techniques, Dr. Goldstein’s team confirmed their findings using a combination of biochemical analyses, immunogold electron microscopy, and two different types of mass spectrometry – methods he said are irrefutably accurate. "You can’t argue with this unless you are willing to argue with the very concept of mass spectrometry. It’s the gold standard," he said.

Confirmation in transgenic mice and Down syndrome patients strengthens the hypothesis, he said, as do the conclusions of his most recent paper, which examined data from 1,249 people in the Framingham Eye Study and found a genetic link between a specific type of midlife cataract (consistent with those previously found in Alzheimer’s) and later cognitive and brain structural changes associated with Alzheimer’s (PLoS ONE 2012;7:e43728).

The culprit appeared to be a mutation of a gene that codes for delta-catenin, which Dr. Goldstein postulated may normally help suppress Abeta production. The altered form, however, appears to affect neuronal structure and is instead associated with an increase in Abeta-42 production in cell culture. The malformed delta-catenin protein was also found throughout the lenses of study subjects with Alzheimer’s, but not in control lenses.

Screening patients in the future?

Dr. Goldstein said he envisions a future in which annual lens exams might guide risk assessment and treatment initiation. But physicians who might someday screen patients certainly won’t have a mass spectrometer in the back room.

He has invented two devices, he said, that will fill that need. The most recent is a laser scanning ophthalmoscope that uses dynamic light scattering to detect the tiniest amyloid particles in the lens – particles less than 30 nm. This is the device he’s using in the ongoing Boston University/Boston Children’s Hospital study of lens amyloid in children with Down syndrome.

The second device combines optical imaging with aftobetin, a fluorescent amyloid ligand. Dr. Goldstein holds a patent on this device, which he invented in partnership with Cognoptix (formerly Neuroptix), a company he cofounded in 2001, although he is no longer operationally affiliated with it.

Cognoptix has developed the SAPPHIRE II system, a combination drug/device that detects amyloid in the lens using aftobetin. The company licensed aftobetin from the University of California, San Diego. It’s formulated into an ophthalmic ointment administered prior to scanning with the SAPPHIRE II system. The procedure uses fluorescent ligand scanning to detect amyloid aggregates in the lens, said Paul Hartung, president and chief executive officer of the Acton, Mass., company.

"We use an eye-safe laser tuned to pick up the fluorescence. It doesn’t require dilation of the pupil, and it has the capability of actually registering itself in the correct location in the eye," he said in an interview.

SAPPHIRE II has had a busy year, including a proof of concept study published in May and reported at the Alzheimer’s Association International Conference. In this study, the system successfully differentiated five Alzheimer’s patients from five controls (Front. Neurol. 2013 May 27 [doi:10.3389/fneur.2013.00062]).

Cognoptix has begun a second study testing the system against PET amyloid brain imaging in 20 patients with probable Alzheimer’s and 20 controls, Mr. Hartung said.

A third planned study is a pivotal phase III trial that will enroll 400 subjects, all of whom will undergo both the eye exam and PET amyloid imaging. It’s designed to support premarketing approval, Mr. Hartung said. Currently SAPPHIRE II has an investigational device exemption from the Food and Drug Administration’s Center for Devices and Radiological Health.

"Our end goal is to get this into the general practitioner’s office, where about 40% of Alzheimer’s drug prescriptions are written by general practitioners who really have no data on hand. Right now, based on cognitive assessments, they have only a 50-50 chance of getting the right diagnosis," Mr. Hartung said.

Dr. Goldstein and Mr. Hartung hold financial interests in devices to measure lens amyloid. Dr. Ralph Michael listed no financial disclosures. Dr. Charles Eberhart said he had no relevant financial disclosures.

[email protected]

On Twitter @Alz_Gal

Article Type
Changed
Display Headline
Diagnosing Alzheimer’s: The eyes may have it
Legacy Keywords
Alzheimers, cataracts, eyes

Dexamethasone improves outcomes for infants with bronchiolitis, atopy history

Article Type
Changed
Display Headline
Dexamethasone improves outcomes for infants with bronchiolitis, atopy history

A 5-day course of dexamethasone significantly shortened hospital stays for infants with bronchiolitis who had eczema or close relatives with asthma.

The randomized, placebo-controlled study suggests that a family history of atopy could identify a subset of babies who would benefit from the addition of a corticosteroid to the usual salbutamol therapy for acute bronchiolitis, according to Dr. Khalid Alansari and colleagues. The report was published in the Sept. 16 issue of Pediatrics.

The researchers examined 7-day outcomes in 200 infants with acute bronchiolitis who were at a high risk of asthma, as determined by having at least one first-degree relative with either asthma or eczema. All of the children (mean age 3.5 months) were admitted to a pediatric hospital for treatment, wrote Dr. Alansari of Weill Cornell Medical College, Doha, Qatar, and coauthors. Infants who received dexamethasone were discharged 8 hours earlier than were those receiving standard treatment. The mean duration of symptoms was 4.5 days (Pediatrics 2013 Sept. 13 [doi: 10.1542/peds.2012-3746]).

The study’s primary outcome was time to discharge. Secondary outcomes included the number of patients who needed epinephrine treatment, readmission for a short infirmary stay, and return visits to the emergency department or another clinic for the same illness. A study nurse made daily calls to assess the patients after discharge.

Infants in the dexamethasone group were discharged at a mean of 18.6 hours – significantly sooner than those in the control group (27 hours). Epinephrine was necessary for 19 infants in the dexamethasone group and 31 in the placebo group – again a significant difference.

Similar numbers in each group needed readmission and additional outpatient visits in the week after discharge. During the follow-up week, 22% of the dexamethasone group needed infirmary care and the mean stay was 17 hours, compared with 21% of the placebo group with a mean stay of 18 hours.

Nineteen infants in the dexamethasone group and 11 in the placebo group made a clinic visit (18.6% vs. 11%), a difference that was not significant.

The chest radiograph was normal in about 37% of infants studied. About half showed lesser infiltrates; 15% had a lobar collapse or consolidation.

More than 70% had a full sibling with asthma. About 20% had a parent with the disease; in 5%, both parents had it. About 20% of patients had both eczema and a first-degree relative with asthma.

All of the infants received 2.5 mg salbutamol nebulization at baseline and at 30, 60, and 120 minutes, and then every 2 hours until discharge. Nebulized epinephrine (0.5 mL/kg with a maximum dose of 5 mL) was available if needed. In addition, they were randomized to either placebo or to a 5-day course of dexamethasone 1 mg/mL, at a rate of 1 mL/kg on day 1, reduced to 0.6 mL/kg for days 2-5.
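As a sanity check on the dosing arithmetic described above, the sketch below (hypothetical helper names, illustration only, not clinical guidance) computes the per-day dexamethasone volume from the 1 mg/mL formulation and the weight-based epinephrine rescue dose with its 5-mL cap:

```python
# Illustrative sketch of the study's dosing arithmetic, as reported in the
# article: dexamethasone 1 mg/mL at 1 mL/kg on day 1 and 0.6 mL/kg on
# days 2-5; nebulized epinephrine at 0.5 mL/kg, capped at 5 mL.
# Function names are hypothetical; this is not clinical guidance.

def dexamethasone_ml(weight_kg: float, day: int) -> float:
    """Volume (mL) of the 1 mg/mL dexamethasone solution for a study day (1-5)."""
    if not 1 <= day <= 5:
        raise ValueError("the study's dexamethasone course covers days 1-5 only")
    rate_ml_per_kg = 1.0 if day == 1 else 0.6
    return rate_ml_per_kg * weight_kg

def epinephrine_ml(weight_kg: float) -> float:
    """Rescue nebulized epinephrine volume: 0.5 mL/kg, maximum 5 mL."""
    return min(0.5 * weight_kg, 5.0)
```

For a 5-kg infant, this works out to 5 mL (5 mg) of dexamethasone on day 1, 3 mL (3 mg) on each of days 2-5, and a 2.5-mL rescue epinephrine dose; the cap means any infant over 10 kg would receive the maximum 5 mL of epinephrine.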

The study was sponsored by Hamad Medical Corporation. The authors reported no financial conflicts.

[email protected]

Author and Disclosure Information

Legacy Keywords
dexamethasone, bronchiolitis, eczema, atopy, salbutamol therapy, Dr. Khalid Alansari

A 5-day course of dexamethasone significantly shortened hospital stays for infants with bronchiolitis who had eczema or close relatives with asthma.

The randomized, placebo-controlled study suggests that a family history of atopy could identify a subset of babies who would benefit from the addition of a corticosteroid to the usual salbutamol therapy for acute bronchiolitis, according to Dr. Khalid Alansari and colleagues. The report was published in the Sept. 16 issue of Pediatrics.

The researchers examined 7-day outcomes in 200 infants with acute bronchiolitis who were at a high risk of asthma, as determined by having at least one first-degree relative with either asthma or eczema. All of the children (mean age 3.5 months) were admitted to a pediatric hospital for treatment, wrote Dr. Alansari of Weill Cornell Medical College, Doha, Qatar, and coauthors. Infants who received dexamethasone were discharged 8 hours earlier than were those receiving standard treatment. The mean duration of symptoms was 4.5 days (Pediatrics 2013 Sept. 13 [doi: 10.1542/peds.2012-3746]).

The study’s primary outcome was time until discharge. Secondary outcomes included the number of patients who needed epinephrine treatment, readmission for a shorter stay in an infirmary site, and revisiting the emergency department or another clinic for the same illness. A study nurse made daily calls to assess the patients after discharge.

Infants in the dexamethasone group were discharged at a mean of 18.6 hours – significantly sooner than those in the control group (27 hours). Epinephrine was necessary for 19 infants in the dexamethasone group and 31 in the placebo group – again a significant difference.

Similar numbers in each group needed readmission and additional outpatient visits in the week after discharge. During the follow-up week, 22% of the dexamethasone group needed infirmary care and the mean stay was 17 hours, compared with 21% of the placebo group with a mean stay of 18 hours.

Nineteen in the dexamethasone group and 11 in the placebo group made a clinic visit (18.6% vs. 11%); this difference was not significant.

The chest radiograph was normal in about 37% of infants studied. About half showed lesser infiltrates; 15% had a lobar collapse or consolidation.

More than 70% had a full sibling with asthma. About 20% had a parent with the disease; in 5%, both parents had it. About 20% of patients had both eczema and a first-degree relative with asthma.

All of the infants received 2.5 mg salbutamol nebulization at baseline and at 30, 60, and 120 minutes, and then every 2 hours until discharge. Nebulized epinephrine (0.5 mL/kg, with a maximum dose of 5 mL) was available if needed. In addition, they were randomized to either placebo or a 5-day course of dexamethasone 1 mg/mL, at a dose of 1 mL/kg on day 1, reduced to 0.6 mL/kg for days 2-5.
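Because the regimen above is dosed by volume of a 1 mg/mL solution, the daily dexamethasone dose in milligrams equals the volume in milliliters. A minimal sketch of that weight-based arithmetic follows; the helper function and its name are illustrative only and are not part of the study protocol, with the concentration and per-kilogram volumes taken from the figures reported here:

```python
def dexamethasone_dose_mg(weight_kg: float, day: int) -> float:
    """Daily dexamethasone dose (mg) under the reported regimen.

    The study used a 1 mg/mL solution dosed by volume:
    1 mL/kg on day 1, then 0.6 mL/kg on days 2-5, so at
    1 mg/mL the dose in mg equals the volume in mL.
    """
    if not 1 <= day <= 5:
        raise ValueError("regimen covers days 1-5 only")
    concentration_mg_per_ml = 1.0
    ml_per_kg = 1.0 if day == 1 else 0.6
    return weight_kg * ml_per_kg * concentration_mg_per_ml

# Example: a hypothetical 5-kg infant would receive 5 mg on
# day 1 and 3 mg on each of days 2-5.
course = [dexamethasone_dose_mg(5.0, d) for d in range(1, 6)]
print(course)  # [5.0, 3.0, 3.0, 3.0, 3.0]
```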

The study was sponsored by Hamad Medical Corporation. The authors reported no financial conflicts.

[email protected]

FROM PEDIATRICS

Vitals

Major finding: Infants with bronchiolitis who had a family history of atopy who received dexamethasone added to salbutamol were discharged significantly sooner than were those receiving usual therapy (18.6 hours vs. 27 hours).

Data source: The randomized, placebo-controlled study comprised 200 infants with acute bronchiolitis.

Disclosures: The study was sponsored by Hamad Medical Corporation. The authors reported no financial conflicts.

Patients with cirrhosis did well with laparoscopic cholecystectomy


Laparoscopic cholecystectomy is a good choice for many patients with liver cirrhosis who need the procedure.

In addition to quickly and effectively addressing the acute illness, laparoscopic cholecystectomy may offer a future advantage, Dr. Vincenzo Neri said at the Minimally Invasive Surgery Week annual meeting and Endo Expo.

"Some cirrhotic patients may be candidates for liver transplantation in the future," said Dr. Neri of the University of Foggia, Italy. "Laparoscopic cholecystectomy offers the chance of fewer right upper quadrant postoperative adhesions" that might complicate later transplant surgery.

He presented a retrospective analysis designed to evaluate the safety and usefulness of the laparoscopic approach in cirrhotic patients undergoing cholecystectomy. The series comprised 65 patients with hepatic cirrhosis and symptomatic gallstone disease. Of these, six had planned open procedures and the rest underwent laparoscopic procedures; 12 of the laparoscopic procedures were converted to open surgery.

The patients were a mean of 58 years old. More than half had at least two comorbid conditions, including hypertension (14%), cardiac disease (9%), diabetes (12%), respiratory conditions (8%), cerebrovascular disease (4%), and other problems (11%).

Total bilirubin was more than 1 mg/dL in 51% of the group. Albumin was elevated in 61%, and platelets were below 160,000/mcL in 31%. More than a quarter (27%) had a prolonged prothrombin time. About 45% were Child-Pugh class A, 20% were class B, and the rest were class C.

Cirrhosis was known preoperatively in only 24 patients. The diagnosis was made during the hospital stay in the rest of the patients.

The most common indication for admission and surgery was biliary colic (37%). Other indications included acute cholecystitis (17%), acute biliary pancreatitis (5%), gallbladder and common bile duct stones (5%), and acute cholecystitis with cholangitis (1%). Other indications were not specified.

Of the 12 conversions, 4 were due to acute cholecystitis. Other reasons for conversion were previous laparoscopy (3), acute pancreatitis (2), hypertrophic left hepatic lobe (2), and intraoperative cholangiography (1).

The investigators compared surgical outcomes with those in an unselected control group of 81 patients without cirrhosis who had undergone laparoscopic cholecystectomy.

The mean operative time in the laparoscopic cirrhotic group was 89 minutes – similar to that in the control group (85 minutes). Among the cirrhotic patients, both planned open and converted procedures lasted about the same time (141 and 149 minutes, respectively).

Length of stay was 5 days in the cirrhotic laparoscopy group and 3 in the noncirrhotic control group. Patients with open or converted surgery stayed a mean of 9 and 8 days, respectively.

The blood transfusion rate was 4% in the laparoscopic group, and 17% in both the open and converted groups. Fourteen percent of the laparoscopic group needed transfusion of blood products, compared with 17% of the open group and 33% of the converted group. Transfusions were significantly more common among patients with a Child-Pugh B score, with 26% needing plasma, 21% blood, and 21% platelets. Among Child-Pugh class A patients, 4% needed plasma, 3% blood, and 3% platelets. There were no transfusions in the Child-Pugh class C patients.

Postoperative complications were significantly more common among patients with planned open and converted procedures than total laparoscopies (27% vs. 5%). These included transient ascites (16% vs. 8%) and wound hematoma (8% vs. 4%).

The meeting was presented by the Society of Laparoendoscopic Surgeons and affiliated societies. Dr. Neri had no financial disclosures.

[email protected]

FROM MINIMALLY INVASIVE SURGERY WEEK

Vitals

Major finding: Hepatic cirrhosis patients who underwent a laparoscopic cholecystectomy had surgical outcomes similar to those of patients with normal livers, with mean operative times of 89 and 85 minutes, respectively.

Data source: Retrospective study of 65 patients with hepatic cirrhosis and 81 with normal livers.

Disclosures: The meeting was presented by the Society of Laparoendoscopic Surgeons and affiliated societies. Dr. Neri had no financial disclosures.

Rotigotine patch improves restless legs symptoms


The dopamine receptor agonist rotigotine significantly reduced symptoms of restless legs syndrome in a multicenter placebo-controlled trial.

Administered via transdermal patch, rotigotine improved scores on both the International Restless Legs Syndrome Study Group rating scale (IRLS) total score and the Pittsburgh Sleep Quality Index (PSQI), Dr. Yuichi Inoue and his colleagues wrote in Sleep Medicine (2013 Aug. 21 [doi: 10.1016/j.sleep.2013.07.007]).

An interaction analysis suggested that baseline symptoms significantly correlated with treatment results, according to Dr. Inoue of Tokyo Medical University and his coauthors.

Rotigotine is a non-ergoline dopamine agonist that was approved in the United States in 2007 for Parkinson’s disease. A year later, it was pulled from the market because the drug tended to crystallize within the patch, leading to underdosing. It was reapproved last year for moderate to severe restless legs syndrome.

The study was conducted at 44 institutions in Japan from February to December 2010. It randomized 284 adults with restless legs syndrome either to a rotigotine patch at a dosage of 2 mg or 3 mg or to a placebo patch. The patients’ mean age was 51 years. About 60% of each group had severe restless legs syndrome as characterized by their IRLS scores. The mean PSQI score was 7.6, indicating poor sleep quality.

A 5-week titration period was followed by 8 weeks of steady-state treatment. Down-titration was permitted if adverse events occurred; 5 patients were down-titrated. Treatment was discontinued in 31 patients; the most common reason was an adverse event, usually a skin reaction.

Patients in the active groups showed improvements in both outcome measures as early as 1 week after treatment began, the investigators reported. By the end of the study, the mean IRLS score decreased by 14.3 points in the 2-mg group, 14.6 points in the 3-mg group, and 11.6 points in the placebo group – a significant difference. Sixty percent of those taking 2 mg were considered responders, as were 66% of those taking 3 mg and 47% of those taking placebo.

The mean changes in the total PSQI were not significantly different between the active and placebo groups. However, the authors said, at the end of the study, the proportion of patients with a score of less than 5.5 (a total score of 5.5 or higher was defined as pathologic sleep disturbance) was significantly greater in the 2- and 3-mg groups than in the placebo group (77% and 74% vs. 56%, respectively).

The investigators also found some significant associations when they performed an interaction analysis. "The improvements ... among patients with more severe insomnia were greater in both rotigotine groups than in the placebo group when the patients were stratified by the PSQI total score. Patients with more severe insomnia also showed a greater improvement in insomnia symptoms following treatment with rotigotine."

Most patients taking the study drug experienced some form of adverse event (80% at 2 mg, 86% at 3 mg) – significantly more than those taking placebo (52%). Most of the problems were application-site reactions. Other adverse events that were more common in the active groups included nausea and somnolence.

The study was supported by Otsuka Pharmaceutical.

[email protected]

FROM SLEEP MEDICINE

Vitals

Major finding: Patients with restless legs syndrome who used a rotigotine patch experienced a decrease of about 15 points on the International Restless Legs Syndrome Study Group rating scale, compared with a 12-point decrease among those taking placebo, a significant difference.

Data source: A randomized placebo-controlled study comprising 284 patients.

Disclosures: The study was supported by Otsuka Pharmaceutical.

Tiotropium via Respimat didn’t raise COPD mortality risk

All of the treatments performed similarly in the risk of first COPD exacerbation, the study's secondary endpoint.

Delivering the bronchodilator tiotropium with the Respimat inhaler did not increase the risk of death in chronic obstructive pulmonary disease, compared with the HandiHaler delivery system, according to a large study.

A randomized trial of more than 17,000 patients with COPD found that tiotropium Respimat 2.5 mcg and 5 mcg were both noninferior to tiotropium HandiHaler 18 mcg with respect to mortality, Dr. Robert A. Wise and his colleagues reported in the Sept. 8 issue of the New England Journal of Medicine and presented simultaneously at the European Respiratory Society Annual Congress (N. Engl. J. Med. 2013 [doi: 10.1056/NEJMoa1303342]).

The 5-mcg Respimat dose also was as effective as the HandiHaler at preventing a first exacerbation of COPD, wrote Dr. Wise of Johns Hopkins Medical Center, Baltimore, and his colleagues.

Boehringer Ingelheim, which makes the Respimat inhaler, sponsored the study.

The 5-mcg Respimat dose and the 18-mcg HandiHaler dose have been shown to be pharmacokinetically equivalent. However, "concern about the safety of tiotropium Respimat was expressed when a post hoc pooled analysis of three 1-year trials and one 6-month placebo-controlled trial showed that [the 5-mcg dose] was associated with excess mortality in the planned treatment period (relative risk, 1.33), particularly among patients with known cardiac-rhythm disorders," Dr. Wise and the coauthors said. Subsequent meta-analyses appeared to confirm the finding.

The researchers designed the TIOSPIR (Tiotropium Safety and Performance in Respimat) trial to investigate the possible increased mortality risk in a large number of patients – many with preexisting cardiac disease. TIOSPIR included 1,825 patients with cardiac arrhythmias and 3,152 with ischemic heart disease, coronary artery disease, or heart failure.

The study comprised 17,135 patients with COPD, who were randomized to once-daily tiotropium at 2.5 mcg or 5 mcg delivered by Respimat, or 18 mcg delivered by HandiHaler. The study was designed to continue until at least 1,266 deaths had occurred. The mean follow-up was 2.3 years.

The patients’ average age was 65 years; 71% were male. More than a third (38%) were current smokers. The mean forced expiratory volume in 1 second was 48% of the predicted value.

Cardiac disease was not uncommon: 11% had a history of arrhythmia; 6% a prior heart attack; 2% a prior stroke; and 15% ischemic heart disease or coronary artery disease. Most (62%) were taking a long-acting beta2-agonist, and 68% used inhaled glucocorticoids.

The primary endpoint was the risk of death from any cause. Death occurred in 7.7% of the 2.5-mcg Respimat group, 7.4% of the 5-mcg Respimat group, and 7.7% of the HandiHaler group. There was no significant difference in the risk of death between the HandiHaler and either the 2.5-mcg or 5-mcg Respimat dose (hazard ratios, 1.00 and 0.96, respectively).

Cause of death was judged to be cardiovascular in 2% of cases from each group, and respiratory in about 2.5% of cases in each group.

"In particular, there was no increased risk of death among the 1,221 patients with a history of cardiac arrhythmia in the Respimat 5-mcg group as compared with the HandiHaler group (10.6% and 12.9%, respectively)," the authors noted, resulting in a nonsignificant hazard ratio of 0.81.

The risk of first COPD exacerbation was the secondary endpoint. In that measure, all of the treatments performed similarly. The hazard ratio for Respimat 5 mcg vs. HandiHaler was 0.98; exacerbations occurred in 48% of the Respimat 5-mcg group and 49% of the HandiHaler group, with median time to exacerbation of 756 and 719 days, respectively.

The investigators also performed a spirometry substudy. That showed that Respimat 5 mcg was noninferior to the HandiHaler treatment; the 2.5-mcg dose, however, was not as effective as the HandiHaler.

About a third of patients in each group experienced a serious adverse event. Most were respiratory in nature (about 17% in each group), followed by cardiovascular events (4% in each group).

Dr. Wise reported that he receives consulting fees from the company. Of the 11 coauthors, 6 are Boehringer Ingelheim employees and 4 reported consulting fees, research support, or other financial relationships with the company.

[email protected]

Body

Dr. Darcy Marciniuk

Dr. Darcy D. Marciniuk, FCCP, comments: In this study of more than 17,100 participants, tiotropium delivered via the Respimat device was as effective and safe as when delivered via the HandiHaler. A post hoc analysis and meta-analyses of prior, smaller studies had suggested a possible increased risk of death, but this large and carefully performed study, which included a significant number of participants with preexisting cardiac disease, did not replicate those concerns.
These results demonstrate, once again, the value of large, well-performed clinical research trials designed to directly address important clinical issues. 

Author and Disclosure Information

Publications
Topics
Legacy Keywords
bronchodilator tiotropium, Respimat inhaler, chronic obstructive pulmonary disease, HandiHaler delivery system,
COPD, tiotropium Respimat, tiotropium HandiHaler, Dr. Robert A. Wise, New England Journal of Medicine, European Respiratory Society Annual Congress,



Sections

Title
All of the treatments performed similarly in the risk of first COPD exacerbation, the study's secondary endpoint.

Delivering the bronchodilator tiotropium with the Respimat inhaler did not increase the risk of death in chronic obstructive pulmonary disease, compared with the HandiHaler delivery system, according to a large study.

A randomized trial of more than 17,000 patients with COPD found that tiotropium Respimat 2.5 mcg and 5 mcg were both noninferior to tiotropium HandiHaler 18 mcg with respect to mortality, Dr. Robert A. Wise and his colleagues reported in the Sept. 8 issue of the New England Journal of Medicine and presented simultaneously at the European Respiratory Society Annual Congress (N. Engl. J. Med. 2013 [doi:10.1056/NEJMoa1303342]).

The 5-mcg Respimat dose also was as effective as the HandiHaler at preventing a first exacerbation of COPD, wrote Dr. Wise of Johns Hopkins Medical Center, Baltimore, and his colleagues.

Boehringer Ingelheim, which makes the Respimat inhaler, sponsored the study.


Publications
Topics
Article Type
Display Headline
Tiotropium via Respimat didn’t raise COPD mortality risk
Article Source

FROM THE ERS ANNUAL CONGRESS

PURLs Copyright

Inside the Article

Vitals

Major finding: Patients with chronic obstructive pulmonary disease were at no increased risk of death if they used 2.5 or 5 mcg tiotropium delivered by the Respimat inhaler, compared with 18 mcg tiotropium delivered by HandiHaler (HR, 1.00 and 0.96, respectively).

Data source: The randomized, double-blind study comprised 17,135 patients with COPD.

Disclosures: Boehringer Ingelheim sponsored the study. Dr. Wise reported that he receives consulting fees from the company. Of the 11 coauthors, 6 are Boehringer Ingelheim employees and 4 reported consulting fees, research support, or other financial relationships with the company.

Dual therapy cuts hospitalizations, surgery in IBD

Article Type
Changed
Display Headline
Dual therapy cuts hospitalizations, surgery in IBD

Early dual therapy with infliximab and an immunomodulator significantly decreased the 1-year risk of hospitalization and surgery in patients with inflammatory bowel disease, in a cohort study of almost 20,500 patients.

The improvements in hospitalization and surgery rates became apparent as quickly as 5 months after therapy initiation. By 9 months after dual therapy began, there was an 86% decrease in hospitalization and a 92% decrease in surgery compared with the rates in those who had not taken these medications.

The study shows that aggressive treatment of inflammatory bowel disease can reap robust results, Dr. Neena S. Abraham and colleagues reported in the Aug. 29 issue of Clinical Gastroenterology and Hepatology (doi:10.1016/j.cgh.2013.04.051).

The results probably underestimate the potential benefits of dual therapy, wrote Dr. Abraham of the Mayo Clinic, Scottsdale, Ariz., and associates. Only 11% of patients in the database took one of the studied treatment regimens, and of those, most (85%) got an immunomodulator only.

"Given the paucity [of patients] prescribed dual therapy (8.5%), the dose-response data are even more impressive and suggest that if dual therapy had been initiated earlier, greater benefit may have been observed," the investigators wrote.

The study examined 1-year hospitalization and surgical rates in 20,474 patients with either ulcerative colitis or Crohn’s disease, all drawn from a database covering 176 U.S. Department of Veterans Affairs facilities.

Three treatment protocols were examined: immunomodulator monotherapy, anti–tumor necrosis factor (TNF)-alpha monotherapy with infliximab, and dual therapy with an immunomodulator and infliximab.

Most of the patients (12,432) had ulcerative colitis (UC); the remainder had Crohn’s disease (CD). Most (94%) were male; the mean age was 72 years.

Anti–TNF-alpha monotherapy was used by 8% of treated patients. Among patients receiving infliximab, 63% had evidence of induction therapy preceding a maintenance regimen.

"Despite their clear therapeutic benefit, less than 15% of patients with inflammatory bowel disease receive anti-TNF monotherapy, and 40% of patients with active CD receive anti-TNF agents in combination with immunomodulator agents (thiopurines or methotrexate)," they commented.

Most patients (66%) were taking other drugs, including steroids (14%), nonsteroidal anti-inflammatory agents (39%), aspirin (21%), cyclosporine (0.21%), antimetabolites (0.67%), and antibiotics (29%).

A 50% relative reduction in hospitalization occurred at 7.7 months with dual therapy compared with 9 months with immunomodulator monotherapy and 8 months with anti–TNF-alpha therapy. All of these differences were statistically significant after the model was adjusted for diagnosis (UC or CD), smoking status, race/ethnicity, and other medications that could modify the treatment effect.

By 1 year, there was a 45% relative reduction in hospitalization for immunomodulator therapy and a 78% relative reduction with anti–TNF-alpha monotherapy. But patients taking dual therapy reached a similar reduction of 73% by 9 months.
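The relative reductions quoted throughout follow the standard definition: 1 minus the ratio of event rates in the treated group versus the comparison group. A minimal sketch, using hypothetical rates rather than figures from the study:

```python
def relative_risk_reduction(treated_rate: float, control_rate: float) -> float:
    """Relative risk reduction: 1 - (event rate on treatment / comparison rate)."""
    return 1.0 - treated_rate / control_rate

# Hypothetical example (illustrative numbers only, not from the study):
# 2.2 vs. 10.0 hospitalizations per 100 patient-years -> 78% relative reduction
print(round(relative_risk_reduction(2.2, 10.0) * 100))  # 78
```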

"Results suggested that, if dual therapy had been initiated earlier, perhaps a greater response in the outcome may have been observed," the investigators wrote.

There was a similar beneficial effect on surgeries related to the disorders. In the first year of treatment, there were 276 procedures. By 9 months, there was a 28% risk reduction associated with immunomodulator monotherapy, a 90% reduction associated with anti–TNF-alpha monotherapy, and a 92% reduction associated with dual therapy.

This also suggests that a "greater response may have been observed with earlier initiation of dual therapy," the authors said.

Safety concerns may be one reason these regimens are so seldom prescribed, the authors noted. However, recent long-term safety data suggest that the associated mortality risk is very low.

"[Five-year] outcome data on infliximab use has failed to demonstrate increased risk of mortality, and, although increased risk of infection was observed, the presence of severe disease and use of prednisone or narcotics carried higher risks," they said, referring to a 2012 study (Am. J. Gastroenterol. 2012;107:1409-22).

A 2012 study also suggested that the benefits of dual therapy outweigh the risks of developing a serious infection or cancer (Clin. Gastroenterol. Hepatol. 2012;10:46-51).

Janssen Biotech funded the study. None of the authors reported having any relevant financial disclosures.

[email protected]

Author and Disclosure Information

Publications
Topics
Legacy Keywords
Early dual therapy, infliximab, immunomodulator, inflammatory bowel disease


Publications
Topics
Article Type
Display Headline
Dual therapy cuts hospitalizations, surgery in IBD
Legacy Keywords
Early dual therapy, infliximab, immunomodulator, inflammatory bowel disease
Article Source

FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY

PURLs Copyright

Inside the Article

Vitals

Major finding: For patients with inflammatory bowel disease, dual therapy with infliximab and an immunomodulator was associated with a 50% decrease in hospitalizations by 7.7 months, compared with 8 months with anti–TNF-alpha monotherapy and 9 months with immunomodulator monotherapy.

Data source: A study of 20,474 patients with ulcerative colitis or Crohn’s disease.

Disclosures: Janssen Biotech funded the study. None of the authors reported having any relevant financial disclosures.

Subcutaneous ICD effective at VT/VF conversion, with few inappropriate shocks

Article Type
Changed
Display Headline
Subcutaneous ICD effective at VT/VF conversion, with few inappropriate shocks

A newly approved, totally subcutaneous implantable cardioverter-defibrillator was nearly 100% effective in detecting and treating ventricular arrhythmias, without the need for transvenous leads.

This exceeded the prespecified effectiveness goal of 88% in an 11-month safety and efficacy analysis of 314 patients who were eligible for an ICD and did not require pacing, Dr. Raul Weiss and colleagues reported in the Aug. 26 issue of Circulation.

The device had a mean time to therapy of about 15 seconds, with some intervals of 18 seconds or more, noted Dr. Weiss of Ohio State University, Columbus, and his coinvestigators. This delay had no clinically relevant consequences; in fact, they said, "a slightly longer time to therapy has the benefit of allowing spontaneous self-termination of many [tachycardia/fibrillation episodes]."

The subcutaneous implantable cardioverter-defibrillator (S-ICD) by Cameron Health/Boston Scientific was approved by the Food and Drug Administration in September 2012. It is the first ICD that can be placed without transvenous leads, which are associated with numerous complications, mechanical failure, and complicated extraction procedures. The subcutaneous pulse generator and electrode are placed extrathoracically, without the need for fluoroscopic guidance.

The company contends that this design spares the risks associated with an intravascular location or mechanical stresses caused by cardiac contractions. The authors noted, however, that "avoiding the intravascular space has its own limitations because the first-generation S-ICD lacks the ability to provide antitachycardia pacing, advanced diagnostics, or radiofrequency interrogation with remote monitoring."

As such, they said, the device should be considered as an add-on to other "tools available to combat sudden death."

The multicenter study examined the device’s efficacy and safety for the treatment of ventricular tachycardia/fibrillation (VT/VF) in a cohort of 314 patients. Of these, 276 (88%) had more than 180 days of follow-up; there were three deaths.

The patients had a mean of age of 52 years, and most (74%) were male. Most implantations (79%) were for primary prevention indications. The patients’ mean baseline ejection fraction was 36%. Congestive heart failure was the most common pathology (61%); others included hypertension (58%), prior myocardial infarction (41%), and atrial fibrillation (15%). Thirteen percent had previously had a transvenous ICD that was extracted because of infection, vascular injury/clot, or device/lead failure.

By 180 days, 99% of patients were free of device-related complications. There were no lead failures, cases of endocarditis or bacteremia, cardiac tamponade or perforation, pneumothorax or hemothorax, or subclavian vein occlusion (Circulation 2013;128:944-53 [doi:10.1161/CIRCULATIONAHA.113.0]).

Eight patients died during the study. Five were noncardiac, nonsudden, and not related to device implantation. Two patients died unwitnessed. Of these, interrogation showed successful treatment of a single arrhythmic interval. The other device was not interrogated because the patient wasn’t immediately discovered, but this patient had been diagnosed with atypical pneumonia and hypoxia before he died. The last death occurred outside the United States and there was no additional information available.

In 304 patients who completed the full testing protocol, there was a 100% conversion rate of spontaneous VT/VF episodes. Acute conversion of VF was successful after one or two shocks in 265 of these patients. Seventeen patients were unevaluable because of clinical issues that precluded testing; when these 17 patients were imputed as clinical failures, the efficacy rate dropped to 95%.

Among 21 patients, there were 119 spontaneous VT/VF episodes, 38 of which were discrete and the rest of which occurred during VT/VF storms. Among the 38 discrete episodes, the devices delivered 43 appropriate shocks, all of which were successful in resolving the arrhythmias. Most (35) converted after the first shock; two more required more than one shock. One episode of monomorphic VT did not convert with shocking but did spontaneously terminate while the device was recharging.

Inappropriate shocks occurred in 25 patients, all caused by oversensing. Of these, 22 had oversensed T waves or broad QRS complexes and three experienced oversensing due to external noise while working with electrical equipment. Most of the patients (32) were treated noninvasively with system reprogramming or medication changes.

A programming change resulted in a 56% relative reduction of inappropriate shocks caused by oversensing and a 70% relative reduction in shocks caused by supraventricular tachycardia.

Nine patients had an invasive procedure to deal with the problem: two of these had the device removed, two had it turned off, one had the electrode repositioned, and one pulse generator was repositioned. There was one Maze surgery, one ablation, and one electrophysiology study without ablation.

Dr. Weiss and his colleagues noted that the follow-up time in the study was relatively short – not long enough to account for the possibility of progression of conduction disease, which could happen in unknown degrees over time, depending on the patient and the substrate.

 

 

The early treatment also was associated with a significantly lower proportion of any intracranial hemorrhage (14.8% vs. 17.6%) and a nonsignificantly lower proportion of symptomatic intracranial hemorrhage (3.7% vs. 4.5%).

Early treatment was not associated with mortality – a finding contrary to the authors’ prior study. "This may be because of the limited number of patients treated ultra-early in the current cohort," they said.

Dr. Weiss reported no financial disclosures, but 10 of the 19 coauthors did report multiple relationships with pharmaceutical companies. Dr. Weiss and eight of the 11 coauthors are consultants for Cameron Health. Three have received research grants from the company. Dr. Weiss and three of the coauthors are consultants for and have received research grants from Boston Scientific or Cameron Health.

[email protected]

On Twitter @Alz_Gal


A new, stripped-down model of the implantable cardioverter-defibrillator seems safe and effective in this relatively small study, but its lean feature set sends large-scale, long-term data collection back in time, Dr. Leslie A. Saxon said in an accompanying editorial (Circulation 2013;128:938-40).



The subcutaneous implantable device harkens back to the first generation of such equipment: it lacks the hardware and software upgrades that drove the rapid evolution of later models into multifunctional devices whose performance data can be acquired instantaneously.

"Without the ability to remotely collect episodes in all subcutaneous device recipients, it is difficult to know how the learning around spontaneous VT/VF episodes and treatment will occur, other than the old-fashioned way, through case reports and post-approval registries," she wrote. "This is a significant limitation from a clinical learning and safety advisory perspective."

The study involved 314 patients, 21 of whom experienced a total of 119 spontaneous arrhythmic episodes. The success rates were high – 92% for ventricular tachycardia and 97% for ventricular fibrillation. "Although these data are reassuring and comparable to transvenous ICD success rates, the overall number of treated episodes is incredibly small in comparison with the data on transvenous defibrillator therapies delivered outside the hospital, over the life of the device, that are available for analysis in tens of thousands of patients," Dr. Saxon noted.

Patient selection will be key in appropriately deploying the device, she said.

"Selecting the appropriate candidate for the subcutaneous rather than transvenous ICD requires an acknowledgment of the strengths and limitation of each device and an educated guess, as well, regarding the clinical course of the patient, over the battery life of the ICD," she said. "Patients with advanced symptom class heart failure or very depressed ventricular function may face additional risk at implant because of the need for at least two ventricular fibrillation inductions and the longer times to defibrillation associated with the subcutaneous device. There remain a significant percentage of patients with clinical profiles favorable for a subcutaneous device. These predominately include those with prohibitive vascular access issues and those at heightened risk for major systemic infection with an indwelling chronic vascular lead."

Dr. Saxon is chief of the division of cardiovascular medicine at the University of Southern California. She is on the medical advisory boards of Boston Scientific and Medtronic and was a principal investigator on the device study.


Title
ICD Devices – Back to the Future

A newly approved, totally subcutaneous implantable cardioverter-defibrillator was nearly 100% effective in detecting and treating ventricular arrhythmias, without the need for transvenous leads.

This exceeded the prespecified effectiveness goal of 88% in an 11-month safety and efficacy analysis of 314 patients who were eligible for an ICD and did not require pacing, Dr. Raul Weiss and colleagues reported in the Aug. 26 issue of Circulation.

The device had a mean time to therapy of about 15 seconds, with some intervals as long as 18 seconds or more, noted Dr. Weiss of Ohio State University, Columbus, and his coinvestigators. There were no clinically relevant consequences of this delay; in fact, they said, "a slightly longer time to therapy has the benefit of allowing spontaneous self-termination of many [tachycardia/fibrillation episodes]."

The subcutaneous implantable cardioverter-defibrillator (S-ICD) by Cameron Health/Boston Scientific was approved by the Food and Drug Administration in September 2012. It is the first ICD that can be placed without transvenous leads, which are associated with numerous complications, mechanical failure, and complicated extraction procedures. The subcutaneous pulse generator and electrode are placed extrathoracically, without the need for fluoroscopic guidance.

The company contends that this design spares the risks associated with an intravascular location or mechanical stresses caused by cardiac contractions. The authors noted, however, that "avoiding the intravascular space has its own limitations because the first-generation S-ICD lacks the ability to provide antitachycardia pacing, advanced diagnostics, or radiofrequency interrogation with remote monitoring."

As such, they said, the device should be considered as an add-on to other "tools available to combat sudden death."

The multicenter study examined the device’s efficacy and safety for the treatment of ventricular tachycardia/fibrillation (VT/VF) in a cohort of 314 patients. Of these, 276 (88%) had more than 180 days of follow-up; there were three deaths.

The patients had a mean age of 52 years, and most (74%) were male. Most implantations (79%) were for primary prevention indications. The patients’ mean baseline ejection fraction was 36%. Congestive heart failure was the most common pathology (61%); others included hypertension (58%), prior myocardial infarction (41%), and atrial fibrillation (15%). Thirteen percent had previously had a transvenous ICD that was extracted because of infection, vascular injury/clot, or device/lead failure.

At 180 days, 99% of patients were free of device-related complications. There were no lead failures, cases of endocarditis or bacteremia, cardiac tamponade or perforation, pneumothorax or hemothorax, or subclavian vein occlusion (Circulation 2013;128:944-53 [doi:10.1161/CIRCULATIONAHA.113.0]).

Eight patients died during the study. Five of the deaths were noncardiac, nonsudden, and unrelated to device implantation. Two patients died unwitnessed; in one case, device interrogation showed successful treatment of a single arrhythmic episode, while the other device was not interrogated because the patient was not found immediately, although he had been diagnosed with atypical pneumonia and hypoxia before death. The eighth death occurred outside the United States, and no additional information was available.

In the 304 patients who completed the full testing protocol, the acute conversion rate for induced VF was 100%; conversion was successful after one or two shocks in 265 of these patients. Seventeen patients could not be evaluated because of clinical issues that precluded testing; when those 17 were imputed as clinical failures, the efficacy rate dropped to 95%.
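The drop from 100% to 95% is just the imputation arithmetic; a quick sketch of the check, assuming (as the imputation implies) that the 17 unevaluable patients are counted in addition to the 304 who completed testing:

```python
# Acute conversion testing: 304 evaluable patients, all of whom converted.
evaluable = 304
converted = 304  # 100% conversion among those tested

# Worst-case sensitivity analysis: impute the 17 unevaluable patients
# as clinical failures and recompute the rate over the combined group.
unevaluable = 17
total = evaluable + unevaluable  # 321 patients

worst_case_rate = converted / total * 100
print(f"{worst_case_rate:.1f}%")  # ~94.7%, reported as 95%
```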

Among 21 patients, there were 119 spontaneous VT/VF episodes; 38 were discrete and the rest occurred during VT/VF storms. For the 38 discrete episodes, the devices delivered 43 appropriate shocks, and every episode resolved: 35 converted after the first shock, 2 required more than one shock, and 1 episode of monomorphic VT did not convert with shocks but terminated spontaneously while the device was recharging.

Inappropriate shocks occurred in 41 patients (13%). Oversensing caused the shocks in 25 of them – 22 oversensed T waves or wide QRS complexes, and 3 oversensed external noise while working with electrical equipment – and supraventricular tachycardia accounted for the remainder. Most of the affected patients (32) were managed noninvasively with system reprogramming or medication changes.

A programming change produced a 56% relative reduction in inappropriate shocks caused by oversensing and a 70% relative reduction in those caused by supraventricular tachycardia.

Nine patients had an invasive procedure to address the problem: two had the device removed, two had it turned off, one had the electrode repositioned, and one had the pulse generator repositioned. One patient underwent Maze surgery, one had an ablation, and one had an electrophysiology study without ablation.

Dr. Weiss and his colleagues noted that the follow-up time in the study was relatively short – not long enough to capture possible progression of conduction disease, which could occur to an unknown degree over time, depending on the patient and the underlying substrate.

Dr. Weiss and eight of the 11 coauthors are consultants for Cameron Health. Three have received research grants from the company. Dr. Weiss and three of the coauthors are consultants for and have received research grants from Boston Scientific or Cameron Health.

[email protected]

On Twitter @Alz_Gal


Display Headline
Subcutaneous ICD effective at VT/VF conversion, with few inappropriate shocks
Legacy Keywords
subcutaneous implantable cardioverter-defibrillator, ventricular arrhythmias, transvenous lead, Dr. Raul Weiss,
Article Source

FROM CIRCULATION


Vitals

Major finding: A totally subcutaneous ICD device corrected 92% of ventricular tachycardia and 97% of ventricular fibrillation episodes, with a 13% rate of inappropriate shocks.

Data source: The prospective study involved 314 patients.

Disclosures: Dr. Weiss and eight of the 11 coauthors are consultants for Cameron Health. Three have received research grants from the company. Dr. Weiss and three of the coauthors are consultants for and have received research grants from Boston Scientific or Cameron Health.