Brain Exercises Don't Improve General Cognitive Function

Major Finding: Improvements seen in brain training tasks translated poorly to performance on benchmarking tests that used similar cognitive functions (effect sizes, 0.01–0.22).

Data Source: A 6-week trial of “brain training” exercises in 11,430 participants.

Disclosures: The authors reported having no financial conflicts of interest.

“Brain training” does not improve general cognitive function, according to a 6-week trial of more than 11,000 participants.

The study results “provide no evidence for any generalized improvements in cognitive function following brain training in a large sample of healthy adults,” Adrian M. Owen and his colleagues reported.

The participants were divided into three groups: experimental group 1 (4,678 subjects), which did six tasks emphasizing reasoning, planning, and problem solving; experimental group 2 (4,014 subjects), which practiced six tasks focusing on short-term memory, attention, visuospatial processing, and mathematics; and a control group (2,738 subjects), which answered various research questions using the Internet.

The tasks given to group 2 were considered to be most like those of commercially available “brain training” programs, said Mr. Owen of the Medical Research Council Cognition and Brain Sciences Unit, Cambridge, England, and his colleagues.

The participants were assessed before and after the intervention using benchmarking tests that measured reasoning, verbal short-term memory, spatial working memory, and paired-associates learning. Participants completed an average of 24 training sessions over the 6-week period (range, 1–188). The tasks were performed for a minimum of 10 minutes a day, three times a week.

All three groups improved on the tasks they had been assigned to practice during the trial (effect sizes: group 1, 0.73–1.63; group 2, 0.72–0.97; controls, 0.33). However, postintervention improvements on the benchmarking tests were much smaller (effect sizes: 0.01–0.22 for all groups). The control group improved slightly more than the experimental groups on two measures.

The groups were similar in age (average, 39–40 years) and gender (each group had 4–5 times as many female as male participants). No relationship was seen between postintervention benchmarking test scores and either the number of training sessions performed or the age of participants.

Although participants improved at their assigned tasks, “training-related improvements may not even generalize to other tasks that use similar cognitive functions,” the researchers said (Nature 2010 Apr. 20 [doi:10.1038/nature09042]).

“Six weeks of regular computerized brain training confers no greater benefit than simply answering general knowledge questions using the Internet,” they concluded.

My Take

Credible Study on Complex Question

The notion of exercising the mind to reduce its deterioration is popular in the world of Alzheimer's disease: Do more crossword puzzles and you will slow the progression of dementia. But is it true? Epidemiological studies have shown mixed results, possibly reflecting presymptomatic-stage disease, confounding medical issues, and medications influencing outcomes.

Most people “exercise” their brain during their daily activities whether they conceptualize it this way or not.

Cognitive tasks rely on the integration of multiple brain regions that are geographically distant and serve different functions. Because a related but nonidentical task might draw on the same network, it is conceivable that such tasks could be performed with greater facility and dexterity.

The background of the question is complex, but given the effort required to achieve even a “simple” practice effect, studies such as this one that fail to show any major translational skill differences after a mere 6 weeks of “brain exercises” are certainly credible.

RICHARD J. CASELLI, M.D., is a professor of neurology at the Mayo Clinic Arizona, Scottsdale. He has no financial conflicts of interest related to this subject.

Brain Exercises Fail to Increase Cognitive Power

Major Finding: Improvements seen in “brain training” tasks translated poorly to performance on benchmarking tests that used similar cognitive functions (effect sizes, 0.01–0.22).

Data Source: A 6-week trial of brain training exercises in 11,430 participants.

Disclosures: The authors reported having no financial conflicts of interest.

“Brain training” does not improve general cognitive function, according to a 6-week trial of more than 11,000 participants.

The study results “provide no evidence for any generalized improvements in cognitive function following brain training in a large sample of healthy adults,” Adrian M. Owen and his colleagues reported.

The participants were divided into three groups: experimental group 1 (4,678 subjects), which did six tasks emphasizing reasoning, planning, and problem solving; experimental group 2 (4,014 subjects), which practiced six tasks focusing on short-term memory, attention, visuospatial processing, and mathematics; and a control group (2,738 subjects), which answered various research questions using the Internet. The groups were matched in size initially, but more of the control group members dropped out before the final assessment. The participants were recruited from among viewers of a British science television show (Nature 2010 Apr. 20 [doi:10.1038/nature09042]).

The tasks given to group 2 were considered to be most like those of commercially available “brain training” programs, said Mr. Owen of the Medical Research Council Cognition and Brain Sciences Unit, Cambridge, England, and his colleagues.

The participants were assessed before and after the intervention using benchmarking tests that measured reasoning, verbal short-term memory, spatial working memory, and paired-associates learning. These validated cognitive assessment tools are available at www.cambridgebrainsciences.com.

Participants completed an average of 24 training sessions over the 6-week period (range, 1–188). The tasks were performed for a minimum of 10 minutes a day, three times a week. All three groups improved on the tasks they had been assigned to practice during the trial (effect sizes: group 1, 0.73–1.63; group 2, 0.72–0.97; controls, 0.33). However, postintervention improvements on the benchmarking tests were much smaller (effect sizes, 0.01–0.22 for all groups). The control group improved slightly more than the experimental groups on two measures.

The groups were similar in age (average, 39–40 years) and gender (each group had 4–5 times as many female as male participants). No relationship was seen between postintervention benchmarking test scores and either the number of training sessions performed or the age of participants. The scores on two tests reflected small gender differences.

Although participants improved at their assigned tasks, “training-related improvements may not even generalize to other tasks that use similar cognitive functions,” the researchers said.

My Take

Credible Study Addresses a Complex Question

The notion of exercising the mind to reduce its deterioration is popular in the world of Alzheimer's disease: Do crossword puzzles and you will slow the progression of dementia. But is it true? Epidemiological studies have shown mixed results, possibly reflecting presymptomatic-stage disease, confounding medical issues, and medications influencing outcomes.

Most people “exercise” their brain during their daily activities whether or not they conceptualize it in this way. The term “brain training” implies some kind of special activity that the term “practice” lacks, but acquiring any new skill requires enhanced attention, and with increasing task familiarity comes greater automaticity and increasing dexterity. Functional brain imaging studies show activation of prefrontal cortices during the early attentional practice stage that diminishes and vanishes as any skill becomes automatic (Proc. Natl. Acad. Sci. USA 1998;95:853–60).

Cognitive tasks rely on the integration of multiple brain regions that are geographically distant and serve different functions. Because a related but nonidentical task might draw on the same network, it is conceivable that such tasks could be performed with greater facility and dexterity.

Given the effort required to achieve even a “simple” practice effect, it is certainly credible that studies such as that of Mr. Owen and his colleagues fail to show any major translational skill differences after a mere 6 weeks of “brain exercises,” which sound far less grueling than the practice routines of professional musicians and athletes.

RICHARD J. CASELLI, M.D., is a professor of neurology at the Mayo Clinic Arizona, Scottsdale. He has no financial conflicts of interest related to this subject.

Colon Cancer Risk Rises With Low Vitamin D

Major Finding: People with more than 75 nmol/L of circulating 25-hydroxy-vitamin D (25-[OH]D) had a 40% lower risk of developing colorectal cancer over a mean 3.8 years of follow-up, compared with those with less than 25 nmol/L.

Data Source: A nested case-control investigation compared 1,248 participants in the EPIC study who developed first-incident colorectal cancer after enrollment and 1,248 healthy controls matched for age, sex, study center, menopausal status, and other characteristics.

Disclosures: The study was supported by a grant from the World Cancer Research Fund. None of the investigators reported having conflicts of interest.

Serum levels of vitamin D were inversely associated with colorectal cancer risk, according to findings from one of the first large investigations conducted in western European populations.

Participants with more than 75 nmol/L of circulating 25-hydroxy-vitamin D (25-[OH]D) had a 40% lower risk of developing colorectal cancer over a mean 3.8 years of follow-up, compared with those with less than 25 nmol/L, said Mazda Jenab, Ph.D., of the International Agency for Research on Cancer, Lyon, France, and colleagues.

“Higher circulating 25-(OH)D concentration was associated with lower colorectal risk in a dose-response manner,” they reported (BMJ 2010 Jan. 10 [doi:10.1136/bmj.b5500]).

The investigation “suggests that raising very low levels of 25-(OH)D to the mid-range may protect against colon cancer.”

The subjects were enrolled during 1992-1998 in the EPIC (European Prospective Investigation into Cancer and Nutrition) study, which includes 520,000 participants at 23 centers in 10 western European nations.

The nested case-control study looked at 1,248 participants in the EPIC study who developed first-incident colorectal cancer (CRC) after enrollment and 1,248 healthy controls matched for age, sex, study center, menopausal status, and other characteristics.

Dr. Douglas K. Rex commented in an interview, “This is the largest study to address this issue, and the results were consistent across a range of European countries.

“The authors called for prospective, randomized, controlled trials of vitamin D supplementation, but the results seem strong enough that patients interested in lowering their risk of CRC can be informed that higher blood levels of 25-(OH)D are associated with risk reduction.” Dr. Rex is distinguished professor of medicine at Indiana University, Indianapolis, and director of endoscopy at Indiana University Hospital.

At enrollment, the researchers measured prediagnostic 25-(OH)D levels using blood samples analyzed by enzyme immunoassay, and gauged dietary intake of vitamin D and calcium using questionnaires.

Using blood sample data in addition to dietary intake data accounted for endogenous vitamin D production from sun exposure.

Multivariate analysis controlled for possible confounders including body mass index, physical activity, smoking, education, and intake of fruits, vegetables, meats, and alcohol.

In addition, the investigation results were adjusted for season of blood collection.

The participants in this observational study had a mean age of 58 years, and about half were men. They were divided into five groups based on their circulating vitamin D levels: less than 25 nmol/L, 25-50 nmol/L, 50-75 nmol/L (reference group), 75-100 nmol/L, and more than 100 nmol/L.

The incidence ratio for colorectal cancer was 1.32 for those in the lowest quintile, compared with 0.77 for those in the highest, a statistically significant difference. Those in the second-lowest quintile had an incidence ratio of 1.29 for colorectal cancer.

The association between serum 25-(OH)D and disease was stronger for colon cancer (1.90 vs. 0.71 for the lowest and highest quintiles, respectively) than for rectal cancer (0.77 vs. 0.82).

When the analysis was restricted to dietary intake, however, the researchers found that “higher consumption of dietary calcium, but not dietary vitamin D, was found to be associated with a reduced risk.” Participants with the lowest levels of dietary calcium had an incidence rate of 1.33, versus 0.95 in those with the highest dietary intake of calcium.

In addition, alcohol intake appeared to be a risk factor, as “the highest colorectal cancer risk was seen in those with the lowest circulating levels of 25-(OH)D and the highest level of alcohol consumption,” with an incidence rate of 1.46, compared with 0.82 in the group with the highest level of vitamin D and lowest alcohol intake.

One limitation of the study was the short follow-up time. “However, exclusion of cases with less than 2 years of follow-up did not alter any of the findings,” the researchers said.

Randomized trials are needed to “determine whether vitamin D has a causal role in colorectal cancer prevention or whether it is a marker of other events” before recommending vitamin D supplementation for this purpose, the authors noted.

“The potential cancer risk benefits of higher vitamin D levels should be balanced with caution for the toxic potential” and the risk of serious adverse events, they added.

Consensus Statement on Biologics Updated

New data have prompted an update of a widely cited consensus statement on biologic agents for rheumatoid arthritis, psoriatic arthritis, ankylosing spondylitis, and other rheumatic diseases, according to Dr. Daniel E. Furst and the other members of the international expert panel that revised the document.

The addition of biologics to the treatment options for rheumatic diseases has greatly improved outcomes, and research continues on the best use of these agents. The panel, made up of rheumatologists from universities in Europe, North America, South America, Australia, and Asia, cited new findings encompassing tumor necrosis factor (TNF)–alpha blocking agents, abatacept, rituximab, tocilizumab, and interleukin 1 (IL1) receptor antagonists.

TNF-Alpha Blockers

The TNF-alpha blockers infliximab, adalimumab, and etanercept are most often used for rheumatoid arthritis in combination with other disease-modifying antirheumatic drugs (DMARDs) such as methotrexate. New evidence has indicated that combining methotrexate with a TNF-alpha inhibitor is more effective for RA than is a combination of DMARDs without a TNF-alpha blocker, according to the new consensus statement (Ann. Rheum. Dis. 2010;69[suppl 1]:i2–29).

The dose of TNF-alpha blockers can be lowered during times of RA remission or low disease activity without loss of effectiveness, according to the update. Also, one TNF-alpha blocker may be substituted for another that has stopped working for RA, according to one randomized, controlled trial and several retrospective, observational studies cited by Dr. Furst, the Carl M. Pearson professor of medicine at the David Geffen School of Medicine at the University of California, Los Angeles, and his coauthors.

In patients with psoriatic arthritis, all of the approved TNF-alpha blockers have been shown to be equally effective. Golimumab has been approved for this indication since the last consensus statement on biologic agents was released.

In ankylosing spondylitis, regular infliximab therapy was more effective than “on demand” therapy. Adding methotrexate to infliximab did not increase effectiveness of treatment, the update noted.

Cardiovascular events decreased in patients taking TNF-alpha blockers, according to the results of several new studies. Although previous studies found a link between TNF-alpha blockers and a higher risk of solid tumors, subsequent analyses of the same data found no link.

Caution and repeat testing are advised when these agents are used in populations with a high prevalence of tuberculosis. Some evidence suggests that TNF-alpha antagonist therapy can be reinitiated following TB treatment.

TNF-alpha blockers have been associated with the development or exacerbation of psoriasis, but prescribing a different TNF-alpha blocker may resolve the problem, according to the update.

Abatacept

New evidence suggests that in methotrexate-naive patients with early RA, initiating treatment with methotrexate plus abatacept is more effective than using methotrexate plus placebo, Dr. Furst and his colleagues wrote.

Autoimmune disease incidence was not increased with abatacept, according to the clinical trial database for the drug.

Recent evidence supports earlier findings that abatacept use decreased the response to influenza, pneumococcal, and tetanus vaccines; the previous recommendation that live vaccines not be used within 3 months of abatacept treatment remains valid.

Rituximab

Clinical trials have shown that rituximab can slow radiographic progression of rheumatoid arthritis for up to 2 years. Also, after one or more TNF-alpha blockers have been ineffective, rituximab has been shown to be more effective than another TNF-alpha inhibitor, according to the update on biologic agents.

Rituximab is contraindicated in patients with hepatitis B infection, as fatal HBV reactivation has been reported with its use in non-Hodgkin's lymphoma patients. Risk of other serious infections did not increase with repeated courses of the drug, and did not increase in patients who received another biologic after rituximab.

Like abatacept, rituximab decreased the immune response to pneumococcal vaccine, but unlike abatacept, it did not decrease response to tetanus vaccine. However, live vaccines should be given before rituximab, the update authors said.

Tocilizumab

Recent studies showed that tocilizumab, used alone or in combination with methotrexate for rheumatoid arthritis in patients with an unsatisfactory response to DMARDs or TNF-alpha blockers, did not increase rates of cardiovascular events or cerebrovascular accidents.

However, the update said that tocilizumab has been linked with cases of peritonitis, lower GI perforation, fistulae, and intra-abdominal abscess. Hepatic failure and liver damage have not been reported, but liver function should be monitored because of increased bilirubin levels with this drug.

IL1 Blockers

Anakinra is the only IL1 blocker approved for the treatment of RA in the United States. Rilonacept has been approved for cryopyrin-associated periodic syndromes, but is clinically effective in only a few patients with the autoinflammatory syndrome.

Anakinra did not interfere with the effectiveness of tetanus vaccine, according to the findings of one controlled trial.

The authors said they had no conflicts of interest pertaining to the update.

My Take

Recommendations Are Evolving

This consensus statement remains an evolving document, as the state of research and the appearance of new drugs require constant updating. It is a solid, data-based document, supplemented by findings from reports of worldwide experience with these agents and by the expertise of about 150 recognized experts.

One important aspect of the statement is that the level of evidence for each recommendation is included, allowing readers to judge the document in context. The recommendations can also serve as a basis for local statements in the European Union as well as in Central and South American countries.

The statement also offers a resource for off-label drug use, because it contains extensive appendices of documented off-label use of biologic agents.

DANIEL E. FURST, M.D., the Carl M. Pearson professor of medicine at the David Geffen School of Medicine, University of California, Los Angeles, is the lead author on the updated consensus statement on biologic agents for the treatment of rheumatic diseases, 2009.

Deformational Plagiocephaly Not Tied to Frequent OM

Children with deformational plagiocephaly do not have significantly higher rates of otitis media, compared with children in the general population, according to a recent study.

Deformational plagiocephaly previously has been reported to be associated with otitis media (OM), but this study did not find a significant association. Children in the study with more severe deformity had a higher rate of ear infection, compared with those with less severe deformity, but this trend also was not significant, wrote Adam Purzycki and his colleagues at the Wake Forest University Medical Center's North Carolina Institute for Cleft and Craniofacial Deformities, in Winston-Salem.

The retrospective study included 1,112 children with deformational plagiocephaly who presented between February 2004 and June 2006. Age at presentation was 3–12 months (mean 5.6 months); 723 were boys and 389 were girls. Of this group, 559 (50.3%) were reported by their parents as having had at least one ear infection, Mr. Purzycki and his coworkers said.

In the patients with deformational plagiocephaly, the incidence of OM was not higher than that reported by the Centers for Disease Control and Prevention for children in general. The severity of deformity showed a nonsignificant correlation with the number of ear infections: Of the 793 patients with the milder plagiocephaly severity levels I-III, 387 (48.8%) had had at least one ear infection. Of the remaining 319 patients with higher severity levels IV-V, 172 (53.9%) had been diagnosed with at least one ear infection. The number of ear infections was determined by self-reports from patients' guardians (J. Craniofac. Surg. 2009;20:1407-11).
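
The severity comparison above can be sanity-checked with a simple chi-square test on the reported counts (387 of 793 milder cases vs. 172 of 319 more severe cases with at least one reported infection). The article does not say which test the authors used, so the Python sketch below is only an illustrative check, not a reproduction of their analysis.

```python
# Illustrative check of the severity vs. ear-infection comparison reported above.
# Counts come from the article; the authors' actual statistical method is not stated.
from scipy.stats import chi2_contingency

# Rows: severity levels I-III, severity levels IV-V
# Columns: at least one reported ear infection, none reported
table = [[387, 793 - 387],
         [172, 319 - 172]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
# p comes out well above 0.05, consistent with the nonsignificant trend described above.
```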

A subset of 124 patients was examined by tympanometry to diagnose clinical or subclinical OM or fluid collection resulting from abnormal drainage, which would suggest eustachian tube dysfunction. Of these, 121 had readings indicative of OM. “The significantly high percentage of tympanogram readings that pointed to otitis media, whether clinical or subclinical, suggests an overall malfunction of the middle ear drainage function of the eustachian tube in these children,” they wrote.

No conflicts of interest or study funding were reported.

Suicide History Can Reveal Bipolar Disorder

Patients who make a serious suicide attempt and have a family history of suicide are more likely to have bipolar disorder, compared with suicide attempters without those two characteristics, according to a small retrospective study.

Because bipolar disorder is often overlooked or misdiagnosed, Dr. Sébastien Guillaume, of the department of psychological medicine and psychiatry at Montpellier University Hospital, France, and his colleagues sought to identify “factors that could alert a clinician to the possibility of BPD during an assessment of a depressed patient with a history of suicidal behavior.” They found that “a serious [suicide attempt], a familial history of completed suicide in first-degree relatives, and a higher level of affective intensity are strongly associated with a diagnosis of BPD.”

Better clinical detection of BPD, therefore, “would increase the possibility of patients receiving adequate treatment and, as such, is likely to be successful in preventing recurrence and completed suicides” (J. Affect. Disord. 2009 July 15 [doi:10.1016/j.jad.2009.06.006]).

The study included 211 patients who were hospitalized at Montpellier University Hospital after a suicide attempt. The patients were aged 18-75 years and had recurrent major depressive disorder (RMDD) or bipolar disorder (BPD). Of the 135 RMDD patients, 29% had made a serious suicide attempt, as determined by scores on the Risk Rescue Rating Scale and Suicidal Intent Scale, compared with 45% of the 76 BPD patients (odds ratio 1.99).

Family history of completed suicide in first-degree relatives was present in 12% of the RMDD patients and in 29% of the BPD patients (OR 3.03).
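
Both odds ratios can be reproduced, at least approximately, from the group sizes and percentages given above. In the Python sketch below, the cell counts are reconstructed by rounding the reported percentages; this is an assumption, since the raw counts are not given in this summary.

```python
# Illustrative reconstruction of the odds ratios reported above.
# Cell counts are rounded from the reported percentages (29% of 135, 45% of 76, etc.),
# so they are approximations, not the authors' exact figures.

def odds_ratio(bpd_cases, bpd_total, rmdd_cases, rmdd_total):
    """Odds of the characteristic among BPD patients divided by the odds among RMDD patients."""
    bpd_odds = bpd_cases / (bpd_total - bpd_cases)
    rmdd_odds = rmdd_cases / (rmdd_total - rmdd_cases)
    return bpd_odds / rmdd_odds

# Serious suicide attempt: 45% of 76 BPD patients vs. 29% of 135 RMDD patients
print(round(odds_ratio(34, 76, 39, 135), 2))  # ~1.99

# Family history of completed suicide: 29% of 76 vs. 12% of 135
print(round(odds_ratio(22, 76, 16, 135), 2))  # ~3.03
```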

In addition, suicide attempters with both a history of serious attempts and a family suicide history had a higher risk of being diagnosed with BPD, compared with patients with either of the characteristics alone, Dr. Guillaume and his colleagues said.

Patients were assessed using the Mini International Neuropsychiatric Interview administered by trained psychiatrists, medical records, information from relatives, and self-administered questionnaires, including the Barratt Impulsivity Scale, the Buss-Durkee Hostility Inventory, the Beck Hopelessness Scale, the Tridimensional Personality Questionnaire, and the Affect Intensity Measure. Assessments were done when patients were in remission from the depressive state. Univariate and multivariate analyses were performed to determine the clinical characteristics most closely linked with a diagnosis of BPD in suicide attempters.

The BPD and RMDD groups had similar education levels, age range, and age of onset of their mood disorder; however, more BPD patients were male (37% vs. 20%).

In addition to more serious suicide attempts and greater likelihood of family suicide history, the BPD patients were more likely to have a high score on the Affect Intensity Measure (43% vs. 27%; OR 2.25) and have a high novelty-seeking score (36% vs. 26%; OR 2.28). “The potential link between emotional reactivity and… increased suicide risk” warrants further study, the authors said.

Suicide attempters with BPD had more comorbid substance use (37% vs. 21%; OR 2.13), but fewer eating disorders (14% vs. 28%; OR 0.43), compared with those with RMDD, the investigators said.

The study's limitations include the small number of BPD patients and its retrospective design. The research was supported by Montpellier University Hospital and the Agence Nationale de la Recherche, but neither institution had a role in the design, analysis, interpretation, writing, or publication of the data. The authors had no conflicts of interest.

Use of Meal Replacement Products Increases Weight Loss

PHILADELPHIA — For patients trying to lose weight, meal replacement products boost the odds of success, according to Dr. Robert F. Kushner, a professor in the department of medicine at Northwestern University, Chicago.

In a recent study, the most important predictors of successful weight reduction were found to be number of physician counseling sessions attended, use of meal replacement products, and minutes of weekly activity (Obesity 2009;17:713–22).

When used to replace one or two meals per day, meal replacement products—including bars, liquid shakes, and frozen dinners—have been shown to increase weight loss (Diabetes Care 2007;30:1374–83). “If you don't use [meal replacements], I would encourage you to start recommending [them] because it's evidence-based outcomes. It works,” said Dr. Kushner, president of the Obesity Society and author of two books on weight loss.

These products help patients cut caloric intake, and the key to managing obesity is “calories, calories, calories,” he said at the annual meeting of the American College of Physicians.

Any diet that restricts calories results in the same average amount of weight loss, regardless of the ratio of fat, carbohydrates, and protein (N. Engl. J. Med. 2009;360:859–73). “Any diet will work, as long as you follow the diet,” and it's important to get that message out to patients, he said.

Weight loss is especially important for patients with diabetes or prediabetes. In one study, a program of weight loss and exercise led to a 58% reduction in diabetes risk, compared with 31% with metformin alone (N. Engl. J. Med. 2002;346:393–403). Diabetes is improved even if patients regain weight, Dr. Kushner added, so “it's better to have lost weight and regained it than never have lost it at all.”

Exercise alone “is not a very effective modality for weight loss,” he noted. “The amount of calories that you'd actually have to burn off in exercise is huge—much more than people actually think.”

Although adding exercise to calorie restriction does not result in significantly greater short-term weight loss (Med. Sci. Sports Exerc. 1999;31[suppl]:S547–52), it can be effective over the long term to keep weight down, especially if the patient engages in at least 200 minutes of moderately vigorous activity a week (JAMA 1999;282:1554–60). “It is one of the most effective components to keep weight off,” perhaps because it allows some “wiggle room” in calorie intake, he said.

Pharmacotherapy alone is also not very effective, yielding an additional weight loss that's generally less than 5 kg at 1 year (Ann. Intern. Med. 2005;142:525–31), and most patients will lose only 5% of their body weight on medication alone. However, that may rise to 8%-15% if they also make lifestyle changes. Patients should be counseled about the importance of combining medication with diet and exercise, Dr. Kushner said. Available drugs include phentermine, sibutramine, and orlistat.

“The last time a medication was approved in this country for obesity care was 10 years ago” when orlistat was approved, but several experimental agents have shown promise, he said. Newer-generation obesity drugs now in trials are taking “a whole new direction in obesity care” by harnessing natural peptides, including peptide YY and glucagonlike peptide-1 analogues.

Bariatric surgery is the last option to consider. “Internists clearly have a role in identifying and treating and referring patients and managing [bariatric surgery] patients, so it's imperative [to] have a familiarity with these procedures,” Dr. Kushner said.

For selected patients, especially those with comorbidities, “the outcomes are really quite spectacular,” he added. “Diabetes is gone in three out of four patients that have bariatric surgery.” Hyperlipidemia, hypertension, and sleep apnea often improve or resolve after surgery.

Glucose Tests Flag Diabetes in ACS Patients

The combination of fasting and admission plasma glucose tests was a useful initial screening tool to identify diabetes in patients with acute coronary syndrome, according to a study of 140 patients in a coronary care unit.

It has been shown that diabetes is underdiagnosed in ACS patients and is a strong predictor of future cardiovascular mortality, Dr. Onyebuchi E. Okosieme of Cardiff (Wales) University and colleagues wrote.

The oral glucose tolerance test (OGTT) is the preferred method for detecting diabetes, but the OGTT is expensive and time-consuming and “is underused in clinical practice,” according to the authors. However, the alternatives—fasting plasma glucose (FPG) and admission plasma glucose (APG)—alone often fail to detect diabetes after a cardiac event.

In this study, each patient (average age 67 years, 79% men) underwent all three methods of glucose testing and was classified as having normal glucose tolerance, impaired glucose tolerance, or diabetes.

According to the results of the OGTT, 27% of this population (38 patients) had previously undiagnosed diabetes, 39% (54 patients) had previously undetected impaired glucose tolerance, and the remainder had normal glucose tolerance. When the results of the other testing methods were compared with those of the preferred method, the FPG had 82% sensitivity and 65% specificity in detecting diabetes, whereas the APG had 67% sensitivity and 83% specificity.
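
To make the sensitivity and specificity figures concrete, the Python sketch below works through the arithmetic. The individual true-positive and true-negative counts are not reported in this summary, so the numbers used here (about 31 of the 38 OGTT-confirmed diabetes cases flagged by FPG, and about 66 of the remaining 102 patients correctly cleared) are assumptions chosen to be consistent with the reported 82% and 65%.

```python
# Illustrative sensitivity/specificity arithmetic for the FPG figures reported above.
# Cell counts are assumed (chosen to match the reported percentages), not taken from the paper.

def sensitivity(true_positives, all_with_disease):
    """Proportion of patients with the disease whom the test correctly flags."""
    return true_positives / all_with_disease

def specificity(true_negatives, all_without_disease):
    """Proportion of disease-free patients whom the test correctly clears."""
    return true_negatives / all_without_disease

with_diabetes = 38            # OGTT-confirmed diabetes
without_diabetes = 140 - 38   # remaining patients in the cohort

print(f"FPG sensitivity: {sensitivity(31, with_diabetes):.0%}")     # ~82%
print(f"FPG specificity: {specificity(66, without_diabetes):.0%}")  # ~65%
```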

No conflicts of interest were reported by the researchers.

Patient-Chosen Surrogates Have a Role, Says Study

When making treatment decisions for incapacitated patients, the wishes of family members are “generally respected as a key element of decision making,” despite professional guidelines that recommend putting the patient's wishes first, results of an interview-based study show.

The U.S. courts and professional organizations such as the American Medical Association have advocated putting the patient's wishes first, but this autonomy-based approach is limited by the lack of advance directives or living wills for most patients. And even patient-chosen surrogates have been shown to be inadequate at predicting what patients would want, according to Dr. Alexia M. Torke of the Indiana University Center for Aging Research, the Regenstrief Institute Inc., and the Fairbanks Center for Medical Ethics, Indianapolis, and her colleagues (J. Clin. Ethics 2008;19:110–9).

“The findings suggest that physicians' decision-making framework was broader and more complex than previously thought,” they said.

Using semistructured, in-depth interviews, the investigators sought to determine how physicians make care decisions about adult inpatients who lack decision-making capacity.

The researchers interviewed 21 physicians from a Midwestern academic medical center, of whom 13 were men and 15 were white. Six were interns, eight were residents, one was a fellow, and six were attendings. Of these, 20 had made a major medical decision for an incapacitated patient within the previous month.

Each interview consisted of open-ended questions and was audiotaped and transcribed. The transcripts were analyzed by two of the study researchers to identify the major themes, and these themes were explored further in subsequent interviews.

The three major themes regarding physician decision making for such patients were: patient-centered ethical guidelines, or the patient's wishes and best interest; surrogate-centered ethical guidelines, or the wishes and interests of the decision-making surrogates for the patient (usually family members); and issues of knowledge and authority for both the physician and surrogates.

In the absence of an advance directive or living will, physicians sometimes try to ascertain what the patient would want by asking family members about the patient's values and any relevant previous statements, Dr. Torke and her colleagues said. Physicians also took into account their own assessment of the patient's pain and suffering and quality of life when deciding the course of care, they said.

But in addition to the patient's wishes and best interest, physicians also considered the surrogates' wishes and interests, such as religious preferences and the burden on the family. Although “physicians generally said that surrogates' needs were less important than patient-centered concerns,” surrogates' concerns were more influential in the real-world setting than in theory, the authors found. Physicians “struggled when the concerns of family members appeared to conflict with concerns centered on patients,” but they tended to give a surrogate's wishes more weight when they believed that the surrogate had a high level of caring and goodwill for the patient, according to the researchers.

And though “direct conflict between the wishes and needs of family members and those of patients” is perhaps the “most vexing” scenario for physicians, they recognized that families are not always capable of fulfilling a patient's wishes; for example, family members may feel that accommodating a patient who wants to die at home will be too much of a burden.

The third major theme in the interviews, knowledge and authority, included physicians' knowledge of appropriate clinical care and the established standard of care. Physicians “often appealed to clinical considerations to guide their decisions” and “often justified choices that had an ethical dimension using only such clinical considerations,” Dr. Torke and her colleagues said. Several interviewees said they had attempted to guide a surrogate toward what they believed to be the correct decision according to their clinical judgment.

Dr. Jeffrey Nichols, a geriatric medicine specialist in New York City, said the study is a valuable addition to the ethical literature. “[It] raises concerns as to how physicians should balance competing interests and needs for patients who lack capacity when simple notions of patient autonomy may be difficult to apply,” he said.

The study was limited by its single-center design, the possibility that physicians are not fully aware of their own motivations, and the inability to determine the relative importance of each component of the decision-making process.

“Future guidelines for surrogate decision making should take account of actual clinical practices, and should be expanded to explicitly address these additional considerations,” the researchers concluded.
