Identifying cognitive impairment during the Annual Wellness Visit: Who can you trust?

Abstract

Purpose Assessing for cognitive impairment is now mandated as part of the Medicare Annual Wellness Visit. This offers an unparalleled opportunity for early detection and treatment of dementia. However, physician observation supplemented by reports of patients and informants may be less effective than an objective screening test to achieve this goal.

Methods We used visual analog cognition scales (VACS) to quantify patient and informant subjective impressions of cognitive ability and compared these scales with the Folstein Mini-Mental State Exam (MMSE) and the Memory Orientation Screening Test (MOST) on a sample of 201 elderly patients seen for neuropsychological evaluation in a tertiary memory evaluation center. Outcome measures included dementia severity and scores from 3 standardized memory tests. Depression was also considered.

Results Patients were unable to judge their own cognition. Family informants were only slightly more accurate. Both screening tests outperformed patients and informants. The MOST was significantly better than the MMSE for determining dementia severity and memory, both in the total sample and in a subsample of patients who were less impaired and more typical of independent community-dwelling elders. Depression did not influence the test relationships.

Conclusions Neither patient nor informant subjective reports of cognition should be relied on to identify cognitive impairment within the Annual Wellness Visit. Providers would be best served by using a valid and reliable screening test for dementia.

As of January 2011, physicians are required to include detection of cognitive impairment as part of their health risk assessment in the Medicare Annual Wellness Visit.1 The Centers for Medicare and Medicaid Services (CMS) specifically mandate an “assessment of an individual’s cognitive function by direct observation, with due consideration of information obtained by way of patient report, concerns raised by family members, friends, caretakers, or others.”2 Unfortunately, these means of assessment may be unreliable.

Why observation alone won’t work. Physicians often fail to identify cognitive impairment3-5 until it becomes quite severe.6-8 This failure to diagnose may be due to time constraints,9,10 a focus on other health measures,11 or the lack of appropriate and usable tools.11-14 Reliance on patient self-report is also likely to be a flawed approach.15 A recent study found that most patients with dementia in a community sample denied they had memory problems.16 This is consistent with our clinical experience of 30 years in a tertiary memory assessment practice. These patients believe they are no worse off than their contemporaries and minimize or rationalize even demonstrable memory and functional problems. “I remember everything I need to remember” is a common response to the question, “How is your memory?”

During the comment period preceding implementation of the CMS regulation, 38 national organizations comprising the Partnership to Fight Chronic Disease17 argued that reliance on subjective measures alone is inadequate to achieve the stated goal of the legislation. We share this concern.

Improving cognition assessment. Family complaints are treated as valid evidence in at least 1 commonly used screening instrument, the AD8, in which endorsement of more than 2 of 8 items supports dementia detection.18 The AD8, however, does not reflect severity of impairment, nor does it provide a score with which to follow a patient’s course over time.

To better quantify the subjective perceptions of cognition by patients and their families, we developed the Visual Analog Cognition Scale (VACS)—which we’ll describe in a bit—and added it to our protocol of neuropsychological tests for dementia. Visual analog scales are well-accepted measures for a variety of subjective phenomena,19 including pain,20 treatment response,21 sleep,22 affective states,23 and quality of life.24 We designed this current study to delineate the degree to which patient or informant perspective could assist physicians in the identification process.

We examined VACS responses from a consecutive sample of patients seen in our practice from July through December 2010. Our goal was to quantify the perceptions of patients and their informants regarding patients’ cognitive states across 5 important areas and to determine the relationship between these ratings and the objective results of neuropsychological evaluation. We also wanted to compare the accuracy of such subjective ratings with that of 2 validated screening tools, the Folstein Mini-Mental State Exam (MMSE)25 and the recently published Memory Orientation Screening Test (MOST), which we developed.26

Methods

Subjects
We administered the VACS to 201 patients as part of a 4-hour comprehensive neuropsychological evaluation. Patients were referred by community-based physicians, typically in primary care, neurology, or psychiatry. The sample was 66% female (n=133), with an average age of 78.5 (±6.8) years and an average education of 13.2 (±3.2) years. Of the 201 patients, 7 could not complete the VACS because of confusion or visual impairment; 20 had no accompanying informant. Of the 181 accompanied patients, 89 informants were grown children (49%), 64 were spouses (35%), 12 were siblings (7%), and 16 were friends or paid caregivers (9%).


Procedure
An administrative assistant handed each patient and informant the VACS as they checked in at the front desk. We asked them to fill out the questionnaire in the waiting room and advised them not to discuss their ratings with each other. We then conducted a comprehensive neuropsychological evaluation of the patient while another clinician separately interviewed the informant regarding the patient’s current health, cognitive and emotional symptoms, and daily function.

Instruments
The VACS is a 5-item visual analog scale with parallel forms for patients (VACS-P) and informants (VACS-I). The form instructs the user to “Rate yourself (or the patient with whom you came) in each of these 5 areas by circling a number that best describes how you (they) are doing.” The 5 areas and their descriptions are:

  • Attention: Keeping focused, avoiding being distracted, completing tasks
  • Initiation: Starting tasks, following through, staying busy and active
  • Judgment: Figuring things out and making good decisions
  • Memory: Remembering new information and how to do things
  • Self-care: Dressing, bathing, preparing food.

Each area has a visual analog scale of 1 to 10 below it, with each number occupying a box in a continuous sequence. Words appear above some of the numbers to help anchor the ratings in a systematic way: 1=very poor; 4=fair; 7=good; 10=very good.
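
As a rough illustration of this arithmetic (ours, not part of the study materials; the names are arbitrary), the sketch below totals the 5 item ratings and maps a total back to its per-item average and nearest verbal anchor, which is how the mean totals are interpreted in the Results.

```python
# Illustrative sketch only: totaling VACS item ratings (1-10 each, 5 items)
# and mapping a total back to a per-item average and the nearest anchor word.

ANCHORS = {1: "very poor", 4: "fair", 7: "good", 10: "very good"}

def vacs_total(ratings):
    """Sum the 5 item ratings; valid totals range from 5 to 50."""
    assert len(ratings) == 5 and all(1 <= r <= 10 for r in ratings)
    return sum(ratings)

def nearest_anchor(total):
    """Return the per-item average and the anchor word closest to it."""
    per_item = total / 5
    closest = min(ANCHORS, key=lambda a: abs(a - per_item))
    return per_item, ANCHORS[closest]

# Example: the mean VACS-P total of 35.6 reported in TABLE 2
print(nearest_anchor(35.6))  # -> (7.12, 'good')
```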

The MMSE and its properties are well known. The MOST is a 29-point scale comprising 3-word recall, orientation to 6 date-and-time items, unforewarned recall of 12 pictured household items, and an 8-point clock drawing score. The validation study, using a total sample exceeding 1000 patients, demonstrated that the MOST correlated highly and significantly (Pearson’s correlation coefficient [r]=0.81; P<.001) with dementia severity and 3 standardized memory tests. At a cutoff score of 18 points, it produced a 0.90 area under the curve (AUC) (95% confidence interval [CI], 0.87-0.94), with a sensitivity of 0.85 and specificity of 0.76, correctly classifying 83% of patients. Test-retest reliability was high (r≥0.90; P<.001) for both shorter (average, 2-month) and longer (average, 9-month) intervals.
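
For orientation only, here is a minimal sketch (ours, not the licensed MOST scoring materials) of how the 4 subscores combine into the 0-to-29 total and how the published 18-point cutoff might be applied. Whether the decision rule is “at or below” or “below” the cutoff is our assumption; the validation study26 remains the authoritative source.

```python
# Illustrative sketch: MOST total = sum of 4 subscores with the maxima
# described above. The "<=" cutoff direction is an assumption, not the
# published scoring rule.

MAXIMA = {"word_recall": 3, "orientation": 6, "list_recall": 12, "clock": 8}
CUTOFF = 18  # published cutoff associated with AUC 0.90

def most_total(subscores):
    """Sum the 4 subscores after range-checking each one (total 0-29)."""
    for name, value in subscores.items():
        assert 0 <= value <= MAXIMA[name], f"{name} out of range"
    return sum(subscores.values())

def screen_positive(total, cutoff=CUTOFF):
    """Assumed rule: totals at or below the cutoff flag possible impairment."""
    return total <= cutoff

total = most_total({"word_recall": 2, "orientation": 4, "list_recall": 7, "clock": 5})
print(total, screen_positive(total))  # -> 18 True
```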

With each patient, we conducted a diagnostic interview and administered a battery of standardized neuropsychological tests to assess intelligence, attention, executive function, language, and memory. The measures of primary interest for this investigation were the MOST, MMSE, delayed story memory (Wechsler Memory Scale-Revised [WMS-R] Logical Memory-II, or LM-II),27 delayed visual memory (WMS-R Visual Recall-II, or VR-II), delayed recall of a repeatedly presented 12-item list of common grocery store items (Shopping List Test-Recall, or SLT-R), and the 15-item Geriatric Depression Scale (GDS-15).28 Additionally, each psychologist made a clinical diagnosis according to Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) criteria29 and rated the patient’s dementia severity on a 0-to-3 Clinical Dementia Rating-type scale.30 We based diagnoses and severity ratings on age- and education-adjusted neuropsychological test scores, medical and psychiatric history, the patient interview, and a separate interview with a family informant.

Statistical methods
We calculated VACS totals for each patient and informant. Total VACS scores ranged from 5 to 50. MOST scores, comprising 3-word recall, 6-item orientation, 12-item list memory, and an 8-point clock drawing score, ranged from 0 to 29. We used the MMSE in the traditional method, counting the first error in spelling WORLD backwards, yielding a result of 0 to 30. The GDS score, 0 to 15, reflected the number of items indicating depression. We computed neuropsychological tests using standard scoring techniques. We rated dementia severity as: 0=normal cognition; 0.5=mild cognitive impairment; 1.0=mild dementia; 2.0=moderate dementia; and 3.0=severe dementia. We also assigned half-point ratings from 1 to 3.
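
For clarity, the severity scale, including the half-point steps whose labels appear in TABLE 1, can be written as a simple lookup table. The sketch below is illustrative only; the names are ours.

```python
# Illustrative sketch: the 0-to-3 dementia severity rating scale used for the
# clinician ratings, with the half-point steps labeled as in TABLE 1.
SEVERITY_LABELS = {
    0.0: "normal cognition",
    0.5: "mild cognitive impairment",
    1.0: "mild dementia",
    1.5: "mild-moderate dementia",
    2.0: "moderate dementia",
    2.5: "moderate-severe dementia",
    3.0: "severe dementia",
}

def severity_label(rating):
    """Return the label for a clinician rating; only the listed values are valid."""
    if rating not in SEVERITY_LABELS:
        raise ValueError(f"invalid severity rating: {rating}")
    return SEVERITY_LABELS[rating]

print(severity_label(1.5))  # -> mild-moderate dementia
```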

We compared MOST, MMSE, VACS-P, and VACS-I scores with dementia severity, the 3 neuropsychological tests of delayed memory, and the GDS-15. We computed Pearson’s correlation coefficients and tested their significance against 0. To test for significant differences between correlations, we applied Fisher’s z-transformation and tested the normalized difference against 0.
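
The sketch below (a sketch, not the code used for these analyses) shows Fisher’s z-transformation and the resulting Z-ratio for the difference between 2 correlations in its simplest, independent-samples form. Because the correlations in this study share a single sample and criterion, a dependent-correlation variant (eg, Steiger’s test) is more appropriate, so the value computed here need not match the Z-ratios in TABLE 3; the example only illustrates the transformation and the “normalized difference vs 0” logic.

```python
# Sketch only: Pearson-style correlations compared via Fisher's z-transformation,
# treating the 2 correlations as if they came from independent samples.
import math
from scipy import stats

def fisher_z(r):
    """Fisher's z-transformation: z = arctanh(r) = 0.5 * ln((1 + r) / (1 - r))."""
    return math.atanh(r)

def compare_independent_correlations(r1, n1, r2, n2):
    """Z-ratio and 2-tailed P value for the difference between 2 correlations,
    assuming they were estimated in independent samples."""
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    z = (fisher_z(r1) - fisher_z(r2)) / se
    p = 2 * (1 - stats.norm.cdf(abs(z)))
    return z, p

# Illustrative call using the absolute correlations of the MOST and MMSE with
# dementia severity; the result will differ from the TABLE 3 Z-ratio, which
# reflects the dependence between the correlations.
z, p = compare_independent_correlations(0.86, 201, 0.76, 201)
print(f"Z = {z:.2f}, P = {p:.3f}")
```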

Results

Diagnoses and dementia severity levels are listed in TABLE 1. TABLE 2 presents the mean scores for predictor and outcome variables. Correlations (with significance levels) of the VACS-P, VACS-I, MOST, and MMSE with the criterion variables of Dementia Severity Rating, LM-II, VR-II, SLT-R, and GDS-15 are shown in TABLE 3.

Patients, on average, rated themselves as having “good” cognition overall. There was no difference in patient self-ratings between the top quartile of dementia severity (mean=34.6; SD=8.6) and those in the lowest quartile (mean=36.4; SD=9.0). Informants rated the patients, on average, as having only “fair” cognition. Objective neuropsychological tests, however, found the patients, on average, to be mildly to moderately demented and to have mild to moderate impairment on objective memory tests. Most patients were not depressed, with an average GDS score well below the clinical cutoff of 7 or more items. However, 30 of the 194 patients (15.5%) who completed the VACS-P fell into the clinical range for depression.

Patient self-ratings did not correlate (r=0.02) with dementia severity or with any of the 3 standardized memory tests. Informant ratings correlated modestly with dementia severity and the memory tests, and these correlations were significantly stronger (P<.001) than those of the patients. Both the MOST (r=–0.86) and the MMSE (r=–0.76) had much stronger and highly significant (P<.001) correlations with dementia severity and with the memory measures (r=0.49–0.70). In addition, the MOST and MMSE were significantly (P<.001) better correlated with dementia severity and objective memory scores than were the informant ratings. Only the MMSE correlation with visual recall (P=.06) did not significantly surpass that of the informant.

The MOST had a significantly higher correlation than the MMSE with dementia severity (P<.01) and with each of the 3 memory tests (P<.05). The MOST and MMSE scores were not related to level of depression (r=–0.01 and –0.03). Patient reports correlated significantly with depression level (r=–0.40; P<.001) as did those of the informants (r=–0.22; P<.01). Nevertheless, depression did not appear to be responsible for the limited relationship between patient self-ratings and objective test scores for cognition. When clinically depressed (GDS≥7) patients were removed from the analysis (remaining n=166), there was no significant improvement in the correlation between subjective ratings and objective scores.

We conducted a secondary analysis of patients whose cognition ranged between normal and mild-to-moderate dementia to see if more cognitively intact individuals would be more accurate at self-rating. In this subsample (n=127; mean age=77.3 years; 57% females), patient self-reports again did not correlate significantly (r=0.05) with dementia severity. Informant ratings remained modest, but significant (r=–0.25; P=.004) and statistically better (P<.05) than those of the patients. The MOST (r=–0.69; P<.001) and the MMSE (r=–0.54; P<.001) remained well-correlated with dementia severity and again outperformed the informant ratings (MOST, P<.001; MMSE, P<.05).

TABLE 1
Cognition diagnoses and severity levels in 201 consecutively evaluated elderly patients

Diagnosis                              n (%)
Normal cognition                       8 (4.0)
Mild cognitive impairment              32 (15.9)
Dementia of all types                  161 (80.1)
  – Alzheimer’s disease                90 (55.9)
  – Vascular dementia                  62 (38.5)
  – Frontotemporal dementia            4 (2.5)
  – Other dementia                     5 (3.1)

Dementia severity rating               n (%)
Normal (0)                             8 (4.0)
Mild cognitive impairment (0.5)        32 (15.9)
Mild dementia (1.0)                    42 (20.9)
Mild-moderate dementia (1.5)           45 (22.4)
Moderate dementia (2.0)                38 (18.9)
Moderate-severe dementia (2.5)         27 (13.4)
Severe dementia (3.0)                  9 (4.5)

TABLE 2
Mean test scores for predictor and outcome variables

Predictor variables    Mean (SD)      Outcome variables           Mean (SD)
MOST                   15.5 (5.7)     Dementia Severity Rating    1.5 (0.8)
MMSE                   23.2 (4.7)     LM-II                       6.4 (8.2)
VACS-P                 35.6 (8.4)     VR-II                       5.4 (7.7)
VACS-I                 27.6 (10.2)    SLT-R                       4.3 (3.1)
                                      GDS-15                      3.3 (3.3)
GDS-15, Geriatric Depression Scale-15; LM-II, Logical Memory-II; MMSE, Mini-Mental State Examination; MOST, Memory Orientation Screening Test; SD, standard deviation; SLT-R, Shopping List Test-Recall; VACS-I, Visual Analog Cognition Scale-Informant; VACS-P, Visual Analog Cognition Scale-Patient; VR-II, Visual Recall-II.

TABLE 3
How the MOST, MMSE, and VACS predictor variables compared with outcome measures

Correlations of MOST, MMSE, VACS-P, and VACS-I to criterion measures: Pearson’s correlation coefficient (P value*)

                      MOST (n=201)        MMSE (n=201)        VACS-P (n=194)      VACS-I (n=181)
                      r        P          r        P          r        P          r        P
Dementia severity     –0.86    <.001      –0.76    <.001      0.02     .78        –0.36    <.001
LM-II                 0.67     <.001      0.52     <.001      –0.03    .68        0.20     .007
VR-II                 0.65     <.001      0.49     <.001      –0.02    .78        0.33     <.001
SLT-R                 0.70     <.001      0.56     <.001      0.01     .89        0.28     .001
GDS-15                –0.01    .89        –0.03    .67        –0.40    <.001      –0.22    .003

Pairwise comparison of correlations of MOST, MMSE, and VACS-I to criterion measures (absolute values): Z-ratio (P value*)

                      MOST vs MMSE        MOST vs VACS-I      MMSE vs VACS-I
                      Z        P          Z        P          Z        P
Dementia severity     2.835    .005       8.723    <.001      5.954    <.001
LM-II                 2.245    .025       5.72     <.001      3.533    <.001
VR-II                 2.481    .013       4.29     <.001      1.872    .061
SLT-R                 2.223    .026       5.735    <.001      3.564    <.001
GDS-15, Geriatric Depression Scale-15; LM-II, Logical Memory II; MMSE, Mini-Mental State Examination; MOST, Memory Orientation Screening Test; SLT-R, Shopping List Test-Recall; VACS-I, Visual Analog Cognition Scale-Informant; VACS-P, Visual Analog Cognition Scale-Patient; VR-II, Visual Recall II.
*The minimum acceptable measure of statistical significance was .05.
Pearson’s correlation coefficient (at left) measures the strength of relationship between 2 variables. It can range from 0.0 (no correlation) to –1.0 or 1.0 (perfect correlation). The larger the number, the stronger the relationship. A negative coefficient indicates an inverse relationship.
Z-ratio (at right) reflects the size, or magnitude, of the difference between 2 correlations.

DISCUSSION

Results of this study demonstrate that patients referred for specialized memory evaluation had virtually no idea of the degree of their cognitive impairment. Patients, on average, rated their function in 5 critical areas of cognition and behavior as “good.” While 80% of these patients demonstrated dementia on formal evaluation, more than 95% rated themselves as having good or very good cognition. Their ratings did not correlate with any objective memory measures or expert clinician opinion.

Patient and informant ratings are unreliable. Patients with better cognition, who might visit their doctor alone for the Annual Wellness Visit and would appear more intact, were no better at judging their cognition than the total patient sample. Patients with good cognition and those with dementia rated themselves as equally unimpaired. This finding is not unique to the visual analog scale that we used in this study. When 148 self-nominated “cognitively healthy” community-dwelling elders took the MOST and a battery of neuropsychological tests as part of a norming study for the MOST,31 more than 20% were classified as having dementia based on their memory and executive function test scores. These findings strongly suggest that patients cannot be relied on to inform their physician of cognitive impairment.

While informants possessed some knowledge about a patient’s cognitive status and were able to supply helpful anecdotal information, their ratings correlated only modestly with objectively measured cognition. This is not surprising given the volume of research demonstrating rater and observer bias.

Rely instead on an objective cognitive screening test. Of greatest relevance, these results indicate that an objective cognitive screening test is more accurate in identifying and measuring cognitive impairment than is the rating of a patient or an informant. Both the MOST and MMSE outperformed patients and informants in assessing patients’ severity of cognitive impairment, including those with milder problems. This last finding is particularly important given that less impaired patients are more likely to visit their doctor without an informant and to appear relatively intact when interviewed or observed by the physician.17 Without an objective test, their cognitive impairment would likely be missed.32

The MOST outperformed the MMSE in detecting dementia and determining disease severity on a sample of 700 patients, and demonstrated twice the sensitivity for disease detection in those who were mildly impaired.26 The current study confirms that the MOST has a significantly higher correlation with dementia severity than does the MMSE, and significantly higher correlations with longer standardized memory tests.

MOST, MMSE test-taking time varies, too. Time constraints are an important consideration in a medical office. The average time to administer the MOST to cognitively impaired patients (a group that performs more slowly than patients with normal cognition) is 4.5 minutes.26 The MMSE, by comparison, takes 10 minutes or more.33,34

Cognition is as measurable as body mass index, blood pressure, height, weight, and level of depression, all of which are also mandated in the Annual Wellness Visit. Numbers are easily recorded and compared, while impressions, or even a positive (>2) AD8 score, are less precise. Provider observation, even when informed by family report, is not as sound a basis for risk analysis, treatment planning, or future monitoring as an objective measure. Because several current screening tests for dementia have known reliability over time,26,33,35 the physician can periodically repeat such a test to assess treatment response and ongoing risk.

Is there a place for a subjective rating scale? Possibly. A waiting room tool such as the VACS, combined with an objective test, may alert the clinician to a patient with anosognosia. These patients require different management strategies if treatment is to be effective. The care team faces an even greater challenge if an informant shares the patient’s lack of awareness. Conversely, a favorable cognitive screening result and a high score from the informant would give all parties assurance that cognition was normal.

Study limitations. The primary limitation of this study is that it was conducted in a tertiary memory center, where most patients have either suspected or demonstrated cognitive deficits. The relative proportion of normal to impaired patients is, consequently, different from that found in the primary care office, in which about 15% would have mild cognitive impairment36 and a similar percentage would have dementia.37 A replication of this study in such an environment would be helpful. On the other hand, without a companion neuropsychological evaluation as a criterion, the accuracy of self- or informant-report is more difficult to measure. As noted above, 20% of elders volunteering for a study on “normal cognitive functioning” showed significant objective deficits.31

Assessment of cognitive impairment in the primary care physician’s office is uniquely challenging. Physicians are taught to respond to the complaints of patients. But when a patient has dementia, that approach does not work. Family reports are helpful, but not sufficiently accurate. The recent Alzheimer’s Association report37 notes that “Medicare’s new Annual Wellness Visit includes assessment for possible cognitive impairment,” but also points out that “many existing barriers affect the ability or willingness of individuals and their caregivers to recognize cognitive impairment and to discuss it with their physician.” We agree, and we believe that a sound approach to this problem would be for primary care physicians to consistently use an objective tool to measure cognitive functioning in the Annual Wellness Visit and in follow-up visits. A score that reflects the current level of cognition, provides diagnostic information, and reflects change in cognitive status over time will optimize this unique opportunity for earlier detection and potentially earlier treatment of dementia.


CORRESPONDENCE
Mitchell Clionsky, PhD, ABPP (CN), Clionsky Neuro Systems, 155 Maple Street, Suite 203, Springfield, MA 01105; [email protected]

References

1. 111th US Congress. Patient Protection and Affordable Care Act. HR 3590, section 4103. Medicare coverage of annual wellness visit: providing a personalized prevention plan. Available at: http://thomas.loc.gov/cgi-bin/bdquery/z?d111:H.R.3590:#. Accessed February 19, 2011.

2. Department of Health and Human Services, Centers for Medicare and Medicaid Services. Amendment to HR 3590, section 4103, subpart B §410.15 (v). Fed Regist. November 29, 2010;75:73613-73614.

3. Boustani M, Peterson B, Hanson L, et al. Screening for dementia in primary care: a summary of the evidence for the US Preventive Services Task Force. Ann Intern Med. 2003;138:927-937.

4. Valcour VG, Masaki KH, Curb JD, et al. The detection of dementia in the primary care setting. Arch Intern Med. 2000;160:2964-2968.

5. Ganguli M, Rodriguez E, Mulsant B, et al. Detection and management of cognitive impairment in primary care. J Am Geriatr Soc. 2004;52:1668-1675.

6. Chodosh J, Petitti DB, Elliot M, et al. Physician recognition of cognitive impairment: evaluating the need for improvement. J Am Geriatr Soc. 2004;52:1051-1059.

7. Boise L, Neal MB, Kaye J. Dementia assessment in primary care: results from a study in three managed care systems. J Gerontol A Biol Sci Med Sci. 2004;59:M621-M626.

8. Callahan C, Hendrie H, Tierney W. Documentation and evaluation of cognitive impairment in elderly primary care patients. Ann Intern Med. 1995;122:422-429.

9. Boise L, Camicioli R, Morgan DL, et al. Diagnosing dementia: perspectives of primary care physicians. Gerontologist. 1999;39:457-464.

10. Tai-Seale M, McGuire TG, Zhang W. Time allocation in primary care office visits. Health Serv Res. 2007;42:1871-1894.

11. Boise L, Eckstrom E, Fagnan L. The Rural Older Adult Memory (ROAM) study: a practice-based intervention to improve dementia screening and diagnosis. J Am Board Fam Med. 2010;23:486-498.

12. Brown J, Pengas G, Dawson K, et al. Self administered cognitive screening test for detection of Alzheimer’s disease cross sectional study. BMJ. 2009;338:b2030.

13. Solomon P, Hirschoff A, Kelly B, et al. A 7 minute neurocognitive screening battery highly sensitive to Alzheimer’s disease. Arch Neurol. 1998;55:349-355.

14. Borson S, Scanlon J, Brush M, et al. The Mini-Cog: a cognitive vital signs measure for dementia screening. Int J Geriatr Psychiatry. 2000;15:1021-1027.

15. Sevush S, Leve N. Denial of memory deficit in Alzheimer’s disease. Am J Psychiatry. 1993;150:748-751.

16. Lehmann S, Black B, Shore A, et al. Living alone with dementia: lack of awareness adds to functional and cognitive vulnerabilities. Int Psychogeriatr. 2010;22:778-784.

17. Partnership to Fight Chronic Disease. Letter submitted via Internet to Donald Berwick, MD, Administrator, Centers for Medicare and Medicaid Services. Available at: https://www.thenationalcouncil.org/galleries/policy-file/Medicare%20Wellness%20visit%20-%20final.pdf. Accessed February 19, 2011.

18. Galvin JE, Roe CM, Powlishta KK, et al. The AD8, a brief informant interview to detect dementia. Neurology. 2005;65:559-561.

19. Marsh-Richard D, Hatzis E, Mathias C, et al. Adaptive visual analog scales (AVAS): a modifiable software program for the creation, administration, and scoring of visual analog scales. Behav Res Methods. 2009;41:99-106.

20. Keller S, Bann C, Dodd S, et al. Validity of the Brief Pain Inventory for use in documenting the outcomes of patients with noncancer pain. Clin J Pain. 2004;20:309-318.

21. LaStayo P, Larsen S, Smith S, et al. Feasibility and efficacy of eccentric exercise with older cancer survivors. J Geriatr Phys Ther. 2010;33:135-140.

22. Zisapel N, Tarrasch R, Laudon M. A comparison of visual analog scale and categorical ratings in assessing patients’ estimate of sleep quality. In: Lader MH, Cardinali DP, Pandi-Perumal SR, eds. Sleep and Sleep Disorders. New York, NY: Springer Science+Business Media; 2006:220–224.

23. Kindler C, Harms C, Amsler F, et al. The visual analog scale allows effective measurement of preoperative anxiety and detection of patients’ anesthetic concerns. Anesth Analg. 2000;90:706-712.

24. Bruce B, Fries JF. The Stanford Health Assessment Questionnaire: dimensions and practical applications. Health Qual Life Outcomes. 2003;1:20.

25. Folstein MF, Folstein SE, McHugh PR. “Mini-Mental State”: a practical method for grading the cognitive state of patients for the clinician. J Psychiatr Res. 1975;12:189-198.

26. Clionsky M, Clionsky E. Development and validation of the Memory Orientation Screening Test (MOST™): a better screening test for dementia. Am J Alzheimers Dis Other Demen. 2010;25:650-656.

27. Wechsler D. Wechsler Memory Scale–Revised. New York, NY: Psychological Corporation; 1987.

28. Sheikh JI, Yesavage JA. Geriatric Depression Scale (GDS): Recent evidence and development of a shorter version. Clin Gerontol. 1986;5:165-173.

29. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders. 4th ed. Washington, DC: American Psychiatric Association; 1994.

30. Morris JC. The Clinical Dementia Rating Scale (CDR): current version and scoring rules. Neurology. 1993;43:2412-2414.

31. NIH Clinical Trials Registry. Further validation of the Memory Orientation Screening Test (MOST): a 5-minute screening test for dementia in primary care practice. Available at: http://clinicaltrials.gov. Identifier NCT01057602. Last updated February 7, 2010. Accessed March 4, 2011.

32. Bradford A, Kunik M, Schulz P, et al. Missed and delayed diagnosis of dementia in primary care: prevalence and contributing factors. Alzheimer Dis Assoc Disord. 2009;23:306-314.

33. Pezzotti P, Scalmana S, Mastromattei A, et al. The accuracy of the MMSE in detecting cognitive impairment when administered by general practitioners: a prospective observational study. BMC Fam Pract. 2008;9:29.

34. Lorentz L. Primary Care Tools for Clinicians: A Compendium of Forms, Questionnaires, and Rating Scales for Everyday Practice. St. Louis, Mo: Elsevier Mosby; 2005.

35. Solomon P, Pendlebury W. Recognition of Alzheimer’s disease: the 7 minute screen. Fam Med. 1998;30:265-271.

36. Petersen R. Mild cognitive impairment. N Engl J Med. 2011;364:2227-2234.

37. Alzheimer’s Association. 2011 Alzheimer’s disease facts and figures. Alzheimers Dement. 2011;7:208-244.

Author and Disclosure Information

Mitchell Clionsky, PhD, ABPP (CN)
[email protected]

Emilymarie Clionsky, MD
Clionsky Neuro Systems, Springfield, Mass

The authors report that they developed the Memory Orientation Screening Test and that their company, Clionsky Neuro Systems, licenses it for use.

Issue
The Journal of Family Practice - 60(11)
Publications
Topics
Page Number
653-659
Legacy Keywords
Mitchell Clionsky;PhD;ABPP (CN); Emilymarie Clionsky;MD; cognitive impairment; objective cognitive test; annual wellness visit; memory orientation screening; screening for cognitive impairment; Medicare Annual Wellness Visit; mini-mental state exam; MMSE; neuropsychological evaluation; tertiary memory evaluation center; dementia; informant; subjective reports; cognition
Sections
Author and Disclosure Information

Mitchell Clionsky, PhD, ABPP (CN)
[email protected]

Emilymarie Clionsky, MD
Clionsky Neuro Systems, Springfield, Mass

The authors report that they developed the Memory Orientation Screening Test and that their company, Clionsky Neuro Systems, licenses it for use.

Author and Disclosure Information

Mitchell Clionsky, PhD, ABPP (CN)
[email protected]

Emilymarie Clionsky, MD
Clionsky Neuro Systems, Springfield, Mass

The authors report that they developed the Memory Orientation Screening Test and that their company, Clionsky Neuro Systems, licenses it for use.

Article PDF
Article PDF

Abstract

Purpose Assessing for cognitive impairment is now mandated as part of the Medicare Annual Wellness Visit. This offers an unparalleled opportunity for early detection and treatment of dementia. However, physician observation supplemented by reports of patients and informants may be less effective than an objective screening test to achieve this goal.

Methods We used visual analog cognition scales (VACS) to quantify patient and informant subjective impressions of cognitive ability and compared these scales with the Folstein Mini-Mental State Exam (MMSE) and the Memory Orientation Screening Test (MOST) on a sample of 201 elderly patients seen for neuropsychological evaluation in a tertiary memory evaluation center. Outcome measures included dementia severity and scores from 3 standardized memory tests. Depression was also considered.

Results Patients were unable to judge their own cognition. Family informants rated only slightly better. Both screening tests outperformed patients and informants. The MOST was significantly better than the MMSE for determining dementia severity and memory for the total sample, as well as a subsample of patients who were less impaired and more typical of independent community-dwelling elders. Depression did not influence the test relationships.

Conclusions Neither patient nor informant subjective reports of cognition should be relied on to identify cognitive impairment within the Annual Wellness Visit. Providers would be best served by using a valid and reliable screening test for dementia.

As of January 2011, physicians are required to include detection of cognitive impairment as part of their health risk assessment in the Medicare Annual Wellness Visit.1 The Centers for Medicare and Medicaid Services (CMS) specifically mandate an “assessment of an individual’s cognitive function by direct observation, with due consideration of information obtained by way of patient report, concerns raised by family members, friends, caretakers, or others.”2 Unfortunately, these means of assessment may be unreliable.

Why observation alone won’t work. Physicians often fail to identify cognitive impairment3-5 until it becomes quite severe.6-8 This failure to diagnose may be due to time constraints,9,10 a focus on other health measures,11 or the lack of appropriate and usable tools.11-14 Reliance on patient self-report is also likely to be a flawed approach.15 A recent study found that most patients with dementia in a community sample denied they had memory problems.16 This is consistent with our clinical experience of 30 years in a tertiary memory assessment practice. These patients believe they are no worse off than their contemporaries and minimize or rationalize even demonstrable memory and functional problems. “I remember everything I need to remember” is a common response to the question, “How is your memory?”

During the comment period preceding implementation of the CMS regulation, 38 national organizations comprising the Partnership to Fight Chronic Disease17 argued that reliance on subjective measures alone is inadequate to achieve the stated goal of the legislation. We share this concern.

Improving cognition assessment. Although family complaints have been viewed as valid in at least 1 commonly used screening instrument, the AD8 (with more than 2 of 8 complaints likely to aid in dementia detection)18 does not reflect severity of impairment, nor does it provide a score to follow a patient’s course over time.

To better quantify the subjective perceptions of cognition by patients and their families, we developed the Visual Analog Cognition Scale (VACS)—which we’ll describe in a bit—and added it to our protocol of neuropsychological tests for dementia. Visual analog scales are well-accepted measures for a variety of subjective phenomena,19 including pain,20 treatment response,21 sleep,22 affective states,23 and quality of life.24 We designed this current study to delineate the degree to which patient or informant perspective could assist physicians in the identification process.

We examined VACS responses from a consecutive sample of patients seen in our practice from July through December 2010. Our goal was to quantify the perceptions of patients and their informants regarding patients’ cognitive states across 5 important areas and to determine the relationship between these ratings and the objective results of neuropsychological evaluation. We also wanted to measure the relative accuracy of such subjective ratings with that of 2 validated screening tools, the Folstein Mini-Mental State Exam (MMSE)25 and the recently published Memory Orientation Screening Test (MOST), which we developed.26

Methods

Subjects
We administered the VACS to 201 patients as part of a 4-hour comprehensive neuropsychological evaluation. Patients were referred by community-based physicians, typically in primary care, neurology, or psychiatry. The sample was 66% female (n=133), with an average age of 78.5 (±6.8) years and an average education of 13.2 (±3.2) years. Of the 201 patients, 7 could not complete the VACS because of confusion or visual impairment; 20 had no accompanying informant. Of the 181 accompanied patients, 89 informants were grown children (49%), 64 were spouses (35%), 12 were siblings (7%), and 16 were friends or paid caregivers (9%).

 

 

Procedure
An administrative assistant handed each patient and informant the VACS as they checked in at the front desk. We asked them to fill out the questionnaire in the waiting room and advised them not to discuss their ratings with each other. We then conducted a comprehensive neuropsychological evaluation of the patient while another clinician separately interviewed the informant regarding the patient’s current health, cognitive and emotional symptoms, and daily function.

Instruments
The VACS is a 5-item, visual analog scale with parallel forms for patients (VACS-P) and informants (VACS-I). The form instructs the user to “Rate yourself (or the patient with whom you came) in each of these 5 areas by circling a number that best describes how you (they) are doing.” The 5 areas and their descriptions are:

  • Attention: Keeping focused, avoiding being distracted, completing tasks
  • Initiation: Starting tasks, following through, staying busy and active
  • Judgment: Figuring things out and making good decisions
  • Memory: Remembering new information and how to do things
  • Self-care: Dressing, bathing, preparing food.

Each area has a visual analog scale of 1 to 10 below it, with each number occupying a box in a continuous sequence. Words appear above some of the numbers to help anchor the ratings in a systematic way: 1=very poor; 4=fair; 7=good; 10=very good.

The MMSE and its properties are well known. The MOST is a 29-point scale comprising 3-word recall, orientation to 6 date-and-time items, unforewarned recall of 12 pictured household items, and an 8-point clock drawing score. The validation study, using a total sample exceeding 1000 patients, demonstrated the MOST correlated highly and significantly (Pearson’s correlation coefficient [r]=0.81; P<.001) with dementia severity and 3 standardized memory tests. At a cutoff score of 18 points, it produced a 0.90 area under the curve (AUC) (95% confidence interval [CI], 0.87-0.94), with a sensitivity of 0.85 and specificity of 0.76, correctly classifying 83% of patients. Test-retest reliability was r≥0.90; P<.001 for both shorter (average 2-month) and longer (average 9-month) intervals.

With each patient, we conducted a diagnostic interview and administered a battery of standardized neuropsychological tests to assess intelligence, attention, executive function, language, and memory. The measures of primary interest for this investigation were the MOST, MMSE, delayed story memory (Wechsler Memory Scale-Revised [WMS-R] Logical Memory-II, or LM-II),27 delayed visual memory (WMS-R Visual Recall-II, or VR-II), delayed recall of a 12-item repeated presentation list of common grocery store items (Shopping List Test-Recall, or SLT-R), and the 15-item Geriatric Depression Scale (GDS-15).28 Additionally, each psychologist made a clinical diagnosis, according to Diagnostic and Statistical Manual of Mental Disorders [Fourth Edition] (DSM-IV)29 criteria and rated the patient’s dementia severity (DS) on a 0-to-3 Clinical Dementia Rating-type scale.30 We based diagnoses and severity ratings on age- and education-adjusted neuropsychological test scores, medical and psychiatric history, patient interview, and separate interview with a family informant.

Statistical methods
We calculated VACS totals for each patient and informant. Total VACS scores ranged from 5 to 50. MOST scores, comprising 3-word recall, 6-item orientation, 12-item list memory, and an 8-point clock drawing score, ranged from 0 to 29. We used the MMSE in the traditional method, counting the first error in spelling WORLD backwards, yielding a result of 0 to 30. The GDS score, 0 to 15, reflected the number of items indicating depression. We computed neuropsychological tests using standard scoring techniques. We rated dementia severity as: 0=normal cognition; 0.5=mild cognitive impairment; 1.0=mild dementia; 2.0=moderate dementia; and 3.0=severe dementia. We also assigned half-point ratings from 1 to 3.

We compared MOST, MMSE, VACS-P, and VACS-I scores with dementia severity and the 3 neuropsychological tests of delayed memory and the GDS-15. We computed Pearson’s correlation coefficients and their levels of significance vs 0. Tests of significant differences between correlations used Fisher’s z-transformation and tested the normalized difference vs 0.

Results

Diagnoses and dementia severity levels are listed in TABLE 1. TABLE 2 presents the mean scores for predictor and outcome variables. Correlations and significance ratings between the VACS-P, VACS-I, MOST, and MMSE with the criterion variables of Dementia Severity Rating, LM-II, VR-II, SLT-R, and GDS-15 are shown in TABLE 3.

Patients, on average, rated themselves as having “good” cognition overall. There was no difference in patient self-ratings between the top quartile of dementia severity (mean=34.6; SD= 8.6) and those in the lowest quartile (mean=36.4; SD=9.0). Informants rated the patients, on average, as having only “fair” cognition. Objective neuropsychological tests, however, found the patients, on average, to be mild to moderately demented and to have mild to moderate impairment on objective memory tests. Most patients were not depressed, with an average GDS score well below the clinical cutoff of 7 or more items. However, 30 of the 194 (15.5%) who completed the VACS-P fell into the clinical range for depression.

 

 

Patient self-ratings did not correlate (r=0.02) with dementia severity or with any of the 3 standardized memory tests. Informant scores correlated modestly with dementia severity and memory tests, but were significantly higher (P<.001) than those of the patients. Both the MOST (r=–0.86) and the MMSE (r=–0.76) had much stronger and highly significant (P<.001) correlations with dementia severity and with the memory measures (r=0.49–0.70). In addition, the MOST and MMSE were significantly (P<.001) better correlated with dementia severity and objective memory scores than were the informant ratings. Only the MMSE correlation with visual recall (P=.06) did not surpass that of the informant.

The MOST had a significantly higher correlation than the MMSE with dementia severity (P<.01) and with each of the 3 memory tests (P<.05). The MOST and MMSE scores were not related to level of depression (r=–0.01 and –0.03). Patient reports correlated significantly with depression level (r=–0.40; P<.001) as did those of the informants (r=–0.22; P<.01). Nevertheless, depression did not appear to be responsible for the limited relationship between patient self-ratings and objective test scores for cognition. When clinically depressed (GDS≥7) patients were removed from the analysis (remaining n=166), there was no significant improvement in the correlation between subjective ratings and objective scores.

We conducted a secondary analysis of patients whose cognition ranged between normal and mild-to-moderate dementia to see if more cognitively intact individuals would be more accurate at self-rating. In this subsample (n=127; mean age=77.3 years; 57% females), patient self-reports again did not correlate significantly (r=0.05) with dementia severity. Informant ratings remained modest, but significant (r=–0.25; P=.004) and statistically better (P<.05) than those of the patients. The MOST (r=–0.69; P<.001) and the MMSE (r=–0.54; P<.001) remained well-correlated with dementia severity and again outperformed the informant ratings (MOST, P<.001; MMSE, P<.05).

TABLE 1
Cognition diagnoses and severity levels in 201 consecutively evaluated elderly patients

Diagnosisn (%)
Normal cognition8 (4.0)
Mild cognitive impairment32 (15.9)
Dementia of all types161 (80.1)
  – Alzheimer’s disease90 (55.9)
  – Vascular dementia62 (38.5)
  – Frontotemporal dementia4 (2.5)
  – Other dementia5 (3.1)
Dementia severity rating 
Normal (0)8 (4.0)
Mild cognitive impairment (0.5)32 (15.9)
Mild dementia (1.0)42 (20.9)
Mild-moderate dementia (1.5)45 (22.4)
Moderate dementia (2.0)38 (18.9)
Moderate–severe dementia (2.5)27 (13.4)
Severe dementia (3.0)9 (4.5)

TABLE 2
Mean test scores for predictor and outcome variables

Predictor variablesMean (SD)Outcome variablesMean (SD)
MOST15.5 (5.7)Dementia Severity Rating1.5 (0.8)
MMSE23.2 (4.7)LM-II6.4 (8.2)
VACS-P35.6 (8.4)VR-II5.4 (7.7)
VACS-I27.6 (10.2)SLT-R4.3 (3.1)
  GDS-153.3 (3.3)
GDS-15, Geriatric Depression Scale-15; LM-II, Logical Memory-II; MMSE, Mini-Mental State Examination; MOST, Memory Orientation Screening Test; SD, standard deviation; SLT-R, Shopping List Test-Recall; VACS-I, Visual Analog Cognition Scale- Informant; VACS-P, Visual Analog Cognition Scale-Patient; VR-II, Visual Recall-II.

TABLE 3
How the MOST, MMSE, and VACS predictor variables compared with outcome measures

 Correlations of MOST, MMSE, VACS-P, and VACS-I to criterion measuresPairwise comparison of correlations of MOST, MMSE, and VACS-I to criterion measures (absolute values)
 MOST (n=201)MMSE (n=201)VACS-P (n=194)VACS-I (n=181)MOST vs MMSEMOST vs VACS-IMMSE vs VACS-I
 Pearson’s correlation coefficient (P value*)Z-ratio (P value*)
 rPrPrPrPZPZPZP
Dementia severity–0.86<.001–0.76<.0010.02.78–0.36<.0012.835.0058.723<.0015.954<.001
LM-II0.67<.0010.52<.001–0.03.680.20.0072.245.0255.72<.0013.533<.001
VR-II0.65<.0010.49<.001–0.02.780.33<.0012.481.0134.29<.0011.872.061
SLT-R0.70<.0010.56<.0010.01.890.28.0012.223.0265.735<.0013.564<.001
GDS-15–0.01.89–0.03.67–0.40<.001–0.22.003      
GDS-15, Geriatric Depression Scale-15; LM-II, Logical Memory II; MMSE, Mini-Mental State Examination; MOST, Memory Orientation Screening Test; SLT-R, Shopping List Test-Recall; VACS-I, Visual Analog Cognition Scale-Informant; VACS-P, Visual Analog Cognition Scale-Patient; VR-II, Visual Recall II.
*The minimum acceptable measure of statistical significance was .05.
Pearson’s correlation coefficient (at left) measures the strength of relationship between 2 variables. It can range from 0.0 (no correlation) to –1.0 or 1.0 (perfect correlation). The larger the number, the stronger the relationship. A negative coefficient indicates an inverse relationship.
Z-ratio (at right) reflects the size, or magnitude, of the difference between 2 correlations.

DISCUSSION

Results of this study demonstrate that patients referred for specialized memory evaluation had virtually no idea of the degree of their cognitive impairment. Patients, on average, rated their function in 5 critical areas of cognition and behavior as “good.” While 80% of these patients demonstrated dementia on formal evaluation, more than 95% rated themselves as having good or very good cognition. Their ratings did not correlate with any objective memory measures or expert clinician opinion.

Patient and informant ratings are unreliable. Patients with better cognition, who might visit their doctor alone for the Annual Wellness Visit and would appear more intact, were no better at judging their cognition than the total patient sample. Both the patients with good cognition and those with dementia rated themselves equally unimpaired. This finding is not unique to the visual analog scale that we used in this study. When 148 self-nominated “cognitively healthy” community-dwelling elders took the MOST and a battery of neuropsychological tests as part of a norming study for the MOST,31 more than 20% would be classified as having dementia based on their memory and executive function test scores. These findings strongly suggest that patients cannot be relied on to inform their physician of cognitive impairment.

 

 

While informants possessed some knowledge about a patient’s cognitive status and were able to supply helpful anecdotal information, their ratings correlated only modestly with objectively measured cognition. This is not surprising given the volume of research demonstrating rater and observer bias.

Rely instead on an objective cognitive screening test. Of greatest relevance, these results indicate that an objective cognitive screening test is more accurate in identifying and measuring cognitive impairment than is the rating of a patient or an informant. Both the MOST and MMSE outperformed patients and informants in assessing patients’ severity of cognitive impairment, including those with milder problems. This last finding is particularly important given that less impaired patients are more likely to visit their doctor without an informant and to appear relatively intact when interviewed or observed by the physician.17 Without an objective test, their cognitive impairment would likely be missed.32

The MOST outperformed the MMSE in detecting dementia and determining disease severity on a sample of 700 patients, and demonstrated twice the sensitivity for disease detection in those who were mildly impaired.26 The current study confirms that the MOST has a significantly higher correlation with dementia severity than does the MMSE, and significantly higher correlations with longer standardized memory tests.

MOST, MMSE test-taking time varies, too. Time constraints are an important consideration in a medical office. The average time to administer the MOST on cognitively impaired patients (a group that is slower to perform than patients with normal cognition) is 4.5 minutes.26 The MMSE, by comparison, takes 10 minutes or more.33,34

Cognition is as measurable as body mass index, blood pressure, height, weight, and level of depression, also mandated in the Annual Wellness Visit. Numbers are easily recorded and compared, while impressions or even a positive (>2) AD8 score are less precise. Provider observation, even if informed by family report, is not as sound a basis for risk analysis, treatment planning, or future monitoring as is an objective measure. Because several current screening tests for dementia possess known reliabilities over time,26,33,35 the physician can periodically repeat such a test to assess treatment response and ongoing risk.

Is there a place for a subjective rating scale? Possibly. A waiting room tool such as the VACS, combined with an objective test, may alert the clinician to a patient with anosognosia. These patients require different management strategies if treatment is to be effective. The care team faces an even greater challenge if an informant shares the patient’s lack of awareness. Conversely, a favorable cognitive screening result and a high score from the informant would give all parties assurance that cognition was normal.

Study limitations. The primary limitation of this study is that it was conducted in a tertiary memory center, where most patients have either suspected or demonstrated cognitive deficits. The relative proportion of normal to impaired patients is, consequently, different from that found in the primary care office, in which about 15% would have mild cognitive impairment36 and a similar percentage would have dementia.37 A replication of this study in such an environment would be helpful. On the other hand, without a companion neuropsychological evaluation as a criterion, the accuracy of self- or informant-report is more difficult to measure. As noted above, 20% of elders volunteering for a study on “normal cognitive functioning” showed significant objective deficits.31

Assessment of cognitive impairment in the primary care physician’s office is uniquely challenging. Physicians are taught to respond to the complaints of patients. But when a patient has dementia, that approach does not work. Family reports are helpful, but not sufficiently accurate. The recent Alzheimer’s Association report37 notes that “Medicare’s new Annual Wellness Visit includes assessment for possible cognitive impairment,” but also points out that “many existing barriers affect the ability or willingness of individuals and their caregivers to recognize cognitive impairment and to discuss it with their physician.” We agree, and we believe that a sound approach to this problem would be for primary care physicians to consistently use an objective tool to measure cognitive functioning in the Annual Wellness Visit and in follow-up visits. A score that reflects the current level of cognition, provides diagnostic information, and reflects change in cognitive status over time will optimize this unique opportunity for earlier detection and potentially earlier treatment of dementia.

 

 

CORRESPONDENCE
Mitchell Clionsky, PhD, ABPP (CN), Clionsky Neuro Systems, 155 Maple Street, Suite 203, Springfield, MA 01105; [email protected]

Abstract

Purpose Assessing for cognitive impairment is now mandated as part of the Medicare Annual Wellness Visit. This offers an unparalleled opportunity for early detection and treatment of dementia. However, physician observation supplemented by reports of patients and informants may be less effective than an objective screening test to achieve this goal.

Methods We used visual analog cognition scales (VACS) to quantify patient and informant subjective impressions of cognitive ability and compared these scales with the Folstein Mini-Mental State Exam (MMSE) and the Memory Orientation Screening Test (MOST) on a sample of 201 elderly patients seen for neuropsychological evaluation in a tertiary memory evaluation center. Outcome measures included dementia severity and scores from 3 standardized memory tests. Depression was also considered.

Results Patients were unable to judge their own cognition. Family informants rated only slightly better. Both screening tests outperformed patients and informants. The MOST was significantly better than the MMSE for determining dementia severity and memory for the total sample, as well as a subsample of patients who were less impaired and more typical of independent community-dwelling elders. Depression did not influence the test relationships.

Conclusions Neither patient nor informant subjective reports of cognition should be relied on to identify cognitive impairment within the Annual Wellness Visit. Providers would be best served by using a valid and reliable screening test for dementia.

As of January 2011, physicians are required to include detection of cognitive impairment as part of their health risk assessment in the Medicare Annual Wellness Visit.1 The Centers for Medicare and Medicaid Services (CMS) specifically mandate an “assessment of an individual’s cognitive function by direct observation, with due consideration of information obtained by way of patient report, concerns raised by family members, friends, caretakers, or others.”2 Unfortunately, these means of assessment may be unreliable.

Why observation alone won’t work. Physicians often fail to identify cognitive impairment3-5 until it becomes quite severe.6-8 This failure to diagnose may be due to time constraints,9,10 a focus on other health measures,11 or the lack of appropriate and usable tools.11-14 Reliance on patient self-report is also likely to be a flawed approach.15 A recent study found that most patients with dementia in a community sample denied they had memory problems.16 This is consistent with our clinical experience of 30 years in a tertiary memory assessment practice. These patients believe they are no worse off than their contemporaries and minimize or rationalize even demonstrable memory and functional problems. “I remember everything I need to remember” is a common response to the question, “How is your memory?”

During the comment period preceding implementation of the CMS regulation, 38 national organizations comprising the Partnership to Fight Chronic Disease17 argued that reliance on subjective measures alone is inadequate to achieve the stated goal of the legislation. We share this concern.

Improving cognition assessment. Although family complaints have been viewed as valid in at least 1 commonly used screening instrument, the AD8 (with more than 2 of 8 complaints likely to aid in dementia detection)18 does not reflect severity of impairment, nor does it provide a score to follow a patient’s course over time.

To better quantify the subjective perceptions of cognition by patients and their families, we developed the Visual Analog Cognition Scale (VACS)—which we’ll describe in a bit—and added it to our protocol of neuropsychological tests for dementia. Visual analog scales are well-accepted measures for a variety of subjective phenomena,19 including pain,20 treatment response,21 sleep,22 affective states,23 and quality of life.24 We designed this current study to delineate the degree to which patient or informant perspective could assist physicians in the identification process.

We examined VACS responses from a consecutive sample of patients seen in our practice from July through December 2010. Our goal was to quantify the perceptions of patients and their informants regarding patients’ cognitive states across 5 important areas and to determine the relationship between these ratings and the objective results of neuropsychological evaluation. We also wanted to measure the relative accuracy of such subjective ratings with that of 2 validated screening tools, the Folstein Mini-Mental State Exam (MMSE)25 and the recently published Memory Orientation Screening Test (MOST), which we developed.26

Methods

Subjects
We administered the VACS to 201 patients as part of a 4-hour comprehensive neuropsychological evaluation. Patients were referred by community-based physicians, typically in primary care, neurology, or psychiatry. The sample was 66% female (n=133), with an average age of 78.5 (±6.8) years and an average education of 13.2 (±3.2) years. Of the 201 patients, 7 could not complete the VACS because of confusion or visual impairment; 20 had no accompanying informant. Of the 181 accompanied patients, 89 informants were grown children (49%), 64 were spouses (35%), 12 were siblings (7%), and 16 were friends or paid caregivers (9%).


Procedure
An administrative assistant handed each patient and informant the VACS as they checked in at the front desk. We asked them to fill out the questionnaire in the waiting room and advised them not to discuss their ratings with each other. We then conducted a comprehensive neuropsychological evaluation of the patient while another clinician separately interviewed the informant regarding the patient’s current health, cognitive and emotional symptoms, and daily function.

Instruments
The VACS is a 5-item visual analog scale with parallel forms for patients (VACS-P) and informants (VACS-I). The form instructs the user to “Rate yourself (or the patient with whom you came) in each of these 5 areas by circling a number that best describes how you (they) are doing.” The 5 areas and their descriptions are:

  • Attention: Keeping focused, avoiding being distracted, completing tasks
  • Initiation: Starting tasks, following through, staying busy and active
  • Judgment: Figuring things out and making good decisions
  • Memory: Remembering new information and how to do things
  • Self-care: Dressing, bathing, preparing food.

Each area has a visual analog scale of 1 to 10 below it, with each number occupying a box in a continuous sequence. Words appear above some of the numbers to help anchor the ratings in a systematic way: 1=very poor; 4=fair; 7=good; 10=very good.
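Scoring the VACS is simple addition. The following is a minimal sketch of that scoring; the function name and data layout are ours for illustration and are not part of the published form.

```python
# A minimal sketch of VACS scoring as described above; the function name and
# the dictionary layout are illustrative, not part of the published instrument.

VACS_AREAS = ["attention", "initiation", "judgment", "memory", "self_care"]

def score_vacs(ratings: dict) -> int:
    """Sum the 5 visual analog ratings (each 1-10) into a total of 5 to 50."""
    for area in VACS_AREAS:
        value = ratings[area]
        if not 1 <= value <= 10:
            raise ValueError(f"{area} rating must be 1-10, got {value}")
    return sum(ratings[area] for area in VACS_AREAS)

# Example: a rater who marks every area as "fair" (4) yields a total of 20 of 50.
print(score_vacs({area: 4 for area in VACS_AREAS}))  # 20
```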

The MMSE and its properties are well known. The MOST is a 29-point scale comprising 3-word recall, orientation to 6 date-and-time items, unforewarned recall of 12 pictured household items, and an 8-point clock drawing score. The validation study, with a total sample exceeding 1000 patients, demonstrated that the MOST correlated strongly and significantly (Pearson’s correlation coefficient [r]=0.81; P<.001) with dementia severity and 3 standardized memory tests. At a cutoff score of 18 points, it produced an area under the curve (AUC) of 0.90 (95% confidence interval [CI], 0.87-0.94), with a sensitivity of 0.85 and a specificity of 0.76, correctly classifying 83% of patients. Test-retest reliability was high (r≥0.90; P<.001) for both shorter (average 2-month) and longer (average 9-month) intervals.
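For readers who want the arithmetic, the sketch below shows how a MOST total is assembled from these components and checked against the 18-point cutoff. The field names are ours, and treating scores at or below the cutoff as a positive screen is our assumption (lower scores track greater impairment, given the MOST’s negative correlation with dementia severity), not a published scoring rule.

```python
# A sketch of how a MOST total (0-29) is assembled from the components listed
# above. Field names are ours; the cutoff direction (scores at or below 18
# screening positive) is our assumption, not an official scoring rule.
from dataclasses import dataclass

@dataclass
class MostResult:
    word_recall: int    # 3-word recall, 0-3
    orientation: int    # 6 date-and-time items, 0-6
    list_recall: int    # 12 pictured household items, 0-12
    clock_drawing: int  # clock drawing, 0-8

    def total(self) -> int:
        return self.word_recall + self.orientation + self.list_recall + self.clock_drawing

    def screens_positive(self, cutoff: int = 18) -> bool:
        return self.total() <= cutoff

result = MostResult(word_recall=2, orientation=4, list_recall=7, clock_drawing=5)
print(result.total(), result.screens_positive())  # 18 True
```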

With each patient, we conducted a diagnostic interview and administered a battery of standardized neuropsychological tests to assess intelligence, attention, executive function, language, and memory. The measures of primary interest for this investigation were the MOST, the MMSE, delayed story memory (Wechsler Memory Scale-Revised [WMS-R] Logical Memory-II, or LM-II),27 delayed visual memory (WMS-R Visual Recall-II, or VR-II), delayed recall of a 12-item, repeatedly presented list of common grocery store items (Shopping List Test-Recall, or SLT-R), and the 15-item Geriatric Depression Scale (GDS-15).28 Additionally, each psychologist made a clinical diagnosis according to Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV)29 criteria and rated the patient’s dementia severity (DS) on a 0-to-3 Clinical Dementia Rating-type scale.30 We based diagnoses and severity ratings on age- and education-adjusted neuropsychological test scores, medical and psychiatric history, the patient interview, and a separate interview with a family informant.

Statistical methods
We calculated VACS totals for each patient and informant; total VACS scores could range from 5 to 50. MOST scores, comprising 3-word recall, 6-item orientation, 12-item list memory, and an 8-point clock drawing score, ranged from 0 to 29. We scored the MMSE in the traditional manner, counting the first error in spelling WORLD backwards, yielding a total of 0 to 30. The GDS score, 0 to 15, reflected the number of items indicating depression. We scored the neuropsychological tests using standard techniques. We rated dementia severity as: 0=normal cognition; 0.5=mild cognitive impairment; 1.0=mild dementia; 2.0=moderate dementia; and 3.0=severe dementia. We also assigned half-point ratings (1.5 and 2.5) between 1 and 3.

We compared MOST, MMSE, VACS-P, and VACS-I scores with dementia severity, the 3 neuropsychological tests of delayed memory, and the GDS-15. We computed Pearson’s correlation coefficients and tested their significance vs 0. To test for significant differences between correlations, we applied Fisher’s z-transformation and tested the normalized difference vs 0.
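As a rough illustration of that comparison, the sketch below applies Fisher’s z-transformation with the simple independent-samples standard error. It is a simplified form: the correlations in TABLE 3 come from overlapping samples of the same patients, so the Z-ratios reported there will not match this sketch exactly.

```python
# A simplified sketch of comparing 2 correlations with Fisher's z-transformation,
# using the independent-samples standard error. The correlations in TABLE 3 come
# from overlapping samples, so the published Z-ratios differ slightly from the
# values this simplified form produces.
import math

def fisher_z(r: float) -> float:
    """Fisher's z-transformation of a correlation coefficient."""
    return math.atanh(r)

def compare_correlations(r1: float, n1: int, r2: float, n2: int):
    """Return (Z-ratio, two-sided P) for the difference between 2 correlations."""
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (fisher_z(abs(r1)) - fisher_z(abs(r2))) / se  # absolute values, as in TABLE 3
    p = 1.0 - math.erf(abs(z) / math.sqrt(2.0))       # two-sided normal P value
    return z, p

# Example: MOST (r=-0.86) vs MMSE (r=-0.76) against dementia severity, n=201 each.
print(compare_correlations(-0.86, 201, -0.76, 201))   # roughly Z=2.96, P=.003
```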

Results

Diagnoses and dementia severity levels are listed in TABLE 1. TABLE 2 presents the mean scores for predictor and outcome variables. Correlations of the VACS-P, VACS-I, MOST, and MMSE with the criterion variables (Dementia Severity Rating, LM-II, VR-II, SLT-R, and GDS-15), together with their significance levels, are shown in TABLE 3.

Patients, on average, rated themselves as having “good” cognition overall. There was no difference in self-ratings between patients in the top quartile of dementia severity (mean=34.6; SD=8.6) and those in the lowest quartile (mean=36.4; SD=9.0). Informants rated the patients, on average, as having only “fair” cognition. Objective neuropsychological testing, however, found the patients, on average, to be mildly to moderately demented and to have mild to moderate impairment on objective memory tests. Most patients were not depressed, with an average GDS score well below the clinical cutoff of 7 or more items; however, 30 of the 194 patients (15.5%) who completed the VACS-P fell into the clinical range for depression.


Patient self-ratings did not correlate (r=0.02) with dementia severity or with any of the 3 standardized memory tests. Informant ratings correlated modestly with dementia severity and the memory tests, and these correlations were significantly stronger (P<.001) than the patients’. Both the MOST (r=–0.86) and the MMSE (r=–0.76) had much stronger and highly significant (P<.001) correlations with dementia severity and with the memory measures (r=0.49–0.70). In addition, the MOST and MMSE were significantly (P<.001) better correlated with dementia severity and objective memory scores than were the informant ratings; only the MMSE correlation with visual recall (P=.06) did not surpass that of the informant.

The MOST had a significantly higher correlation than the MMSE with dementia severity (P<.01) and with each of the 3 memory tests (P<.05). The MOST and MMSE scores were not related to level of depression (r=–0.01 and –0.03). Patient reports correlated significantly with depression level (r=–0.40; P<.001) as did those of the informants (r=–0.22; P<.01). Nevertheless, depression did not appear to be responsible for the limited relationship between patient self-ratings and objective test scores for cognition. When clinically depressed (GDS≥7) patients were removed from the analysis (remaining n=166), there was no significant improvement in the correlation between subjective ratings and objective scores.

We conducted a secondary analysis of patients whose cognition ranged from normal to mild-to-moderate dementia to see whether more cognitively intact individuals would be more accurate at self-rating. In this subsample (n=127; mean age=77.3 years; 57% female), patient self-reports again did not correlate significantly (r=0.05) with dementia severity. Informant ratings remained modest but significant (r=–0.25; P=.004), and statistically stronger (P<.05) than the patients’ self-ratings. The MOST (r=–0.69; P<.001) and the MMSE (r=–0.54; P<.001) remained well correlated with dementia severity and again outperformed the informant ratings (MOST, P<.001; MMSE, P<.05).

TABLE 1
Cognition diagnoses and severity levels in 201 consecutively evaluated elderly patients

Diagnosis                              n (%)
Normal cognition                       8 (4.0)
Mild cognitive impairment              32 (15.9)
Dementia of all types                  161 (80.1)
  – Alzheimer’s disease                90 (55.9)
  – Vascular dementia                  62 (38.5)
  – Frontotemporal dementia            4 (2.5)
  – Other dementia                     5 (3.1)

Dementia severity rating               n (%)
Normal (0)                             8 (4.0)
Mild cognitive impairment (0.5)        32 (15.9)
Mild dementia (1.0)                    42 (20.9)
Mild-moderate dementia (1.5)           45 (22.4)
Moderate dementia (2.0)                38 (18.9)
Moderate-severe dementia (2.5)         27 (13.4)
Severe dementia (3.0)                  9 (4.5)

TABLE 2
Mean test scores for predictor and outcome variables

Predictor variables    Mean (SD)      Outcome variables           Mean (SD)
MOST                   15.5 (5.7)     Dementia Severity Rating    1.5 (0.8)
MMSE                   23.2 (4.7)     LM-II                       6.4 (8.2)
VACS-P                 35.6 (8.4)     VR-II                       5.4 (7.7)
VACS-I                 27.6 (10.2)    SLT-R                       4.3 (3.1)
                                      GDS-15                      3.3 (3.3)
GDS-15, Geriatric Depression Scale-15; LM-II, Logical Memory-II; MMSE, Mini-Mental State Examination; MOST, Memory Orientation Screening Test; SD, standard deviation; SLT-R, Shopping List Test-Recall; VACS-I, Visual Analog Cognition Scale-Informant; VACS-P, Visual Analog Cognition Scale-Patient; VR-II, Visual Recall-II.

TABLE 3
How the MOST, MMSE, and VACS predictor variables compared with outcome measures

Left columns: correlations of MOST, MMSE, VACS-P, and VACS-I to criterion measures, shown as Pearson’s correlation coefficient, r (P value*). Right columns: pairwise comparison of correlations of MOST, MMSE, and VACS-I to criterion measures (absolute values), shown as Z-ratio (P value*).

                     MOST (n=201)     MMSE (n=201)     VACS-P (n=194)    VACS-I (n=181)     MOST vs MMSE     MOST vs VACS-I    MMSE vs VACS-I
Dementia severity    –0.86 (<.001)    –0.76 (<.001)     0.02 (.78)       –0.36 (<.001)      2.835 (.005)     8.723 (<.001)     5.954 (<.001)
LM-II                 0.67 (<.001)     0.52 (<.001)    –0.03 (.68)        0.20 (.007)       2.245 (.025)     5.72 (<.001)      3.533 (<.001)
VR-II                 0.65 (<.001)     0.49 (<.001)    –0.02 (.78)        0.33 (<.001)      2.481 (.013)     4.29 (<.001)      1.872 (.061)
SLT-R                 0.70 (<.001)     0.56 (<.001)     0.01 (.89)        0.28 (.001)       2.223 (.026)     5.735 (<.001)     3.564 (<.001)
GDS-15               –0.01 (.89)      –0.03 (.67)      –0.40 (<.001)     –0.22 (.003)
GDS-15, Geriatric Depression Scale-15; LM-II, Logical Memory II; MMSE, Mini-Mental State Examination; MOST, Memory Orientation Screening Test; SLT-R, Shopping List Test-Recall; VACS-I, Visual Analog Cognition Scale-Informant; VACS-P, Visual Analog Cognition Scale-Patient; VR-II, Visual Recall II.
*The minimum acceptable level of statistical significance was P<.05.
Pearson’s correlation coefficient (at left) measures the strength of relationship between 2 variables. It can range from 0.0 (no correlation) to –1.0 or 1.0 (perfect correlation). The larger the number, the stronger the relationship. A negative coefficient indicates an inverse relationship.
Z-ratio (at right) reflects the size, or magnitude, of the difference between 2 correlations.

Discussion

Results of this study demonstrate that patients referred for specialized memory evaluation had virtually no idea of the degree of their cognitive impairment. Patients, on average, rated their function in 5 critical areas of cognition and behavior as “good.” While 80% of these patients demonstrated dementia on formal evaluation, more than 95% rated themselves as having good or very good cognition. Their ratings did not correlate with any objective memory measures or expert clinician opinion.

Patient and informant ratings are unreliable. Patients with better cognition, who might visit their doctor alone for the Annual Wellness Visit and would appear more intact, were no better at judging their cognition than the total patient sample. Patients with good cognition and those with dementia rated themselves as equally unimpaired. This finding is not unique to the visual analog scale we used in this study: when 148 self-nominated “cognitively healthy” community-dwelling elders took the MOST and a battery of neuropsychological tests as part of a norming study for the MOST,31 more than 20% scored in the dementia range on memory and executive function tests. These findings strongly suggest that patients cannot be relied on to inform their physician of cognitive impairment.


While informants possessed some knowledge about a patient’s cognitive status and were able to supply helpful anecdotal information, their ratings correlated only modestly with objectively measured cognition. This is not surprising given the volume of research demonstrating rater and observer bias.

Rely instead on an objective cognitive screening test. Of greatest relevance, these results indicate that an objective cognitive screening test is more accurate in identifying and measuring cognitive impairment than is the rating of a patient or an informant. Both the MOST and MMSE outperformed patients and informants in assessing patients’ severity of cognitive impairment, including those with milder problems. This last finding is particularly important given that less impaired patients are more likely to visit their doctor without an informant and to appear relatively intact when interviewed or observed by the physician.17 Without an objective test, their cognitive impairment would likely be missed.32

The MOST outperformed the MMSE in detecting dementia and determining disease severity in a sample of 700 patients, and demonstrated twice the sensitivity for detecting disease in those who were mildly impaired.26 The current study confirms that the MOST has a significantly higher correlation with dementia severity than does the MMSE, as well as significantly higher correlations with longer standardized memory tests.

MOST and MMSE test-taking times differ, too. Time constraints are an important consideration in a medical office. The average time to administer the MOST to cognitively impaired patients (a group that performs more slowly than patients with normal cognition) is 4.5 minutes.26 The MMSE, by comparison, takes 10 minutes or more.33,34

Cognition is as measurable as body mass index, blood pressure, height, weight, and level of depression, all of which are also mandated in the Annual Wellness Visit. Numbers are easily recorded and compared, whereas impressions, or even a positive (>2) AD8 score, are less precise. Provider observation, even when informed by family report, is not as sound a basis for risk analysis, treatment planning, or future monitoring as an objective measure. Because several current screening tests for dementia have known reliability over time,26,33,35 the physician can periodically repeat such a test to assess treatment response and ongoing risk.

Is there a place for a subjective rating scale? Possibly. A waiting room tool such as the VACS, combined with an objective test, may alert the clinician to a patient with anosognosia (a lack of awareness of one’s own deficits). These patients require different management strategies if treatment is to be effective. The care team faces an even greater challenge if an informant shares the patient’s lack of awareness. Conversely, a favorable cognitive screening result and a high informant rating would give all parties assurance that cognition is normal.

Study limitations. The primary limitation of this study is that it was conducted in a tertiary memory center, where most patients have either suspected or demonstrated cognitive deficits. The relative proportion of normal to impaired patients is, consequently, different from that found in the primary care office, in which about 15% would have mild cognitive impairment36 and a similar percentage would have dementia.37 A replication of this study in such an environment would be helpful. On the other hand, without a companion neuropsychological evaluation as a criterion, the accuracy of self- or informant-report is more difficult to measure. As noted above, 20% of elders volunteering for a study on “normal cognitive functioning” showed significant objective deficits.31

Assessment of cognitive impairment in the primary care physician’s office is uniquely challenging. Physicians are taught to respond to the complaints of patients, but when a patient has dementia, that approach does not work. Family reports are helpful, but not sufficiently accurate. The recent Alzheimer’s Association report37 notes that “Medicare’s new Annual Wellness Visit includes assessment for possible cognitive impairment,” but also points out that “many existing barriers affect the ability or willingness of individuals and their caregivers to recognize cognitive impairment and to discuss it with their physician.” We agree, and we believe a sound approach to this problem is for primary care physicians to consistently use an objective tool to measure cognitive functioning in the Annual Wellness Visit and in follow-up visits. A score that reflects the current level of cognition, provides diagnostic information, and tracks change in cognitive status over time will optimize this unique opportunity for earlier detection and potentially earlier treatment of dementia.


CORRESPONDENCE
Mitchell Clionsky, PhD, ABPP (CN), Clionsky Neuro Systems, 155 Maple Street, Suite 203, Springfield, MA 01105; [email protected]

References

1. 111th US Congress. Patient Protection and Affordable Care Act. HR 3590, section 4103. Medicare coverage of annual wellness visit: providing a personalized prevention plan. Available at: http://thomas.loc.gov/cgi-bin/bdquery/z?d111:H.R.3590:#. Accessed February 19, 2011.

2. Department of Health and Human Services, Centers for Medicare and Medicaid Services. Amendment to HR 3590, section 4103, subpart B §410.15 (v). Fed Regist. November 29, 2010;75:73613-73614.

3. Boustani M, Peterson B, Hanson L, et al. Screening for dementia in primary care: a summary of the evidence for the US Preventive Services Task Force. Ann Intern Med. 2003;138:927-937.

4. Valcour VG, Masaki KH, Curb JD, et al. The detection of dementia in the primary care setting. Arch Intern Med. 2000;160:2964-2968.

5. Ganguli M, Rodriguez E, Mulsant B, et al. Detection and management of cognitive impairment in primary care. J Am Geriatr Soc. 2004;52:1668-1675.

6. Chodosh J, Petitti DB, Elliot M, et al. Physician recognition of cognitive impairment: evaluating the need for improvement. J Am Geriatr Soc. 2004;52:1051-1059.

7. Boise L, Neal MB, Kaye J. Dementia assessment in primary care: results from a study in three managed care systems. J Gerontol A Biol Sci Med Sci. 2004;59:M621-M626.

8. Callahan C, Hendrie H, Tierney W. Documentation and evaluation of cognitive impairment in elderly primary care patients. Ann Intern Med. 1995;122:422-429.

9. Boise L, Camicioli R, Morgan DL, et al. Diagnosing dementia: perspectives of primary care physicians. Gerontologist. 1999;39:457-464.

10. Tai-Seale M, McGuire TG, Zhang W. Time allocation in primary care office visits. Health Serv Res. 2007;42:1871-1894.

11. Boise L, Eckstrom E, Fagnan L. The Rural Older Adult Memory (ROAM) study: a practice-based intervention to improve dementia screening and diagnosis. J Am Board Fam Med. 2010;23:486-498.

12. Brown J, Pengas G, Dawson K, et al. Self administered cognitive screening test for detection of Alzheimer’s disease: cross sectional study. BMJ. 2009;338:b2030.

13. Solomon P, Hirschoff A, Kelly B, et al. A 7 minute neurocognitive screening battery highly sensitive to Alzheimer’s disease. Arch Neurol. 1998;55:349-355.

14. Borson S, Scanlon J, Brush M, et al. The Mini-Cog: a cognitive vital signs measure for dementia screening. Int J Geriatr Psychiatry. 2000;15:1021-1027.

15. Sevush S, Leve N. Denial of memory deficit in Alzheimer’s disease. Am J Psychiatry. 1993;150:748-751.

16. Lehmann S, Black B, Shore A, et al. Living alone with dementia: lack of awareness adds to functional and cognitive vulnerabilities. Int Psychogeriatr. 2010;22:778-784.

17. Partnership to Fight Chronic Disease. Letter submitted via Internet to Donald Berwick, MD, Administrator; Centers for Medicare and Medicaid Services. Available at: http://www.google.com/url?sa=t&source=web&cd=3&ved=0CCsQFjAC&url=https%3A%2F%2Fwww.thenationalcouncil.org%2Fgalleries%2Fpolicy-file%2FMedicare%2520Wellness%2520visit%2520-%2520final.pdf&ei=W2x_ToCRMuzTiAL1xIi7Aw&usg=AFQjCNFPWOe8s5xD117o0zfwOpZ69rskAw. Accessed February 19, 2011.

18. Galvin JE, Roe CM, Powlishta KK, et al. The AD8, a brief informant interview to detect dementia. Neurology. 2005;65:559-561.

19. Marsh-Richard D, Hatzis E, Mathias C, et al. Adaptive visual analog scales (AVAS): a modifiable software program for the creation, administration, and scoring of visual analog scales. Behav Res Methods. 2009;41:99-106.

20. Keller S, Bann C, Dodd S, et al. Validity of the Brief Pain Inventory for use in documenting the outcomes of patients with noncancer pain. Clin J Pain. 2004;20:309-318.

21. LaStayo P, Larsen S, Smith S, et al. Feasibility and efficacy of eccentric exercise with older cancer survivors. J Geriatr Phys Ther. 2010;33:135-140.

22. Zisapel N, Tarrasch R, Laudon M. A comparison of visual analog scale and categorical ratings in assessing patients’ estimate of sleep quality. In: Lader MH, Cardinali DP, Pandi-Perumal SR, eds. Sleep and Sleep Disorders. New York, NY: Springer Science+Business Media; 2006:220–224.

23. Kindler C, Harms C, Amsler F, et al. The visual analog scale allows effective measurement of preoperative anxiety and detection of patients’ anesthetic concerns. Anesth Analg. 2000;90:706-712.

24. Bruce B, Fries JF. The Stanford Health Assessment Questionnaire: dimensions and practical applications. Health Qual Life Outcomes. 2003;1:20.

25. Folstein MF, Folstein SE, McHugh PR. “Mini-Mental State”: a practical method for grading cognitive states of patients for the clinician. J Psychiatr Res. 1975;12:189-198.

26. Clionsky M, Clionsky E. Development and validation of the Memory Orientation Screening Test (MOST™): a better screening test for dementia. Am J Alzheimers Dis Other Demen. 2010;25:650-656.

27. Wechsler D. Wechsler Memory Scale–Revised. New York, NY: Psychological Corporation; 1987.

28. Sheikh JI, Yesavage JA. Geriatric Depression Scale (GDS): Recent evidence and development of a shorter version. Clin Gerontol. 1986;5:165-173.

29. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders. 4th ed. Washington, DC: American Psychiatric Association; 1994.

30. Morris JC. The Clinical Dementia Rating Scale (CDR): current version and scoring rules. Neurology. 1993;43:2412-2414.

31. NIH Clinical Trials Registry. Further validation of the Memory Orientation Screening Test (MOST): a 5-minute screening test for dementia in primary care practice. Available at: http://clinicaltrials.gov. Identifier NCT01057602. Last updated February 7, 2010. Accessed March 4, 2011.

32. Bradford A, Kunik M, Schulz P, et al. Missed and delayed diagnosis of dementia in primary care: prevalence and contributing factors. Alzheimer Dis Assoc Disord. 2009;23:306-314.

33. Pezzotti P, Scalmana S, Mastromattei A, et al. The accuracy of the MMSE in detecting cognitive impairment when administered by general practitioners: a prospective observational study. BMC Fam Pract. 2008;9:29.

34. Lorentz L. Primary Care Tools for Clinicians: A Compendium of Forms, Questionnaires, and Rating Scales for Everyday Practice. St. Louis, Mo: Elsevier Mosby; 2005.

35. Solomon P, Pendlebury W. Recognition of Alzheimer’s disease: the 7 minute screen. Fam Med. 1998;30:265-271.

36. Petersen RC. Mild cognitive impairment. N Engl J Med. 2011;364:2227-2234.

37. Alzheimer’s Association. 2011 Alzheimer’s disease facts and figures. Alzheimers Dement. 2011;7:208-244.
