Journal of Aging Research
Volume 2017, Article ID 8514582, 6 pages
https://doi.org/10.1155/2017/8514582
Research Article

Measuring Fluid Intelligence in Healthy Older Adults

Mohammed K. Shakeel and Vina M. Goghari

1Department of Psychology, University of Calgary, 2500 University Drive NW, Calgary, AB, Canada T2N 1N4
2Department of Psychology, University of Toronto, 1265 Military Trail, Toronto, ON, Canada M1C 1A4

Correspondence should be addressed to Mohammed K. Shakeel; mohammed.kalathil@ucalgary.ca

Received 25 October 2016; Revised 30 November 2016; Accepted 12 January 2017; Published 30 January 2017

Academic Editor: Elke Bromberg

Copyright © 2017 Mohammed K. Shakeel and Vina M. Goghari. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The present study evaluated subjective and objective cognitive measures as predictors of fluid intelligence in healthy older adults. We hypothesized that objective cognitive measures would predict fluid intelligence to a greater degree than self-reported cognitive functioning. Ninety-three healthy older (>65 years old) community-dwelling adults participated. Raven’s Advanced Progressive Matrices (RAPM) were used to measure fluid intelligence, Digit Span Sequencing (DSS) was used to measure working memory, Trail Making Test (TMT) was used to measure cognitive flexibility, Design Fluency Test (DFT) was used to measure creativity, and Tower Test (TT) was used to measure planning. The Cognitive Failures Questionnaire (CFQ) was used to measure subjective perceptions of cognitive functioning. RAPM was correlated with DSS, TT, and DFT. When CFQ was the only predictor, the regression model predicting fluid intelligence was not significant. When DSS, TMT, DFT, and TT were included in the model, there was a significant change in the model and the final model was also significant, with DFT as the only significant predictor. The model accounted for approximately 20% of the variability in fluid intelligence. Our findings suggest that the most reliable means of assessing fluid intelligence is to assess it directly.

1. Introduction

There has been considerable interest in recent years in the use of subjective reports of cognitive functioning to predict the likelihood that a person will develop dementia [1]. However, the findings have been equivocal [2]. This may be in part because people are poor judges of their own mental functions [3], and this may be especially true of those who already have poor cognitive functioning [4]. This view has important implications for the use of subjective reports in dementias, which are characterized by cognitive, as well as metacognitive, deficits [5, 6]. There is a lack of empirical studies on the usefulness of making inferences about complex mental functions by assessing other, related mental functions (subjectively or objectively). This question also has important implications for diagnosis, because it is often not feasible to administer time-consuming tests for complex functions (like intelligence) in a clinical setting, and such tests may also require specific training in administration and interpretation. Although widely used in practice, asking patients about their own mental functions may not be a fruitful approach [7, 8].

In the present study, we aimed to investigate, in healthy older adults, whether subjective and objective measures predicted complex cognitive functioning, namely, fluid intelligence. Fluid intelligence involves reasoning and problem solving for problems to which familiar solutions are not available. Fluid intelligence tends to decline with age [9] and is impaired in dementia [10]. Hence, fluid intelligence serves as an ideal candidate to investigate the usefulness of subjective and objective predictors for a complex cognitive function. However, as with many complex functions, it is not possible to reliably assess fluid intelligence itself using subjective reports. Therefore, we used the Cognitive Failures Questionnaire, which requires the participant to answer questions that assess memory [11], attention [11], and executive functioning, all of which are related to fluid intelligence [12–14].

We also wanted to assess how well subjective reports of cognitive functioning perform compared to objective measures like working memory [15], cognitive flexibility [16], creativity [17], and planning [18], which are related to fluid intelligence [16, 19–22]. However, there is no consensus on the strength of the association between fluid intelligence and these variables (see, e.g., [19, 20, 23–25]), and it also remains to be determined whether the association holds for older adults as it does for other age groups.

Furthermore, for this study, we specifically chose objective measures that are neither time-consuming nor difficult to administer in a clinical setting. To avoid redundancy, we also ensured that the subjective and objective measures assessed nonoverlapping constructs related to fluid intelligence. We hypothesized that objective measures of related cognitive processes would predict fluid intelligence better than subjective report of cognition.

2. Materials and Methods

2.1. Participants

Sociodemographic information is presented in Table 1. The sample consisted of healthy older community-dwelling adults aged between 65 and 86 years, recruited from Calgary, Alberta. All participants were proficient in English. To reduce the effects of other factors that could affect cognitive functioning or the ability to adequately perform the tasks, participants were excluded if they had any of the following: a history of head trauma, encephalitis, neurological illness, dementia, or altered consciousness; recent (past 3 months) use of benzodiazepines or illicit drugs; a current visual, auditory, or motor impairment; cardiovascular conditions or breathing problems; pathologies associated with cognitive impairment, such as stroke, Parkinson’s disease, intracranial hemorrhage, tumors, and normal pressure hydrocephalus; or a score of less than 27 on the Mini-Mental State Examination [26]. Informed written consent was obtained from all participants. The study was approved by the University of Calgary Conjoint Faculties Research Ethics Board and is in accordance with the ethical standards of the Declaration of Helsinki [27].

Table 1: Sociodemographic information (N = 93).
2.2. Measures

All data was collected in a single session held at the University of Calgary.

2.2.1. Raven’s Advanced Progressive Matrices (RAPM) [28]

Raven’s Advanced Progressive Matrices were used as a measure of fluid intelligence. The task requires participants to examine a series of images and select one out of 8 possible images to complete the pattern. The test has 36 items of progressively increasing difficulty. The total correct score obtained was used for the present analysis.

2.2.2. Digit Span Sequencing (DSS)

Digit Span Sequencing was used to measure working memory [29]. The test is part of the Wechsler Adult Intelligence Scale-IV (WAIS-IV). In this task, the participant has to mentally rearrange a series of verbally presented digits and recall them in sequential order. The total raw score was used for the present analysis.

2.2.3. Trail Making Test (TMT)

Trail Making Test Condition 4 was used as a measure of cognitive flexibility. Trail Making Test is part of the Delis-Kaplan Executive Function System (D-KEFS) [30]. For this task, the participant has to connect circles as quickly as possible while sequentially switching between circles which contain numbers and letters. The total raw score was used for the present analysis.

2.2.4. Design Fluency Test (DFT)

Design Fluency Test Condition 3 was used as a measure of creativity. It also measures problem solving ability and inhibition. It is part of the D-KEFS [30]. In this task the participant is presented with squares containing 10 dots, 5 empty and 5 filled in. The participant has to draw designs by connecting dots in the square while constantly switching between empty and filled dots. The participant is also asked to generate unique designs for each square without repeating any of the designs. The total raw score was used for the present analysis.

2.2.5. Tower Test (TT)

Tower Test was used as a measure of planning. Tower Test is part of the D-KEFS [30]. In this test, the participant has to move 5 colored disks of different sizes across 3 pegs to match a position shown by the investigator. The participant is asked to complete the task with as few moves as possible, moving only one disk at a time and without placing a larger disk over a smaller disk. The task gets progressively more difficult as the trials increase. Completing the trial in fewer moves results in higher achievement scores (as long as the task is completed within the time limit; for details, see [30]). The total achievement score was used for the present analysis.
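The move rules described above (one disk at a time, never a larger disk on top of a smaller one) can be captured in a few lines of code. The sketch below is a hypothetical illustration of the rule set only, not the D-KEFS administration or achievement scoring; the peg and disk representations are our own.

```python
def legal_move(pegs, src, dst):
    """Check one Tower move: take the top disk of pegs[src] and place it
    on pegs[dst]. Disks are integers (a larger number means a larger
    disk); each peg is a list ordered bottom to top. The move is legal
    only if the source peg is non-empty and the moved disk is smaller
    than the disk it would land on.
    """
    if not pegs[src]:
        return False
    return not pegs[dst] or pegs[src][-1] < pegs[dst][-1]

def apply_moves(pegs, moves):
    """Apply a sequence of (src, dst) moves, one disk at a time,
    raising ValueError on any rule violation."""
    pegs = [list(p) for p in pegs]
    for src, dst in moves:
        if not legal_move(pegs, src, dst):
            raise ValueError(f"illegal move {src} -> {dst}")
        pegs[dst].append(pegs[src].pop())
    return pegs

# Two disks (2 = larger) start on the first of three pegs; the classic
# three-move solution transfers the tower to the last peg.
start = [[2, 1], [], []]
end = apply_moves(start, [(0, 1), (0, 2), (1, 2)])
print(end)  # [[], [], [2, 1]]
```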

2.2.6. Cognitive Failures Questionnaire (CFQ) [31]

Subjective evaluation of cognitive function in everyday life was assessed using the Cognitive Failures Questionnaire. Cognitive Failures Questionnaire is a 25-item self-report measure that evaluates difficulties in attention, memory, distractibility, and executive functions. The questionnaire has good validity and reliability [31]. An analysis of data from the Royal Navy showed that Cognitive Failures Questionnaire scores are also correlated with real world outcomes like accident proneness, human error, and psychological strain [32]. The participant has to read sentences and indicate how often in the past 6 months they have had any of the mentioned experiences. Examples of questions are “Do you leave important letters unanswered for days?”; “Do you fail to see what you want in a supermarket (although it is there)?”; “Do you start doing one thing at home and get distracted into doing something else (unintentionally)?”; “Do you say something and realize afterwards that it might be taken as insulting?”; “Do you find you cannot think of anything to say?”; and so forth. Higher scores on Cognitive Failures Questionnaire indicate more problems. The test showed good internal consistency in our sample (Spearman-Brown Coefficient for split-half reliability was .91). The total score was used for the present analysis.
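The split-half reliability with Spearman-Brown correction reported above can be computed as follows. This is a generic sketch on simulated data: an odd/even item split is assumed for illustration, and all response values are invented, not the study's data.

```python
import numpy as np

def split_half_reliability(item_scores):
    """Split-half reliability with the Spearman-Brown correction.

    item_scores: (n_participants, n_items) array of item responses.
    Items are split into odd- and even-numbered halves, the two half
    scores are correlated, and the Spearman-Brown prophecy formula
    r_sb = 2r / (1 + r) corrects the half-test correlation up to
    full-test length.
    """
    item_scores = np.asarray(item_scores, dtype=float)
    odd_half = item_scores[:, 0::2].sum(axis=1)
    even_half = item_scores[:, 1::2].sum(axis=1)
    r = np.corrcoef(odd_half, even_half)[0, 1]
    return 2 * r / (1 + r)

# Simulated example: 93 participants answering 25 CFQ-like items (0-4 scale),
# driven by one latent tendency plus item noise (values are invented).
rng = np.random.default_rng(0)
trait = rng.normal(size=(93, 1))
items = np.clip(np.round(2 + trait + rng.normal(scale=0.8, size=(93, 25))), 0, 4)
reliability = split_half_reliability(items)
print(round(reliability, 2))
```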

2.3. Statistical Analysis

Complete data was available for 92 of the 93 participants. The data was analyzed using correlational and hierarchical multiple regression analyses. Bayesian analysis was additionally used to evaluate the correlation between the Cognitive Failures Questionnaire and Raven’s Advanced Progressive Matrices. Five univariate outlying values (z-scores > 3.3) were excluded from the variables Cognitive Failures Questionnaire, Digit Span Sequencing, Design Fluency Test, and Trail Making Test; because individual values rather than whole cases were excluded, the final sample consisted of 93 participants. All assumptions for linear regression analysis were met.
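The univariate outlier screen at |z| > 3.3 (a conventional cutoff) can be sketched as below: outlying values are flagged and set to missing rather than dropping whole participants. The scores are invented for illustration.

```python
import numpy as np

def flag_outliers(x, cutoff=3.3):
    """Boolean mask of univariate outliers: |z| > cutoff (here 3.3)."""
    x = np.asarray(x, dtype=float)
    z = (x - np.nanmean(x)) / np.nanstd(x, ddof=1)
    return np.abs(z) > cutoff

# Invented scores for one variable: 30 typical values plus one extreme value.
rng = np.random.default_rng(1)
scores = np.append(rng.normal(loc=50, scale=5, size=30), 150.0)

mask = flag_outliers(scores)
cleaned = np.where(mask, np.nan, scores)  # exclude values, not participants
print(int(mask.sum()))  # number of flagged values
```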

3. Results

To examine the interrelationships among variables, Pearson’s bivariate correlation analysis was conducted. Raven’s Advanced Progressive Matrices was significantly correlated with Digit Span Sequencing, Trail Making Test, and Design Fluency Test, but not with the Cognitive Failures Questionnaire or the Tower Test. Digit Span Sequencing was significantly correlated with Trail Making Test and Tower Test. Trail Making Test was significantly correlated with Design Fluency Test and Tower Test. Design Fluency Test was significantly correlated with Tower Test (Tables 2 and 3).

Table 2: Correlations among variables (N = 93).
Table 3: Covariance among variables (N = 92).
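The bivariate correlations summarized above can be reproduced with standard tools. The sketch below uses simulated scores in which three objective measures share a latent ability factor and the self-report is generated independently; all data and effect sizes are illustrative, not the study's values.

```python
import numpy as np
from scipy import stats

def correlation_table(data):
    """Pairwise Pearson r and two-tailed p-values for a dict of
    equal-length score arrays, keyed by (measure_a, measure_b)."""
    names = list(data)
    table = {}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            r, p = stats.pearsonr(data[a], data[b])
            table[(a, b)] = (round(float(r), 2), round(float(p), 3))
    return table

# Simulated scores for 93 participants (measure names only echo the text).
rng = np.random.default_rng(2)
g = rng.normal(size=93)  # shared latent ability factor
data = {
    "RAPM": g + rng.normal(size=93),
    "DSS": g + rng.normal(size=93),
    "DFT": g + rng.normal(size=93),
    "CFQ": rng.normal(size=93),  # self-report, generated independently
}
table = correlation_table(data)
print(table[("RAPM", "DSS")])
```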

The Cognitive Failures Questionnaire was not correlated with fluid intelligence or with any other measure. The correlation between the Cognitive Failures Questionnaire and fluid intelligence was also examined by estimating a Bayes factor. The correlation had a JZS Bayes factor of 0.19, indicating that the data were over 5 times more likely to occur under a model in which the Cognitive Failures Questionnaire was not related to Raven’s Advanced Progressive Matrices than under a model in which they were related. No other significant correlations were present.
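A JZS Bayes factor for a correlation can be computed by numerical integration, assuming the default Bayesian correlation test of Wetzels and Wagenmakers (2012). The r value below is invented for illustration, not the observed CFQ-RAPM correlation.

```python
import numpy as np
from scipy import integrate, special

def jzs_correlation_bf10(r, n):
    """Default JZS Bayes factor BF10 for a Pearson correlation r with
    sample size n (Wetzels & Wagenmakers, 2012). BF10 < 1 means the
    data favor the null hypothesis of no correlation; 1/BF10 gives how
    many times more likely the data are under the null.
    """
    def log_integrand(g):
        # Computed in log space to avoid overflow for moderate n.
        return ((n - 2) / 2 * np.log1p(g)
                - (n - 1) / 2 * np.log1p((1 - r ** 2) * g)
                - 1.5 * np.log(g)
                - n / (2 * g))

    integral, _ = integrate.quad(lambda g: np.exp(log_integrand(g)), 0, np.inf)
    return float(np.sqrt(n / 2) / special.gamma(0.5) * integral)

# Illustrative: a near-zero correlation in a sample of 92 yields BF10 < 1,
# i.e., evidence for the null, mirroring the interpretation in the text.
bf = jzs_correlation_bf10(r=0.05, n=92)
print(round(bf, 2), round(1 / bf, 1))
```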

3.1. Regressions

Means and standard deviations for all variables are presented in Table 4. A hierarchical linear regression with Raven’s Advanced Progressive Matrices as the dependent variable and the Cognitive Failures Questionnaire as the independent variable in block 1 showed that the Cognitive Failures Questionnaire did not significantly predict the score on Raven’s Advanced Progressive Matrices (F = 0.49). Including Digit Span Sequencing, Trail Making Test, Design Fluency Test, and Tower Test in block 2 resulted in a significant final model (F = 4.48) which accounted for over 20% of the variability in Raven’s Advanced Progressive Matrices (R² = .21; adjusted R² = .16). However, the only significant predictor in the final model was the Design Fluency Test (B = 0.33, β = .29, t = 2.68) (Table 5).

Table 4: Mean (standard deviation) of measures (N = 93).
Table 5: Hierarchical regression analysis of Digit Span Sequencing, Trail Making Test, Design Fluency Test, and Tower Test on Raven’s Advanced Progressive Matrices after controlling for Cognitive Failures Questionnaire (N = 92).
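The hierarchical regression described above (block 1: CFQ alone; block 2: the four objective measures added) amounts to comparing R² between nested ordinary least squares models and testing the change with an F statistic. A sketch on simulated data, with invented coefficients and measure stand-ins:

```python
import numpy as np

def r_squared(y, X):
    """R^2 from an ordinary least squares fit of y on X (intercept added)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    tss = (y - y.mean()) @ (y - y.mean())
    return 1.0 - (resid @ resid) / tss

def r2_change(y, X_block1, X_full):
    """R^2 for each block and the F statistic for the block-2 change."""
    n = len(y)
    k1, k2 = X_block1.shape[1], X_full.shape[1]
    r2_1, r2_2 = r_squared(y, X_block1), r_squared(y, X_full)
    df1, df2 = k2 - k1, n - k2 - 1
    f = ((r2_2 - r2_1) / df1) / ((1.0 - r2_2) / df2)
    return r2_1, r2_2, f

# Simulated data loosely mirroring the design: one self-report predictor in
# block 1, four objective measures added in block 2 (all values invented).
rng = np.random.default_rng(3)
n = 92
cfq = rng.normal(size=(n, 1))
objective = rng.normal(size=(n, 4))  # DSS, TMT, DFT, TT stand-ins
rapm = objective @ np.array([0.1, 0.1, 0.4, 0.1]) + rng.normal(size=n)

r2_block1, r2_final, f_change = r2_change(rapm, cfq, np.hstack([cfq, objective]))
print(round(r2_block1, 3), round(r2_final, 3), round(f_change, 2))
```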

A follow-up exploratory regression analysis was conducted to determine whether age was a potential contributing factor. A hierarchical regression analysis with Raven’s Advanced Progressive Matrices as the dependent variable and participant’s age as the independent variable in block 1 showed that age did not significantly predict the score on Raven’s Advanced Progressive Matrices (F = 3.18). When the cognitive variables (Cognitive Failures Questionnaire, Digit Span Sequencing, Trail Making Test, Design Fluency Test, and Tower Test) were included in block 2, the results remained similar to the first regression analysis, with the cognitive variables resulting in a significant final model (F = 3.78) for Raven’s Advanced Progressive Matrices.

4. Discussion

These results demonstrated that measures of working memory, cognitive flexibility, and creativity were significantly associated with fluid intelligence, but planning and subjective report of cognitive functioning were not. We also found that creativity was the only significant predictor of fluid intelligence in the regression model, which included subjective report of cognitive functioning, as well as working memory, cognitive flexibility, and planning as predictors.

Our finding that subjective reports are poor predictors of objective cognitive functioning is in agreement with previous studies. For instance, one meta-analysis [2] reported that, in cross-sectional community settings, people who report subjective memory impairments only have a 20% chance of actually suffering from dementia, while 60% of people with dementia do not report any memory problems. This discrepancy between subjective judgements and the actual status of cognitive functions has also been demonstrated in basic cognition literature [3, 4]. Our findings extend this view to inferring the level of fluid intelligence based on subjective report of cognitive functions like attention, memory, distractibility, and executive functions. Subjective report of cognitive functioning was not associated with any of the objective measures either. This may be because the Cognitive Failures Questionnaire primarily assesses memory, attention, and executive functions, while the objective measures assess working memory (DSS), cognitive flexibility (TMT), creativity (DFT), and planning (TT), and there may be little overlap in the cognitive functions that the subjective and objective measures assess.

Our results showed that objective measures of cognitive functioning (working memory, cognitive flexibility, and creativity) are significantly associated with fluid intelligence. However, the final regression model only accounted for around 20% of the variability in fluid intelligence, and the test of creativity was the only significant predictor in the final model. By most standards, a linear regression model accounting for 20% variability is acceptable; however, the purpose of the present analysis was to determine whether the predictors could be used as substitutes for assessing fluid intelligence directly, hence saving time and effort in clinical settings. With that goal in mind, five tasks accounting for 20% of the variability are unfortunately not practically useful. Moreover, the test of creativity was the only significant predictor.

There is no consensus in the literature on the nature of the relationship between creativity and fluid intelligence; while some studies have found them to be strongly related [19, 20], others have only reported a weak association [23] or no relation [24]. Our findings provide partial support to the view that creativity and fluid intelligence are positively related. However, it must be noted that creativity only made a modest contribution to predicting fluid intelligence in our analysis and hence further studies are required to fully understand the strength and nature of the relation between these constructs.

There have been studies that have shown fluid intelligence to be associated with working memory [21], cognitive flexibility [16], and planning [22], although the strength of the association between fluid intelligence and other cognitive functions has been disputed (see, e.g., [25]). Unlike studies with mostly younger participants, we did not find working memory, cognitive flexibility, or planning to be significant individual predictors of fluid intelligence in our sample of healthy older adults. However, we did find an overall combined association of these variables with fluid intelligence.

One limitation of our study was including only a single self-report measure while multiple objective measures were used. Self-report measures of cognitive functioning in everyday life tend to assess several cognitive domains simultaneously and there is considerable overlap of constructs assessed by the Cognitive Failures Questionnaire with constructs assessed by other measures like Perceived Deficits Questionnaire [33], Patient-Reported Outcomes in Cognitive Impairment [34], and so forth. Hence, the use of multiple self-report measures of cognition in everyday life would be redundant. To control for multiple comparisons, individual predictors were analyzed only in the presence of a significant final model. However, we acknowledge that the use of a single self-report measure is a limitation of this study.

5. Conclusion

The aim of the present study was to investigate whether subjective or objective measures that are associated with fluid intelligence can be used as substitutes for measuring it directly. Given that subjective reports did not predict fluid intelligence and objective measures did not account for a substantial portion of its variability, we conclude that neither the subjective nor the objective measures used in this study can substitute for measuring fluid intelligence directly, at least in older adults. This does not rule out the possibility that future studies may identify other subjective or objective measures that can reliably serve as proxies, but our findings suggest that the most reliable way to assess complex cognitive functions is to assess them directly. We further recommend the use of the shorter version of Raven’s Advanced Progressive Matrices, which takes around half the time of the full version yet adequately predicts scores on the full version [35].

Competing Interests

The authors declare that there are no conflicts of interest.

Acknowledgments

This work was supported by a Natural Sciences and Engineering Research Council of Canada Discovery Grant. Dr. Goghari was supported by a Canadian Institutes of Health Research New Investigator Award.

References

  1. B. Schmand, C. Jonker, C. Hooijer, and J. Lindeboom, “Subjective memory complaints may announce dementia,” Neurology, vol. 46, no. 1, pp. 121–125, 1996.
  2. A. J. Mitchell, “The clinical significance of subjective memory complaints in the diagnosis of mild cognitive impairment and dementia: a meta-analysis,” International Journal of Geriatric Psychiatry, vol. 23, no. 11, pp. 1191–1202, 2008.
  3. R. E. Nisbett and T. D. Wilson, “Telling more than we can know: verbal reports on mental processes,” Psychological Review, vol. 84, no. 3, pp. 231–259, 1977.
  4. J. Kruger and D. Dunning, “Unskilled and unaware of it: how difficulties in recognizing one's own incompetence lead to inflated self-assessments,” Journal of Personality and Social Psychology, vol. 77, no. 6, pp. 1121–1134, 1999.
  5. R. J. Perry and J. R. Hodges, “Attention and executive deficits in Alzheimer's disease. A critical review,” Brain, vol. 122, no. 3, pp. 383–404, 1999.
  6. C. Derouesne, S. Thibault, S. Lagha-Pierucci, V. Baudouin-Madec, D. Ancri, and L. Lacomblez, “Decreased awareness of cognitive deficits in patients with mild dementia of the Alzheimer type,” International Journal of Geriatric Psychiatry, vol. 14, no. 12, pp. 1019–1030, 1999.
  7. E. P. Feher, G. J. Larrabee, A. Sudilovsky, and T. H. Crook, “Memory self-report in Alzheimer's disease and in age-associated memory impairment,” Journal of Geriatric Psychiatry and Neurology, vol. 7, no. 1, pp. 58–65, 1994.
  8. D. B. Carr, S. Gray, J. Baty, and J. C. Morris, “The value of informant versus individual's complaints of memory impairment in early dementia,” Neurology, vol. 55, no. 11, pp. 1724–1726, 2000.
  9. A. S. Kaufman and J. L. Horn, “Age changes on tests of fluid and crystallized ability for women and men on the Kaufman Adolescent and Adult Intelligence Test (KAIT) at ages 17–94 years,” Archives of Clinical Neuropsychology, vol. 11, no. 2, pp. 97–121, 1996.
  10. C. L. Grady, J. V. Haxby, B. Horwitz, G. Berg, and S. I. Rapoport, “Neuropsychological and cerebral metabolic function in early vs late onset dementia of the Alzheimer type,” Neuropsychologia, vol. 25, no. 5, pp. 807–816, 1987.
  11. J. C. Wallace, S. J. Kass, and C. J. Stanny, “The cognitive failures questionnaire revisited: dimensions and correlates,” Journal of General Psychology, vol. 129, no. 3, pp. 238–256, 2002.
  12. J. A. Mogle, B. J. Lovett, R. S. Stawski, and M. J. Sliwinski, “What's so special about working memory? An examination of the relationships among working memory, secondary memory, and fluid intelligence: research report,” Psychological Science, vol. 19, no. 11, pp. 1071–1077, 2008.
  13. E. Hunt, J. W. Pellegrino, and P. L. Yee, “Individual differences in attention,” Psychology of Learning and Motivation, vol. 24, pp. 285–310, 1989.
  14. N. P. Friedman, A. Miyake, R. P. Corley, S. E. Young, J. C. DeFries, and J. K. Hewitt, “Not all executive functions are related to intelligence,” Psychological Science, vol. 17, no. 2, pp. 172–179, 2006.
  15. M. J. Kane, D. Z. Hambrick, and A. R. A. Conway, “Working memory capacity and fluid intelligence are strongly related constructs: comment on Ackerman, Beier, and Boyle (2005),” Psychological Bulletin, vol. 131, no. 1, pp. 66–71, 2005.
  16. L. S. Colzato, N. C. Van Wouwe, T. J. Lavender, and B. Hommel, “Intelligence and cognitive flexibility: fluid intelligence correlates with feature ‘unbinding’ across perception and action,” Psychonomic Bulletin & Review, vol. 13, no. 6, pp. 1043–1048, 2006.
  17. M. Batey, A. Furnham, and X. Safiullina, “Intelligence, general knowledge and personality as predictors of creativity,” Learning and Individual Differences, vol. 20, no. 5, pp. 532–535, 2010.
  18. N. A. Zook, D. B. Davalos, E. L. DeLosh, and H. P. Davis, “Working memory, inhibition, and fluid intelligence as predictors of performance on Tower of Hanoi and London tasks,” Brain and Cognition, vol. 56, no. 3, pp. 286–292, 2004.
  19. E. C. Nusbaum and P. J. Silvia, “Are intelligence and creativity really so different? Fluid intelligence, executive processes, and strategy use in divergent thinking,” Intelligence, vol. 39, no. 1, pp. 36–45, 2011.
  20. P. J. Silvia, “Another look at creativity and intelligence: exploring higher-order models and probable confounds,” Personality and Individual Differences, vol. 44, no. 4, pp. 1012–1021, 2008.
  21. A. R. A. Conway, N. Cowan, M. F. Bunting, D. J. Therriault, and S. R. B. Minkoff, “A latent variable analysis of working memory capacity, short-term memory capacity, processing speed, and general fluid intelligence,” Intelligence, vol. 30, no. 2, pp. 163–183, 2002.
  22. J. M. Unterrainer, B. Rahm, C. P. Kaller et al., “Planning abilities and the Tower of London: is this task measuring a discrete cognitive function?” Journal of Clinical and Experimental Neuropsychology, vol. 26, no. 6, pp. 846–856, 2004.
  23. K. H. Kim, “Can only intelligent people be creative? A meta-analysis,” Prufrock Journal, vol. 16, no. 2-3, pp. 57–66, 2005.
  24. A. Furnham and V. Bachtiar, “Personality and intelligence as predictors of creativity,” Personality and Individual Differences, vol. 45, no. 7, pp. 613–617, 2008.
  25. P. L. Ackerman, M. E. Beier, and M. O. Boyle, “Working memory and intelligence: the same or different constructs?” Psychological Bulletin, vol. 131, no. 1, pp. 30–60, 2005.
  26. M. F. Folstein, S. E. Folstein, and P. R. McHugh, “‘Mini-mental state’: a practical method for grading the cognitive state of patients for the clinician,” Journal of Psychiatric Research, vol. 12, no. 3, pp. 189–198, 1975.
  27. World Medical Association, “Declaration of Helsinki: recommendations guiding medical doctors in biomedical research involving human subjects,” adopted by the 18th World Medical Assembly, Helsinki, Finland, 1964; revised in Tokyo, 1975, and Venice, Italy, 1983.
  28. J. C. Raven, G. Foulds, and A. Forbes, Advanced Progressive Matrices: Sets I and II: Plan and Use of the Scale with a Report of Experimental Work, Lewis, London, UK, 1973.
  29. D. Wechsler, Wechsler Adult Intelligence Scale, Canadian Manual, Pearson, Toronto, Canada, 4th edition, 2008.
  30. D. C. Delis, E. Kaplan, and J. H. Kramer, Delis-Kaplan Executive Function System (D-KEFS), Psychological Corporation, New York, NY, USA, 2001.
  31. D. E. Broadbent, P. F. Cooper, P. FitzGerald, and K. R. Parkes, “The cognitive failures questionnaire (CFQ) and its correlates,” British Journal of Clinical Psychology, vol. 21, no. 1, pp. 1–16, 1982.
  32. A. J. Day, K. Brasher, and R. S. Bridger, “Accident proneness revisited: the role of psychological stress and cognitive failure,” Accident Analysis and Prevention, vol. 49, pp. 532–535, 2012.
  33. M. J. Sullivan, K. Edgley, and E. Dehoux, “A survey of multiple sclerosis: I. Perceived cognitive problems and compensatory strategy use,” Canadian Journal of Rehabilitation, 1990.
  34. L. Frank, J. A. Flynn, L. Kleinman et al., “Validation of a new symptom impact questionnaire for mild to moderate cognitive impairment,” International Psychogeriatrics, vol. 18, no. 1, pp. 135–149, 2006.
  35. R. Hamel and V. D. Schmittmann, “The 20-minute version as a predictor of the Raven Advanced Progressive Matrices Test,” Educational and Psychological Measurement, vol. 66, no. 6, pp. 1039–1046, 2006.