
An evidence-based method for examining and reporting cognitive processes in nutrition research

Published online by Cambridge University Press:  30 September 2014

Matthew P. Pase*
Affiliation:
Centre for Human Psychopharmacology, Swinburne University of Technology, PO Box 218, Hawthorn, Victoria, Australia 3122
Con Stough
Affiliation:
Centre for Human Psychopharmacology, Swinburne University of Technology, PO Box 218, Hawthorn, Victoria, Australia 3122
* Corresponding author: Dr Matthew Pase, email [email protected]

Abstract

Cognitive outcomes are frequently implemented as endpoints in nutrition research. To reduce the number of statistical comparisons, it is commonplace for nutrition researchers to combine cognitive test results into a smaller number of broad cognitive abilities. However, there is a clear lack of understanding and consensus as to how best to execute this practice. The present paper reviews contemporary models of human cognition and proposes a standardised, evidence-based method for grouping cognitive test data into broader cognitive abilities. Both Carroll's model of human cognitive ability and the Cattell–Horn–Carroll (CHC) model of intelligence provide empirically based taxonomies of human cognition. These models provide a cognitive ‘map’ that can be used to guide the handling and analysis of cognitive outcomes in nutrition research. Making use of a valid cognitive nomenclature can provide the field of clinical nutrition with a common cognitive language, enabling efficient comparisons of cognitive outcomes across studies. This will make it easier for researchers, policymakers and readers to interpret and compare cognitive outcomes for different interventions. Using an empirically derived cognitive nomenclature to guide the creation of cognitive composite scores will ensure that cognitive endpoints are theoretically valid and meaningful, increasing the generalisability of trial results to the general population. The present review also discusses how the CHC model of cognition can guide the synthesis of cognitive outcomes in systematic reviews and meta-analyses.

Type
Research Article
Copyright
Copyright © The Authors 2014 

Introduction

Cognition is an umbrella term referring collectively to mental processes such as the ability to pay attention, remember and recall information, perceive relationships, and think logically and abstractly. Many cognitive abilities decline with advancing age(1,2). Memory and processing speed appear particularly sensitive to the effects of age(2,3). The Whitehall II prospective cohort study recently reported that reasoning, memory and fluency all declined over a 10-year period, while vocabulary remained relatively stable(1). This decline in cognitive ability can have a negative impact on an individual's ability to live an independent and fulfilling life(4). As many Western populations are ageing rapidly, age-related cognitive decline carries significant societal consequences, such as increased healthcare expenditure, lower quality of life and reduced economic growth(5).

Many risk factors for cognitive decline appear modifiable. For example, many such risk factors overlap with those of CVD(6), where clinical nutrition can play a role in reducing risk(7–9). As a result, there has been substantial interest in the potential role of clinical nutrition in ameliorating or delaying cognitive decline and dementia. Studies are also frequently conducted to examine the effects of nutrition on cognitive processes in young healthy populations. Many students and professionals alike are interested in pharmacological means of enhancing their cognitive performance; coffee, a well-known cognitive enhancer, is one of the world's most traded commodities. Cognitive outcomes are thus being increasingly implemented as primary and secondary endpoints in nutrition research. Unfortunately, researchers, policymakers and journal editors alike can find the results of cognitive studies difficult to interpret given the complexity of cognition itself. This problem is compounded by the lack of consistency in the cognitive tasks used across studies as well as the lack of standardisation in the way that cognitive test data are analysed and reported(10,11).

Dangour & Allen(10) identified that in 2011 and 2012 there were ten trials in the American Journal of Clinical Nutrition, the premier journal in the field of clinical nutrition, examining cognition as a primary outcome. Twenty-nine different tests were used across the ten studies and no two studies used the same primary outcome(10). With this heterogeneity, it is hard for experts and general readers alike to fully understand the significance and implications of the reported results, especially with reference to previous studies.

In nutrition research, it is commonplace for clinical trial investigators to group cognitive test data into broader cognitive domains. This practice can be beneficial because it reduces the number of statistical comparisons, thus reducing the risk of a type I error. However, as recently noted, this process is often executed in an arbitrary or atheoretical manner(11). This means that cognitive outcomes frequently lack theoretical meaning while also differing substantially between clinical trials. The problem cannot be solved simply by combining cognitive test scores in a manner consistent with previous clinical trials: while this will standardise cognitive data across studies, it provides no assurance that the cognitive outcomes are theoretically appropriate or ecologically valid. When discussing the grouping of cognitive test data, Charles Spearman wrote in The Abilities of Man(12):

‘If the lines of agreement are to be arbitrary, then science becomes none the better for it; the value obtained can be no more significant than would be that got from any list made of desirable traits of body; it would at best be comparable with some average mark derived from an individual's height, weight, strength of grip, soundness of heart, capacity of lungs, opsonic index, and so forth ad lib. How can any such concoction of heterogeneous traits, bodily or mental, be taken seriously?’ (p. 65)

In the above statement, Spearman asserts that summating cognitive test scores based on arbitrary rules produces a result of little value. In order to create valid and standardised cognitive composite scores, a common evidence-based approach is needed to guide the grouping of cognitive test data. Adopting a theoretically driven, standardised approach will ensure that cognitive outcomes are not only consistent between studies but also theoretically meaningful. Fortunately, the field of psychometrics has significantly advanced our understanding of human cognition, and empirically derived cognitive taxonomies have been described. Such a nomenclature provides a ‘cognitive map’(13) that can be used in the field of clinical nutrition to guide the handling and reporting of cognitive outcomes.

The aim of the present paper is to review and describe current theories of cognitive ability and explain, with working examples, how such theories can guide the handling of cognitive outcomes in nutrition research. The present review aims to help the reader: (1) better understand the structure and nature of human cognition; (2) understand the problem of combining cognitive test data based on arbitrary rules; and (3) apply current cognitive theory to their own nutrition research in order to improve the reporting of cognitive outcomes. Embracing a valid cognitive nomenclature will introduce a common language to nutrition research(14), making it easier for readers, editors and reviewers to evaluate cognitive outcomes and make useful inferences and comparisons across studies.

A brief review of research on human cognition

Research on human cognition spans centuries, and a detailed review of every development is beyond the scope of the present paper. Instead, we draw attention to some of the key milestones that led to the Cattell–Horn–Carroll (CHC) model of cognition, described in more detail below.

Research on human cognitive ability emerged from the quantification of individual differences in the late 1800s. Sir Francis Galton (1822–1911) documented individual differences on tests of sensory discrimination(15). Although Galton suggested that such individual differences were due to differences in a general ability, it was Spearman who provided the first empirical evidence to support the existence of a general mental ability, which he labelled ‘g’(16). This ‘g’ factor was not simply the average of numerous test results but was instead derived from the correlations between them(16). The presence of a ‘g’ factor explained why individuals who were good at one mental test tended also to be good at other mental tests. Thus, according to Spearman, the structure of cognition involved an overarching general ability which dominated a group of more specific cognitive abilities, including but not limited to verbal, spatial, motor and memory abilities(17).

Raymond Cattell, who studied under Spearman, dismissed the existence of a single general mental ability in favour of two separate abilities. According to Cattell, two broad cognitive abilities termed fluid (gf) and crystallised (gc) intelligence dominate several more specific cognitive factors. This theory became known as Gf-Gc theory. Fluid intelligence was described by Cattell(18) as the ability to ‘discriminate and perceive relations between any fundamentals’ (p. 178). As implied by the word ‘fluid’, core characteristics included mental flexibility, problem solving and adaptation to the specific demands of any given mental challenge(19). Crystallised intelligence refers to acquired skills and knowledge, such as vocabulary and the ability to comprehend language. In the decades following the conception of the Gf-Gc model, Cattell and John Horn conducted factor analytic studies to verify Cattell's model. Based on empirical results, the Gf-Gc model was expanded to include the additional broad cognitive abilities of short-term acquisition and retrieval (short-term memory), fluency of retrieval from long-term storage (long-term memory), processing speed, visual processing, auditory processing and quantitative knowledge. The next significant contribution to the study of human cognition came from John B. Carroll in 1993(20).

Carroll's cognitive framework and the three-stratum theory of cognitive abilities

In 1993, Carroll published a seminal work titled Human Cognitive Abilities: A Survey of Factor-Analytic Studies(20). In this contribution he synthesised a lifetime of work, factor analysing more than 460 datasets spanning a wide range of cognitive tasks. The principal contribution of the book was to demonstrate empirically the structure of human cognition.

Through factor analytic investigation, Carroll identified ten broad cognitive domains reflecting ‘true’ cognitive abilities(20). Carroll then further developed his model and proposed a three-stratum theory of cognitive ability, which organised the discovered cognitive abilities into a hierarchical model (Fig. 1). Across the datasets, Carroll found compelling evidence for a general intelligence factor that dominated the other abilities; general intelligence therefore takes its place at stratum three. The second stratum includes eight broad cognitive abilities: fluid intelligence; crystallised intelligence; general memory and learning; broad visual perception; broad auditory perception; broad retrieval ability; broad cognitive speediness; and processing speed. Stratum one comprises sixty-nine narrow, well-defined abilities.

Fig. 1 The structure of human cognition as specified by Carroll(20). The following narrow abilities of the crystallised intelligence factor have been omitted owing to space: spelling ability, writing ability, foreign language proficiency and foreign language aptitude.

According to Carroll, it is thus possible to examine cognition at different levels of breadth. Cognitive test data can be combined to create an overarching general mental ability, eight broad cognitive abilities or sixty-nine well-defined abilities.
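To make these levels of breadth concrete, the sketch below represents the three strata as a nested mapping, with stratum III at the top, the eight stratum II broad abilities beneath it and a few illustrative stratum I narrow abilities under each. The narrow-ability labels shown are examples chosen for illustration, not an exhaustive or authoritative listing of Carroll's taxonomy.

```python
# A minimal sketch of Carroll's three-stratum hierarchy as a nested mapping.
# The narrow-ability examples are indicative only, not a complete taxonomy.
three_stratum_sketch = {
    "general intelligence (g)": {                      # stratum III
        "fluid intelligence": ["induction", "sequential reasoning"],
        "crystallised intelligence": ["verbal comprehension", "lexical knowledge"],
        "general memory and learning": ["memory span", "associative memory"],
        "broad visual perception": ["visualisation", "spatial relations"],
        "broad auditory perception": ["speech sound discrimination"],
        "broad retrieval ability": ["ideational fluency", "word fluency"],
        "broad cognitive speediness": ["perceptual speed"],
        "processing speed": ["simple reaction time", "choice reaction time"],
    }
}

# For example, listing the broad (stratum II) abilities:
print(sorted(three_stratum_sketch["general intelligence (g)"].keys()))
```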

Carroll's empirically derived taxonomy largely supported the structure of cognition outlined by Cattell and Horn(14). The major difference between the two was that Carroll advocated the presence of a single general mental ability (‘g’) dominating the broad and specific mental abilities. Carroll's theory was well received. Horn likened Carroll's taxonomy to Mendeleev's periodic table of elements(21), while others compared the work's significance to Newton's mathematical principles of natural philosophy(14). Along with the work of Cattell and Horn, Carroll's cognitive taxonomy laid the foundation for current intelligence theory: the CHC model of cognition.

Contemporary intelligence theory: Cattell–Horn–Carroll theory of cognitive abilities

The CHC theory is a consensus model housing both the Cattell–Horn and Carroll models(14). It marries together the broad and specific cognitive abilities outlined by Cattell–Horn and by Carroll. The broad cognitive abilities outlined by CHC are fluid intelligence/reasoning, comprehension knowledge (crystallised intelligence), short-term memory, visual processing, auditory processing, long-term storage and retrieval, processing speed, reaction/decision speed, reading and writing, and quantitative knowledge(22). The broad and narrow cognitive abilities of CHC are outlined in Fig. 2. An additional six broad cognitive abilities, namely general domain-specific knowledge, tactile abilities, kinaesthetic abilities, olfactory abilities, psychomotor abilities and psychomotor speed, have been identified and suggested for inclusion in the CHC model(22). These six additional cognitive abilities mostly involve sensory factors and are unlikely to be widely measured in nutrition research.

Fig. 2 The broad and narrow abilities of the Cattell–Horn–Carroll model based on the writing of McGrew(22). RT, reaction time. Three additional narrow abilities of the auditory processing factor are omitted owing to space. The newly identified broad abilities of tactile, kinaesthetic, olfactory and psychomotor abilities as well as domain-specific knowledge are also omitted owing to space.

A benefit of housing the two models in a single theory is that contemporary research and development can be applied directly to the CHC model rather than separately to the two similar, yet distinct, Cattell–Horn and Carroll models. The CHC model has been used to guide modern cognitive test battery development and continues to be the subject of ongoing research and refinement(23). The CHC model or Carroll's taxonomy can be applied to nutrition research to help interpret and organise cognitive test results, with different cognitive measures loading on different components of the model. Practical examples are provided later in the present review.

Competing theories of human cognition and intelligence

The CHC model is just one of many theories describing human cognition and/or intelligence. Other theories include the Triarchic theory of intelligence, the theory of multiple intelligences and the Planning, Attention-Arousal, Simultaneous and Successive (PASS) theory, amongst others. However, these models do not have the same weight of empirical validation or contemporary popularity as Carroll's cognitive model or the CHC model. Moreover, some of these other theories of intelligence describe non-cognitive components of intelligence (e.g. the theory of multiple intelligences), which are of little relevance to researchers investigating the effects of nutritional factors on human cognition. Based on its empirical support, widespread popularity and detailed description of human cognition, the CHC model is the ideal model to apply to nutrition research.

From theory to practice: applying intelligence theory to clinical nutrition research

To demonstrate how the CHC cognitive framework can be applied to nutrition research, we have provided two worked examples. In example 1, cognitive tests from a recent clinical trial are grouped into broader cognitive abilities based on the CHC framework; example 1 thus shows how cognitive tests can be grouped at the level of the clinical trial. In example 2, cognitive outcomes from recently published fish oil intervention studies are extracted and then grouped according to the CHC framework; example 2 thus shows how cognitive tests can be grouped at the level of a review.

Example 1: grouping cognitive data from a single nutrition research study

As a worked example, we have used a recent study from our research group by Stough et al.(24) on DHA supplementation and cognitive performance. The cognitive tests used by Stough et al. were as follows: immediate word recall; simple reaction time; digit vigilance; choice reaction time; spatial working memory; numeric working memory; delayed word recall; delayed word recognition; and delayed picture recognition. These tasks were administered as part of the Cognitive Drug Research (CDR) Computerised Assessment System.

Table 1 shows the cognitive tasks grouped into broad cognitive abilities according to the CHC model. The cognitive test data form broader factors of short-term memory, long-term storage and retrieval, cognitive processing speed as well as reaction time. The authors could thus use these cognitive factors as primary outcomes by examining the effects of DHA on these broad cognitive abilities. The cognitive data could also be similarly grouped according to the Carroll framework.

Table 1 Neuropsychological tests used by Stough et al.(24) grouped into the broad cognitive abilities of the Cattell–Horn–Carroll (CHC) model*

RT, reaction time.

* Omitted abilities include psychomotor, olfactory, tactile and kinaesthetic abilities as well as psychomotor speed and general (domain-specific knowledge).
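As a concrete illustration of this kind of grouping, the sketch below shows one plausible assignment of the CDR tasks listed above to CHC broad abilities, mirroring the factors named in the text (short-term memory, long-term storage and retrieval, processing speed and reaction time, the last corresponding to reaction/decision speed in CHC terms). The assignments are hypothetical and may differ from those used in Table 1.

```python
# Illustrative sketch only: one plausible mapping of the CDR tasks used by
# Stough et al. onto CHC broad abilities. These assignments are assumed for
# illustration and are not taken from Table 1.
chc_grouping = {
    "short-term memory": [
        "immediate word recall",
        "numeric working memory",
        "spatial working memory",
    ],
    "long-term storage and retrieval": [
        "delayed word recall",
        "delayed word recognition",
        "delayed picture recognition",
    ],
    "processing speed": ["digit vigilance"],
    "reaction/decision speed": ["simple reaction time", "choice reaction time"],
}

# Each broad ability can then serve as a composite outcome built from the
# tasks in its list (see the standardisation sketch later in the paper).
for ability, tasks in chc_grouping.items():
    print(f"{ability}: {len(tasks)} task(s)")
```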

In this example, grouping tasks according to CHC is useful for several reasons. First, it makes clear that the chosen cognitive tasks examine only a few of the cognitive domains that have been identified. By applying the CHC model, both investigators and readers can infer the need to examine the effects of the intervention on the many other, unexamined cognitive abilities. This helps others deduce that only a subset of cognitive domains was assessed, rather than concluding that fish oil has no effect on cognitive ability. Second, by combining cognitive tasks into common cognitive domains, nuances specific to each cognitive task can be filtered out. This allows for examination of how the intervention affects a common underlying cognitive ability rather than the nuances of a single task. Researchers can then make broader generalisations about how their intervention affects a true cognitive ability rather than a single cognitive task, and can better compare their results with those of other studies, even when those studies used different cognitive tasks.

It should be noted that the cognitive tasks of CDR have previously been combined through factor analysis into five factors: speed of memory processes (picture recognition speed, word recognition speed, numeric working memory speed, spatial working memory speed), quality of episodic secondary memory (immediate word recall accuracy, delayed word recall accuracy, word recognition accuracy, picture recognition accuracy), power of attention (simple reaction time, choice reaction time, digit vigilance detection speed), continuity of attention (digit vigilance detection accuracy, choice reaction time accuracy, digit vigilance false alarms, tracking error) and quality of working memory (numeric working memory accuracy, spatial working memory accuracy)(25). Grouping according to this factor analysis is very common in the literature and has proved useful for capturing the effects of different interventions(25,26). However, such grouping differs from that derived from the CHC model (Table 1). While grouping CDR data based on a previous factor analysis allows for efficient comparisons between like studies using CDR, it does not necessarily allow for comparison with studies using other test batteries. Explaining results in terms of CHC provides a universal cognitive language, making for easier communication of findings. Moreover, using the CHC theory to group cognitive tasks allows researchers to collapse tasks from a standardised test battery, such as CDR, together with other cognitive tasks.

A note on standardised computerised cognitive test batteries and the Cattell–Horn–Carroll model

Standardised computerised cognitive test batteries are commonly used in nutrition research. Advantages of computerised cognitive test batteries include the availability of parallel forms, the ability to administer tasks sequentially on a single computer, the ability to average automatically over repeated trials and the ability to capture participant responses with millisecond precision. The implementation of computerised cognitive test batteries can thus be very useful. In addition to CDR, some common standardised test batteries include the Cambridge Neuropsychological Test Automated Battery (CANTAB) and Cogstate. Evaluating such test batteries against the CHC framework reveals how well they capture the full spectrum of human cognitive abilities. When measured against the CHC model, it is evident that such batteries have strengths and weaknesses in terms of the cognitive abilities that they assess; that is to say, some computerised cognitive test batteries do not cover the full spectrum of cognitive function. For example, Cogstate appears to capture five out of ten broad CHC cognitive domains. Although Cogstate taps long-term storage and retrieval as well as short-term memory, it has no measure of memory free recall, fluency or memory span.

The CHC or Carroll frameworks can be applied using a top-down approach to guide the selection of cognitive tests. Rather than simply using one standardised cognitive test battery, investigators can assess the full spectrum of cognitive abilities by selecting a combination of cognitive or neuropsychological tests that map onto a large number of CHC cognitive factors. This could involve supplementing a standardised computerised cognitive test battery with a few individual cognitive tests or simply combining individual neuropsychological tests. A researcher can use the CHC framework to identify the cognitive domains that are of interest to the intervention, and specific cognitive tests can then be selected to measure the broad cognitive abilities of interest. For example, if it is believed that a nutritional intervention may improve long-term memory, a researcher can use the CHC framework to guide the selection of tests that measure different aspects of long-term memory. Applying the CHC model thus allows researchers to examine whether their chosen test battery covers a large number of distinct cognitive processes or whether it is, intentionally or unintentionally, biased towards only a few cognitive domains. In the example provided, Stough et al.(24) could have included additional cognitive tasks to complement CDR, thereby examining a greater number of cognitive processes.
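A minimal sketch of such a top-down coverage check is given below. The battery-to-domain mapping is hypothetical and the list of broad domains follows the ten CHC abilities named earlier; the point is simply to show how unmeasured domains can be made explicit before a battery is finalised.

```python
# A minimal sketch of a top-down coverage check: given the ten CHC broad
# domains named in the text and a hypothetical mapping of a candidate battery
# onto those domains, report which domains are left unmeasured.
CHC_BROAD_DOMAINS = {
    "fluid intelligence/reasoning",
    "comprehension knowledge",
    "short-term memory",
    "visual processing",
    "auditory processing",
    "long-term storage and retrieval",
    "processing speed",
    "reaction/decision speed",
    "reading and writing",
    "quantitative knowledge",
}

def uncovered_domains(battery):
    """Return the CHC broad domains not tapped by any test in the battery."""
    return CHC_BROAD_DOMAINS - set(battery)

# Hypothetical battery (domain -> tests), loosely based on Example 1.
battery = {
    "short-term memory": ["immediate word recall", "numeric working memory"],
    "long-term storage and retrieval": ["delayed word recall"],
    "processing speed": ["digit vigilance"],
    "reaction/decision speed": ["simple reaction time", "choice reaction time"],
}

# A researcher could then add tests (or accept the gap deliberately) for any
# domain printed here.
print(sorted(uncovered_domains(battery)))
```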

Example 2: grouping cognitive tests from a collection of nutrition research clinical trials

When conducting a systematic review or meta-analysis, it can be extremely challenging to synthesise the heterogeneous cognitive tasks reported across different studies, because the same cognitive tasks are seldom used in more than a handful of studies. Too often, reviews dealing with cognitive data group cognitive tasks based on seemingly arbitrary allocations. Both the CHC and Carroll cognitive models provide an evidence-based method for grouping cognitive data in a systematic review or meta-analysis. Using this method can help reduce selection and outcome reporting bias while also ensuring that cognitive composites are theoretically meaningful.

In this example, cognitive outcomes are qualitatively pooled from a collection of clinical trials examining the effects of fish oil supplementation on cognitive performance. The example serves to demonstrate how cognitive outcomes can be grouped according to a validated cognitive framework at the level of the narrative or systematic review. For this example, we performed a brief review.

Method for collecting articles

A brief search of Medline (PubMed; http://www.ncbi.nlm.nih.gov/pubmed/) was performed to identify randomised, controlled trials examining the effects of n-3 supplementation on cognitive outcomes in adult human subjects. The search terms were kept consistent with those of a recent review on n-3 and cognition(27). Searching was limited to human clinical trials published between January 2012 and June 2013. Trials simply had to be randomised and controlled, administer n-3 in one treatment arm and a control in another, be conducted in adult human subjects and examine cognitive outcomes.

Results

Twelve relevant trials were identified(24,28–38). There was wide variation in the number of cognitive tests used between studies. The exact number of different cognitive tests used across all studies is difficult to determine because some studies used slightly different versions of the same test or different methods of scoring. As expected, many studies measured similar cognitive processes with different tests. For example, some trials measured word recall using the Rey Auditory Verbal Learning Test, whilst others measured word recall using visually based (often computerised) word presentation. This variation between studies highlights how a cognitive taxonomy can be useful in helping readers understand what cognitive processes are actually being measured in each study. The cognitive tasks used in each study can be seen in Table 2.

Table 2 Cognitive tests used across all n-3 clinical trials identified

RVIP, rapid visual information processing; MMSE, Mini-Mental State Examination; RT, reaction time.

Once the cognitive tasks have been identified, they can be grouped into true cognitive abilities. Table 3 displays the tasks grouped according to the CHC framework. Using this framework, cognitive data can be grouped according to broad or specific cognitive abilities; as shown in Table 3, we have grouped the cognitive data into broad cognitive abilities, which produces a manageable number of distinct cognitive factors.

Table 3 Neuropsychological tests of each study organised according to the Cattell–Horn–Carroll broad cognitive abilities framework*

RVIP, rapid visual information processing; RT, reaction time; MMSE, Mini-Mental State Examination.

* Omitted abilities include psychomotor, olfactory, tactile and kinaesthetic abilities as well as psychomotor speed and general (domain-specific knowledge).

When examining Table 3, it becomes obvious that certain cognitive abilities have been studied more intensively than others. Across studies, there were thirty-one tasks assessing short-term memory, yet no research in the domains of comprehension knowledge, auditory processing, quantitative knowledge or reading and writing ability. Where there is a wealth of information on a particular broad cognitive ability (as in the case of long-term storage and retrieval), tests could instead be grouped according to the more specific (narrow) cognitive abilities of the CHC model.

Once cognitive outcomes have been organised into the different cognitive domains, data for each cognitive domain can be pooled in a meta-analysis, if appropriate. Data can also be examined qualitatively, noting what proportion of tests is significant, in favour of treatment, within each cognitive domain. The Carroll framework has recently been used by our group in this manner to show that some interventions have beneficial effects on some cognitive processes but not others(39,40).
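A minimal sketch of this kind of qualitative pooling is given below; the trial names, domain assignments and results are entirely hypothetical and serve only to show how the proportion of favourable results per CHC domain might be tallied.

```python
# A minimal sketch (hypothetical data) of qualitative pooling for a review:
# for each CHC broad domain, count how many test results favoured treatment
# and report the proportion.
from collections import defaultdict

# Each record: (trial, CHC broad domain, result favoured treatment?)
results = [
    ("Trial A", "short-term memory", True),
    ("Trial A", "processing speed", False),
    ("Trial B", "short-term memory", False),
    ("Trial B", "long-term storage and retrieval", True),
    ("Trial C", "short-term memory", True),
    ("Trial C", "reaction/decision speed", False),
]

tally = defaultdict(lambda: [0, 0])  # domain -> [favourable, total]
for _trial, domain, favourable in results:
    tally[domain][1] += 1
    if favourable:
        tally[domain][0] += 1

for domain, (favourable, total) in sorted(tally.items()):
    print(f"{domain}: {favourable}/{total} results favoured treatment")
```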

Using the CHC model to help synthesise cognitive outcomes for a systematic review is extremely useful because it provides a validated, theory-driven approach. Without such an approach, there is a risk that authors of reviews may, intentionally or unintentionally, bias cognitive composites towards significance; for example, it is possible to collapse tenuously related tasks that trend towards the hypothesised treatment effect. In addition, using the CHC approach for a review will ensure that results are expressed in a universally understood cognitive language and are theoretically meaningful. As demonstrated in our example, the CHC approach is also useful for highlighting existing gaps in the literature.

Considerations for grouping cognitive test data into broader cognitive domains

When creating CHC factor scores for an individual trial, consideration must be given to how the individual tasks will be statistically combined to create the broader factor. Simply adding task scores together can result in factors that are biased towards one particular cognitive test: if task scores are left in their original units, tasks with large indices will be weighted more heavily in the composite. To account for this, the individual cognitive test variables can be converted to standardised scores, such as Z scores, before grouping. The standardised scores can then be combined as a simple aggregate of the variables to be grouped. Alternatively, standardised variables can be weighted according to their respective contributions to the broader cognitive factor, which can be inferred from factor analysis. The cognitive frameworks described here can be used to guide confirmatory factor analytic models, allowing cognitive tests to be combined as weighted Z scores that reflect the specific contribution of each test to the cognitive factor, rather than as a simple aggregate of the Z scores. In some cases, standardisation of scores is not necessary; for example, the review provided in example 2 of the present paper simply describes the tasks according to each CHC domain rather than combining them quantitatively.
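The sketch below illustrates the two options described above on simulated data: a simple aggregate of Z scores and a loading-weighted aggregate. The task names, sample size and factor loadings are placeholders; in practice the loadings would come from a confirmatory factor analysis rather than being assumed.

```python
# A minimal sketch (simulated data) of composite scoring for one broad ability:
# (1) an unweighted mean of Z scores and (2) a loading-weighted aggregate.
import numpy as np

rng = np.random.default_rng(0)
n = 40  # hypothetical number of participants

# Raw scores for three tasks assumed to load on one broad ability,
# deliberately placed on very different scales.
tasks = {
    "immediate word recall (words)": rng.normal(9, 2, n),
    "numeric working memory (% correct)": rng.normal(85, 7, n),
    "spatial working memory (% correct)": rng.normal(80, 9, n),
}

# Convert each task to a Z score so that no task dominates through its units.
z_scores = np.column_stack(
    [(x - x.mean()) / x.std(ddof=1) for x in tasks.values()]
)

# (1) Simple aggregate: unweighted mean of the Z scores.
simple_composite = z_scores.mean(axis=1)

# (2) Weighted aggregate: weight each task by an assumed factor loading
#     (placeholder values; real loadings would come from a CFA).
loadings = np.array([0.7, 0.6, 0.5])
weighted_composite = z_scores @ (loadings / loadings.sum())

print("simple composite (first 5):", np.round(simple_composite[:5], 2))
print("weighted composite (first 5):", np.round(weighted_composite[:5], 2))
```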

Strengths of using the Cattell–Horn–Carroll or Carroll approach

There are many advantages to applying the CHC or Carroll cognitive framework to guide the handling and reporting of cognitive outcomes in nutrition research. The first is that adopting a standardised method will lead to consistency across studies: cognitive outcomes will become more uniform, yet researchers remain free to choose their own preferred cognitive tests. Expressing cognitive outcomes in a common language will make cognitive research easier to interpret and will aid cross-study comparisons. The second advantage is that cognitive outcomes will be theoretically meaningful and valid. It would be a significant waste of time and money for an otherwise well-conducted study to analyse the effect of treatment on a set of meaningless cognitive composites; it is far more important to show that an intervention improves an empirically validated cognitive ability than a spurious compendium of cognitive tests. By creating CHC composite scores, nuances specific to an individual cognitive task can be filtered out, allowing researchers to investigate the effects of treatment on the underlying, or latent, cognitive ability rather than on the nuances specific to a single task.

As stated earlier, one of the main utilities of implementing the CHC model is at the level of the systematic review, where it can be used to guide the synthesis of cognitive tasks across studies, reducing selection bias and outcome reporting bias while also ensuring that results are theoretically valid. At the level of the individual trial, applying a validated cognitive framework allows researchers to identify any gaps in their cognitive battery and to assess whether their selection of tests measures the full range of cognitive abilities of interest. Organising cognitive tasks based on the discussed cognitive frameworks will help ensure that test batteries are not unintentionally biased towards one cognitive ability at the expense of another. Conversely, if there is a specific hypothesis about certain cognitive abilities being more affected by a specific intervention, then the framework can help guide the implementation of tests that will appropriately measure the single cognitive ability of interest. Lastly, although our review focuses on clinical nutrition, the described cognitive frameworks could also be applied to other areas of study, such as pharmaceutical clinical trials or epidemiological work.

What cognitive domains should future nutrition intervention studies investigate?

The CHC model describes many separate broad cognitive domains, so it can be hard to know which domains to investigate when designing a clinical trial. Researchers will need to select cognitive tasks and domains on a case-by-case basis depending on the research questions of interest. Application of the CHC model can be useful for both exploratory and hypothesis-driven research. For exploratory research, investigators can select cognitive tests that tap a wide range of broad CHC domains; this can help capture and catalogue the effects of an intervention across the full range of cognitive abilities. Subsequent hypothesis-driven research can then focus on a select few broad CHC domains expected to be affected by the intervention.

If a study is aimed at ameliorating age-associated cognitive decline, then it would be important to examine the effects of the intervention on those cognitive domains sensitive to ageing, such as fluid reasoning, short-term memory, long-term storage and retrieval, and processing speed. However, such logic would not apply to studies aimed at augmenting cognitive performance in young healthy participants. A different approach could be to examine those cognitive domains expected to be affected by the nutritional intervention, based either on past research or on proposed mechanisms of action.

Limitations of using the Cattell–Horn–Carroll or Carroll approach

The models of cognition described here reflect contemporary understanding, yet they will continue to evolve as new data accumulate and more refined statistical approaches emerge. It will thus be important for the field of clinical nutrition to stay up to date with developments in this area. A second limitation is that many cognitive tasks are not domain pure, because they require multiple cognitive processes to complete. As just one example, many processing speed-type tasks involve an element of working memory and visual processing. Such tasks could theoretically be classified under two or more cognitive abilities. In such cases, discretion is needed to determine which single cognitive factor best captures the specifics of the task; alternatively, tasks can be classified under two or more broad abilities based on their respective factor loadings on each. It is also important for researchers to understand that grouping cognitive tasks that lack validity, or that have been incorrectly administered, can lead to meaningless cognitive composite scores. Finally, although we advocate the benefits of applying the CHC and Carroll cognitive frameworks, it may sometimes be desirable to group cognitive data according to other models; in some cases, factor analysis of cognitive test data may support grouping cognitive outcomes in a different way to the taxonomies described here. Other cognitive models can be used as long as there is sound theoretical, statistical or clinical justification. In addition to reporting cognitive composite scores, it remains useful for studies to report results for the original cognitive tasks, often via an online supplement linked to the publication. Publishing data for the individual tasks makes it easier for future reviews and meta-analyses to access vital statistics that may be of interest.

Conclusions

Cognitive outcomes are frequently implemented in nutrition research. In such research, it is common practice for cognitive test data to be combined into a smaller set of cognitive factors, although there appears to be little understanding or consensus as to how best to execute this practice. We propose that empirically derived contemporary models of human cognition can help guide the handling and reporting of cognitive data in nutrition research. Both the CHC model and Carroll's taxonomy provide a much needed ‘map’ of human cognition(13). In defining the structure of cognitive ability, they allow researchers to situate their chosen cognitive tasks within an empirically validated framework and to express cognitive outcomes in a common language. Adopting this common language will standardise reporting, making cognitive research easier to interpret. This increase in standardisation comes at little cost to the researcher, who remains free to choose their preferred cognitive tasks, as long as those tasks are valid and used appropriately. A further advantage of combining cognitive test score data based on an empirically validated theory is that cognitive outcomes will reflect cognitive abilities that have been reliably identified in research, meaning that nutrition research results will better translate to the population at large. Moreover, adopting the CHC model can be extremely useful when synthesising and collapsing cognitive tasks at the level of the systematic review or meta-analysis.

Acknowledgements

No sources of funding were obtained for the present review.

Both M. P. P. and C. S. contributed equally to all parts of the paper.

There are no conflicts of interest.

References

1 Singh-Manoux, A, Kivimaki, M, Glymour, MM, et al. (2012) Timing of onset of cognitive decline: results from Whitehall II prospective cohort study. BMJ 344, d7622.
2 Christensen, H (2001) What cognitive changes can be expected with normal ageing? Aust N Z J Psychiatry 35, 768–775.
3 Salthouse, TA (1996) The processing-speed theory of adult age differences in cognition. Psychol Rev 103, 403–428.
4 Deary, IJ, Corley, J, Gow, AJ, et al. (2009) Age-associated cognitive decline. Br Med Bull 92, 135–152.
5 Productivity Commission (2005) Economic Implications of an Ageing Australia, Research Report. Canberra: Australian Government Productivity Commission.
6 Kaffashian, S, Dugravot, A, Elbaz, A, et al. (2013) Predicting cognitive decline: a dementia risk score vs the Framingham vascular risk scores. Neurology 80, 1300–1306.
7 Obarzanek, E, Sacks, FM, Vollmer, WM, et al. (2001) Effects on blood lipids of a blood pressure-lowering diet: The Dietary Approaches to Stop Hypertension (DASH) Trial. Am J Clin Nutr 74, 80–89.
8 Morris, MC, Sacks, F & Rosner, B (1993) Does fish oil lower blood pressure? A meta-analysis of controlled trials. Circulation 88, 523–533.
9 Pase, MP, Grima, NA & Sarris, J (2011) The effects of dietary and nutrient interventions on arterial stiffness: a systematic review. Am J Clin Nutr 93, 446–454.
10 Dangour, AD & Allen, E (2013) Do omega-3 fats boost brain function in adults? Are we any closer to an answer? Am J Clin Nutr 97, 909–910.
11 Pase, MP & Stough, C (2013) Describing a taxonomy of cognitive processes for clinical trials assessing cognition. Am J Clin Nutr 98, 509–510.
12 Spearman, C (1927) The Abilities of Man. Their Nature and Measurement. London: Macmillan.
13 Carroll, JB (2012) The three-stratum theory of cognitive abilities. In Contemporary Intellectual Assessment: Theories, Tests and Issues, 3rd ed. [Flanagan, DP and Harrison, PL, editors]. New York: Guilford Press.
14 Schneider, JW & McGrew, KS (2012) The Cattell–Horn–Carroll model of intelligence. In Contemporary Intellectual Assessment: Theories, Tests and Issues, 3rd ed. [Flanagan, DP and Harrison, PL, editors]. New York: Guilford Press.
15 Galton, F (1892) Inquiries into Human Faculty and its Development. London: Macmillan and Company.
16 Jensen, AR (2002) Galton's legacy to research on intelligence. J Biosoc Sci 34, 145–172.
17 Spearman, C & Wynn-Jones, L (1950) Human Ability: A Continuation of ‘The Abilities of Man’. London: Macmillan.
18 Cattell, RB (1943) The measurement of adult intelligence. Psychol Bull 40, 153–193.
19 Wasserman, JD (2012) A history of intelligence assessment. In Contemporary Intellectual Assessment: Theories, Tests and Issues, 3rd ed. [Flanagan, DP and Harrison, PL, editors]. New York: Guilford Press.
20 Carroll, JB (1993) Human Cognitive Abilities: A Survey of Factor-Analytic Studies. New York: Cambridge University Press.
21 Horn, J (1998) A basis for research on age differences in cognitive abilities. In Human Cognitive Abilities in Theory and Practice, pp. 57–91 [McArdle, J and Woodcock, R, editors]. Mahwah, NJ: Lawrence Erlbaum Associates Publishers.
22 McGrew, KS (2009) CHC theory and the human cognitive abilities project: standing on the shoulders of the giants of psychometric intelligence research. Intelligence 37, 1–10.
23 McGrew, KS (2005) The Cattell–Horn–Carroll theory of cognitive abilities: past, present and future. In Contemporary Intellectual Assessment: Theories, Tests and Issues, 2nd ed. [Flanagan, DP and Harrison, PL, editors]. New York: Guilford Press.
24 Stough, C, Downey, L, Silber, B, et al. (2012) The effects of 90-day supplementation with the omega-3 essential fatty acid docosahexaenoic acid (DHA) on cognitive function and visual acuity in a healthy aging population. Neurobiol Aging 33, 824.e1–824.e3.
25 Wesnes, KA, Ward, T, McGinty, A, et al. (2000) The memory enhancing effects of a Ginkgo biloba/Panax ginseng combination in healthy middle-aged volunteers. Psychopharmacology 152, 353–361.
26 Stough, C, Downey, LA, Lloyd, J, et al. (2008) Examining the nootropic effects of a special extract of Bacopa monniera on human cognitive functioning: 90 day double-blind placebo-controlled randomized trial. Phytother Res 22, 1629–1634.
27 Mazereeuw, G, Lanctôt, KL, Chau, SA, et al. (2012) Effects of omega-3 fatty acids on cognitive performance: a meta-analysis. Neurobiol Aging 33, 1482.e17–1482.e29.
28 Benton, D, Donohoe, RT, Clayton, DE, et al. (2013) Supplementation with DHA and the psychological functioning of young adults. Br J Nutr 109, 155–161.
29 Geleijnse, JM, Giltay, EJ & Kromhout, D (2012) Effects of n-3 fatty acids on cognitive decline: a randomized, double-blind, placebo-controlled trial in stable myocardial infarction patients. Alzheimers Dement 8, 278–287.
30 Jackson, PA, Deary, ME, Reay, JL, et al. (2012) No effect of 12 weeks' supplementation with 1 g DHA-rich or EPA-rich fish oil on cognitive function or mood in healthy young adults aged 18–35 years. Br J Nutr 107, 1232–1243.
31 Jackson, PA, Reay, JL, Scholey, AB, et al. (2012) Docosahexaenoic acid-rich fish oil modulates the cerebral hemodynamic response to cognitive tasks in healthy young adults. Biol Psychol 89, 183–190.
32 Jackson, PA, Reay, JL, Scholey, AB, et al. (2012) DHA-rich oil modulates the cerebral haemodynamic response to cognitive tasks in healthy young adults: a near IR spectroscopy pilot study. Br J Nutr 107, 1093–1098.
33 Karr, JE, Grindstaff, TR & Alexander, JE (2012) Omega-3 polyunsaturated fatty acids and cognition in a college-aged population. Exp Clin Psychopharmacol 20, 236–242.
34 Narendran, R, Frankle, WG, Mason, NS, et al. (2012) Improved working memory but no effect on striatal vesicular monoamine transporter type 2 after omega-3 polyunsaturated fatty acid supplementation. PLOS ONE 7, e46832.
35 Nilsson, A, Radeborg, K, Salo, I, et al. (2012) Effects of supplementation with n-3 polyunsaturated fatty acids on cognitive performance and cardiometabolic risk markers in healthy 51 to 72 years old subjects: a randomized controlled cross-over study. Nutr J 11, 99.
36 Rondanelli, M, Opizzi, A, Faliva, M, et al. (2012) Effects of a diet integration with an oily emulsion of DHA-phospholipids containing melatonin and tryptophan in elderly patients suffering from mild cognitive impairment. Nutr Neurosci 15, 46–54.
37 Sinn, N, Milte, CM, Street, SJ, et al. (2012) Effects of n-3 fatty acids, EPA v. DHA, on depressive symptoms, quality of life, memory and executive function in older adults with mild cognitive impairment: a 6-month randomised controlled trial. Br J Nutr 107, 1682–1693.
38 Stonehouse, W, Conlon, C, Podd, J, et al. (2013) DHA supplementation improved both memory and reaction time in healthy young adults: a randomized controlled trial. Am J Clin Nutr 97, 1134–1143.
39 Grima, NA, Pase, MP, MacPherson, H, et al. (2012) The effects of multivitamins on cognitive performance: a systematic review and meta-analysis. J Alzheimers Dis 29, 561–569.
40 Pase, MP, Kean, J, Sarris, J, et al. (2012) The cognitive-enhancing effects of Bacopa monnieri: a systematic review of randomized, controlled human clinical trials. J Altern Complement Med 18, 647–652.