Chapter 3
CHAPTER SUMMARY
- In this research, 3,543 meta-analyses cumulated the empirical evidence on relations between 79 personality traits and 97 cognitive abilities.
- Through nine search strategies, our research team identified 1,325 primary studies that contributed to these meta-analyses.
- Contributing studies were conducted in more than 50 countries and represent millions of participants across demographic groups.
- Measures in each study were mapped to personality and cognitive ability constructs in modern personality and cognitive ability taxonomies and compendia (Stanek & Ones, 2018) to avoid the idiosyncrasies of specific measures and ensure consistent construct definitions.
- Most effect sizes came from unpublished sources, reducing the risk of publication bias.
- The resulting meta-analytic database offers resolution orders of magnitude finer than that of previous investigations. We are making this database freely available as a resource for other scholars.
- Psychometric meta-analysis was used to aggregate the effect sizes, estimate the true relations, and quantify the degree of true variability across studies.
This chapter begins with the rationale for conducting sweeping meta-analyses. We then describe our process for gathering relevant data, including a description of the contributing studies and the resulting database, and explain how measures were grouped into personality and cognitive ability constructs and how the data were quantitatively cumulated. Finally, we offer interpretive guidance.
Rationale for Sweeping Meta-Analyses
Our goal in this volume was to quantify and understand the grander pattern of relations between two of the largest domains of psychological individual differences: personality and cognitive abilities. In this endeavor, we could have conducted a single large-scale study. However, any individual empirical study, no matter how well designed and conducted, has major limitations. Sample sizes are limited by feasibility, since large samples require substantial effort, time, and, often, financial resources to gather. Representing a wide swath of cultures and countries is another persistent challenge. Even when large samples from multiple cultures are recruited, the time available and the willingness of participants limit the number of measures that can be administered to the same sample. Covering the entire personality and cognitive ability construct space is not feasible, as such an effort would require the completion of hundreds of personality scales and cognitive ability tests. Additionally, psychological measurement fads, fashions, and falderol influence scale and test choices at any given point in time. Consequently, the use of different psychological measures has waxed and waned over the years, leaving gaps in the measurement record for personality traits and cognitive abilities. Relations between psychological constructs may also vary across time periods (e.g., the strengthening relation between extraversion and political leadership over the past century is a case in point [Rubenzer et al., 2000]). In essence, findings from any single study represent a limited set of individuals, from a specific period of time, through the lens of idiosyncratic measures, and therefore may not yield enduring truths.
The only approach that currently addresses all these concerns is sweeping meta-analysis. Sweeping meta-analyses pool data from numerous samples, multiple measures, many cultures, and extensive time periods. A prerequisite for such meta-analyses is that many primary studies must have estimated the focal relations. Fortunately, personality-intelligence relations have been reported for a century, with many variations in samples, cultures, disciplines, settings, and measures. Taking a meta-analytic approach stimulates discovery of universals, while also enabling investigations of particulars.
In sum, our goal was to construct a full and detailed panorama of personality-intelligence relations using meta-analytic techniques. In this endeavor, we attempted to amass all relevant existing quantitative relations, accurately map them to psychological constructs, carefully organize those constructs in models that incorporate the latest nomological evidence, and meta-analytically cumulate the effect sizes. The result is the quantification of the relations of 3,543 cognitive ability-personality trait pairs. Each step is described below.
Gathering Relevant Data
We sought to identify all empirical research that provided an estimate of personality trait-cognitive ability relations. In pursuit of this aim, we made significant efforts to include unpublished studies. For example, we did not limit our compilation to reported effect sizes; we also located and included studies in which personality-intelligence relations could have been reported but were not. In some of these cases, authors provided the relations upon request. In other cases, we obtained raw data from the study investigators and computed the relevant relations ourselves. Study identification, inclusion/exclusion, and coding details are described at length in the “Detailed Methods” of Appendix E. The diagram below (Figure 3) highlights the magnitude of our enterprise, and Table 2 provides a comparison to previous meta-analyses.
Figure 3. Study inclusion flow diagram.
Note. Detailed search procedures and reasons for inclusion/exclusion are provided in Appendix E.
Description of Studies Included
The search strategies and inclusion/exclusion criteria, detailed in Appendix E, yielded 1,325 studies contributing independent data to the meta-analyses.1 We were as exhaustive as possible in our efforts. In striving to be comprehensive, we aimed to minimize the impact of publication bias and to reflect data from beyond Western, educated, industrialized, rich, and democratic (WEIRD) societies, as well as from a range of ethnicities, age groups, and other demographic groups. We were also careful to avoid the biases inherent in the paradigm of any one field or research perspective (Henrich et al., 2010; Simmons et al., 2011). Consistent with our aim of inclusion and generalization, we included data from across the past century. Despite these efforts and obtaining data from millions of participants, however, the current study is still unlikely to be fully exhaustive, and there are almost certainly studies that we did not locate. Nonetheless, we are confident that our search methods yielded a broadly representative majority of the relevant, available, quantitative relations between personality traits and cognitive abilities.
Comparing the number of studies included in this set of meta-analyses with prior meta-analyses of personality-cognitive ability relations indicates that we created the largest and most comprehensive database to date on this topic, as well as one of the largest meta-analytic databases on any topic (the only larger set of meta-analyses, in terms of N and K, that we are aware of is Polderman et al. [2015], though ours examined a larger number of relations). We have made this meta-analytic database publicly available in Appendix F of the online supplementary materials as well as on the Open Science Framework, along with a full list of studies contributing effect sizes to the present meta-analyses (Appendix K).
Table 2. Current study compared to previous meta-analyses.
Meta-Analysis | Personality Constructs | Cognitive Ability Constructs | Studies | Effect Sizes | Individuals |
Current Study | 79 | 97 | 1,976 | 60,690 | 2,010,980 |
Ackerman & Heggestad (1997) | 19 | 10 | 135 | 2,033 | 64,592 |
Wolf & Ackerman (2005)* | 3 | 10 | 100 | 1,018 | 56,016 |
von Stumm & Ackerman (2013)** | 30 | 4 | 112 | 234 | 60,097 |
Schilling et al. (2021)*** | 5 | n/a | 66 | 115 | 46,265 |
Anglim et al. (2022)**** | 45 | 3 | 272 | 2,508 | 162,636 |
Database Description
The final database contained 60,690 personality-ability correlation coefficients and their associated characteristics. Each meta-analysis contained only independent effect sizes. Table 3 details the sources of the effect sizes in the database. The two largest contributing source types were publicly available raw datasets (41%; mostly from grant-funded studies) and peer-reviewed journal articles (29%).
Table 4 presents the proportions of participant types. Community samples formed the largest part of our database. Almost another quarter of the effect sizes were from occupational samples (i.e., employees, military members, and job applicants).
The average ages of contributing samples ranged from 12.0 to 100.3 years (see Figure 4 for the distribution of average ages), and 28% of the effect sizes came from samples with average ages between 30 and 50. Across the effect sizes that reported sex proportions, the average sample was 54.1% male.
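To give a concrete sense of what each of the 60,690 records contains, the sketch below defines a hypothetical record structure with the kinds of fields described in this section. The field names are illustrative assumptions on our part; the actual codebook accompanies the public database in Appendices E and F.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EffectSizeRecord:
    """Hypothetical structure for one personality-ability effect size;
    actual field names follow the codebook in Appendices E and F."""
    personality_construct: str                 # construct per Stanek & Ones (2018)
    ability_construct: str                     # construct per Stanek & Ones (2018)
    r: float                                   # observed correlation
    n: int                                     # sample size
    personality_reliability: Optional[float]   # reliability of the personality scale, if reported
    ability_reliability: Optional[float]       # reliability of the ability test, if reported
    publication_type: str                      # e.g., "Journal Article", "Dissertation" (Table 3)
    sample_type: str                           # e.g., "Community", "Student" (Table 4)
    country: str                               # country of data collection (Table 5)
    mean_age: Optional[float]                  # average sample age in years
    percent_male: Optional[float]              # sample sex composition
```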
Table 3. Sources of effect sizes.
Publication Type | Number of Effect Sizes | % of Effect Sizes Contributed |
Publicly Available Raw Data | 24,981 | 41% |
Journal Article | 17,550 | 29% |
Dissertation | 8,158 | 13% |
Technical Manual | 2,588 | 4% |
Organizational Report | 2,198 | 4% |
Book | 2,151 | 4% |
Master’s Thesis | 1,388 | 2% |
Conference Material | 1,214 | 2% |
Undergraduate Thesis | 280 | <1% |
Manuscript / Working Paper | 182 | <1% |
Total | 60,690 | |
Table 4. Participants in contributing research.
Sample Type | Count of Effect Sizes | % of Effect Sizes |
Community | 22,103 | 36% |
Student | 18,333 | 30% |
Applicant | 5,541 | 9% |
Employee | 4,754 | 8% |
Military | 3,888 | 6% |
Patient | 58 | <1% |
Combination | 1,386 | 2% |
(Not Reported) | 4,627 | 8% |
Total | 60,690 |
Figure 4. Number of effect sizes by samples’ age groups.
Samples were drawn from more than 50 countries as diverse as Australia, Chile, China, Estonia, Iran, Morocco, Nigeria, and Turkey (see Table 5). The majority of effect sizes contributing to our meta-analyses were from the United States of America (USA), which is consistent with where most psychological research has been conducted (Bauserman, 1997), though less than ideal for global representation. Nevertheless, our meta-analytic database included studies from every inhabited continent. Representation from Asia was modest but included several of the world's most populous countries (e.g., China, India, Japan, Philippines) as well as South Korea, Taiwan, Hong Kong, and Singapore; representation from South America (Brazil, Chile) and Africa (South Africa, Nigeria, Ethiopia) was weak.
Table 5. Effect sizes by country.
Country | Count of Effect Sizes | % of Effect Sizes | Country | Count of Effect Sizes | % of Effect Sizes | |
United States of America (USA) | 40,297 | 66% | Denmark | 35 | <1% | |
United Kingdom (UK) | 5,228 | 9% | British Isles | 32 | <1% | |
Germany | 3,365 | 6% | Croatia | 28 | <1% | |
Canada | 2,306 | 4% | Portugal | 27 | <1% | |
Australia / New Zealand | 1,408 | 2% | Cyprus | 25 | <1% | |
Dominica | 432 | <1% | Russia | 15 | <1% | |
South Africa | 426 | <1% | Hong Kong | 14 | <1% | |
Sweden | 418 | <1% | Puerto Rico | 13 | <1% | |
Spain | 366 | <1% | Morocco | 11 | <1% | |
Netherlands | 343 | <1% | Luxembourg | 10 | <1% | |
China | 322 | <1% | Belgium | 5 | <1% | |
Finland | 202 | <1% | Chile | 5 | <1% | |
Poland | 174 | <1% | Nigeria | 5 | <1% | |
Estonia | 171 | <1% | Hungary | 3 | <1% | |
India | 152 | <1% | Greece | 2 | <1% | |
Japan | 125 | <1% | Iran | 2 | <1% | |
Norway | 119 | <1% | Philippines | 1 | <1% | |
Taiwan | 110 | <1% | Ethiopia | 1 | <1% | |
Israel | 102 | <1% | combination (Cyprus & Greece) | 60 | <1% |
Switzerland | 100 | <1% | combination (Estonia & Russia) | 5 | <1% | |
Italy | 95 | <1% | combination (Europe & East Asia) | 1 | <1% | |
Turkey | 72 | <1% | combination (Europe (15 countries)) | 5 | <1% | |
South Korea | 72 | <1% | combination (Europe) | 277 | <1% | |
Austria | 67 | <1% | combination (France & Pakistan) | 5 | <1% | |
France | 59 | <1% | combination (global (mostly USA & UK)) | 63 | <1% | |
Mexico | 58 | <1% | combination (global) | 332 | <1% | |
Bosnia and Herzegovina | 55 | <1% | combination (Netherlands & Belgium) | 15 | <1% | |
Romania | 53 | <1% | combination (Southeast Asia) | 5 | <1% | |
Slovenia | 51 | <1% | combination (UK & USA) | 60 | <1% | |
Brazil | 48 | <1% | combination (USA & Canada) | 9 | <1% | |
Singapore | 39 | <1% | (Not Reported) | 2,784 | 5% | |
Total | 60,690 |
Mapping Measures to Personality and Ability Taxonomies
As noted above, we wished our analyses, results, findings, and conclusions to reflect enduring constructs rather than be constrained by particular measures. Specific measures have idiosyncrasies that undermine generalizability and limit scientific knowledge. Multiple measurements of the same construct increase the likelihood that inferences concern the construct rather than a particular measure. A common vocabulary across measures and even models facilitates science, just as conversions between metric and imperial units facilitate communication in other fields. Other major meta-analyses (e.g., Barrick & Mount, 1991; Salgado, 1998) have likewise focused on constructs rather than specific measures. The two authors of this volume coded and classified the personality and cognitive ability measures into constructs based on the Stanek and Ones (2018)2 personality and cognitive ability taxonomies and compendia. Additional details about the categorization of personality and ability constructs are provided in the “Construct categorization” section of the Detailed Methods in Appendix E.
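As a simplified illustration of this coding step, the hypothetical lookup below maps a few example scale names to construct labels. Both the instrument names and the assigned constructs here are illustrative placeholders; the actual mappings followed the Stanek and Ones (2018) compendia and the decision rules in Appendix E.

```python
# Hypothetical, abbreviated illustration of the measure-to-construct coding step;
# the full mappings and decision rules follow Stanek & Ones (2018) and Appendix E.
MEASURE_TO_CONSTRUCT = {
    # (instrument, scale)                              : construct the scale was coded to
    ("Example Personality Inventory", "Anxiety"):      "Anxiety (neuroticism facet)",
    ("Example Personality Inventory", "Orderliness"):  "Order (conscientiousness facet)",
    ("Example Ability Battery", "Digit Span"):         "memory",
    ("Example Ability Battery", "Vocabulary"):         "acquired knowledge",
}

def construct_for(instrument: str, scale: str) -> str:
    """Look up the construct a measure was coded to (illustrative mapping only)."""
    return MEASURE_TO_CONSTRUCT.get((instrument, scale), "uncategorized")

print(construct_for("Example Ability Battery", "Vocabulary"))  # -> "acquired knowledge"
```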
Across all levels of the personality hierarchy, 79 personality constructs are included in the present meta-analyses. In Chapter 2, Figure 2 depicts the structure of major personality constructs utilized as an organizing framework in this research. Definitions of each construct and a compendium of scales assessing each may be found in Stanek and Ones (2018) as well as at http://stanek.workpsy.ch/personality-map/personality-taxonomy/.
Figure 5. Number of meta-analyses involving each personality construct.
Note. More saturated colors indicate higher levels of the personality hierarchy, except for compound traits, where coloring indicates which Big Five trait contributes most to the construct.
Ninety-seven abilities were studied, including 70 specific abilities (e.g., number facility and quantitative reasoning) representing 10 dimensions of primary abilities (e.g., processing speed and fluid abilities), which in turn can be grouped into four clusters: domain-independent general capacities (i.e., fluid abilities, memory), sensory-motor domain-specific abilities (i.e., visual processing, auditory processing), speed capacities (i.e., processing speed, reaction and decision speed), and invested abilities (i.e., acquired knowledge).3 In Chapter 2, Figure 1 depicts the structure of major ability constructs utilized as an organizing framework in this research. Definitions of each construct and a compendium of scales assessing each can be found in Stanek and Ones (2018) and at http://stanek.workpsy.ch/cognitive-ability-map/cognitive-ability-taxonomy/.
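For readers who prefer a schematic view, the sketch below encodes the four clusters and the primary ability dimensions named in the paragraph above as a nested structure. It is a partial, illustrative rendering: the remaining primary dimensions and the 70 specific abilities are omitted (see Stanek & Ones, 2018, for the full taxonomy).

```python
# Partial sketch of the ability hierarchy described above; only the clusters and
# primary dimensions named in the text are listed (see Stanek & Ones, 2018).
ABILITY_CLUSTERS = {
    "domain-independent general capacities": ["fluid abilities", "memory"],
    "sensory-motor domain-specific abilities": ["visual processing", "auditory processing"],
    "speed capacities": ["processing speed", "reaction and decision speed"],
    "invested abilities": ["acquired knowledge"],
}
```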
Figure 6. Number of meta-analyses involving each cognitive ability construct.
Note. Darker shades indicate higher levels of the construct hierarchy.
Quantitatively Cumulating the Evidence Through Meta-Analyses
Psychometric meta-analysis (Schmidt & Hunter, 2014) was used to combine effect sizes across studies, improve statistical power, reduce error variance in the estimated effect sizes, and estimate the degree to which results generalize to populations and situations beyond those investigated in individual primary studies. In psychometric meta-analysis, the mean observed correlation (r̄) is corrected for unreliability as well as other applicable statistical artifacts (e.g., dichotomization) to obtain the estimated mean corrected correlation ρ̂ (i.e., the estimated true-score correlation). In our meta-analyses, correlations were corrected for sampling error and for unreliability in both the cognitive ability and personality measures (see Appendix E’s Tables S1 and S2 in the online supplementary materials for the reliability distributions of each personality trait and cognitive ability).4 Such corrections yield estimates of construct-level relations that are free of measurement error. SDr is the standard deviation of the observed correlations across studies; because unreliability and other statistical artifacts inflate this observed variation, corrections for the differential reliabilities of the scales contributing to each meta-analysis are necessary to estimate true variability. The estimated standard deviation of corrected correlations (SDρ̂) indicates the degree of true (i.e., non-artifactual) variability associated with the mean corrected correlation and indexes true heterogeneity. We also computed 80% credibility values for each meta-analytic result. The credibility value range indicates the range in which most individual true-score correlations would be expected to fall (e.g., when new studies are conducted) (Schmidt & Hunter, 2014). Reporting credibility value ranges reveals whether the examined relations are expected to generalize (Ones et al., 2017; Wiernik et al., 2017). In addition, we computed 90% confidence intervals for each meta-analytic result, which index the precision with which each meta-analytic mean correlation is estimated. Details of the analyses (e.g., reliability artifact distributions) are reported in Appendix E.
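To make these computational steps concrete, the sketch below illustrates the Hunter-Schmidt logic in Python using individually corrected correlations and hypothetical inputs. It is a minimal illustration, not the code used for the analyses in this volume, which relied on reliability artifact distributions and the procedures detailed in Appendix E.

```python
import numpy as np

def psychometric_meta(r, n, rxx, ryy):
    """Bare-bones Hunter-Schmidt meta-analysis with individual corrections
    for unreliability in both measures (illustrative sketch only)."""
    r, n, rxx, ryy = (np.asarray(x, dtype=float) for x in (r, n, rxx, ryy))

    A = np.sqrt(rxx * ryy)                 # attenuation factor per study
    rc = r / A                             # individually corrected (true-score) correlations
    w = n * A**2                           # correction-adjusted sample-size weights

    rho_hat = np.sum(w * rc) / np.sum(w)   # mean corrected correlation
    r_bar = np.sum(n * r) / np.sum(n)      # weighted mean observed correlation

    var_rc = np.sum(w * (rc - rho_hat) ** 2) / np.sum(w)     # variance of corrected rs
    var_e = ((1 - r_bar**2) ** 2 / (n - 1)) / A**2            # sampling-error variance, per study
    sd_rho = np.sqrt(max(var_rc - np.sum(w * var_e) / np.sum(w), 0.0))  # SD of true correlations

    cred_80 = (rho_hat - 1.28 * sd_rho, rho_hat + 1.28 * sd_rho)    # 80% credibility interval
    se = np.sqrt(var_rc / len(r))                                    # approximate SE of the mean
    conf_90 = (rho_hat - 1.645 * se, rho_hat + 1.645 * se)           # 90% confidence interval
    return rho_hat, sd_rho, cred_80, conf_90

# Hypothetical inputs: five studies relating a personality scale to an ability test
print(psychometric_meta(r=[.12, .20, .08, .15, .25],
                        n=[250, 1200, 90, 430, 600],
                        rxx=[.80, .85, .75, .82, .88],
                        ryy=[.90, .92, .85, .88, .91]))
```

In this sketch, the 80% credibility interval is formed from ρ̂ and SDρ̂, whereas the 90% confidence interval reflects only the precision of ρ̂ itself, mirroring the distinction drawn above.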
Interpreting Results
We consistently use specific terms to describe the meta-analytic estimates. These terms are defined here to ensure clarity and facilitate understanding. Descriptors of effect size magnitude correspond to the behavioral science benchmarks provided by Funder and Ozer (2019) and are as follows:
an effect-size r of .05 is very small for the explanation of single events but potentially consequential in the not-very-long run, an effect-size r of .10…is still small…but potentially more ultimately consequential, an effect-size r of .20 indicates a medium effect that is of some explanatory and practical use even in the short run and therefore even more important, and an effect-size r of .30 indicates a large effect that is potentially powerful in both the short and the long run.
We use the term “homogeneous” to refer to meta-analytic estimates that have small standard deviations (i.e., SDρ̂ close to 0). In contrast, when the contributing effect sizes were more dispersed (i.e., SDρ̂ farther from 0), we use the term “variable” to describe the meta-analytic estimate. We use the term “generalizable” to refer to meta-analytic estimates whose credibility value range did not include zero (i.e., if another primary study were conducted, its effect size would be likely to fall within the reported credibility value range). “Inconsistent” denotes cases where a construct in one domain (e.g., fluid ability) displayed surprisingly different relations with multiple, similar constructs in another domain (e.g., neuroticism and negative affect). “Uniform” describes relations where a construct in one domain displayed similar relations with multiple constructs in another domain.
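As a compact illustration of how these labels and the Funder and Ozer (2019) benchmarks can be applied to a meta-analytic estimate, the hypothetical helper below classifies an estimate from its mean corrected correlation and SDρ̂. The numeric cutoff separating “homogeneous” from “variable” is an assumption for illustration, not the exact threshold used in this volume.

```python
def describe_estimate(rho_hat, sd_rho):
    """Label a meta-analytic estimate using Funder & Ozer (2019) benchmarks
    and the credibility-interval logic described above (illustrative cutoffs)."""
    # Effect-size magnitude (Funder & Ozer, 2019 benchmarks)
    m = abs(rho_hat)
    if m >= .30:
        size = "large"
    elif m >= .20:
        size = "medium"
    elif m >= .10:
        size = "small"
    else:
        size = "very small"

    # 80% credibility interval: rho_hat +/- 1.28 * SD_rho
    lower, upper = rho_hat - 1.28 * sd_rho, rho_hat + 1.28 * sd_rho
    generalizable = not (lower <= 0.0 <= upper)                   # interval excludes zero
    dispersion = "homogeneous" if sd_rho <= .05 else "variable"   # illustrative cutoff

    return size, dispersion, "generalizable" if generalizable else "non-generalizable"

# Example: rho_hat = .22, SD_rho = .04 -> ("medium", "homogeneous", "generalizable")
print(describe_estimate(.22, .04))
```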
Since Project Talent was the largest contributing source for several meta-analyses, we provide versions of the results tables with and without Project Talent data (for the latter, see Supplementary Tables 100–196 and 276–354 in Appendices H and J as well as the “Impact of extremely large studies” section in Chapter 9). In this context, it is worth noting that when the number of contributing effect sizes is small (e.g., fewer than 10), SDρ̂ will be biased. Therefore, in analyses where K is small, we cannot tell whether any changes in the estimated correlation are due to second-order sampling error or to the influence of a potential outlier.
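A simple way to gauge the influence of any single very large sample, such as Project Talent, is a leave-one-out sensitivity check. The sketch below uses hypothetical inputs and a plain n-weighted mean for brevity; it illustrates the idea rather than the procedure used to generate the supplementary tables.

```python
import numpy as np

def leave_one_out_means(r, n):
    """Recompute the n-weighted mean correlation with each study removed in turn,
    flagging estimates that hinge on a single very large sample (illustrative)."""
    r, n = np.asarray(r, dtype=float), np.asarray(n, dtype=float)
    full = np.sum(n * r) / np.sum(n)
    dropped = [np.sum(np.delete(n, i) * np.delete(r, i)) / np.sum(np.delete(n, i))
               for i in range(len(r))]
    return full, dropped

# Hypothetical example: one sample (n = 50,000) dwarfs the others
print(leave_one_out_means(r=[.15, .22, .05, .18], n=[50000, 400, 250, 900]))
```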
Distillation of Our Methodology
Thousands of primary studies have investigated the relations between personality traits and cognitive abilities (Busato et al., 2000; Chamorro-Premuzic & Furnham, 2008; Conn & Rieke, 1994), and a few meta-analyses have summarized relations at a broad level (Ackerman & Heggestad, 1997; Anglim et al., 2022; von Stumm & Ackerman, 2013; Wolf & Ackerman, 2005).5 The research presented in this volume meta-analytically examined 97 cognitive abilities, more than 85% of which were not included in previous meta-analyses. Similarly, developments in personality models during the past decade and a half have led to a more fine-grained, hierarchical taxonomy of the personality domain (Stanek & Ones, 2018). Our meta-analyses examine 79 personality constructs, more than a third of which were not included in previous meta-analytic investigations. The present set of analyses thus considers co-variation across the broad and deep hierarchies of personality constructs and cognitive abilities (e.g., facets vs. broader factors) (Cronbach, 1960; Cronbach & Gleser, 1965; Ones & Viswesvaran, 1996; Shannon & Weaver, 1949; Wittmann & Süß, 1999).
We investigated 3,543 relations in all, most (93%) of which have not previously been meta-analytically examined. Each of these new and unique meta-analyses contributes to the scientific literature as well as to basic and applied disciplines that utilize personality and cognitive ability constructs and measures in their theories, research, and applications.
Complete and detailed quantitative results for this study’s meta-analyses, including point estimates of magnitudes and their associated credibility intervals, are reported in Supplementary Tables 3–354. Figures 7–17 in Chapters 4 and 5 visualize these results. Readers interested in relations between specific cognitive ability-personality pairs are directed to these detailed materials and to the interactive webtool for a summary visualization (stanek.workpsy.ch/interactivewebtool). The online supplementary materials of this book (Appendices A, B, C, and D) also contain definitions of each cognitive ability and personality trait as well as their respective measures.
The following chapters focus on the most notable findings that highlight cross-domain constellations of traits based on statistically robust results (i.e., at least 1,000 individuals or 10 effect sizes). In a few cases, we also review corroborative findings supporting a noteworthy trend.
References
Ackerman, P. L., & Heggestad, E. D. (1997). Intelligence, personality, and interests: Evidence for overlapping traits. Psychological Bulletin, 121, 219–245.
Anglim, J., Dunlop, P. D., Wee, S., Horwood, S., Wood, J. K., & Marty, A. (2022). Personality and intelligence: A meta-analysis. Psychological Bulletin, 148(5–6), 301.
Barrick, M. R., & Mount, M. K. (1991). The Big Five personality dimensions and job performance: A meta-analysis. Personnel Psychology, 44(1), 1–26.
Bauserman, R. (1997). International representation in the psychological literature. International Journal of Psychology, 32(2), 107–112.
Busato, V. V., Prins, F. J., Elshout, J. J., & Hamaker, C. (2000). Intellectual ability, learning style, personality, achievement motivation and academic success of psychology students in higher education. Personality and Individual Differences, 29(6), 1057–1068.
Chamorro-Premuzic, T., & Furnham, A. (2008). Personality, intelligence and approaches to learning as predictors of academic performance. Personality and Individual Differences, 44(7), 1596–1603.
Conn, S. R., & Rieke, M. L. (1994). The 16PF Fifth Edition Technical Manual. Institute for Personality and Ability Testing, Inc.
Cronbach, L. J. (1960). Essentials of psychological testing (2nd ed.). Harper.
Cronbach, L. J., & Gleser, G. C. (1965). Psychological tests and personnel decisions. University of Illinois Press.
Funder, D. C., & Ozer, D. J. (2019). Evaluating effect size in psychological research: Sense and nonsense. Advances in Methods and Practices in Psychological Science, 2(2), 156–168.
Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2–3), 61–83.
Ones, D. S., & Viswesvaran, C. (1996). Bandwidth–fidelity dilemma in personality measurement for personnel selection. Journal of Organizational Behavior, 17(6), 609–626.
Ones, D. S., Viswesvaran, C., & Schmidt, F. L. (2017). Realizing the full potential of psychometric meta-analysis for a cumulative science and practice of human resource management. Human Resource Management Review, 27(1), 201–215. https://doi.org/10.1016/j.hrmr.2016.09.011
Polderman, T. J. C., Benyamin, B., De Leeuw, C. A., Sullivan, P. F., Van Bochoven, A., Visscher, P. M., & Posthuma, D. (2015). Meta-analysis of the heritability of human traits based on fifty years of twin studies. Nature Genetics, 47(7), 702.
Rubenzer, S. J., Faschingbauer, T. R., & Ones, D. S. (2000). Assessing the US presidents using the revised NEO Personality Inventory. Assessment, 7(4), 403–419.
Salgado, J. F. (1998). Big Five personality dimensions and job performance in army and civil occupations: A European perspective. Human Performance, 11(2), 271–288.
Schilling, M., Becker, N., Grabenhorst, M. M., & König, C. J. (2021). The relationship between cognitive ability and personality scores in selection situations: A meta-analysis. International Journal of Selection and Assessment, 29(1), 1–18.
Schmidt, F. L., & Hunter, J. E. (2014). Methods of Meta-Analysis: Correcting Error and Bias in Research Findings (3rd ed.). Sage Publications.
Shannon, C. E., & Weaver, W. (1949). The Mathematical Theory of Communication. University of Illinois Press.
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359–1366.
Stanek, K. C., & Ones, D. S. (2018). Taxonomies and compendia of cognitive ability and personality constructs and measures relevant to industrial, work and organizational psychology. In D. S. Ones, C. Anderson, C. Viswesvaran, & H. K. Sinangil (Eds.), The SAGE handbook of industrial, work & organizational psychology: Personnel psychology and employee performance (pp. 366–407). Sage.
von Stumm, S., & Ackerman, P. L. (2013). Investment and intellect: A review and meta-analysis. Psychological Bulletin, 139(4), 841–869.
Wiernik, B. M., Kostal, J. W., Wilmot, M. P., Dilchert, S., & Ones, D. S. (2017). Empirical benchmarks for interpreting effect size variability in meta-analysis. Industrial and Organizational Psychology, 10(3), 472–479.
Wittmann, W. W., & Süß, H.-M. (1999). Investigating the paths between working memory, intelligence, knowledge, and complex problem-solving performances via Brunswik symmetry. In P. L. Ackerman, P. C. Kyllonen, & R. D. Roberts (Eds.), Learning and individual differences: Process, trait, and content determinants (pp. 77–108). American Psychological Association. https://doi.org/10.1037/10315-004
Wolf, M. B., & Ackerman, P. L. (2005). Extraversion and intelligence: A meta-analytic investigation. Personality and Individual Differences, 39(3), 531–542.
Endnotes
1 Our search strategies originated in the first author’s doctoral dissertation with the second author as advisor. The core database used in the current volume has been expanded and updated beyond what was presented in the dissertation.
2 The compendia provided with the current volume have been further expanded and are the most up-to-date, as of October 2022, for substantive and normal-range personality traits and cognitive abilities.
3 Compound ability measures that included variance from two or three primary abilities were also included.
4 We were unable to make corrections for range restriction due to limited reporting in primary studies. Most samples were not directly selected based on cognitive ability or personality. The relations we report would only be larger if corrected.
5 Schilling et al. (2021) also meta-analytically examined personality-ability relations with a focus on situational influences. These meta-analyses did not distinguish cognitive ability constructs, but instead focused on item content, such as written/verbal measures, so their estimates could not be included or compared here.