
A response to speculations about concurrent validities in selection: Implications for cognitive ability

Published online by Cambridge University Press:  31 August 2023

Deniz S. Ones*
Affiliation:
Department of Psychology, University of Minnesota, Minneapolis, MN, USA
Chockalingam Viswesvaran
Affiliation:
Florida International University, Miami, FL, USA
*
Corresponding author: Deniz S. Ones; Email: [email protected]

This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (https://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is unaltered and is properly cited. The written permission of Cambridge University Press must be obtained for commercial re-use or in order to create a derivative work.
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of the Society for Industrial and Organizational Psychology

“Facts are stubborn things; and whatever may be our wishes, our inclinations, or the dictates of our passion, they cannot alter the state of facts and evidence.”

— John Adams

Although we have many important areas of agreement with Sackett and colleagues (Footnote 1), we must address two issues that form the backbone of the focal article. First, we explain why range restriction corrections in concurrent validation are appropriate, describing the conceptual basis for range restriction corrections, and highlighting some pertinent technical issues that should elicit skepticism about the focal article’s assertions. Second, we disagree with the assertion that the operational validity of cognitive ability is much lower than previously reported. We conclude with some implications for applied practice.

Conceptual basis for range restriction corrections

Range restriction results in underestimation of criterion-related validities (Carretta & Ree, 2022). The formulae for range restriction corrections are well known and uncontroversial (Schmidt et al., 1976; Sackett & Yang, 2000). The focal article, following the logic offered in Sackett et al. (2022), purported that most range restriction corrections in previous meta-analyses of predictor validities were inappropriate. In particular, Sackett et al. challenged the use of artifact distributions for corrections in validity generalization by asserting that the range restriction data used are not typically representative. Consequently, they reasoned that corrections based on, in their view, unrepresentative distributions overestimated actual validities. This is an important challenge that requires empirical evidence.

What evidence was offered? Fundamentally, their reasoning was conceptual. Sackett et al. stated, “studies containing the needed information to compute a U ratio [to correct for range restriction] come solely from predictive studies,” and they thus argued that the use of those U ratios would overcorrect concurrent validities because they believed that range restriction is not a concern in concurrent studies. Their revisions to meta-analytic validities of predictors were based on the proportion of predictive and concurrent studies in each meta-analysis, where range restriction was assumed not to affect concurrent studies. Sackett et al. did not use real-world data to probe the degree of range restriction in concurrent studies of each predictor they included in their review. Rather, they assumed—without consulting the existing empirical evidence—that direct and indirect range restriction could not be influential in concurrent studies. As we show below, a fanciful and limited conception of how concurrent studies are conducted led to erroneous conclusions. In the employee selection literature, there are large numbers of concurrent studies that show empirical evidence of range restriction. In practice, employers periodically validate predictors that are or have been in use (Footnote 2).

Concurrent studies are affected by range restriction. Sackett et al. suggested that when validation reports do not explicitly contain descriptions of operational range restriction mechanisms, it is inappropriate to correct for the effects of this artifact. Absence of evidence in narrative study descriptions is not evidence of absence. Indeed, what matters is the empirical story that predictor distributions tell: for example, are standard deviations lower than those found in less restricted populations, such as applicants or the labor force at large? If such comparisons indicate reduced variability, regardless of the mechanism that produced such homogeneity (e.g., organizational selection, placement decisions, gravitation to positions, occupational turnover forces), range restriction corrections are essential to uncover operational validities. The focal article’s conclusions and recommendations are based on flawed conceptual reasoning and empirically untested hypotheses, and they are applied across the board, without nuance, to dozens of predictor meta-analyses. Accordingly, its conclusions about employee selection predictors are speculations.

Range restriction corrections are needed and should be applied to the vast majority of concurrent validity estimates. Many concurrent validity study reports contain the information on standard deviations necessary for appropriate range restriction corrections. Using reported concurrent study standard deviations for range restriction corrections in individual studies, and range restriction artifact distributions in psychometric meta-analyses, is wholly appropriate.
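To make the mechanics concrete, the following is a minimal sketch of the classic Thorndike Case II correction for direct range restriction, which uses exactly the kind of restricted and unrestricted standard deviations discussed above. The function name and the numeric inputs are hypothetical illustrations, not values from any study cited here.

```python
import math

def case_ii_correction(r_obs, sd_restricted, sd_unrestricted):
    """Thorndike Case II correction for direct range restriction.

    u (= sd_restricted / sd_unrestricted) below 1 indicates that the
    validation sample is less variable than the reference population.
    """
    big_u = sd_unrestricted / sd_restricted
    return (r_obs * big_u) / math.sqrt(1.0 + r_obs ** 2 * (big_u ** 2 - 1.0))

# Hypothetical numbers for illustration only: an observed concurrent validity
# of .25, an incumbent SD of 7, and a working-population SD of 10 (u = .70).
print(round(case_ii_correction(0.25, 7.0, 10.0), 3))  # ~0.35
```

The point of the sketch is simply that the correction requires nothing more exotic than the restricted and unrestricted standard deviations, which many concurrent validation reports do provide.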

Technical issues in range restriction corrections

Correcting for range restriction in validation studies can be fraught with many technical issues given the multiple forms of range restriction that inevitably occur in applied field settings (see Sackett & Yang, 2000, for an informative and insightful summary of this complex literature). It is not just direct range restriction on the predictor or indirect range restriction due to selection on a third variable that affects validity (Carretta & Ree, 2022). Range restriction on the predictor is not limited to truncation at one end of its score continuum either—there can be restriction at both the low and high ends of score distributions. For example, individuals being let go during or after probationary or training periods, employees being promoted due to excellent performance, or top-choice candidates rejecting offers can also artificially reduce the predictor’s range, depressing validity coefficients (Murphy, 1986).

Given the reality of multiple forms of indirect range restriction that inevitably operate in all validation studies and the inability of classical correction formulae to address them, in a foundational article published in the Journal of Applied Psychology, Hunter et al. (2006) presented a multi-step process model and formulae to make unbiased range restriction corrections (Footnote 3). Other excellent subsequent papers on range restriction also highlighted the importance of addressing both direct and indirect range restriction in employee selection as well as in organizational science research (e.g., Dahlke & Wiernik, 2020; Le et al., 2016). More directly relevant to the debate at hand, Oh, Le, and Roth (in press) addressed some of Sackett et al.’s errors pertaining to range restriction and questioned their assumptions and reasoning. On the conceptual, technical, and empirical grounds noted above and in the referenced articles, we remain skeptical of the focal article’s conclusions and recommendations.
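For readers who want to see how the indirect-restriction logic differs from a simple direct-restriction correction, the sketch below walks through the sequence of steps commonly attributed to Hunter et al. (2006): deriving a true-score u ratio, disattenuating the observed validity in the restricted sample, correcting the true-score correlation for restriction, and reattenuating by applicant-group predictor reliability. The function name, argument names, and numeric inputs are our own illustrative assumptions; consult Hunter et al. (2006) for the authoritative derivation.

```python
import math

def indirect_rr_operational_validity(r_xy, u_x, rxx_applicant, ryy_incumbent):
    """Sketch of a multi-step indirect range restriction correction.

    r_xy          observed validity in the restricted (incumbent) sample
    u_x           restricted/unrestricted ratio of observed predictor SDs
    rxx_applicant predictor reliability in the unrestricted (applicant) group
    ryy_incumbent criterion reliability in the restricted sample
    Returns an operational validity (corrected for criterion unreliability and
    range restriction, but not for predictor unreliability).
    """
    # True-score u ratio implied by u_x and the applicant-group reliability
    # (requires u_x**2 > 1 - rxx_applicant).
    u_t = math.sqrt((u_x ** 2 - (1.0 - rxx_applicant)) / rxx_applicant)
    # Predictor reliability in the restricted group.
    rxx_incumbent = 1.0 - (1.0 - rxx_applicant) / u_x ** 2
    # Disattenuate the observed validity in the restricted sample.
    rho_tp_i = r_xy / math.sqrt(ryy_incumbent * rxx_incumbent)
    # Correct the true-score correlation for restriction (Case II with U_T).
    big_u_t = 1.0 / u_t
    rho_tp_a = (rho_tp_i * big_u_t) / math.sqrt(1.0 + rho_tp_i ** 2 * (big_u_t ** 2 - 1.0))
    # Reattenuate by applicant predictor reliability to get operational validity.
    return rho_tp_a * math.sqrt(rxx_applicant)

# Hypothetical inputs, for illustration only.
print(round(indirect_rr_operational_validity(0.25, 0.70, 0.85, 0.52), 3))  # ~0.54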

We urge practitioners to systematically consider how range restriction affects the variability of both their predictors and criteria, rather than relying on a uniform, unsubstantiated assumption that range restriction is not a problem in concurrent validation. An important point is that multiple forms of range restriction affect both concurrent and predictive studies. Any single range restriction correction is likely to be dwarfed by the multiple types of range restriction that remain uncorrected in empirical estimates of operational validity. This calls into question Sackett and colleagues’ conclusion that widely used selection predictors have substantially lower operational validity for overall performance. In particular, as we demonstrate below, their inference that “cognitive ability is no longer the stand-out predictor that it was in the prior work” (p. 5) is untenable.

Operational validity of cognitive ability tests

Assumed differential range restriction in concurrent and predictive studies is essential to Sackett et al.’s reasoning that concurrent validities should not be corrected for range restriction. Yet, there is ample empirical evidence that for ability tests, concurrent and predictive validities are similar (e.g., Bemis, 1968; Jensen, 1980; Hartigan & Wigdor, 1989), a fact noted by Schmidt et al. (1985): “Contrary to general belief, predictive and concurrent studies suffer from range restriction to about the same degree” (p. 750). Nevertheless, Sackett and colleagues—without real-world data—single out concurrent studies, argue that there are no range restriction effects in such studies, and consequently recommend and apply no range restriction corrections.

We describe here how this distorts their operational validity estimate for cognitive ability tests. The bulk of their cognitive ability validity re-estimation relied on validities from the United States Employment Service’s (USES) General Aptitude Test Battery (GATB) datasets. Sackett et al. (2022) deemed Hunter’s estimation of GATB range restriction U values “implausible” and “not trustworthy” (p. 2054) because they asserted that such levels of range restriction could not possibly exist in concurrent studies, which they assumed would be only minimally affected by indirect range restriction. However, the GATB technical manual reports the means and standard deviations for each specific sample, alongside observed validities (U.S. Department of Labor, 1979). Hunter’s range restriction corrections used these data, forming an artifact distribution based specifically on what was reported for each study (Footnote 4). To empirically determine the degree of range restriction in each sample, Hunter compared validation sample standard deviations to the unrestricted working population standard deviation. Sackett et al. deemed these data dubious, arguing that the magnitude of actual range restriction was greater than what they thought it should be, based on their idiosyncratic description of GATB’s concurrent studies. Curiously, they made corrections for ability test range restriction for neither concurrent nor predictive studies (Footnote 5).
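As we understand it, Hunter’s procedure amounts to computing a u ratio for each validation sample and letting the distribution of those u values drive the meta-analytic correction. A minimal sketch of that bookkeeping follows; the per-study standard deviations and the working-population value are hypothetical placeholders, not the actual GATB figures.

```python
import statistics

# Hypothetical per-study incumbent SDs, standing in for the sample-specific
# values a technical manual would report, and an assumed unrestricted
# working-population SD. These are placeholders, not actual GATB data.
study_sds = [13.2, 15.1, 12.4, 16.0, 14.3]
working_population_sd = 20.0

u_values = [sd / working_population_sd for sd in study_sds]
print([round(u, 2) for u in u_values])       # per-study u ratios
print(round(statistics.mean(u_values), 3))   # mean of the artifact distribution
print(round(statistics.stdev(u_values), 3))  # spread of u across studies
```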

Sackett and colleagues did not analyze any GATB predictor standard deviations but instead hypothesized an effect, assumed it to be correct, and proceeded with no corrections for range restriction, without verifying their hypothesis with actual data. When data from validation studies show empirical evidence of restriction, as reported in the GATB manual, applying no range restriction corrections disregards real-world data. This is not sound practice for research or application.

The distinction between concurrent and predictive validation is more apparent than real. Regarding GATB studies (which make up 88% of the studies on which Sackett et al.’s re-estimates of cognitive ability validity are based), the National Academy of Sciences’ report on the GATB stated that “the predictive/concurrent distinction is too crude to be of real value” (Hartigan & Wigdor, 1989, p. 154). Hartigan and Wigdor indicated that “applicants are screened using either the GATB itself or some other predictor, and thus range restriction is likely in both predictive and concurrent studies” (p. 154).

Regarding range restriction corrections, we note that Hartigan and Wigdor (1989) suggested that working population standard deviations may be larger than those of job-specific applicant pools, inflating the apparent degree of range restriction. However, Sackett and Ostgaard (1994) presented excellent empirical evidence that applicant standard deviations are on average only 10% smaller than population norms, assuaging inflationary concerns in range restriction corrections (see also Lang et al. [2010] for virtually identical findings using independent data from Germany). Therefore, we posit that even if unrestricted norm group standard deviations reduced by 10% were used in range restriction corrections, the resulting operational validities for cognitive ability would be more accurate than those presented by Sackett et al., who imposed an arbitrary and implausible constraint of no range restriction for all ability test validity studies (Footnote 6).
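To illustrate the arithmetic, the sketch below applies the Case II correction twice: once with a hypothetical norm-group standard deviation and once with that standard deviation shrunk by 10%, in line with the Sackett and Ostgaard (1994) finding. The observed validity and the standard deviations are made-up numbers chosen only to show that even the more conservative correction remains well above the uncorrected value.

```python
import math

def case_ii(r_obs, u):
    """Thorndike Case II correction, with u = restricted SD / unrestricted SD."""
    big_u = 1.0 / u
    return (r_obs * big_u) / math.sqrt(1.0 + r_obs ** 2 * (big_u ** 2 - 1.0))

# Hypothetical values: observed validity .25, incumbent SD 13, norm-group SD 20.
r_obs, sd_incumbent, sd_norm = 0.25, 13.0, 20.0

for shrink in (1.00, 0.90):  # full norm SD vs. a norm SD reduced by 10%
    u = sd_incumbent / (sd_norm * shrink)
    print(f"norm SD x {shrink:.2f}: corrected r = {case_ii(r_obs, u):.3f}")
# Both corrected values (~.37 and ~.34) remain well above the uncorrected .25.
```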

Operational validity of cognitive ability tests since 2000

Sackett et al. argued that Schmidt and Hunter’s (1998) estimate of general cognitive ability’s validity (ρ = .51) relied on data that are more than 40 years old. They referenced an unpublished conference poster that examined cognitive ability validities since 2000, based on 114 studies (Griebe et al., 2022). That poster indicated a meta-analytic validity of .24, but only unreliability in the criterion was corrected for. No range restriction corrections were applied to the 82 concurrent validity studies in their database. We cannot provide an in-depth analysis of this meta-analysis given that we only have access to the poster that was presented at the SIOP conference and the brief summary offered by Sackett and colleagues. However, it appears that the authors’ disbelief that range restriction can affect concurrent validities has produced a severe underestimation of operational validity for cognitive ability tests. The large true standard deviation accompanying the mean estimate (SDρ = .15) is telling and suggests inflation in variability due to unaddressed range restriction.

We also have concerns about the studies that may constitute the Griebe et al. database. By 2000, there was near scientific consensus about cognitive ability for employee selection (Reeve & Hakel, 2002; Viswesvaran & Ones, 2002); numerous meta-analyses had established excellent validity for cognitive ability measures (Dilchert, 2018). Could it be that (a) studies that show contrary findings and (b) studies that show incremental validity for novel predictors over cognitive ability dominated the literature and hence biased the Griebe et al. database, which was limited to the past 21 years? Sackett et al.’s summary of findings from a potentially distorted database cannot be the basis of scientific revisions to the relevance of cognitive ability for job performance, nor the basis of sweeping practice recommendations to pivot away from cognitive ability assessments in employee selection. Voluminous supporting data and intense scientific scrutiny are required.

Using the potentially distorted database from Griebe et al. (2022), the focal article offers two post hoc explanations for the lower cognitive ability validity it reports. First, the authors posit that there were fewer manufacturing jobs in their meta-analytic database, reflecting a reduced role of manufacturing jobs in the economy (Footnote 7). Second, Sackett et al. suggest that a “greater emphasis on less cognitively loaded interpersonal aspects of work” could have resulted in the lower validities. However, cognitive ability has greater importance in contemporary economic systems. The increasing complexity of jobs and workplaces (e.g., technological, economic, culturally diverse) as well as increasing knowledge and speed requirements of jobs suggest an increased importance of cognitive abilities. Information processing and learning ability are essential, more than ever, in the information age. Inconsistent with their rationale for the lower validity of cognitive ability they reported, Sackett et al. asserted that job knowledge tests and work sample tests are among the top predictors based on criterion-related validity. Cognitive ability is a primary causal determinant of both and is highly correlated with acquiring domain-specific knowledge (Kuncel et al., 2010; McCloy et al., 1994; Schmidt et al., 1986). Surprisingly, Sackett et al. highlighted “study and practice” as important determinants of knowledge acquisition and skill development (“Measures such as work samples and knowledge tests lend themselves more readily to skill development via study and practice,” Sackett et al.). Yet, “meta-analyses demonstrate that deliberate practice fails to account for all, nearly all, or even most of the variance in expert performance, and often even explains only a surprisingly small proportion of the total variance” (Ullén et al., 2016, p. 435). In contrast, the influence of cognitive ability in knowledge and skill acquisition and expertise development is strong and well established (e.g., see Ullén et al., section “Expert performance and cognitive ability”).

Takeaways for practice

It is indeed true that typical meta-analyses of predictors contain a mixture of predictive and concurrent studies. Neither Sackett et al. (2022) nor the focal article examined the actual, real-world data contributing to meta-analyses to determine and appropriately address the degree of range restriction in concurrent studies. They assumed that concurrent studies would not be subject to range restriction. More perilously, they ignored USES data and Hunter’s analyses of those data, which reported a standard deviation for each concurrent and predictive sample in the GATB database and thus provided a reasonable empirical basis for range restriction corrections. Researchers should not knowingly discount practitioner data and knowingly report an underestimate of operational validity. Such underestimation can affect decisions about job applicants and shift staffing to rely on predictors with sparse supporting evidence (e.g., emotional intelligence, games, and gamified assessments) or no empirical support at all (e.g., physiognomic analysis), degrading evidence-based practice.

In the final analysis, we recommend that organizations and practitioners use the cognitive ability validity estimates reported by Hunter et al. (2006). Operational validities are summarized in Table 1, separately for different job complexity levels. (These data are for 425 jobs that used overall job performance as the criterion. For the summary they offered for 515 jobs from the GATB database, Sackett et al. included these and an additional 90 studies in which the criterion was training performance.) Unlike Sackett et al., Hunter and colleagues carefully and appropriately addressed the cumulative effects of range restriction. These operational validities and their associated standard deviations correctly summarize cognitive ability validities from USES’ GATB data for overall job performance. They refute Sackett et al.’s conclusions about cognitive ability tests.

Table 1. Validity of cognitive ability for overall job performance from the US Employment Service Database

Analyses are for the US Employment Service’s General Aptitude Test Battery’s (GATB) measure of general mental ability. Findings are summarized from Hunter (1983) and Hunter et al. (2006).

k = number of studies; mean r = sample-size-weighted mean r; SDr = observed standard deviation of validities; SDρ = true standard deviation (i.e., standard deviation corrected for statistical artifacts, as indicated in the respective column heading); mean ρ = operational validity, where mean rs are corrected for statistical artifacts, as indicated in the respective column heading. a Job complexity family 2; b job complexity family 1 (most complex); c job complexity family 3; d job complexity family 4; e job complexity family 5 (least complex); f equally weighted data combined from professional and managerial, complex setting-up, and technician and skilled jobs; g also corrected for criterion unreliability. h Beatty et al. (2014) indicated their preference for these indirect range restriction corrections: “We conclude that Hunter et al.’s correction should generally be preferred when compared to its common alternative, Thorndike’s Case II correction for direct range restriction” (p. 587).

Sackett et al. maintain that conservative estimates of operational validity are prudent. In this they may not be alone. However, we restate what we have previously written about conservative estimates in another context: “Many researchers maintain that being conservative is good science, but conservative estimates are by definition biased estimates. We believe it is more appropriate to aim for unbiased estimates because the research goal is to maximize the accuracy of the final estimates” (Viswesvaran et al., 1996, p. 567). Science and evidence-based practice both require accuracy. Research and applied practice should aim for accuracy by considering the entirety of empirical evidence surrounding each predictor and appropriately correcting for range restriction. The reports of our best predictors’ demise are greatly exaggerated.

Footnotes

Deniz S. Ones and Chockalingam Viswesvaran contributed equally.

1 Most notably: (a) an appreciation for psychometric meta-analytic methods over single individual studies in ascertaining predictor validities, (b) the acknowledgement that overall job performance is the omnibus criterion reflecting the economic worth of employees, (c) identification of interrater reliabilities as the appropriate estimates to use in correcting for measurement error (rather than internal consistency), and (d) the need to attend to true variability in meta-analytic validity estimates rather than just the mean.

2 Such a practice helps safeguard against legal challenges in the USA, provides evidence for selection system audits, and supports contemporary evidence-based practice.

3 Sackett and Lievens (2008) characterized this work as follows: “The most significant development regarding range restriction is Hunter et al.’s (2006) development of a new approach to correcting for indirect range restriction. Prior approaches are based on the assumption that the third variable on which selection is actually done is measured and available to the researcher. However, the typical circumstance is that selection is done on the basis of a composite of measured and unmeasured variables (e.g., unquantified impressions in an interview), and that this overall selection composite is unmeasured. Hunter et al. (2006) developed a correction approach that does not require that the selection composite is measured. Schmidt et al. (2006) apply this approach to meta-analysis, which has implicitly assumed direct range restriction, and show that applying a direct restriction correction when restriction is actually indirect results in a 21% underestimate of validity in a reanalysis of four existing meta-analytic data sets.” (p. 434)

4 Sackett et al. stated, “we cannot see a reasonable basis for Hunter’s .67 value for the restricted SDx.” These data are available in the GATB manual, and a quick analysis appears to confirm Hunter’s correction values.

5 Their argument for combining predictive and concurrent studies was that most studies were concurrent: “the vast majority of studies were concurrent and were thus affected only minimally by range restriction, we conclude that our best estimates at present are the mean validity values corrected only for measurement error in the criterion.”

6 Sackett et al. (2022) acknowledged our suggested correction approach as appropriate: “it is at least hypothetically possible that it could be reasonable to pool incumbent data across jobs to estimate the unrestricted SDx, and then reduce that SDx by 10%.” However, they did not implement it despite the availability of the necessary data in the GATB database for these corrections.

7 This post hoc explanation does not address a fundamental contradiction: Schmidt and Hunter (1998) had carefully limited their cognitive ability estimates to medium-complexity jobs, excluding lower-complexity manufacturing jobs, whereas Sackett et al. (2022) preferred to include validation studies from manufacturing jobs. Sackett et al.’s inconsistency is troubling: they included manufacturing jobs in their validity estimate for cognitive ability, only to then critique Schmidt and Hunter’s database for including manufacturing jobs (something that Schmidt and Hunter did not do; their validity estimate for cognitive ability is for medium-complexity jobs).

References

Beatty, A. S., Barratt, C. L., Berry, C. M., & Sackett, P. R. (2014). Testing the generalizability of indirect range restriction corrections. Journal of Applied Psychology, 99(4), 587–598. https://doi.org/10.1037/a0036361
Bemis, S. E. (1968). Occupational validity of the General Aptitude Test Battery. Journal of Applied Psychology, 52, 240–244. https://doi.org/10.1037/h0025733
Carretta, T. R., & Ree, M. J. (2022). Correction for range restriction: Lessons from 20 research scenarios. Military Psychology, 34(5), 551–569. https://doi.org/10.1080/08995605.2021.2022067
Dahlke, J. A., & Wiernik, B. M. (2020). Not restricted to selection research: Accounting for indirect range restriction in organizational research. Organizational Research Methods, 23(4), 717–749. https://doi.org/10.1177/1094428119859398
Dilchert, S. (2018). Cognitive ability. In D. S. Ones, N. Anderson, C. Viswesvaran, & H. Sinangil (Eds.), The SAGE handbook of industrial, work and organizational psychology: Vol. 1. Personnel psychology and employee performance (2nd ed., pp. 248–276). SAGE. https://dx.doi.org/10.4135/9781473914940.n10
Griebe, A., Bazian, I., Demeke, S., Priest, R., Sackett, P. R., & Kuncel, N. R. (2022). A contemporary look at the relationship between cognitive ability and job performance [Poster presentation]. Society for Industrial and Organizational Psychology annual conference, Seattle, WA, United States.
Hartigan, J. A., & Wigdor, A. K. (1989). Fairness in employment testing: Validity generalization, minority issues, and the General Aptitude Test Battery. National Academies Press. https://doi.org/10.17226/1338
Hunter, J. E. (1983). Test validation for 12,000 jobs: An application of job classification and validity generalization to the General Aptitude Test Battery (USES Test Research Report No. 45). US Department of Labor, Employment and Training Administration. http://archive.org/details/ERIC_ED241577
Hunter, J. E., Schmidt, F. L., & Le, H. (2006). Implications of direct and indirect range restriction for meta-analysis methods and findings. Journal of Applied Psychology, 91(3), 594–612. https://doi.org/10.1037/0021-9010.91.3.594
Jensen, A. R. (1980). Bias in mental testing. Free Press.
Kuncel, N. R., Ones, D. S., & Sackett, P. R. (2010). Individual differences as predictors of work, educational, and broad life outcomes. Personality and Individual Differences, 49(4), 331–336. https://doi.org/10.1016/j.paid.2010.03.042
Lang, J. W., Kersting, M., & Hülsheger, U. R. (2010). Range shrinkage of cognitive ability test scores in applicant pools for German governmental jobs: Implications for range restriction corrections. International Journal of Selection and Assessment, 18(3), 321–328.
Le, H., Oh, I.-S., Schmidt, F. L., & Wooldridge, C. D. (2016). Correction for range restriction in meta-analysis revisited: Improvements and implications for organizational research. Personnel Psychology, 69(4), 975–1008. https://doi.org/10.1111/peps.12122
McCloy, R. A., Campbell, J. P., & Cudeck, R. (1994). A confirmatory test of a model of performance determinants. Journal of Applied Psychology, 79(4), 493–505. https://doi.org/10.1037/0021-9010.79.4.493
Murphy, K. R. (1986). When your top choice turns you down: Effect of rejected offers on the utility of selection tests. Psychological Bulletin, 99(1), 133–138. https://doi.org/10.1037/0033-2909.99.1.133
Oh, I.-S., Le, H., & Roth, P. L. (in press). Revisiting Sackett et al.’s (2022) rationale behind their recommendation against correcting for range restriction in concurrent validation studies. Journal of Applied Psychology. http://dx.doi.org/10.2139/ssrn.4308528
Reeve, C. L., & Hakel, M. D. (2002). Asking the right questions about g. Human Performance, 15(1–2), 47–74. https://doi.org/10.1080/08959285.2002.9668083
Sackett, P. R., & Lievens, F. (2008). Personnel selection. Annual Review of Psychology, 59, 419–450.
Sackett, P. R., & Ostgaard, D. J. (1994). Job-specific applicant pools and national norms for cognitive ability tests: Implications for range restriction corrections in validation research. Journal of Applied Psychology, 79(5), 680–684. https://doi.org/10.1037/0021-9010.79.5.680
Sackett, P. R., & Yang, H. (2000). Correction for range restriction: An expanded typology. Journal of Applied Psychology, 85(1), 112–118. https://doi.org/10.1037/0021-9010.85.1.112
Sackett, P. R., Zhang, C., Berry, C. M., & Lievens, F. (2022). Revisiting meta-analytic estimates of validity in personnel selection: Addressing systematic overcorrection for restriction of range. Journal of Applied Psychology, 107, 2040–2068. https://doi.org/10.1037/apl0000994
Sackett, P. R., Zhang, C., Berry, C. M., & Lievens, F. (in press). Revisiting the design of selection systems in light of new findings regarding the validity of widely used predictors. Industrial and Organizational Psychology, 16(3), 283–300.
Schmidt, F. L. (2002). The role of general cognitive ability and job performance: Why there cannot be a debate. Human Performance, 15(1–2), 187–211. https://doi.org/10.1080/08959285.2002.9668091
Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124(2), 262–274. https://doi.org/10.1037/0033-2909.124.2.262
Schmidt, F. L., Hunter, J. E., & Outerbridge, A. N. (1986). Impact of job experience and ability on job knowledge, work sample performance, and supervisory ratings of job performance. Journal of Applied Psychology, 71(3), 432–439. https://doi.org/10.1037/0021-9010.71.3.432
Schmidt, F. L., Hunter, J. E., & Urry, V. W. (1976). Statistical power in criterion-related validation studies. Journal of Applied Psychology, 61(4), 473–485. https://doi.org/10.1037/0021-9010.61.4.473
Schmidt, F. L., Oh, I.-S., & Le, H. (2006). Increasing the accuracy of corrections for range restriction: Implications for selection procedure validities and other research results. Personnel Psychology, 59(2), 281–305.
Schmidt, F. L., Pearlman, K., Hunter, J. E., & Hirsch, H. R. (1985). Forty questions about validity generalization and meta-analysis. Personnel Psychology, 38(4), 697–798. https://doi.org/10.1111/j.1744-6570.1985.tb00565.x
Ullén, F., Hambrick, D. Z., & Mosing, M. A. (2016). Rethinking expertise: A multifactorial gene-environment interaction model of expert performance. Psychological Bulletin, 142(4), 427–446. https://doi.org/10.1037/bul0000033
U.S. Department of Labor. (1979). Manual for the USES General Aptitude Test Battery. Section III: Development. Department of Labor, Employment and Training Administration, U.S. Employment Service.
Viswesvaran, C., & Ones, D. S. (2002). Agreements and disagreements on the role of general mental ability (GMA) in industrial, work, and organizational psychology. Human Performance, 15(1–2), 212–231. https://doi.org/10.1080/08959285.2002.9668092
Viswesvaran, C., Ones, D. S., & Schmidt, F. L. (1996). Comparative analysis of the reliability of job performance ratings. Journal of Applied Psychology, 81(5), 557–574. https://doi.org/10.1037/0021-9010.81.5.557