The First Principal Component of Multifaceted Variables: It's More Than a G Thing

Published online by Cambridge University Press:  02 October 2015

Duncan J. R. Jackson*
Affiliation:
Department of Organizational Psychology, Birkbeck, University of London, and Faculty of Management, University of Johannesburg
Dan J. Putka
Affiliation:
Human Resources Research Organization, Alexandria, Virginia
Kevin R. H. Teoh
Affiliation:
Department of Organizational Psychology, Birkbeck, University of London
*Correspondence concerning this article should be addressed to Duncan J. R. Jackson, Department of Organizational Psychology, Birkbeck, University of London, Clore Management Centre, Torrington Square, London WC1E 7JL, United Kingdom. E-mail: [email protected]

Extract

Ree, Carretta, and Teachout (2015) call for further investigation into dominant general factors (DGFs) and their prevalence in measures used for employee selection, development, and performance measurement. They imply that a method of choice for estimating the contribution of DGFs is principal components analysis (PCA), and they interpret the variance accounted for by the first component of the PCA solution as indicative of the contribution of a general factor. In this response, we illustrate the hazard of equating the first component of a PCA with a general factor and show how this becomes particularly problematic when PCA is applied to multifaceted variables. Rather than simply critique this use of PCA, we offer an alternative approach that helps to address and illustrate the problem we raise.
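
As an illustration of this hazard, consider a minimal simulation sketch (the design, variance shares, and variable names below are illustrative assumptions, not values taken from the article): ratings on three dimensions nested within four exercises are generated so that a general person factor contributes only 25% of each rating's variance, while exercise-specific factors contribute 50% and error 25%. The first principal component of the resulting correlation matrix nonetheless accounts for roughly 40% of the total variance, because it absorbs within-exercise covariance along with general-factor covariance.

```python
# Hypothetical simulation (not from the article): multifaceted ratings with a
# modest general factor, strong exercise-specific factors, and random error.
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_exercises, n_dims = 5000, 4, 3

g = rng.standard_normal(n_persons)                  # general person factor
u = rng.standard_normal((n_persons, n_exercises))   # exercise-specific person factors

# Each rating = sqrt(.25)*general + sqrt(.50)*exercise-specific + sqrt(.25)*error,
# so the true general-factor share of every rating's variance is 25%.
ratings = np.empty((n_persons, n_exercises * n_dims))
for e in range(n_exercises):
    for d in range(n_dims):
        noise = rng.standard_normal(n_persons)
        ratings[:, e * n_dims + d] = (
            np.sqrt(0.25) * g + np.sqrt(0.50) * u[:, e] + np.sqrt(0.25) * noise
        )

# PCA of the correlation matrix: the leading eigenvalue gives the proportion of
# total variance captured by the first principal component.
R = np.corrcoef(ratings, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
print(f"First PC: {eigvals[0] / eigvals.sum():.1%} of total variance")   # ~40%
print("True general-factor contribution in the simulation: 25%")
```

The first eigenvalue exceeds the true general-factor share because any source of positive covariance among the variables, here the exercise-specific factors shared by dimensions within an exercise, feeds into the first component regardless of whether a general factor produced it.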

Type
Commentaries
Copyright
Copyright © Society for Industrial and Organizational Psychology 2015 


References

Arthur, W., Jr., Woehr, D. J., & Maldegan, R. (2000). Convergent and discriminant validity of assessment center dimensions: A conceptual and empirical reexamination of the assessment center construct-related validity paradox. Journal of Management, 26, 813–835.
Bowler, M. C., & Woehr, D. J. (2006). A meta-analytic evaluation of the impact of dimension and exercise factors on assessment center ratings. Journal of Applied Psychology, 91, 1114–1124. doi:10.1037/0021-9010.91.5.1114
Cronbach, L. J., Gleser, G. C., Nanda, H., & Rajaratnam, N. (1972). The dependability of behavioral measurements: Theory of generalizability for scores and profiles. New York, NY: Wiley.
Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods, 4, 272–299. doi:10.1037/1082-989X.4.3.272
Floyd, R. G., Shands, E. I., Rafael, F. A., Bergeron, R., & McGrew, K. S. (2009). The dependability of general-factor loadings: The effects of factor-extraction methods, test battery composition, test battery size, and their interactions. Intelligence, 37, 453–465. doi:10.1016/j.intell.2009.05.003
Hoffman, B. J., Lance, C. E., Bynum, B., & Gentry, W. A. (2010). Rater source effects are alive and well after all. Personnel Psychology, 63, 119–151. doi:10.1111/j.1744-6570.2009.01164.x
Kuncel, N. R., & Sackett, P. R. (2014). Resolving the assessment center construct validity problem (as we know it). Journal of Applied Psychology, 99, 38–47. doi:10.1037/a0034147
Lance, C. E. (2008). Why assessment centers do not work the way they are supposed to. Industrial and Organizational Psychology: Perspectives on Science and Practice, 1, 84–97. doi:10.1111/j.1754-9434.2007.00017.x
Putka, D. J., & Hoffman, B. J. (2013). Clarifying the contribution of assessee-, dimension-, exercise-, and assessor-related effects to reliable and unreliable variance in assessment center ratings. Journal of Applied Psychology, 98, 114–133. doi:10.1037/a0030887
Putka, D. J., & Sackett, P. R. (2010). Reliability and validity. In Farr, J. L., & Tippins, N. T. (Eds.), Handbook of employee selection (pp. 9–49). New York, NY: Routledge.
Ree, M. J., Carretta, T. R., & Teachout, M. S. (2015). Pervasiveness of dominant general factors in organizational measurement. Industrial and Organizational Psychology: Perspectives on Science and Practice, 8(3), 409–427.
Scullen, S. E., Mount, M. K., & Goff, M. (2000). Understanding the latent structure of job performance ratings. Journal of Applied Psychology, 85, 956–970. doi:10.1037//0021-9010.85.6.956
Shavelson, R. J., & Webb, N. M. (2005). Generalizability theory. In Green, J. L., Camilli, G., & Elmore, P. B. (Eds.), Complementary methods for research in education (3rd ed., pp. 599–612). Washington, DC: AERA.
Widaman, K. F. (2007). Common factors versus components: Principals and principles, errors and misconceptions. In Cudeck, R., & MacCallum, R. C. (Eds.), Factor analysis at 100: Historical developments and future directions (pp. 177–204). Mahwah, NJ: Erlbaum.
Woehr, D. J., Putka, D. J., & Bowler, M. C. (2012). An examination of G-theory methods for modeling multitrait–multimethod data: Clarifying links to construct validity and confirmatory factor analysis. Organizational Research Methods, 15, 134–161. doi:10.1177/1094428111408616