
Who Is the Culprit? A Commentary on Moderator Detection

Published online by Cambridge University Press:  30 August 2017

Hannah M. Markell* and Jose M. Cortina
Department of Psychology, George Mason University

*Correspondence concerning this article should be addressed to Hannah M. Markell, George Mason University – Psychology, 4400 University Drive, David King Hall 3077, Fairfax, VA 22030. E-mail: [email protected]

Extract

Over the years, many in the field of organizational psychology have claimed that meta-analytic tests for moderators provide evidence for validity generalization (Schmidt & Hunter, 1977), a term first used in the middle of the last century (Mosier, 1950). In response, Tett, Hundley, and Christiansen (2017) caution against our inclination to generalize findings across workplaces and domains, and they urge precision in attaching meaning to the statistic being generalized. Their focal article is insightful and offers important recommendations for researchers regarding certain statistical indicators of unexplained variability, such as SDρ. In this commentary, we would like to make a different point about SDρ: it, and other statistics based on residual variance, will be deflated when moderators lack variance across studies. It is this lack of between-study variance, as much as anything else, that leads to misguided conclusions about validity generalization.
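The mechanism behind this point can be illustrated numerically. The sketch below is not from the commentary itself; it is a minimal hypothetical simulation in which a study-level moderator drives true validities. When primary studies sample only a narrow slice of the moderator's range, the between-study standard deviation of the true correlations (the quantity SDρ is meant to recover) shrinks dramatically, even though the moderator effect itself is unchanged. The specific functional form `rho(m) = 0.10 + 0.40 * m` and the moderator ranges are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical moderator effect: true validity rho depends linearly on a
# study-level moderator m scaled to [0, 1], so rho ranges from .10 to .50.
def true_rho(m):
    return 0.10 + 0.40 * m

n_studies = 2000

# Scenario A: the moderator varies freely across the meta-analyzed studies.
m_full = rng.uniform(0.0, 1.0, n_studies)

# Scenario B: restricted between-study variance in the moderator
# (all studies come from similar settings, m near .50).
m_restricted = rng.uniform(0.45, 0.55, n_studies)

# Between-study SD of the true correlations under each scenario.
sd_full = np.std(true_rho(m_full))
sd_restricted = np.std(true_rho(m_restricted))

print(f"SD of true rho, moderator varies freely: {sd_full:.3f}")
print(f"SD of true rho, moderator restricted:    {sd_restricted:.3f}")
```

Under these assumptions, the restricted scenario yields a between-study SD of true validities roughly one-tenth that of the unrestricted scenario, so a meta-analyst computing residual-variance statistics would see little unexplained variability and might conclude, mistakenly, that validity generalizes.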

Type
Commentaries
Copyright
Copyright © Society for Industrial and Organizational Psychology 2017 


References

Cortina, J. M. (2003). Apples and oranges (and pears, oh my!): The search for moderators in meta-analysis. Organizational Research Methods, 6(4), 415–439.
Markell, H. M., Lei, X., & Foroughi, C. K. (2017, April). Restricted between-study variance in meta-analytic moderators. Symposium presented at the annual meeting of the Society for Industrial and Organizational Psychology, Orlando, FL.
Mosier, C. I. (1950). Review of difficulty prediction of test items. Journal of Applied Psychology, 34(6), 452–453.
Schmidt, F. L., & Hunter, J. E. (1977). Development of a general solution to the problem of validity generalization. Journal of Applied Psychology, 62(5), 529–540.
Schmidt, F. L., & Hunter, J. E. (2004). Methods of meta-analysis: Correcting error and bias in research findings. Newbury Park, CA: Sage.
Tett, R. P., Hundley, N., & Christiansen, N. D. (2017). Meta-analysis and the myth of generalizability. Industrial and Organizational Psychology: Perspectives on Science and Practice, 10(3), 421–456.
Wagner, J. A. (1995). Studies of individualism-collectivism: Effects on cooperation in groups. Academy of Management Journal, 38(1), 152–173.