A Call for Conceptual Models of Technology in I-O Psychology: An Example From Technology-Based Talent Assessment

Published online by Cambridge University Press:  03 November 2017

Neil Morelli (The Cole Group–R&D)*
Denise Potosky (Pennsylvania State University–Psychology)
Winfred Arthur Jr. (Texas A&M University–Psychology)
Nancy Tippins (CEB)

*Correspondence concerning this article should be addressed to Neil Morelli, The Cole Group–R&D, 300 Brannan St., Suite 304, San Francisco, CA 94107. E-mail: [email protected]

Abstract

The rate of technological change is quickly outpacing today's methods for understanding how new advancements are applied within industrial-organizational (I-O) psychology. To further complicate matters, attempts to explain observed score differences or measurement equivalence across devices are often atheoretical, or they fail to explain why a technology should (or should not) affect the measured construct. In personnel testing and assessment, for example, understanding how technology influences construct measurement is critical for explaining or predicting other practical issues, such as accessibility, security, and scoring. Theory development is therefore needed to guide research hypotheses, manage expectations, and address these issues at the intersection of technology and I-O psychology. Extending a 2016 Society for Industrial and Organizational Psychology (SIOP) panel session, this article (re)introduces conceptual frameworks that can help explain how and why measurement equivalence or nonequivalence is observed in the context of selection and assessment. We outline three potential conceptual frameworks as candidates for further research, evaluation, and application, and we argue for a similar conceptual approach to explaining how technology may influence other psychological phenomena.

Type
Focal Article
Copyright
Copyright © Society for Industrial and Organizational Psychology 2017 

References

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.
Anderson, J. C., & Gerbing, D. W. (1988). Structural equation modeling in practice: A review and recommended two-step approach. Psychological Bulletin, 103, 411–423.
Armstrong, M., Landers, R. N., & Collmus, A. (2015, April). Game-thinking in human resource management. Poster session presented at the 30th Annual Conference of the Society for Industrial and Organizational Psychology, Philadelphia, PA.
Arthur, W. Jr., Doverspike, D., Kinney, T. B., & O'Connell, M. (2017). The impact of emerging technologies on selection models and research: Mobile devices and gamification as exemplars. In Farr, J. L. & Tippins, N. T. (Eds.), Handbook of employee selection (2nd ed., pp. 967–986). New York: Taylor & Francis/Psychology Press.
Arthur, W. Jr., Glaze, R. M., Jarrett, S. M., White, C. D., Schurig, I., & Taylor, J. E. (2014). Comparative evaluation of three situational judgment test response formats in terms of construct-related validity, subgroup differences, and susceptibility to response distortion. Journal of Applied Psychology, 99, 335–345.
Arthur, W. Jr., Keiser, N., & Doverspike, D. (2017). An information processing-based conceptual framework of the effects of unproctored Internet-based testing devices on scores on employment-related assessments and tests. Manuscript submitted for publication.
Arthur, W. B. (2009). The nature of technology: What it is and how it evolves. New York: Free Press.
Bank, J., Collins, L., Hartog, S., Hardesty, S., O'Shea, P., & Dapra, R. (2015, April). In Bank, J. (Chair), High-fidelity simulations: Refining leader assessment and leadership development. Symposium presented at the 30th Annual Conference of the Society for Industrial and Organizational Psychology, Philadelphia, PA.
Becker, G. (2000). How important is transient error in estimating reliability? Going beyond simulation studies. Psychological Methods, 5, 370–379.
Bennett, R. E., & Zhang, M. (2016). Validity and automated scoring. In Drasgow, F. (Ed.), Technology and testing: Improving educational and psychological measurement (pp. 142–173). New York: Routledge.
Binning, J. F., & Barrett, G. V. (1989). Validity of personnel decisions: A conceptual analysis of the inferential and evidential bases. Journal of Applied Psychology, 74, 478–494.
Chamorro-Premuzic, T., Winsborough, D., Sherman, R. A., & Hogan, R. (2016). New talent signals: Shiny new objects or a brave new world? Industrial and Organizational Psychology: Perspectives on Science and Practice, 9(3), 1–20. doi:10.1017/iop.2016.6
Chan, D., & Schmitt, N. (1997). Video-based versus paper-and-pencil method of assessment in situational judgment tests: Subgroup differences in test performance and face validity perceptions. Journal of Applied Psychology, 82, 143–159.
Coovert, M. D., & Thompson, L. F. (2014a). Toward a synergistic relationship between psychology and technology. In Coovert, M. D. & Thompson, L. F. (Eds.), The psychology of workplace technology (pp. 1–21). New York: Routledge.
Coovert, M. D., & Thompson, L. F. (2014b). The psychology of workplace technology. New York: Routledge.
Coyne, I., Warszta, T., Beadle, S., & Sheehan, N. (2005). The impact of mode of administration on the equivalence of a test battery: A quasi-experimental design. International Journal of Selection and Assessment, 13, 220–224.
Ferran, C., & Watts, S. (2008). Videoconferencing in the field: A heuristic processing model. Management Science, 54, 565–578.
Foster, D. (2016). Testing technology and its effects on test security. In Drasgow, F. (Ed.), Technology and testing: Improving educational and psychological measurement (pp. 235–255). New York: Routledge.
Ghiselli, E. E., Campbell, J. P., & Zedeck, S. (1981). Measurement theory for the behavioral sciences. New York: W. H. Freeman & Co.
Gierl, M. J., Lai, H., Fung, K., & Zheng, B. (2016). Using technology-enhanced processes to generate test items in multiple languages. In Drasgow, F. (Ed.), Technology and testing: Improving educational and psychological measurement (pp. 109–126). New York: Routledge.
Gray, C., Morelli, N. A., & McLane, W. (2015, April). Does use context affect selection assessments via mobile devices? In Morelli, N. A. (Chair), Mobile devices in talent assessment: The next chapter. Symposium presented at the 30th Annual Conference of the Society for Industrial and Organizational Psychology, Philadelphia, PA.
Guilford, J. P. (1954). Psychometric methods (2nd ed.). New York: McGraw-Hill.
Gulliksen, H. (1950). Theory of mental tests. New York: Wiley.
Hong, E. (1999). Test anxiety, perceived test difficulty, and test performance: Temporal patterns of their effects. Learning and Individual Differences, 11, 431–447.
Huang, J., & Yuan, J. (in press). Bayesian dynamic mediation analysis. Psychological Methods. doi:10.1037/met0000073
King, D. D., Ryan, A. M., Kantrowitz, T., Grelle, D., & Dainis, A. (2015). Mobile Internet testing: An analysis of equivalence, individual differences, and reactions. International Journal of Selection and Assessment, 23, 382–394.
Landers, R. N. (2016). An introduction plus a crash course in R. The Industrial-Organizational Psychologist, 54(1). Retrieved from http://www.siop.org/tip/july16/crash.aspx
Leonardi, P. M. (2012). Materiality, sociomateriality, and socio-technical systems: What do these terms mean? How are they related? Do we need them? In Leonardi, P. M., Nardi, B. A., & Kallinikos, J. (Eds.), Materiality and organizing: Social interaction in a technological world (pp. 25–48). Oxford, UK: Oxford University Press.
Luecht, R. M. (2016). Computer-based test delivery models, data, and operational implementation issues. In Drasgow, F. (Ed.), Technology and testing: Improving educational and psychological measurement (pp. 179–205). New York: Routledge.
McCornack, R. L. (1956). A criticism of studies comparing item-weighting methods. Journal of Applied Psychology, 40, 343–344.
Mead, A. D., & Drasgow, F. (1993). Equivalence of computerized and paper-and-pencil cognitive ability tests: A meta-analysis. Psychological Bulletin, 114, 449–459.
Mead, A. D., Olson-Buchanan, J. B., & Drasgow, F. (2014). Technology-based selection. In Coovert, M. D. & Thompson, L. F. (Eds.), The psychology of workplace technology (pp. 21–43). New York: Routledge.
Morelli, N., Adler, S., Arthur, W. Jr., Potosky, D., & Tippins, N. (2016, April). Developing a conceptual model of technology applied to I-O psychology. Panel discussion presented at the 31st Annual Conference of the Society for Industrial and Organizational Psychology, Anaheim, CA.
Orlikowski, W. J. (2007). Sociomaterial practices: Exploring technology at work. Organization Studies, 28, 1435–1448.
Orlikowski, W. J., & Scott, S. V. (2008). Sociomateriality: Challenging the separation of technology, work, and organization. The Academy of Management Annals, 2, 433–474.
Potosky, D. (2008). A conceptual framework for the role of the administration medium in the personnel assessment process. Academy of Management Review, 33, 629–648.
Ryan, A. M., & Ployhart, R. E. (2014). A century of selection. Annual Review of Psychology, 65, 693–717.
Schmitt, N., & Kuljanin, G. (2008). Measurement invariance: Review of practice and implications. Human Resource Management Review, 18, 210–222.
Scott, J. C., & Mead, A. D. (2011). Foundations for measurement. In Tippins, N. & Adler, S. (Eds.), Technology-enhanced assessment of talent (pp. 1–18). San Francisco: John Wiley & Sons, Inc.
Seiler, S., McEwen, D., Benavidez, J., O'Shea, P., Popp, E., & Sydell, E. (2015, April). Under the hood: Practical challenges in developing technology-enhanced assessments. Panel discussion presented at the 30th Annual Conference of the Society for Industrial and Organizational Psychology, Philadelphia, PA.
Spearman, C. (1904). The proof and measurement of the association between two things. American Journal of Psychology, 15, 72–101.
Spearman, C. (1910). Correlation calculated from faulty data. British Journal of Psychology, 3, 271–295.
Stone, E., Laitusis, C. C., & Cook, L. L. (2016). Increasing the accessibility of assessments through technology. In Drasgow, F. (Ed.), Technology and testing: Improving educational and psychological measurement (pp. 217–234). New York: Routledge.
Tay, L., Meade, A. W., & Cao, M. (2014). An overview and practical guide to IRT measurement equivalence analysis. Organizational Research Methods, 18, 3–46.
Thurstone, L. L. (1931). The reliability and validity of tests: Derivation and interpretation of fundamental formulae concerned with reliability and validity of tests and illustrative problems. Ann Arbor, MI: Edwards Bros.
Tippins, N. (2011). Overview of technology-enabled assessments. In Tippins, N. & Adler, S. (Eds.), Technology-enhanced assessment of talent (pp. 1–18). San Francisco: John Wiley & Sons, Inc.
Tonidandel, S., Quiñones, M. A., & Adams, A. A. (2002). Computer-adaptive testing: The impact of test characteristics on perceived performance and test takers' reactions. Journal of Applied Psychology, 87, 320–332.
Vandenberg, R. J., & Lance, C. E. (2000). A review and synthesis of the measurement invariance literature: Suggestions, practices, and recommendations for organizational research. Organizational Research Methods, 3, 4–70.
Vandenberg, R. J., & Morelli, N. A. (2016). A contemporary update on testing for measurement equivalence and invariance. In Meyer, J. P. (Ed.), The handbook of employee commitment (pp. 449–461). Cheltenham, UK: Edward Elgar.
Zickar, M. J., Cortina, J., & Carter, N. T. (2010). Evaluation of measures: Sources of sufficiency, error, and contamination. In Farr, J. L. & Tippins, N. (Eds.), Handbook of employee selection (pp. 399–416). New York: Routledge.