
Exploring the Effects of Item-Specific Factors in Sequential and IRTree Models

Published online by Cambridge University Press: 01 January 2025

Weicong Lyu* (University of Wisconsin-Madison)
Daniel M. Bolt (University of Wisconsin-Madison)
Samuel Westby (Northeastern University)

*Correspondence should be made to Weicong Lyu, University of Wisconsin-Madison, 880 Educational Sciences, 1025 West Johnson Street, Madison, WI 53706, USA. Email: [email protected]

Abstract

Test items for which the item score reflects a sequential or IRTree modeling outcome are considered. For such items, we argue that item-specific factors, although not empirically measurable, will often be present across stages of the same item. In this paper, we present a conceptual model that incorporates such factors. We use the model to demonstrate how the varying conditional distributions of item-specific factors across stages become absorbed into the stage-specific item discrimination and difficulty parameters, creating ambiguity in the interpretation of item and person parameters beyond the first stage. We discuss implications in relation to various applications considered in the literature, including methodological studies of (1) repeated-attempt items; (2) answer change/review; (3) on-demand item hints; (4) item-skipping behavior; and (5) Likert scale items. Our own empirical applications, as well as several examples published in the literature, show patterns of violations of item parameter invariance across stages that are highly suggestive of item-specific factors. For applications using sequential or IRTree models as analytical models, or for which the resulting item score might be viewed as the outcome of such a process, we recommend (1) regular inspection of data or analytic results for empirical evidence (or theoretical expectations) of item-specific factors; and (2) sensitivity analyses to evaluate the implications of item-specific factors for the intended inferences or applications.
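To make the absorption argument concrete, the following is a minimal two-stage sketch under illustrative assumptions (the notation is ours, not taken from the paper): probit stage models sharing a single item-specific factor u_i ~ N(0, 1) with hypothetical stage loadings \lambda_{i1}, \lambda_{i2}. Marginalizing over u_i at stage 1 only rescales the parameters, but at stage 2 the factor must be integrated against its conditional distribution given stage-1 success, which depends on \theta.

% Stage models with a shared item-specific factor u_i ~ N(0,1):
P(X_{i1} = 1 \mid \theta, u_i) = \Phi(a_{i1}\theta + \lambda_{i1} u_i - b_{i1})
P(X_{i2} = 1 \mid X_{i1} = 1, \theta, u_i) = \Phi(a_{i2}\theta + \lambda_{i2} u_i - b_{i2})

% Marginal stage-1 curve: still a normal ogive, with rescaled parameters:
P(X_{i1} = 1 \mid \theta) = \Phi\bigl((a_{i1}\theta - b_{i1}) / \sqrt{1 + \lambda_{i1}^2}\bigr)

% Marginal stage-2 curve: u_i is integrated against its conditional density
% given stage-1 success, which varies with \theta, so the result is no longer
% a normal ogive with the structural parameters a_{i2}, b_{i2}:
P(X_{i2} = 1 \mid X_{i1} = 1, \theta)
  = \int \Phi(a_{i2}\theta + \lambda_{i2} u - b_{i2}) \, f(u \mid X_{i1} = 1, \theta) \, du,
\quad \text{where} \quad
f(u \mid X_{i1} = 1, \theta) = \frac{\Phi(a_{i1}\theta + \lambda_{i1} u - b_{i1})\,\phi(u)}{P(X_{i1} = 1 \mid \theta)}

Under this sketch, fitting an ordinary sequential or IRTree model absorbs the \theta-dependent shift in f(u \mid X_{i1} = 1, \theta) into effective stage-2 discrimination and difficulty values, producing exactly the kind of cross-stage parameter-invariance violation the abstract describes.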

Type: Theory & Methods
Copyright: © 2023 The Author(s) under exclusive licence to The Psychometric Society

