
Effects of Discontinue Rules on Psychometric Properties of Test Scores

Published online by Cambridge University Press:  01 January 2025

Matthias von Davier*
Affiliation:
National Board of Medical Examiners
Youngmi Cho
Affiliation:
American Institutes for Research
Tianshu Pan
Affiliation:
Pearson
*
Correspondence should be made to Matthias von Davier, National Board of Medical Examiners, 3750 Market Street, Philadelphia, PA 19104-3102, USA. Email: [email protected]

Abstract

This paper provides results on a form of adaptive testing that is used frequently in intelligence testing. In these tests, items are presented in order of increasing difficulty. The presentation of items is adaptive in the sense that a session is discontinued once a test taker produces a certain number of incorrect responses in sequence, with subsequent (not observed) responses commonly scored as wrong. The Stanford-Binet Intelligence Scales (SB5; Riverside Publishing Company, 2003), the Kaufman Assessment Battery for Children (KABC-II; Kaufman and Kaufman, 2004), the Kaufman Adolescent and Adult Intelligence Test (Kaufman and Kaufman, 2014), and the Universal Nonverbal Intelligence Test (2nd ed.; Bracken and McCallum, 2015) are some of the many examples using this rule. He and Wolfe (Educ Psychol Meas 72(5):808–826, 2012. https://doi.org/10.1177/0013164412441937) compared different ability estimation methods in a simulation study for this discontinue rule adaptation of test length. However, to our knowledge there has been no study, based on analytic arguments drawing on probability theory, of the underlying distributional properties of what these authors call stochastic censoring of responses. The results obtained by He and Wolfe (Educ Psychol Meas 72(5):808–826, 2012. https://doi.org/10.1177/0013164412441937) agree with results presented by DeAyala et al. (J Educ Meas 38:213–234, 2001) as well as Rose et al. (Modeling non-ignorable missing data with item response theory (IRT; ETS RR-10-11), Educational Testing Service, Princeton, 2010) and Rose et al. (Psychometrika 82:795–819, 2017. https://doi.org/10.1007/s11336-016-9544-7) in that ability estimates are most biased when the unobserved responses are scored as wrong. This scoring is used operationally, so more research is needed to improve practice in this field. The paper extends existing research on adaptivity by discontinue rules in intelligence tests in multiple ways: First, an analytical study of the distributional properties of discontinue rule scored items is presented. Second, a simulation is presented that includes additional scoring rules and uses ability estimators that may be suitable to reduce bias for discontinue rule scored intelligence tests.
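For readers who want a concrete picture of the mechanism described above, the following is a minimal sketch (not the authors' code) of a discontinue-rule administration under an assumed Rasch model. The item difficulties, the stopping threshold k, and the two scoring rules compared here (unobserved responses scored as wrong vs. dropped from the likelihood) are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: discontinue-rule administration under an assumed Rasch model,
# with two ways of handling the unobserved tail of the test.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(7)

def rasch_prob(theta, b):
    """P(correct) under the Rasch model."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def administer_with_discontinue(theta, b_sorted, k=4):
    """Present items in order of increasing difficulty; stop after k
    consecutive incorrect responses. Items never presented are np.nan."""
    resp = np.full(len(b_sorted), np.nan)
    streak = 0
    for j, b in enumerate(b_sorted):
        correct = rng.random() < rasch_prob(theta, b)
        resp[j] = float(correct)
        streak = 0 if correct else streak + 1
        if streak >= k:
            break
    return resp

def mle_theta(resp, b, score_missing_as_wrong=False):
    """Maximum likelihood ability estimate; not-observed items are either
    scored 0 (operational rule) or dropped from the likelihood."""
    x = resp.copy()
    if score_missing_as_wrong:
        x = np.nan_to_num(x, nan=0.0)
    mask = ~np.isnan(x)
    def negloglik(theta):
        p = rasch_prob(theta, b[mask])
        return -np.sum(x[mask] * np.log(p) + (1 - x[mask]) * np.log(1 - p))
    return minimize_scalar(negloglik, bounds=(-6, 6), method="bounded").x

# Toy comparison of the two scoring rules for one simulated test taker.
b = np.sort(rng.normal(0.0, 1.0, 40))   # item difficulties, easiest first
resp = administer_with_discontinue(theta=-0.5, b_sorted=b, k=4)
print("theta-hat, missing scored as wrong:", round(mle_theta(resp, b, True), 2))
print("theta-hat, missing dropped:        ", round(mle_theta(resp, b, False), 2))
```

In this setup the estimate that scores the unobserved tail as wrong is typically pulled downward relative to the estimate that drops those items, which is the direction of bias the abstract refers to.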

Type
Original Paper
Copyright
Copyright © 2019 The Psychometric Society

References

Bolt, D. M., Cohen, A. S., & Wollack, J. A. (2002). Item parameter estimation under conditions of test speededness: Application of a mixture Rasch model with ordinal constraints. Journal of Educational Measurement, 39, 331–348.
Bracken, B. A., & McCallum, R. S. (2015). Universal nonverbal intelligence test (2nd ed.). Itasca, IL: Riverside Publishing.
Chen, H., Yamamoto, K., & von Davier, M. (2014). Controlling multistage testing exposure rates in international large-scale assessments. In D. Yan, A. A. von Davier, & C. Lewis (Eds.), Computerized multistage testing: Theory and applications. New York: CRC Press.
DeAyala, R. J., Plake, B. S., & Impara, J. C. (2001). The impact of omitted responses on the accuracy of ability estimation in item response theory. Journal of Educational Measurement, 38, 213–234.
Firth, D. (1993). Bias reduction of maximum likelihood estimates. Biometrika, 80(1), 27–38.
Glas, C. A. W. (2010). Item parameter estimation and item fit analysis. In W. J. van der Linden & C. A. W. Glas (Eds.), Elements of adaptive testing (pp. 269–288). New York: Springer.
He, W., & Wolfe, E. W. (2012). Treatment of not-administered items on individually administered intelligence tests. Educational and Psychological Measurement, 72(5), 808–826.
Holland, P. W., & Rosenbaum, P. R. (1986). Conditional association and unidimensionality in monotone latent variable models. The Annals of Statistics, 14(4), 1523–1543.
Holland, P. W., & Thayer, D. T. (1986). Differential item functioning and the Mantel–Haenszel procedure. ETS Research Report Series. https://doi.org/10.1002/j.2330-8516.1986.tb00186.x
Homack, S. R., & Reynolds, C. R. (2007). Essentials of assessment with brief intelligence tests. Hoboken, NJ: Wiley. ISBN 978-0-471-26412-5.
Kaufman, A. S., & Kaufman, N. L. (2004). Manual: Kaufman assessment battery for children (2nd ed.). Circle Pines, MN: AGS Publishing.
Kaufman, A. S., & Kaufman, N. L. (2014). Kaufman adolescent and adult intelligence test. In Encyclopedia of special education. https://doi.org/10.1002/9781118660584.ese1323
Little, R. J. A. (1988). Missing-data adjustments in large surveys. Journal of Business and Economic Statistics, 6, 287–296.
Little, R. J. A., & Rubin, D. B. (2002). Statistical analysis with missing data (2nd ed.). Hoboken, NJ: Wiley.
Little, R. J., & Zhang, N. (2011). Subsample ignorable likelihood for regression analysis with missing data. Journal of the Royal Statistical Society: Series C (Applied Statistics), 60(4), 591–605.
Little, R. J., Rubin, D. B., & Zangeneh, S. Z. (2017). Conditions for ignoring the missing-data mechanism in likelihood inferences for parameter subsets. Journal of the American Statistical Association, 112(517), 314–320.
Lord, F. M. (1980). Applications of item response theory to practical testing problems. Hillsdale, NJ: Lawrence Erlbaum.
Mantel, N., & Haenszel, W. (1959). Statistical aspects of the analysis of data from retrospective studies of disease. Journal of the National Cancer Institute, 22, 719–748.
Mislevy, R. J., & Wu, P.-K. (1996). Missing responses and IRT ability estimation: Omits, choice, time limits, and adaptive testing. ETS Research Report Series, 1996, i–36.
Morris, T. P., White, I. R., & Royston, P. (2014). Tuning multiple imputation by predictive mean matching and local residual draws. BMC Medical Research Methodology, 14, 75–87.
Riverside Publishing Company. (2003). Stanford-Binet intelligence scales (SB5) (5th ed.). Itasca, IL: Author.
Rose, N., von Davier, M., & Xu, X. (2010). Modeling non-ignorable missing data with item response theory (IRT; ETS RR-10-11). Princeton, NJ: Educational Testing Service.
Rose, N., von Davier, M., & Nagengast, B. (2017). Modeling omitted and not-reached items in IRT models. Psychometrika, 82, 795–819.
Reichenbach, H. (1956). The direction of time. Berkeley, CA: University of California Press.
Rubin, D. B. (1976). Inference and missing data. Biometrika, 63, 581–592.
Rubin, D. B. (1986). Statistical matching using file concatenation with adjusted weights and multiple imputations. Journal of Business and Economic Statistics, 4, 87–94.
Suppes, P. (1970). A probabilistic theory of causality. Amsterdam: North-Holland Publishing Company.
Suppes, P., & Zanotti, M. (1981). When are probabilistic explanations possible? Synthese, 48, 191–199.
van der Linden, W. J. (Ed.). (2016). Handbook of item response theory (Vol. 1, 2nd ed.). Boca Raton, FL: CRC Press.
von Davier, M. (2005). A general diagnostic model applied to language testing data (Research Report RR-05-16). Princeton, NJ: ETS.
von Davier, M. (2016a). The Rasch model. In W. J. van der Linden (Ed.), Handbook of item response theory (Vol. 1, 2nd ed., pp. 31–48). Boca Raton, FL: CRC Press.
von Davier, M. (2016b). CTT and No-DIF and ? = (almost) Rasch model (Chapter 14). In M. Rosen, K. Y. Hansen, & U. Wolff (Eds.), Cognitive abilities and educational outcomes: A festschrift in honour of Jan-Eric Gustafsson (pp. 249–272). Springer book series: Methodology of Educational Measurement and Assessment.
von Davier, M., & Rost, J. (1995). Polytomous mixed Rasch models. In G. H. Fischer & I. W. Molenaar (Eds.), Rasch models: Foundations, recent developments, and applications (pp. 371–379). New York: Springer.
Verhelst, N. D., & Glas, C. A. W. (1995). The one parameter logistic model. In G. H. Fischer & I. W. Molenaar (Eds.), Rasch models: Foundations, recent developments, and applications. New York, NY: Springer.
Warm, T. A. (1989). Weighted likelihood estimation of ability in item response theory. Psychometrika, 54(3), 427–450.
Yamamoto, K., & Everson, H. (1997). Modeling the effects of test length and test time on parameter estimation using the HYBRID model. In J. Rost & R. Langeheine (Eds.), Applications of latent trait and latent class models in the social sciences (pp. 89–98). New York: Waxmann.