
A Bayesian Random Effects Model for Testlets

Published online by Cambridge University Press:  01 January 2025

Eric T. Bradlow*
Affiliation:
Marketing and Statistics, The Wharton School, The University of Pennsylvania
Howard Wainer
Affiliation:
Educational Testing Service
Xiaohui Wang
Affiliation:
Educational Testing Service
*Requests for reprints should be sent to Eric T. Bradlow, Marketing Department, The Wharton School, University of Pennsylvania, 1400 Steinberg Hall-Dietrich Hall, Philadelphia PA 19104-6371.

Abstract

Standard item response theory (IRT) models fit to dichotomous examination responses ignore the fact that sets of items (testlets) often come from a single common stimulus (e.g., a reading comprehension passage). In this setting, all items given to an examinee are unlikely to be conditionally independent (given examinee proficiency). Models that assume conditional independence will overestimate the precision with which examinee proficiency is measured. Overstatement of precision may lead to inaccurate inferences such as prematurely ending an examination in which the stopping rule is based on the estimated standard error of examinee proficiency (e.g., an adaptive test). To model examinations that may be a mixture of independent items and testlets, we modified one standard IRT model to include an additional random effect for items nested within the same testlet. We use a Bayesian framework to facilitate posterior inference via a Data Augmented Gibbs Sampler (DAGS; Tanner & Wong, 1987). The modified and standard IRT models are both applied to a data set from a disclosed form of the SAT. We also provide simulation results indicating that the degree of precision bias is a function of the variability of the testlet effects, as well as the testlet design.

Type
Original Paper
Copyright
Copyright © 1999 The Psychometric Society


Footnotes

The authors wish to thank Robert Mislevy, Andrew Gelman and Donald B. Rubin for their helpful suggestions and comments, Ida Lawrence and Miriam Feigenbaum for providing us with the SAT data analyzed in section 5, and the two anonymous referees for their careful reading and thoughtful suggestions on an earlier draft. We are also grateful to the Educational Testing Service for providing the resources to do this research.

References

Albert, J. H. (1992). Bayesian estimation of normal ogive response curves using Gibbs sampling. Journal of Educational Statistics, 17, 251–269.
Albert, J. H., & Chib, S. (1993). Bayesian analysis of binary and polychotomous response data. Journal of the American Statistical Association, 88, 669–679.
Bradlow, E. T., & Zaslavsky, A. M. (1997). Case influence analysis in Bayesian inference. Journal of Computational and Graphical Statistics, 6(3), 314–331.
Bradlow, E. T., & Zaslavsky, A. M. (1999). A hierarchical latent variable model for ordinal customer satisfaction survey data with “no answer” responses. Journal of the American Statistical Association, 94(445), 43–52.
Gelfand, A. E., & Smith, A. F. M. (1990). Sampling-based approaches to calculating marginal densities. Journal of the American Statistical Association, 85, 398–409.
Gelman, A., & Rubin, D. B. (1992). Inference from iterative simulation using multiple sequences. Statistical Science, 7, 457–511.
Hulin, C. L., Drasgow, F., & Parsons, L. K. (1983). Item response theory. Homewood, IL: Dow Jones-Irwin.
Lord, F. M., & Novick, M. R. (1968). Statistical theories of mental test scores. Reading, MA: Addison-Wesley.
McDonald, R. P. (1981). The dimensionality of tests and items. British Journal of Mathematical and Statistical Psychology, 34, 100–117.
McDonald, R. P. (1982). Linear versus nonlinear models in item response theory. Applied Psychological Measurement, 6, 379–396.
Mislevy, R. J., & Bock, R. D. (1983). BILOG: Item and test scoring with binary logistic models. Mooresville, IN: Scientific Software.
Rosenbaum, P. R. (1988). Item bundles. Psychometrika, 53, 349–359.
Sireci, S. G., Wainer, H., & Thissen, D. (1991). On the reliability of testlet-based tests. Journal of Educational Measurement, 28, 237–247.
Stout, W. F. (1987). A nonparametric approach for assessing latent trait dimensionality. Psychometrika, 52, 589–617.
Stout, W. F. (1990). A new item response theory modeling approach with applications to unidimensional assessment and ability estimation. Psychometrika, 55, 293–326.
Stout, W., Habing, B., Douglas, J., Kim, H. R., Roussos, L., & Zhang, J. (1996). Conditional covariance-based nonparametric multidimensionality assessment. Applied Psychological Measurement, 20, 331–354.
Tanner, M. A., & Wong, W. H. (1987). The calculation of posterior distributions by data augmentation. Journal of the American Statistical Association, 82, 528–540.
Wainer, H. (1995). Precision and differential item functioning on a testlet-based test: The 1991 Law School Admissions Test as an example. Applied Measurement in Education, 8(2), 157–187.
Wainer, H., & Kiely, G. (1987). Item clusters and computerized adaptive testing: A case for testlets. Journal of Educational Measurement, 24, 185–202.
Wainer, H., & Thissen, D. (1996). How is reliability related to the quality of test scores? What is the effect of local dependence on reliability? Educational Measurement: Issues and Practice, 15(1), 22–29.
Yen, W. (1993). Scaling performance assessments: Strategies for managing local item dependence. Journal of Educational Measurement, 30, 187–213.
Zhang, J. (1996). Some fundamental issues in item response theory with applications. Unpublished doctoral dissertation, University of Illinois at Urbana-Champaign.
Zhang, J., & Stout, W. F. (1999). Conditional covariance structure of generalized compensatory multidimensional items. Psychometrika, 64, 129–152.