
On-line Assessment of Comprehension Processes

Published online by Cambridge University Press:  10 January 2013

Tomás Martínez*, Eduardo Vidal-Abarca, Laura Gil, and Ramiro Gilabert
Affiliation: Universitat de València (Spain)
*Correspondence concerning this article should be addressed to Tomás Martínez. Facultat de Psicología, Departamento de Psicología Evolutiva y de la Educación, Universitat de València, Av. de Blasco Ibáñez, n° 21, 46010 Valencia (Spain). E-mail: [email protected]

Abstract

In this paper we describe a new version of an earlier paper-and-pencil standardized comprehension test, the Test of Comprehension Processes (Vidal-Abarca, Gilabert, Martínez, & Sellés, 2007). The new version has been adapted to a computer-based environment using the moving window technique. It can be used to assess the comprehension strategies of students from fifth to tenth grade (11 to 16 years old). Comprehension strategies are registered on-line through reading times and visits to relevant sections of the text during the question-answering process. Data show that the computer-based version yields results similar to those of the paper-and-pencil version. In addition, we identify the particular strategies deployed during the question-answering process by high, medium, and low comprehenders.

In this article we present a new version of a standardized paper-and-pencil comprehension test called the Test of Comprehension Processes (Vidal-Abarca, Gilabert, Martínez, & Sellés, 2007), which has been adapted to a computer-based environment using a moving window technique. This computer-based version can be used to assess the comprehension strategies of schoolchildren from fifth grade of Primary to fourth grade of Compulsory Secondary Education (11 to 16 years old). Comprehension strategies are measured by registering, on-line, reading times and visits to relevant segments of the text during the process of answering questions about the text. The results show, first, that the computer-based version is similar to the paper-and-pencil version. Second, distinct strategies during the question-answering process characterize students with high, medium, and low levels of comprehension.
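
As a rough illustration of the kind of on-line registration described above (per-segment reading times and revisits under a moving-window presentation), the sketch below shows one way such logging could be implemented. It is a minimal, hypothetical Python example; the class name, interface, and segmentation are assumptions made for illustration and do not reproduce the actual TPC software.

```python
import time
from collections import defaultdict


class MovingWindowLog:
    """Hypothetical logger for a moving-window reader: only one text segment
    is visible at a time, and the logger accumulates reading time and visit
    counts per segment (illustrative only; not the TPC software)."""

    def __init__(self, segment_ids):
        self.segment_ids = list(segment_ids)      # ordered segments of the text
        self.reading_time = defaultdict(float)    # seconds spent per segment
        self.visits = defaultdict(int)            # times each segment was opened
        self._current = None
        self._opened_at = None

    def open_segment(self, segment_id):
        """Reveal a segment (a 'visit'); whatever was open is closed first."""
        self.close_current()
        self._current = segment_id
        self._opened_at = time.monotonic()
        self.visits[segment_id] += 1

    def close_current(self):
        """Hide the open segment and add the elapsed time to its total."""
        if self._current is not None:
            self.reading_time[self._current] += time.monotonic() - self._opened_at
            self._current, self._opened_at = None, None

    def summary(self):
        """Per-segment (seconds, visits), e.g. revisits to question-relevant text."""
        return {s: (round(self.reading_time[s], 2), self.visits[s])
                for s in self.segment_ids}


# Minimal usage: the reader opens segment s1, moves on, and later revisits it
# while answering a question about the text.
log = MovingWindowLog(["s1", "s2", "s3"])
log.open_segment("s1")
log.open_segment("s2")
log.open_segment("s1")   # revisit during question answering
log.close_current()
print(log.summary())
```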

Type: Research Article
Copyright: © Cambridge University Press 2009

References

Aaronson, D., & Ferres, S. (1983). Lexical categories and reading tasks. Journal of Experimental Psychology: Human Perception and Performance, 9(5), 675–699.
Alderson, J. C. (2000). Assessing reading. Cambridge: Cambridge University Press.
Bugbee, A. C. Jr. (1996). The equivalence of paper-and-pencil and computer-based testing. Journal of Research on Computing in Education, 28(3), 282–299.
Carver, C. S., & Scheier, M. F. (1990). Origins and functions of positive and negative affect: A control-process view. Psychological Review, 97(1), 19–35.
Cataldo, M. G., & Oakhill, J. (2000). Why are poor comprehenders inefficient searchers? An investigation into the effects of text representation and spatial memory on the ability to locate information in text. Journal of Educational Psychology, 92(4), 791–799.
Choi, I., Kim, K. S., & Boo, J. (2003). Comparability of a paper-based language test and a computer-based language test. Language Testing, 20(3), 295–320.
Cordón, L. A., & Day, J. D. (1996). Strategy use on standardized reading comprehension tests. Journal of Educational Psychology, 88(2), 288–295.
Entin, E. B., & Klare, G. R. (1985). Relationships of measures of interest, prior knowledge, and readability to comprehension of expository passages. In B. Hutson (Ed.), Advances in Reading/Language Research (pp. 9–38). Greenwich: JAI Press.
Evans, L. D., Tannehill, R., & Martin, S. (1995). Children's reading skills: A comparison of traditional and computerized assessment. Behavior Research Methods, Instruments & Computers, 27(2), 162–165.
Farr, R., Pritchard, R., & Smitten, B. (1990). A description of what happens when an examinee takes a multiple-choice reading comprehension test. Journal of Educational Measurement, 27(3), 209–226.
Goldman, S. R. (1997). Learning from text: Reflections on the past and suggestions for the future. Discourse Processes, 23(3), 357–398.
Goldman, S. R., & Saul, E. U. (1990). Applications for tracking reading behavior on the Macintosh. Behavior Research Methods, Instruments & Computers, 22(6), 526–532.
Graesser, A. C., Singer, M., & Trabasso, T. (1994). Constructing inferences during narrative text comprehension. Psychological Review, 101, 371–395.
Just, M. A., & Carpenter, P. A. (1980). A theory of reading: From eye fixations to comprehension. Psychological Review, 87(4), 329–354.
Just, M. A., Carpenter, P. A., & Woolley, J. D. (1982). Paradigms and processes in reading comprehension. Journal of Experimental Psychology: General, 111(2), 228–238.
Kintsch, W. (1988). The role of knowledge in discourse comprehension: A construction-integration model. Psychological Review, 95(2), 163–182.
Kintsch, W. (1998). Comprehension: A paradigm for cognition. New York: Cambridge University Press.
Kintsch, W., & Kintsch, E. (2005). Comprehension. In S. G. Paris & S. A. Stahl (Eds.), Children's reading comprehension and assessment (pp. 71–92). Mahwah: Lawrence Erlbaum Associates Publishers.
Kobrin, J. L., & Young, J. W. (2003). The cognitive equivalence of reading comprehension test items via computerized and paper-and-pencil administration. Applied Measurement in Education, 16(2), 115–140.
Koriat, A., & Goldsmith, M. (1996). Monitoring and control processes in the strategic regulation of memory accuracy. Psychological Review, 103(3), 490–517.
Magliano, J. P., Millis, K. K., Ozuru, Y., & McNamara, D. S. (2007). A multidimensional framework to evaluate assessment tools. In D. S. McNamara (Ed.), Reading comprehension strategies: Theories, interventions, and technologies (pp. 107–136). Mahwah, NJ: Erlbaum.
Magliano, J. P., Millis, K. K., The NIU R-SAT Development Team, Levinstein, I. B., & Boonthum, C. (in press). Comprehension strategies and performance with the Reading Strategies Assessment Tool (R-SAT). Journal of Educational Psychology.
Maguire, K. B., Knobel, M. M., Knobel, B. L., & Sedlacek, L. G. (1991). Computer-adapted PPVT-R: A comparison between standard and modified versions within an elementary school population. Psychology in the Schools, 28(3), 199–205.
Martínez, T., Vidal-Abarca, E., Sellés, M. P., & Gilabert, R. (2008). Evaluación de las estrategias y los procesos de comprensión: El test de procesos de comprensión (TPC) [Evaluation of comprehension strategies and processes: The Test of Comprehension Processes]. Infancia y Aprendizaje, 31(3), 319–332.
McNamara, D. S., & Kintsch, W. (1996). Learning from texts: Effects of prior knowledge and text coherence. Discourse Processes, 22(3), 247–288.
McNamara, D. S., O'Reilly, T. P., Best, R. M., & Ozuru, Y. (2006). Improving adolescent students' reading comprehension with iSTART. Journal of Educational Computing Research, 34(2), 147–171.
Mead, A. D., & Drasgow, F. (1993). Equivalence of computerized and paper-and-pencil cognitive ability tests: A meta-analysis. Psychological Bulletin, 114(3), 449–458.
Metcalfe, J. (2002). Is study time allocated selectively to a region of proximal learning? Journal of Experimental Psychology: General, 131(3), 349–363.
Metcalfe, J., & Kornell, N. (2003). The dynamics of learning and allocation of study time to a region of proximal learning. Journal of Experimental Psychology: General, 132(4), 530–542.
Myers, S. S. (1991). Performance in reading comprehension – product or process? Educational Review, 43(3), 257–272.
Perfetti, C. A. (1985). Reading ability. New York: Oxford University Press.
Perfetti, C. A., Beck, I., Bell, L. C., & Hughes, C. (1988). Phonemic knowledge and learning to read are reciprocal: A longitudinal study of first grade children. Detroit, MI: Wayne State University Press.
Ramos, J. L., & Cuetos, F. (1999). PROLEC-SE: Evaluación de los procesos lectores en alumnos de tercer ciclo de educación primaria y secundaria [PROLEC-SE: Evaluation of reading processes for students of the third cycle of primary and secondary school]. Madrid: TEA.
Rupp, A. A., Ferne, T., & Choi, H. (2006). How assessing reading comprehension with multiple-choice questions shapes the construct: A cognitive processing perspective. Language Testing, 23(4), 441–474.
Schraw, G., & Roedel, T. D. (1994). Test difficulty and judgment bias. Memory & Cognition, 22(1), 63–69.
Thiede, K. W., & Dunlosky, J. (1999). Toward a general model of self-regulated study: An analysis of selection of items for study and self-paced study time. Journal of Experimental Psychology: Learning, Memory, and Cognition, 25(4), 1024–1037.
Trites, L., & McGroarty, M. (2005). Reading to learn and reading to integrate: New tasks for reading comprehension tests? Language Testing, 22(2), 174–210.
Vidal-Abarca, E., Gilabert, R., Martínez, T., & Sellés, M. P. (2007). Test de estrategias de comprensión [Test of Comprehension Strategies]. Madrid: Instituto Calasanz de Ciencias de la Educación.
Vispoel, W. P., Boo, J., & Bleiler, T. (2001). Computerized and paper-and-pencil versions of the Rosenberg Self-Esteem Scale: A comparison of psychometric features and respondent preferences. Educational and Psychological Measurement, 61(3), 461–474.
Wise, S. L., & Plake, B. S. (1989). Research on the effects of administering tests via computers. Educational Measurement: Issues and Practice, 8(3), 5–10.