
13 - Achievement Assessment

from Part II - Specific Clinical Assessment Methods

Published online by Cambridge University Press: 06 December 2019

Edited by Martin Sellbom, University of Otago, New Zealand, and Julie A. Suhr, Ohio University

Summary

This chapter provides an overview of achievement assessments designed to measure performance across multiple academic domains or within a single domain. First, commonly used comprehensive achievement tests, such as the Woodcock-Johnson Tests of Achievement – Fourth Edition, the Wechsler Individual Achievement Test – Third Edition, and the Kaufman Test of Educational Achievement – Third Edition, are reviewed. Next, several tests of a single subject area (reading, writing, or mathematics) are presented. Curriculum-based measurements (CBMs), designed to provide ongoing evaluation of a student’s progress toward curriculum-based achievement goals, are then described. We also discuss advances in technology, issues related to achievement testing, considerations of culture and diversity, and misuses and misinterpretations of achievement testing. Finally, we offer several interpretive and practical recommendations for achievement testing.
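To make the CBM idea concrete, the following is a minimal sketch, not drawn from the chapter itself, of how weekly progress-monitoring data are commonly summarized: fit a least-squares trend line to repeated brief probes (here, words correct per minute, WCPM, on oral reading fluency passages) and compare the observed growth rate with the "aimline" implied by the student's goal. The goal of 70 WCPM in 16 weeks, the sample scores, and the function names are illustrative assumptions.

```python
# Illustrative CBM progress-monitoring sketch (assumed example, not the
# chapter's procedure): summarize weekly oral reading fluency probes with
# an ordinary least-squares trend line and compare it to the aimline.

from statistics import mean

def weekly_slope(scores: list[float]) -> float:
    """Least-squares slope of scores regressed on week number (0, 1, 2, ...)."""
    weeks = range(len(scores))
    w_bar, s_bar = mean(weeks), mean(scores)
    num = sum((w - w_bar) * (s - s_bar) for w, s in zip(weeks, scores))
    den = sum((w - w_bar) ** 2 for w in weeks)
    return num / den

# Hypothetical data: eight weekly probes (words correct per minute).
scores = [42, 45, 44, 48, 51, 50, 54, 57]

observed = weekly_slope(scores)        # observed gain in WCPM per week
aimline = (70 - scores[0]) / 16        # assumed goal: 70 WCPM in 16 weeks

print(f"observed growth: {observed:.2f} WCPM/week")
print(f"needed growth:   {aimline:.2f} WCPM/week")
if observed < aimline:
    print("Growth is below the aimline; an instructional change may be warranted.")
```

In practice, decision rules of this kind (comparing observed slope to the aimline across several consecutive data points) are one common way CBM data inform whether to continue or modify instruction; the specific thresholds used vary across implementations.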

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2019


References

Abedi, J. (2002). Standardized achievement tests and English language learners: Psychometrics issues. Educational Assessment, 8, 231–257.
Abedi, J., Hofstetter, C., Baker, E., & Lord, C. (2001). NAEP math performance test accommodations: Interactions with student language background (CSE Technical Report 536). Los Angeles: National Center for Research on Evaluation, Standards, and Student Testing.
Abedi, J., & Leon, S. (1999). Impact of students’ language background on content-based performance: Analyses of extant data. Los Angeles: University of California, National Center for Research on Evaluation, Standards, and Student Testing.
Abedi, J., Leon, S., & Mirocha, J. (2003). Impact of student language background on content-based performance: Analyses of extant data (CSE Technical Report 603). Los Angeles: National Center for Research on Evaluation, Standards, and Student Testing.
Adelman, H. S., Lauber, B. A., Nelson, P., & Smith, D. C. (1989). Toward a procedure for minimizing and detecting false positive diagnoses of learning disability. Journal of Learning Disabilities, 22, 234–244.
American Psychiatric Association. (2013). Desk reference to the diagnostic criteria from DSM-5. Washington, DC: American Psychiatric Publishing.
APA (American Psychological Association). (2004). Code of fair testing practices in education. Washington, DC: Joint Committee on Testing Practices.
Babad, E. Y., Inbar, J., & Rosenthal, R. (1982). Pygmalion, Galatea, and the Golem: Investigations of biased and unbiased teachers. Journal of Educational Psychology, 74, 459–474.
Banks, K. (2006). A comprehensive framework for evaluating hypotheses about cultural bias in educational testing. Applied Measurement in Education, 19, 115–132.
Banks, K. (2012). Are inferential reading items more susceptible to cultural bias than literal reading items? Applied Measurement in Education, 25, 220–245.
Breaux, K. C. (2009). Wechsler individual achievement test: Technical manual (3rd ed.). San Antonio, TX: Pearson.
Breaux, K. C., Bray, M. A., Root, M. M., & Kaufman, A. S. (Eds.) (2017). Special issue on studies of students’ errors in reading, writing, math, and oral language. Journal of Psychoeducational Assessment, 35. https://doi.org/10.1177/0734282916669656
Brooks, B. L., Holdnack, J. A., & Iverson, G. L. (2011). Advanced clinical interpretation of the WAIS-IV and WMS-IV: Prevalence of low scores varies by level of intelligence and years of education. Assessment, 18, 156–167.
Brown, J. I., Fishco, V. V., & Hanna, G. (1993). Nelson-Denny reading test (forms G and H). Austin, TX: PRO-ED.
Coleman, C., Lindstrom, J., Nelson, J., Lindstrom, W., & Gregg, K. N. (2010). Passageless comprehension on the Nelson-Denny Reading Test: Well above chance for university students. Journal of Learning Disabilities, 43, 244–249.
Connolly, A. (2007). Key Math-3 diagnostic assessment. Austin, TX: Pearson.
Cruickshank, W. M. (1977). Least-restrictive placement: Administrative wishful thinking. Journal of Learning Disabilities, 10, 193–194.
CTB/McGraw-Hill. (1999). Teacher’s guide to Terra Nova: CTBS battery, survey, and plus editions, multiple assessments. Monterey, CA: Author.
Davis, L. B., & Fuchs, L. S. (1995). “Will CBM help me learn?”: Students’ perception of the benefits of curriculum-based measurement. Education and Treatment of Children, 18(1), 19–32.
Deeney, T. A., & Shim, M. K. (2016). Teachers’ and students’ views of reading fluency: Issues of consequential validity in adopting one-minute reading fluency assessments. Assessment for Effective Intervention, 41(2), 109–126.
DeRight, J., & Carone, D. A. (2015). Assessment of effort in children: A systematic review. Child Neuropsychology, 21, 1–24.
Ford, J. W., Missall, K. N., Hosp, J. L., & Kuhle, J. L. (2017). Examining oral passage reading rate across three curriculum-based measurement tools for predicting grade-level proficiency. School Psychology Review, 46, 363–378.
Fuchs, L. S. (2016). Curriculum-based measurement as the emerging alternative: Three decades later. Learning Disabilities Research and Practice, 32, 5–7.
Fuchs, L. S., & Fuchs, D. (2002). Curriculum-based measurement: Describing competence, enhancing outcomes, evaluating treatment effects, and identifying treatment nonresponders. Peabody Journal of Education, 77(2), 64–84.
Fuchs, L. S., Fuchs, D., Hosp, M., & Jenkins, J. R. (2001). Oral reading fluency as an indicator of reading competence: A theoretical, empirical, and historical analysis. Scientific Studies of Reading, 5, 239–256.
Gardner, E. (1989). Five common misuses of tests. ERIC Digest No. 108. Washington, DC: ERIC Clearinghouse on Tests, Measurement, and Evaluation.
Greiff, S., Wüstenberg, S., Holt, D. V., Goldhammer, F., & Funke, J. (2013). Computer-based assessment of complex problem solving: Concept, implementation, and application. Educational Technology Research and Development, 61, 407–421.
Hammill, D. D., & Larsen, S. C. (2009). Test of written language (4th ed.). Austin, TX: PRO-ED.
Harrison, A. G., & Edwards, M. J. (2010). Symptom exaggeration in post-secondary students: Preliminary base rates in a Canadian sample. Applied Neuropsychology, 17, 135–143.
Harrison, A. G., Edwards, M. J., Armstrong, I., & Parker, K. C. H. (2010). An investigation of methods to detect feigned reading disabilities. Archives of Clinical Neuropsychology, 25, 89–98.
Harrison, A. G., Edwards, M. J., & Parker, K. C. H. (2008). Identifying students feigning dyslexia: Preliminary findings and strategies for detection. Dyslexia, 14, 228–246.
Hasbrouck, J., & Tindal, G. (2017). An update to compiled ORF norms (Technical Report No. 1702). Eugene, OR: Behavioral Research and Teaching, University of Oregon.
Hinnant, J. B., O’Brien, M., & Ghazarian, S. R. (2009). The longitudinal relations of teacher expectations to achievement in the early school years. Journal of Educational Psychology, 101, 662–670.
Hosp, M. K., Hosp, J. L., & Howell, K. W. (2016). The ABCs of CBM: A practical guide to curriculum-based measurement (2nd ed.). New York: Guilford Press.
Hosp, J. L., & Suchey, N. (2014). Reading assessment: Reading fluency, reading fluently, and comprehension – Commentary on the special topic. School Psychology Review, 43, 59–68.
Huff, K. L., & Sireci, S. G. (2001). Validity issues in computer-based testing. Educational Measurement: Issues and Practice, 20, 16–25.
Jones, E. D., Southern, W. T., & Brigham, F. J. (1998). Curriculum-based assessment: Testing what is taught and teaching what is tested. Intervention in School and Clinic, 33, 239–249.
Joseph, L. M., Wargelin, L., & Ayoub, S. (2016). Preparing school psychologists to effectively provide services to students with dyslexia. Perspectives on Language and Literacy, 42(4), 15–23.
Kaufman, A. S., & Kaufman, N. L. (2014). Kaufman test of educational achievement (3rd ed.). San Antonio, TX: Pearson.
Kaufman, A. S., Kaufman, N. L., & Breaux, K. C. (2014). Technical and interpretive manual. Kaufman Test of Educational Achievement – Third Edition (KTEA-3) Comprehensive Form. Bloomington, MN: NCS Pearson.
Kaufman, A. S., Raiford, S. E., & Coalson, D. L. (2016). Intelligent testing with the WISC-V. Hoboken, NJ: John Wiley & Sons.
Keenan, J. M., & Betjemann, R. S. (2006). Comprehending the Gray Oral Reading Test without reading it: Why comprehension tests should not include passage-independent items. Scientific Studies of Reading, 10, 363–380.
Keenan, J. M., Betjemann, R. S., & Olson, R. K. (2008). Reading comprehension tests vary in the skills that they assess: Differential dependence on decoding and oral comprehension. Scientific Studies of Reading, 12, 281–300.
Keenan, J. M., & Meenan, C. E. (2014). Test differences in diagnosing reading comprehension deficits. Journal of Learning Disabilities, 47, 125–135.
Kellow, J. T., & Jones, B. D. (2008). The effects of stereotypes on the achievement gap: Reexamining the academic performance of African American high school students. Journal of Black Psychology, 34(1), 94–120.
Kendeou, P., Papadopoulos, T. C., & Spanoudis, G. (2012). Processing demands of reading comprehension tests in young readers. Learning and Instruction, 22, 354–367.
Kieffer, M. J., Lesaux, N. K., Rivera, M., & Francis, D. J. (2009). Accommodations for English language learners taking large-scale assessments: A meta-analysis on effectiveness and validity. Review of Educational Research, 79, 1168–1201.
Kirkwood, M. W., Kirk, J. W., Blaha, R. Z., & Wilson, P. (2010). Noncredible effort during pediatric neuropsychological exam: A case series and literature review. Child Neuropsychology, 16, 604–618.
Leslie, L., & Caldwell, J. (2001). Qualitative reading inventory–3. New York: Addison Wesley Longman.
Linn, R. L. (2000). Assessments and accountability. Educational Researcher, 29, 4–16.
Lu, P. H., & Boone, K. B. (2002). Suspect cognitive symptoms in a 9-year-old child: Malingering by proxy? The Clinical Neuropsychologist, 16, 90–96.
Markwardt, F. C. (1997). Peabody individual achievement test – revised (normative update). Bloomington, MN: Pearson Assessments.
Martiniello, M. (2009). Linguistic complexity, schematic representations, and differential item functioning for English language learners in math tests. Educational Assessment, 14, 160–179.
McGrew, K. S., LaForte, E. M., & Schrank, F. A. (2014). Woodcock-Johnson IV: Technical manual [CD]. Itasca, IL: Houghton Mifflin Harcourt.
Molnar, M. (2017). Market is booming for digital formative assessments. Education Week, May 24. http://edweek.org/ew/articles/2017/05/24/market-is-booming-for-digital-formative-assessments.html
Monroe, M. (1932). Children who cannot read. Chicago, IL: University of Chicago Press.
NASP (National Association of School Psychologists). (2016). School psychologists’ involvement in assessment. Bethesda, MD: Author.
Nguyen, H. H. D., & Ryan, A. M. (2008). Does stereotype threat affect test performance of minorities and women? A meta-analysis of experimental evidence. Journal of Applied Psychology, 93, 1314–1334.
Rome, H. P., Swenson, W. M., Mataya, P., McCarthy, C. E., Pearson, J. S., Keating, F. R., & Hathaway, S. R. (1962). Symposium on automation techniques in personality assessment. Proceedings of the Staff Meetings of the Mayo Clinic, 37, 61–82.
Sattler, J. M. (2008). Assessment of children: Cognitive foundations. San Diego, CA: Author.
Schneider, W. J., Lichtenberger, E. O., Mather, N., & Kaufman, N. L. (2018). Essentials of assessment report writing. Hoboken, NJ: John Wiley & Sons.
Schrank, F. A., Mather, N., & McGrew, K. S. (2014a). Woodcock-Johnson IV tests of achievement. Itasca, IL: Houghton Mifflin Harcourt.
Schrank, F. A., Mather, N., & McGrew, K. S. (2014b). Woodcock-Johnson IV tests of oral language. Itasca, IL: Houghton Mifflin Harcourt.
Schrank, F. A., McGrew, K. S., & Mather, N. (2014a). Woodcock-Johnson IV. Itasca, IL: Houghton Mifflin Harcourt.
Schrank, F. A., McGrew, K. S., & Mather, N. (2014b). Woodcock-Johnson IV tests of cognitive abilities. Itasca, IL: Houghton Mifflin Harcourt.
Shinn, M. R., Good, R. H., Knutson, N., & Tilly, D. W. (1992). Curriculum-based measurement of oral reading fluency: A confirmatory analysis of its relation to reading. School Psychology Review, 21, 459–479.
Shute, V. J., Leighton, J. P., Jang, E. E., & Chu, M. W. (2016). Advances in the science of assessment. Educational Assessment, 21(1), 34–59.
Shute, V. J., & Rahimi, S. (2017). Review of computer-based assessment for learning in elementary and secondary education. Journal of Computer Assisted Learning, 33(1), 1–19.
Singleton, C. H. (2001). Computer-based assessment in education. Educational and Child Psychology, 18(3), 58–74.
Spencer, S. J., Steele, C. M., & Quinn, D. M. (1999). Stereotype threat and women’s math performance. Journal of Experimental Social Psychology, 35, 4–28.
Steele, C. M., & Aronson, J. (1995). Stereotype threat and the intellectual test performance of African Americans. Journal of Personality and Social Psychology, 69, 797–811.
Sullivan, B. K., May, K., & Galbally, L. (2007). Symptom exaggeration by college adults in Attention-Deficit Hyperactivity Disorder and Learning Disorder assessments. Applied Neuropsychology, 14, 189–207.
Thurlow, M., Lazarus, S., & Christensen, L. (2013). Accommodations for assessment. In Lloyd, J., Landrum, T., Cook, B., & Tankersley, M. (Eds.), Research-based approaches for assessment (pp. 94–110). Upper Saddle River, NJ: Pearson.
Valencia, S. W., Smith, A. T., Reece, A. M., Li, M., Wixson, K. K., & Newman, H. (2010). Oral reading fluency assessment: Issues of construct, criterion, and consequential validity. Reading Research Quarterly, 45, 270–291.
Van den Bergh, L., Denessen, E., Hornstra, L., Voeten, M., & Holland, R. W. (2010). The implicit prejudiced attitudes of teachers: Relations to teacher expectations and the ethnic achievement gap. American Educational Research Journal, 47, 497–527.
VanDerHeyden, A. M., Witt, J. C., & Gilbertson, D. (2007). A multi-year evaluation of the effects of a response to intervention (RTI) model on identification of children for special education. Journal of School Psychology, 45, 225–256. https://doi.org/10.1016/j.jsp.2006.11.004
Van Norman, E. R., Nelson, P. M., & Parker, D. C. (2018). A comparison of nonsense-word fluency and curriculum-based measurement of reading to measure response to phonics instruction. School Psychology Quarterly, 33, 573–581. https://doi.org/10.1037/spq0000237
Wechsler, D. (2009). Wechsler individual achievement test (3rd ed.). San Antonio, TX: Psychological Corporation.
Wechsler, D. (2014). Wechsler intelligence scale for children (5th ed.). San Antonio, TX: Psychological Corporation.
Wei, H., & Lin, J. (2015). Using out-of-level items in computerized adaptive testing. International Journal of Testing, 15, 50–70.
Wiederholt, J. L., & Bryant, B. R. (2001). Gray oral reading test (4th ed.). Austin, TX: PRO-ED.
Wiederholt, J. L., & Bryant, B. R. (2012). Gray oral reading test (5th ed.). Austin, TX: PRO-ED.
Willis, J. (2015). The historical role and best practice in identifying specific learning disabilities. Paper presented at the New York Association of School Psychologists annual conference, Verona, NY, October.
Woodcock, R. W. (2011). Woodcock reading mastery test (3rd ed.). San Antonio, TX: Pearson.
Woodcock, R. W., McGrew, K. S., & Mather, N. (2001). Woodcock–Johnson III tests of achievement. Itasca, IL: Riverside.
