
Using a Scoring Rubric to Assess the Writing of Bioethics Students

Published online by Cambridge University Press:  09 March 2016

Abstract:

Educators in bioethics have struggled to find valid and reliable assessments that transcend the “reproduction of knowledge” to target more important skill sets. This manuscript reports on the process of developing and grading a minimal-competence comprehensive examination in a bioethics master’s degree program. We describe educational theory and practice for the creation and deployment of scoring rubrics for high-stakes performance assessments that reduce scoring inconsistencies. The rubric development process can also benefit the program by building consensus among stakeholders regarding program goals and student outcomes.

We describe the Structure of the Observed Learning Outcome (SOLO) taxonomy as a mechanism for rubric design and provide an example of how we applied that taxonomy to define pass/fail cut scores. Details about domains of assessment and writing descriptors of performance are also presented. Despite the laborious work required to create a scoring rubric, we found the effort to be worthwhile for our program.
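For readers who want a concrete picture of how SOLO levels could anchor a minimal-competence pass/fail decision, the short Python sketch below illustrates the general idea. It is not the authors' instrument: the domain names, the representation of SOLO levels as strings, and the choice of "relational" as the cut level are illustrative assumptions only.

```python
from dataclasses import dataclass

# SOLO taxonomy levels (Biggs & Collis 1982), ordered from least to most complex.
SOLO_LEVELS = [
    "prestructural",
    "unistructural",
    "multistructural",
    "relational",
    "extended_abstract",
]

@dataclass
class DomainScore:
    """A grader's SOLO-level rating for one assessment domain of an essay."""
    domain: str
    level: str  # must be one of SOLO_LEVELS

    @property
    def rank(self) -> int:
        # Numeric position of the rated level within the SOLO hierarchy.
        return SOLO_LEVELS.index(self.level)

def passes(scores: list[DomainScore], cut_level: str = "relational") -> bool:
    """Minimal-competence decision: every domain must reach the cut level.

    Treating "relational" as the pass/fail cut score is an assumption for
    illustration, not the program's actual standard.
    """
    cut_rank = SOLO_LEVELS.index(cut_level)
    return all(score.rank >= cut_rank for score in scores)

# Hypothetical ratings for one examination essay.
ratings = [
    DomainScore("ethical_analysis", "relational"),
    DomainScore("use_of_sources", "extended_abstract"),
    DomainScore("writing_quality", "multistructural"),
]
print(passes(ratings))  # False: writing_quality falls below the cut level
```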

Type: Departments and Columns

Copyright: © Cambridge University Press 2016



Notes

1. American Educational Research Association, American Psychological Association, National Council on Measurement in Education. Standards for Educational and Psychological Testing, 2014 Edition. Washington, DC: American Educational Research Association; 2014.

2. Pellegrino, JW, Chudowsky, N, Glaser, R. Knowing What Students Know: The Science and Design of Educational Assessment. Washington, DC: National Academies Press; 2001.

3. Antes, AL, Murphy, ST, Waples, EP, Mumford, MD, Brown, RP, Connelly, S, et al. A meta-analysis of ethics instruction effectiveness in the sciences. Ethics and Behavior 2009;19(5):379–402.

4. Mumford, M, Connelly, S, Brown, R, Murphy, S, Hill, J, Antes, A, et al. Ethics training for scientists: Effects on ethical decision-making. Ethics and Behavior 2008;18(4):315–39.

5. Schonfeld, T, Stoddard, HA, Labrecque, CA. Examining ethics: Developing a comprehensive exam for a bioethics master’s program. Cambridge Quarterly of Healthcare Ethics 2014;23(4):461–71.

6. Brookhart, SM. Assessment theory for college classrooms. New Directions for Teaching and Learning 2004;2004(100):5–14.

7. Stiggins, RJ. Design and development of performance assessments. Educational Measurement: Issues and Practice 1987;6(3):33–42.

8. Lane, S, Stone, CA. Performance assessment. In: Brennan, RL, ed. Educational Measurement. 4th ed. Westport, CT: American Council on Education and Praeger; 2006.

9. Favia, A, Frank, L, Gligorov, N, Birnbaum, S, Cummins, P, Fallar, R, et al. A model for the assessment of medical students’ competency in medical ethics. AJOB Primary Research 2013;4(4):68–83.

10. See note 1, AERA/APA/NCME 2014.

11. Lohfeld, L, Goldie, J, Schwartz, L, Eva, K, Cotton, P, Morrison, J, et al. Testing the validity of a scenario-based questionnaire to assess the ethical sensitivity of undergraduate medical students. Medical Teacher 2012;34(8):635–42.

12. Tierney, R, Simon, M. What’s still wrong with rubrics: Focusing on the consistency of performance criteria across scale levels. Practical Assessment, Research & Evaluation 2004;9(2):1–10 [cited 26 Dec 2013]; available at http://pareonline.net/getvn.asp?v=9&n=2 (last accessed 9 Sept 2014).

13. Lukhele, R, Thissen, D, Wainer, H. On the relative value of multiple-choice, constructed response, and examinee-selected items on two achievement tests. Journal of Educational Measurement 1994;31(3):234–50.

14. Moskal, BM, Leydens, JA. Scoring rubric development: Validity and reliability. Practical Assessment, Research & Evaluation 2000;7(10):71–81 [cited 26 Dec 2013]; available at http://pareonline.net/getvn.asp?v=7&n=10 (last accessed 9 Sept 2014).

15. See note 5, Schonfeld et al. 2014.

16. Biggs, JB, Collis, KF. Evaluating the Quality of Learning. New York: Academic Press; 1982.

17. See note 16, Biggs, Collis 1982.

18. Biggs, J. John Biggs: Writer, academic, traveller; 2013 [cited 26 Dec 2013]; available at http://www.johnbiggs.com.au/academic/solo-taxonomy/ (last accessed 9 Sept 2014).

19. See note 16, Biggs, Collis 1982.

20. See note 16, Biggs, Collis 1982.

21. See note 6, Brookhart 2004.

22. See note 14, Moskal, Leydens 2000.

23. Hambleton, RK, Pitoniak, MJ. Setting performance standards. In: Brennan, RL, ed. Educational Measurement. 4th ed. Westport, CT: American Council on Education and Praeger; 2006.

24. See note 23, Hambleton, Pitoniak 2006.

25. See note 5, Schonfeld et al. 2014.