
Some Thoughts about the Suitability of the Reliable Change Index (RCI) for Analysis of Ordinal Scale Data

Published online by Cambridge University Press:  23 February 2015

Michael Perdices*
Affiliation:
Department of Neurology, Royal North Shore Hospital, Sydney, Australia Division of Psychological Medicine, Northern Clinical School, Faculty of Medicine, University of Sydney, Australia
*
Address for correspondence: Dr Michael Perdices, Department of Neurology, Royal North Shore Hospital, Pacific Highway, St Leonards NSW 2065, Sydney, Australia. E-mail: [email protected]

Abstract

The reliable change index (RCI) was introduced approximately three decades ago to provide an empirical, statistically grounded technique for determining whether improvement after a therapeutic intervention is real or due to measurement error. Since Stevens defined the properties and limitations of scales of measurement in 1946, there has been vigorous controversy about whether it is permissible to analyse ordinal data with parametric statistics. Specifically, are parameters and statistics such as means and standard deviations meaningful in the context of ordinal data? These are important concerns because many of the scales used to measure outcomes in behavioural research and clinical settings yield ordinal-scale measures. Given that the standard deviation is used in the computation of the RCI, the question of whether or not the RCI is reliable when used with ordinal-scale data is explored. Data from the Sydney Psychosocial Reintegration Scale Version 2 (SPRS-2) were used to calculate minimum reliable difference criteria in terms of both (ordinal) Total Raw Scores (MRDRS) and logit scores (MRDLS) derived from Rasch analysis. Test–retest differences across the Total Raw Score range were evaluated using each criterion. At both extremes of the range, small changes in Total Raw Score not deemed reliable according to the MRDRS criterion were classified as reliable according to the MRDLS criterion. Conversely, test–retest changes in the centre of the range deemed reliable according to the MRDRS criterion were classified as unreliable according to the MRDLS criterion. It is suggested that while MRDRS can identify numerically reliable differences, MRDLS can identify reliable differences that are meaningful in terms of the underlying construct being measured.
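The computation the abstract refers to can be sketched as follows. This is a minimal illustration of the standard Jacobson and Truax (1991) RCI and the derived minimum reliable difference, not the SPRS-2 analysis itself; the standard deviation and test–retest reliability values in the example are hypothetical placeholders, not figures from the article.

```python
import math

def reliable_change_index(score1, score2, sd, r_xx):
    """RCI per Jacobson & Truax (1991): (x2 - x1) / S_diff,
    where SE = sd * sqrt(1 - r_xx) and S_diff = sqrt(2) * SE."""
    se = sd * math.sqrt(1.0 - r_xx)    # standard error of measurement
    s_diff = math.sqrt(2.0) * se       # standard error of the difference score
    return (score2 - score1) / s_diff

def minimum_reliable_difference(sd, r_xx, z=1.96):
    """Smallest test-retest change classified as reliable at the
    given z criterion (1.96 for a two-tailed 95% level)."""
    se = sd * math.sqrt(1.0 - r_xx)
    return z * math.sqrt(2.0) * se

# Hypothetical example: scale SD of 10, test-retest reliability of .90.
rci = reliable_change_index(40, 48, sd=10.0, r_xx=0.90)
mrd = minimum_reliable_difference(10.0, 0.90)
```

With these placeholder values the 8-point gain exceeds |RCI| = 1.96 only marginally short of the criterion (RCI ≈ 1.79, MRD ≈ 8.77), so it would not be classified as reliable. Because the formula takes the scale's standard deviation at face value, it inherits the ordinal-versus-interval concerns the article examines.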

Type
Articles
Copyright
Copyright © Australasian Society for the Study of Brain Impairment 2015 


References

Allen, I.E., & Seaman, C.A. (2007). Likert scales and data analyses. Quality Progress, 40, 64–65.
Beck, A.T., Steer, R.A., & Brown, G.K. (1996). Beck depression inventory manual (2nd ed.). San Antonio: The Psychological Corporation, Harcourt Brace & Company.
Carifio, J. (1978). Measuring vocational preferences: ranking versus categorical rating procedures. Career Education Quarterly, 3 (1), 34–66.
Carifio, J., & Perla, R. (2007). Ten common misunderstandings, misconceptions, persistent myths and urban legends about Likert scales and Likert response formats and their antidotes. Journal of Social Sciences, 3 (3), 106–116.
Chelune, G.J., Naugle, R.I., Lüders, H., Sedlak, J., & Awad, I.A. (1993). Individual change after epilepsy surgery: Practice effects and base-rate information. Neuropsychology, 7, 41–52.
Christensen, L., & Mendoza, J.L. (1986). A method for assessing change in a single subject: An alternative to the RC Index. Behavior Therapy, 17, 305–308.
Collie, A., Maruff, P., McStephen, M., & Darby, D. (2003). Are Reliable Change (RC) calculations appropriate for determining the extent of cognitive change in concussed athletes? British Journal of Sports Medicine, 37, 370–372.
Glass, G.V., Peckham, P.D., & Sanders, J.R. (1972). Consequences of failure to meet assumptions underlying the analyses of variance and covariance. Review of Educational Research, 42, 237–288.
Hageman, W.J.J.M., & Arrindell, W.A. (1993). A further refinement of the reliable change (RC) index by improving the pre–post difference score: introducing RCID. Behaviour Research and Therapy, 31, 693–700.
Iverson, G.L. (2001). Interpreting change on the WAIS-III/WMS-III in clinical samples. Archives of Clinical Neuropsychology, 16, 183–191.
Jacobson, N.S., Follette, W.C., & Revenstorf, D. (1984). Psychotherapy outcome research: Methods for reporting variability and evaluating clinical significance. Behavior Therapy, 15, 336–352.
Jacobson, N.S., Follette, W.C., & Revenstorf, D. (1986). Towards a standard definition of clinically significant change. Behavior Therapy, 15, 309–311.
Jacobson, N.S., Roberts, L.J., Berns, S.B., & McGlinchey, J.B. (1999). Methods for defining and determining the clinical significance of treatment effects: Description, application, and alternatives. Journal of Consulting and Clinical Psychology, 67 (3), 300–307.
Jacobson, N.S., & Truax, P. (1991). Clinical significance: A statistical approach to defining meaningful change in psychotherapy research. Journal of Consulting and Clinical Psychology, 59 (1), 12–19.
Knapp, T.R. (1990). Treating ordinal scales as interval scales: An attempt to resolve the controversy. Nursing Research, 39 (2), 121–123.
Kuzon, W.M. Jr, Urbanchek, M.G., & McCabe, S. (1996). The seven deadly sins of statistical analysis. Annals of Plastic Surgery, 37, 265–272.
Labovitz, S. (1972). Statistical usage in sociology: Sacred cows and ritual. Sociological Methods & Research, 1, 13–35.
Lucke, J.F. (1996). Student's t test and the Glasgow Coma Scale. Annals of Emergency Medicine, 28, 408–413.
Maassen, G.H. (2004). The standard error in the Jacobson and Truax Reliable Change Index: The classical approach to assessment of reliable change. Journal of the International Neuropsychological Society, 10, 888–893.
Marcus-Roberts, H.M., & Roberts, F.S. (1987). Meaningless statistics. Journal of Educational Statistics, 12 (4), 383–394.
Murray, J. (2013). Likert data: What to use, parametric or non-parametric? International Journal of Business and Social Science, 4 (11), 258–264.
Norman, G. (2010). Likert scales, levels of measurement and the ‘laws’ of statistics. Advances in Health Science Education, 15, 625–632.
Nunnally, J.C. (1975). Psychometric theory – 25 years ago and now. Educational Researcher, 4, 7–21.
Salzberger, T. (2010). Does the Rasch model convert an ordinal scale into an interval scale? Rasch Measurement, 24 (2), 1273–1275.
Stevens, S.S. (1946). On the theory of scales of measurement. Science, 103 (2684), 677–680.
Tate, R.L. (2011). Manual for the Sydney Psychosocial Reintegration Scale Version 2 (SPRS-2). Unpublished Manuscript, Rehabilitation Studies Unit, University of Sydney.
Tate, R.L., Simpson, G.K., Soo, C.A., & Lane-Brown, A.T. (2011). Participation after acquired brain injury: Clinical and psychometric considerations of the Sydney Psychosocial Reintegration Scale (SPRS). Journal of Rehabilitation Medicine, 43, 609–618.
Vickers, A. (1999). Comparison of an ordinal and a continuous outcome measure of muscle soreness. International Journal of Technology Assessment in Health Care, 15, 709–716.
Vigderhous, G. (1977). The level of measurement and ‘permissible’ statistical analysis in social research. The Pacific Sociological Review, 20 (1), 61–72.