
Exact Distributions of Intraclass Correlation and Cronbach's Alpha with Gaussian Data and General Covariance

Published online by Cambridge University Press:  01 January 2025

Emily O. Kistner
Affiliation: University of North Carolina at Chapel Hill

Keith E. Muller
Affiliation: University of North Carolina at Chapel Hill

Requests for reprints should be sent to Emily O. Kistner (e-mail: [email protected]), Department of Biostatistics, University of North Carolina, McGavran-Greenberg Building CB#7420, Chapel Hill, North Carolina 27599-7420.

Keith E. Muller (e-mail: [email protected]) is an Associate Professor, Department of Biostatistics, University of North Carolina, 3105C McGavran-Greenberg Building CB#7420, Chapel Hill, North Carolina 27599-7420.

Abstract

Intraclass correlation and Cronbach's alpha are widely used to describe the reliability of tests and measurements. Even with Gaussian data, exact distributions are known only for compound symmetric covariance (equal variances and equal correlations). Recently, large-sample Gaussian approximations were derived for the distribution functions.

New exact results allow calculating the exact distribution function and other properties of intraclass correlation and Cronbach's alpha, for Gaussian data with any covariance pattern, not just compound symmetry. Probabilities are computed in terms of the distribution function of a weighted sum of independent chi-square random variables.
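The reduction above means that computing an exact probability amounts to evaluating the CDF of a weighted sum of independent chi-square variables, for which exact numerical-inversion algorithms exist (e.g., Davies, 1980). As a simple illustration only (the function name and settings below are ours, not the authors'), such a CDF can also be approximated by Monte Carlo:

```python
import numpy as np

def weighted_chi2_cdf_mc(x, weights, dfs, n_sims=200_000, seed=0):
    """Monte Carlo estimate of P(sum_i w_i * chi2(df_i) <= x).

    A stand-in for exact inversion methods such as Davies (1980);
    accuracy improves as n_sims grows.
    """
    rng = np.random.default_rng(seed)
    total = np.zeros(n_sims)
    for w, df in zip(weights, dfs):
        total += w * rng.chisquare(df, size=n_sims)
    return float(np.mean(total <= x))

# Example: P(0.5*chi2(1) + 1.5*chi2(2) <= 3)
p = weighted_chi2_cdf_mc(3.0, [0.5, 1.5], [1, 2])
```

In practice the exact inversion routines are preferable; the simulation version is useful mainly as a check.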

New F approximations for the distribution functions of intraclass correlation and Cronbach's alpha are much simpler and faster to compute than the exact forms. Assuming the covariance matrix is known, the approximations typically provide sufficient accuracy, even with as few as ten observations.
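The abstract does not give the form of the new F approximations, but they generalize the classical compound-symmetry result of Feldt (1965), in which (1 − α̂)/(1 − α) follows an F distribution with n − 1 and (n − 1)(k − 1) degrees of freedom for n observations on k items. A sketch of a confidence interval built from that classical result (not the paper's new approximation) is:

```python
from scipy.stats import f

def feldt_ci(alpha_hat, n, k, conf=0.95):
    """F-based CI for Cronbach's alpha under compound symmetry,
    using the classical result (1 - alpha_hat)/(1 - alpha)
    ~ F(n - 1, (n - 1)(k - 1)) due to Feldt (1965)."""
    a = 1.0 - conf
    df1, df2 = n - 1, (n - 1) * (k - 1)
    lower = 1.0 - (1.0 - alpha_hat) / f.ppf(a / 2, df1, df2)
    upper = 1.0 - (1.0 - alpha_hat) / f.ppf(1 - a / 2, df1, df2)
    return lower, upper
```

As the abstract notes, intervals of this compound-symmetry type become pessimistically wide when the true covariance is not compound symmetric.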

Either the exact or approximate distributions may be used to create confidence intervals around an estimate of reliability. Monte Carlo simulations led to a number of conclusions. Correctly assuming that the covariance matrix is compound symmetric leads to accurate confidence intervals, as was expected from previously known results. However, assuming and estimating a general covariance matrix produces somewhat optimistically narrow confidence intervals with 10 observations. Increasing sample size to 100 gives essentially unbiased coverage. Incorrectly assuming compound symmetry leads to pessimistically large confidence intervals, with pessimism increasing with sample size. In contrast, incorrectly assuming general covariance introduces only a modest optimistic bias in small samples. Hence the new methods seem preferable for creating confidence intervals, except when compound symmetry definitely holds.
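For reference, the point estimate that these intervals surround is the standard sample Cronbach's alpha, α̂ = [k/(k − 1)]·(1 − Σᵢ s²ᵢ / s²ₜₒₜₐₗ). A minimal sketch (our own helper, illustrated on simulated compound-symmetric Gaussian data, for which the population value is kρ/(1 + (k − 1)ρ)):

```python
import numpy as np

def cronbach_alpha(data):
    """Sample Cronbach's alpha for an (n observations x k items) array."""
    data = np.asarray(data, dtype=float)
    k = data.shape[1]
    item_vars = data.var(axis=0, ddof=1)       # per-item sample variances
    total_var = data.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Compound-symmetric Gaussian data: unit variances, common correlation rho.
rng = np.random.default_rng(1)
k, rho, n = 4, 0.5, 20_000
cov = np.full((k, k), rho) + np.eye(k) * (1.0 - rho)
x = rng.multivariate_normal(np.zeros(k), cov, size=n)
alpha_hat = cronbach_alpha(x)  # population value: k*rho/(1+(k-1)*rho) = 0.8
```

With only 10 observations, α̂ is far noisier, which is the regime where the choice of exact versus approximate distribution matters most.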

Type: Theory and Methods
Copyright © 2004 The Psychometric Society


Footnotes

An earlier version of this paper was submitted in partial fulfillment of the requirements for the M.S. in Biostatistics, and also summarized in a presentation at the meetings of the Eastern North American Region of the International Biometric Society in March, 2001.

Kistner's work was supported in part by NIEHS training grant ES07018-24 and NCI program project grant P01 CA47 982-04. She gratefully acknowledges the inspiration of A. Calandra's “Scoring formulas and probability considerations” (Psychometrika, 6, 1–9). Muller's work was supported in part by NCI program project grant P01 CA47 982-04.
