
Retrieving the Correlation Matrix from a Truncated PCA Solution: The Inverse Principal Component Problem

Published online by Cambridge University Press:  01 January 2025

Jos M. F. ten Berge*
Affiliation: University of Groningen
Henk A. L. Kiers
Affiliation: University of Groningen
* Requests for reprints should be sent to Jos M. F. ten Berge, Heijmans Institute, University of Groningen, Grote Kruisstraat 2/1, 9712 TS Groningen, THE NETHERLANDS. Email: [email protected]

Abstract

When r principal components are available for k variables, the correlation matrix is approximated in the least squares sense by the loading matrix times its transpose. The approximation is generally not perfect unless r = k. In the present paper it is shown that, when r is at or above the Ledermann bound, r principal components are enough to perfectly reconstruct the correlation matrix, albeit in a way more involved than taking the loading matrix times its transpose. In certain cases just below the Ledermann bound, recovery of the correlation matrix is still possible when the set of all eigenvalues of the correlation matrix is available as additional information.
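As a minimal numerical sketch (not the reconstruction procedure developed in the paper), the following Python/NumPy snippet illustrates the two quantities the abstract refers to: the least squares rank-r approximation AA' of a correlation matrix R obtained from truncated PCA loadings, and the Ledermann bound, taken here as phi(k) = (2k + 1 - sqrt(8k + 1)) / 2. The function names and the example data are illustrative assumptions only.

import numpy as np

def ledermann_bound(k):
    # Assumed form of the Ledermann bound: phi(k) = (2k + 1 - sqrt(8k + 1)) / 2.
    return (2 * k + 1 - np.sqrt(8 * k + 1)) / 2

def truncated_pca_loadings(R, r):
    # Loading matrix A (k x r) from the r leading eigenpairs of R, so that
    # A @ A.T is the least squares rank-r approximation of R.
    eigvals, eigvecs = np.linalg.eigh(R)      # eigenvalues in ascending order
    idx = np.argsort(eigvals)[::-1][:r]       # indices of the r largest eigenvalues
    return eigvecs[:, idx] * np.sqrt(eigvals[idx])

# Example with k = 6 variables and r = 3 components (cf. Wilson & Worcester, 1939).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 6))
R = np.corrcoef(X, rowvar=False)
A = truncated_pca_loadings(R, 3)
print("Ledermann bound for k = 6:", ledermann_bound(6))   # equals 3
print("||R - AA'|| =", np.linalg.norm(R - A @ A.T))       # generally nonzero unless r = k

For k = 6 the bound equals 3, so by the paper's result three components suffice for exact recovery of R, even though the residual ||R - AA'|| printed above is generally nonzero; the sketch deliberately stops short of the more involved recovery step described in the paper.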

Type: Original Paper
Copyright: © 1999 The Psychometric Society


References

Bekker, P. A., & ten Berge, J. M. F. (1997). Generic global identification in factor analysis. Linear Algebra & Applications, 264, 255–263.
Carroll, J. B. (1993). Human cognitive abilities: A survey of factor-analytic studies. New York: Cambridge University Press.
Chen, X., & Chu, M. T. (1996). On the least squares solution of inverse eigenvalue problems. SIAM Journal on Numerical Analysis, 33, 2417–2430.
Chu, M. T. (1998). Inverse eigenvalue problems. SIAM Review, 40, 1–39.
Friedland, S. (1977). Inverse eigenvalue problems. Linear Algebra & Applications, 17, 15–51.
Friedland, S., Nocedal, J., & Overton, M. L. (1987). The formulation and analysis of numerical methods for inverse eigenvalue problems. SIAM Journal on Numerical Analysis, 24, 634–667.
Guttman, L. (1958). To what extent can communalities reduce rank? Psychometrika, 23, 297–308.
Harman, H. H. (1967). Modern factor analysis (2nd ed.). Chicago: The University of Chicago Press.
Ledermann, W. (1937). On the rank of the reduced correlation matrix in multiple-factor analysis. Psychometrika, 2, 85–93.
Shapiro, A. (1982). Rank-reducibility of a symmetric matrix and sampling theory of minimum trace factor analysis. Psychometrika, 47, 187–199.
Shapiro, A. (1985). Identifiability of factor analysis: Some results and some open problems. Linear Algebra & Applications, 70, 1–7.
Wilson, E. B., & Worcester, J. (1939). The resolution of six tests into three general factors. Proc. Nat. Acad. Sci. USA, 25, 73–77.