
Partial Identification of Latent Correlations with Ordinal Data

Published online by Cambridge University Press:  01 January 2025

Jonas Moss
Affiliation:
BI Norwegian Business School
Steffen Grønneberg*
Affiliation:
BI Norwegian Business School
*
Correspondence should be made to Steffen Grønneberg, Department of Economics, BI Norwegian Business School, 0484 Oslo, Norway. Email: [email protected]

Abstract

The polychoric correlation is a popular measure of association for ordinal data. It estimates a latent correlation, i.e., the correlation of a latent vector. This vector is assumed to be bivariate normal, an assumption that cannot always be justified. When bivariate normality does not hold, the polychoric correlation will not necessarily approximate the true latent correlation, even when the observed variables have many categories. We calculate the sets of possible values of the latent correlation when latent bivariate normality is not necessarily true, but at least the latent marginals are known. The resulting sets are called partial identification sets, and are shown to shrink to the true latent correlation as the number of categories increases. Moreover, we investigate partial identification under the additional assumption that the latent copula is symmetric, and calculate the partial identification set when one variable is ordinal and the other is continuous. We show that little can be said about latent correlations, unless we have impractically many categories or we know a great deal about the distribution of the latent vector. An open-source R package is available for applying our results.

Type
Theory and Methods
Creative Commons
This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Copyright
Copyright © 2022 The Author(s) under exclusive licence to The Psychometric Society

The empirical covariance matrix for continuous data is consistent and asymptotically normal, enabling the use of a single asymptotic framework for inference in structural equation models (Browne, 1984; Satorra, 1989). But with ordinal data, the situation is more complex.

When the data are a random sample of vector variables with ordinal coordinates, it is usually inappropriate to estimate structural equation models directly on the covariance matrix of the observations (Bollen, 1989, Chapter 9). Instead, the correlation matrix of a latent continuous random vector Z is used as input for the models, such as ordinal factor analysis (Christoffersson, 1975; Muthén, 1978), ordinal principal component analysis (Kolenikov & Angeles, 2009), ordinal structural equation models (Jöreskog, 1994; Muthén, 1984), and, more recently, ordinal methods in network psychometrics (Epskamp, 2017; Isvoranu & Epskamp, 2021; Johal & Rhemtulla, 2021).

The polychoric correlation (Olsson, 1979) is the correlation of a latent bivariate normal vector underlying observed ordinal data. While the polychoric correlation is an important dependency measure for ordinal variables under the bivariate normality assumption, its prime application lies in empirical psychometrics. In particular, it is employed in the two-stage estimation method for ordinal factor analysis and ordinal structural equation models: first estimate the latent correlation matrix using polychoric correlations, then fit a covariance model to this correlation matrix (Jöreskog, 2005). The method is implemented in current software packages such as EQS (Bentler, 2006), Mplus (Muthén & Muthén, 2012), LISREL (Jöreskog & Sörbom, 2015), and lavaan (Rosseel, 2012), and is frequently employed by researchers.

The polychoric correlation is guaranteed to equal the true latent correlation only if the continuous latent vector is bivariate normal, and it is not, in general, robust against non-normality (Foldnes & Grønneberg, 2019a, 2019b). Moreover, the inconsistent estimates of the latent correlation propagate to ordinal structural equation models (Foldnes & Grønneberg, 2021). Multivariate normality has some testable implications (Foldnes & Grønneberg, 2019b; Jöreskog, 2006; Maydeu-Olivares, 2006), and empirical datasets are frequently incompatible with it (Grønneberg & Foldnes, 2022). It is therefore important to consider what can be said about the latent correlations that can generate an observed ordinal variable under weaker conditions than bivariate normality.

This paper continues Grønneberg, Moss, and Foldnes (2020) in calculating the possible values of a latent correlation when knowing only the marginal distributions of the latent variable, but not its copula. This type of calculation is called partial identification analysis (Manski, 2003; Tamer, 2010). While Grønneberg et al. (2020) studied binary data, we study ordinal data with an arbitrary number of categories. As in Grønneberg et al. (2020), our analysis is at the population level. Inference for partial identification sets can be done using the methods of Tamer (2010, Section 4.4). Our partial identification analyses are done for a single latent correlation only, even though the multivariate setting is of greater psychometric interest. Simultaneous partial identification sets for the covariance matrix will be difficult to calculate, as even the set of $3\times 3$ correlation matrices without any restrictions is hard to describe (Li & Tam, 1994).

Let Z be a bivariate continuous latent variable with correlation $\rho$, which we call the latent correlation. We are dealing with ordinal variables $(X, Y)$ with $I, J$ categories, generated via the equations

(1) $$X = \begin{cases} 1, & \text{if } Z_1 \le \tau^X_1 \\ 2, & \text{if } \tau^X_1 < Z_1 \le \tau^X_2 \\ \vdots & \\ I, & \text{if } \tau^X_{I-1} < Z_1 \end{cases} \qquad\qquad Y = \begin{cases} 1, & \text{if } Z_2 \le \tau^Y_1 \\ 2, & \text{if } \tau^Y_1 < Z_2 \le \tau^Y_2 \\ \vdots & \\ J, & \text{if } \tau^Y_{J-1} < Z_2 \end{cases}$$

where $\tau^X \in \mathbb{R}^{I-1}$ and $\tau^Y \in \mathbb{R}^{J-1}$ are strictly increasing vectors of deterministic thresholds. Our goal is to identify the possible values of the latent correlation $\rho$ from the distribution of the observed ordinal variables, plus potentially some more information.
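For concreteness, here is a minimal R sketch of the discretization model in Eq. (1). The normal copula, the correlation, and the threshold values below are illustrative assumptions, not part of the general model.

```r
# Minimal sketch of the discretization model in Eq. (1).
# Latent distribution and thresholds below are illustrative assumptions.
set.seed(1)
n   <- 1000
rho <- 0.7
z   <- MASS::mvrnorm(n, mu = c(0, 0), Sigma = matrix(c(1, rho, rho, 1), 2))
tau_x <- c(-1, 0, 1)   # tau^X: I - 1 = 3 thresholds, so I = 4 categories
tau_y <- c(-0.5, 0.5)  # tau^Y: J - 1 = 2 thresholds, so J = 3 categories
# findInterval counts thresholds at or below each Z, so adding 1 gives categories 1..I.
x <- findInterval(z[, 1], tau_x) + 1
y <- findInterval(z[, 2], tau_y) + 1
table(x, y) / n        # empirical cell probabilities P(X = i, Y = j)
```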

We will show that knowing only the marginals of Z is insufficient for pinpointing the latent correlation to high precision, even when the number of categories is as high as ten. High precision can only be achieved by also making assumptions about the copula of the latent variable. We calculate the set of possible values of the latent correlation when the copula of the latent variable is known to be symmetric and its marginals are known. While this reduces the range of possible values, the reduction is small. We also study partial identification of $\rho$ when $Z_2$ is directly observed, i.e., the polyserial correlation (Olsson, Drasgow, & Dorans, 1982) without assuming bivariate normality. Methods for calculating the resulting bounds on the latent correlations are implemented in the R package polyiden, available in the online supplementary material and on GitHub.

The core results of this paper generalize the results in Grønneberg et al. (2020) from two categories to an arbitrary number of categories. Our emphasis is on aspects that appear when $I$ or $J$ is greater than 2, such as asymptotic results when $I$ or $J$ increases separately. We show that when the marginal distributions of the latent variable are known, the latent correlation is asymptotically identified when both $I$ and $J$ increase. Moreover, when only $J$ increases, the identification region of $\rho$ approaches the identification region found when one variable is directly observed.

We consider the case where the copula of the latent variables is completely unknown (or known only to be symmetric) apart from the restrictions implied by the distribution of the observations. As argued above, additional assumptions on the copula are needed to better pinpoint the latent correlation. One possibility is to consider a parametric class of copulas and identify the set of possible Pearson correlations compatible with this class. Another possibility is to consider stronger but still nonparametric assumptions, such as ellipticity. Such additional assumptions would lead to shorter partial identification sets than those we find, but their calculation is outside the scope of this paper.

There are several alternative ways of formulating psychometric models for ordinal data that do not depend on latent correlations, the most prominent being variants of item response theory (see, e.g., Bartholomew, Steele, Galbraith, & Moustaki, 2008). While a large class of commonly used item response theory models are mathematically equivalent to ordinal covariance models (Foldnes & Grønneberg, 2019a; Takane & De Leeuw, 1987), these models are usually estimated directly in terms of the model parameters using maximum likelihood or Bayesian methods (Van der Linden, 2017, Section III). They are usually conceptualized in fully parametric terms, so our analysis is less relevant for such models.

In cases where the dimensionality of the item response theory model is unknown, i.e., the model is not fully specified in terms of continuously varying parameters, a factor analysis based on polychoric correlations is sometimes recommended; see, e.g., Mair (2018, Section 4.1.2), Brown and Croudace (2014, p. 316), Revicki, Chen, and Tucker (2014, p. 344), and Zumbo (2006, Section 3.1). From this perspective, our work also has relevance for item response theory models.

Recently, structural equation models based on copulas have been suggested (Krupskii & Joe, 2013, 2015), and Nikoloulopoulos and Joe (2015) deal specifically with copula-motivated models for ordinal data. Since we focus specifically on correlations, our analysis is not relevant for such models.

We focus exclusively on the Pearson correlation of the latent continuous vector Z, and do not consider the more general problem of quantifying and analyzing dependence between discrete variables. Several papers have been written in this more general direction. For instance, Liu, Li, Yu, and Moustaki (2021) introduce partial association measures between ordinal variables, Nešlehová (2007) discusses rank correlation measures for non-continuous variables, and Wei and Kim (2017) introduce a measure of asymmetric association for two-way contingency tables. Constraints on concordance measures for bivariate discrete data are derived in Denuit and Lambert (2005). Finally, we mention the multilinear extension copula discussed in Genest, Nešlehová, and Rémillard (2014, 2017), which provides an abstract inference framework for a large class of copula-based empirical methods for count data.

The structure of the paper is as follows. We start by studying partial identification sets for latent correlations based on ordinal variables in Sect. 1. Then, in Sect. 2, we study the same problem, but allow $Z_2$ to be directly observed. In Sect. 3 we illustrate the results with a detailed example, and Sect. 4 concludes the paper. All proofs and technical details are in the online appendix, including a short introduction to copulas. Scripts in R (R Core Team, 2020) for numerical computations are available in the online supplementary material.

1. Latent Correlations on $I \times J$ Tables

We work with the distribution function of the ordinal variable $(X, Y)$, which can be described by the cumulative probability matrix $\boldsymbol{\Pi}$ with elements $\boldsymbol{\Pi}_{ij} = P(X \le i, Y \le j)$ for $i = 1, \ldots, I$, $j = 1, \ldots, J$. The model for $(X, Y)$ follows the discretization model defined in Eq. (1) for some continuous Z with marginal distribution functions $F_1, F_2$.
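Concretely, $\boldsymbol{\Pi}$ is obtained from the cell probabilities $P(X = i, Y = j)$ by cumulative summation in both directions. A minimal R sketch, with made-up cell probabilities purely for illustration:

```r
# Cell probabilities p[i, j] = P(X = i, Y = j); the values are illustrative.
p <- matrix(c(0.10, 0.05, 0.05,
              0.05, 0.20, 0.10,
              0.05, 0.10, 0.30), nrow = 3, byrow = TRUE)
# Pi[i, j] = P(X <= i, Y <= j): cumulative sums down columns, then across rows.
Pi <- t(apply(apply(p, 2, cumsum), 1, cumsum))
Pi[nrow(Pi), ncol(Pi)]  # equals 1
Pi[, ncol(Pi)]          # marginal cumulative probabilities P(X <= i)
Pi[nrow(Pi), ]          # marginal cumulative probabilities P(Y <= j)
```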

Observe that

(2) $$\boldsymbol{\Pi}_{ij} = P(F_1(Z_1) \le \boldsymbol{\Pi}_{iJ},\; F_2(Z_2) \le \boldsymbol{\Pi}_{Ij}) = C(\boldsymbol{\Pi}_{iJ}, \boldsymbol{\Pi}_{Ij}),$$

where C is the copula of Z (see, e.g., Nelsen, 2007). It follows that the copula C restricted to $A = \{\boldsymbol{\Pi}_{iJ} \mid i = 1, \ldots, I\} \times \{\boldsymbol{\Pi}_{Ij} \mid j = 1, \ldots, J\}$ encodes all available information about Z. Since A is a product set with both factors containing 0 and 1, the restriction of C to A is a subcopula of C (Carley, 2002).

Now we are ready to state our first result.

Proposition 1

For any cumulative probability matrix $\boldsymbol{\Pi}$, the latent correlation can be any number in $(-1, 1)$ when the marginals $F_1$ and $F_2$ are unrestricted.

Proof

See the online appendix, Section 8. $\square$

Proposition 1 implies that we have to know something about the marginals $F_1, F_2$ to get non-trivial partial identification sets for the latent correlation. Now we consider the case when both marginals are known. Let $\mathcal{F}$ be a set of bivariate distribution functions, and let $\rho(F)$ be the Pearson correlation of a bivariate distribution F. Define the partial identification set for the latent correlation as

(3) $$\rho_{\boldsymbol{\Pi}}(\mathcal{F}) = \left\{ \rho(F) \mid F \text{ is compatible with } \boldsymbol{\Pi} \text{ and } F \in \mathcal{F} \right\},$$

where F is compatible with $\boldsymbol{\Pi}$ if equation (2) holds for its copula.

Now define the $I \times J$ matrices $\alpha, \beta, \gamma, \delta$ with elements

(4) $$\begin{aligned} \alpha_{ij} &= \boldsymbol{\Pi}_{(i-1)J} + \boldsymbol{\Pi}_{i(j-1)} - \boldsymbol{\Pi}_{(i-1)(j-1)} = P(X < i) + P(X = i, Y < j),\\ \beta_{ij} &= \boldsymbol{\Pi}_{I(j-1)} + \boldsymbol{\Pi}_{(i-1)j} - \boldsymbol{\Pi}_{(i-1)(j-1)} = P(Y < j) + P(X < i, Y = j),\\ \gamma_{ij} &= \boldsymbol{\Pi}_{iJ} - (\boldsymbol{\Pi}_{i(J-j+1)} - \boldsymbol{\Pi}_{(i-1)(J-j+1)}) = P(X \le i) - P(X = i, Y \le J-j+1),\\ \delta_{ij} &= \boldsymbol{\Pi}_{I(J-j+1)} - (\boldsymbol{\Pi}_{i(J-j+1)} - \boldsymbol{\Pi}_{i(J-j)}) = P(Y \le J-j+1) - P(X \le i, Y = J-j+1), \end{aligned}$$

where $\boldsymbol{\Pi}_{0j} = \boldsymbol{\Pi}_{i0} = 0$. Then define the vectors

where $a \frown b$ is the concatenation of the vectors a and b, and

is the vectorization of A, obtained by stacking the columns of A on top of each other. The matrices in (4) are the same as the $\alpha, \beta, \gamma, \delta$ matrices of Genest and Nešlehová (2007, p. 481) and Carley (2002), except that the order of $\gamma$ and $\delta$ has been changed. We have made this minor modification because it is needed to make $u^U$ and $u^L$ increasing, which simplifies the statement of the next result.
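For concreteness, the matrices in (4) can be computed directly from a cumulative probability matrix. A minimal R sketch; the function name `carley_matrices` is ours, chosen for illustration:

```r
# Compute the alpha, beta, gamma, delta matrices of Eq. (4) from Pi (I x J),
# using the convention Pi[0, j] = Pi[i, 0] = 0.
carley_matrices <- function(Pi) {
  I <- nrow(Pi); J <- ncol(Pi)
  # Pad Pi with a leading row and column of zeros so that P[i + 1, j + 1] = Pi[i, j].
  P <- rbind(0, cbind(0, Pi))
  alpha <- beta <- gamma <- delta <- matrix(0, I, J)
  for (i in 1:I) for (j in 1:J) {
    alpha[i, j] <- P[i, J + 1]         + P[i + 1, j]          - P[i, j]
    beta[i, j]  <- P[I + 1, j]         + P[i, j + 1]          - P[i, j]
    gamma[i, j] <- P[i + 1, J + 1]     - (P[i + 1, J - j + 2] - P[i, J - j + 2])
    delta[i, j] <- P[I + 1, J - j + 2] - (P[i + 1, J - j + 2] - P[i + 1, J - j + 1])
  }
  list(alpha = alpha, beta = beta, gamma = gamma, delta = delta)
}
```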

The following result extends Proposition 5 in Genest and Nešlehová (2007), who built their result on the work of Carley (2002) on maximal extensions of subcopulas, to the case of non-uniform marginals.

Theorem 1

Let $\mathcal{F}$ be the set of distributions with continuous and strictly increasing marginals $F_1, F_2$ with finite variance. Then $\rho_{\boldsymbol{\Pi}}(\mathcal{F}) = [\rho_L, \rho_U]$, where

(5) $$\rho_U = \operatorname{sd}(F_1)^{-1} \operatorname{sd}(F_2)^{-1} \left( \sum_{k=1}^{IJ} \int_{u_k^U}^{u_{k+1}^U} F_1^{-1}(u)\, F_2^{-1}(v_k^U - u_k^U + u)\, du - \mu_{F_1}\mu_{F_2} \right),$$
(6) $$\rho_L = \operatorname{sd}(F_1)^{-1} \operatorname{sd}(F_2)^{-1} \left( \sum_{k=1}^{IJ} \int_{u_k^L}^{u_{k+1}^L} F_1^{-1}(u)\, F_2^{-1}(v_k^L + u_{k+1}^L - u)\, du - \mu_{F_1}\mu_{F_2} \right),$$

where $F_1^{-1}$ and $F_2^{-1}$ are the generalized inverses of $F_1, F_2$, $\mu_{F_1}, \mu_{F_2}$ are the means of $F_1$ and $F_2$, and $\operatorname{sd}(F_1), \operatorname{sd}(F_2)$ are the standard deviations of $F_1, F_2$.

Proof

See the online appendix, Section 11. $\square$
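A minimal sketch of how the upper bound (5) can be evaluated numerically, assuming the vectors $u^U$ and $v^U$ of Theorem 1 and the (vectorized) quantile functions, means, and standard deviations of the marginals are supplied by the user; the lower bound (6) is analogous. The function name `rho_upper` and its interface are our own illustration, not the authors' implementation (their implementation is in the polyiden package).

```r
# Evaluate the upper bound rho_U in Eq. (5) by one-dimensional numerical integration.
# uU, vU: the vectors u^U, v^U of Theorem 1 (uU increasing from 0 to 1, one element
#         more than the number of summands); qf1, qf2: vectorized quantile functions
#         F1^{-1}, F2^{-1}; mu1, mu2, sd1, sd2: means and standard deviations of F1, F2.
rho_upper <- function(uU, vU, qf1, qf2, mu1, mu2, sd1, sd2) {
  K <- length(uU) - 1
  total <- 0
  for (k in 1:K) {
    if (uU[k + 1] > uU[k]) {
      total <- total + integrate(
        function(u) qf1(u) * qf2(vU[k] - uU[k] + u),
        lower = uU[k], upper = uU[k + 1]
      )$value
    }
  }
  (total - mu1 * mu2) / (sd1 * sd2)
}
# Example marginals: standard normal, i.e. qf1 = qf2 = qnorm, mu = 0, sd = 1.
```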

Example 1

Let us compute the partial identification limits in Theorem 1 for a sequence of cumulative probability matrices. Let Z have a bivariate normal copula with correlation $\rho = 0.7$. We study what an analyst who does not know the copula structure of Z can say about $\rho$.

To generate thresholds that plausibly fit real-world settings and can be applied for any number of categories, we fit a statistical model to the marginal probability distribution of the bfi dataset from the psych package, a dataset described in more detail in Sect. 3. We estimated the parameters of a Beta distribution that best correspond to the ordinal marginals of the questions A2 and A5 using a least squares procedure; see the code for details. While the bfi dataset has six categories, we can emulate the marginal probabilities for any number of categories by choosing cutoffs $(\boldsymbol{\Pi}_{iJ})_{i=1}^I$ and $(\boldsymbol{\Pi}_{Ij})_{j=1}^J$ as follows. The cutoffs for X with k categories are equal to $Q_1(i/k)$, $i = 1, \ldots, k-1$, where $Q_1$ is the quantile function of a Beta distribution with parameters $\alpha_1 = 2.7$, $\beta_1 = 1.1$. The cutoffs for Y are generated in the same way, but with $\alpha_2 = 2.3$, $\beta_2 = 1.2$.
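A minimal R sketch of this cutoff construction, using the parameter values reported above (here with k = 6 categories):

```r
# Cutoffs (Pi_iJ) for X with k categories: Q1(i/k), i = 1, ..., k - 1,
# where Q1 is the Beta(2.7, 1.1) quantile function; similarly Beta(2.3, 1.2) for Y.
k <- 6
cuts_x <- qbeta((1:(k - 1)) / k, shape1 = 2.7, shape2 = 1.1)
cuts_y <- qbeta((1:(k - 1)) / k, shape1 = 2.3, shape2 = 1.2)
cuts_x  # cumulative marginal probabilities P(X <= i), i = 1, ..., k - 1
```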

In Fig. 1, we see the partial identification region as a function of I and J when $I = J$. The latent marginals are either standard normal, standard Laplace, or uniform on [0, 1]. The dotted line is the latent correlation ($\rho = 0.7$) when the marginals are normal. The true latent correlations are 0.682 when the marginals are uniform and 0.686 when the marginals are Laplace distributed. $\square$

Figure 1. Upper and lower limits for $\rho_{\boldsymbol{\Pi}}(\mathcal{F})$ when the marginals are fixed. The dashed line is the polychoric correlation, corresponding to normal marginals and the normal copula.

Figure 1 suggests two conclusions. First, when the latent copula is completely unknown, the identification sets are too wide to be informative even for a large number of categories, such as $I = J = 10$. Second, the partial identification sets appear to converge to the true latent correlations as the number of categories goes to infinity. This is indeed the case when the marginals are known, as shown by the following corollary.

Consider a sequence $(\boldsymbol{\Pi}^n)_{n=1}^\infty$ of cumulative probability matrices where $\boldsymbol{\Pi}^n$ has $I_n, J_n$ categories. We say that the sequence $(\boldsymbol{\Pi}^n)$ has its X-mesh uniformly decreasing to 0 if

(7) $$x_n := \max_{1 \le i \le I_n} \left[ \boldsymbol{\Pi}^n_{i J_n} - \boldsymbol{\Pi}^n_{(i-1) J_n} \right] \rightarrow 0,$$

and, likewise, its Y-mesh is uniformly decreasing to 0 if

(8) $$y_n := \max_{1 \le j \le J_n} \left[ \boldsymbol{\Pi}^n_{I_n j} - \boldsymbol{\Pi}^n_{I_n (j-1)} \right] \rightarrow 0.$$
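In terms of a cumulative probability matrix `Pi` (for instance the one constructed in the earlier sketch), the two mesh sizes are simply the largest marginal cell probabilities in each direction:

```r
# X-mesh and Y-mesh of a cumulative probability matrix Pi (Eqs. (7) and (8)).
x_mesh <- max(diff(c(0, Pi[, ncol(Pi)])))  # largest P(X = i)
y_mesh <- max(diff(c(0, Pi[nrow(Pi), ])))  # largest P(Y = j)
```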

For a copula C and marginals $F_1, F_2$, let $\rho(C; F_1, F_2)$ be the Pearson correlation of the combined distribution $(x_1, x_2) \mapsto C(F_1(x_1), F_2(x_2))$.

Corollary 1

Let $\mathcal{F}$ be the set of distributions with continuous and strictly increasing marginals $F_1, F_2$, and let $(\boldsymbol{\Pi}^n)$ be a sequence of cumulative probability matrices compatible with C whose X-mesh and Y-mesh uniformly decrease to 0. Then the latent correlation identification set converges to $\rho(C; F_1, F_2)$, i.e., $\lim_{n\rightarrow\infty} \rho_{\boldsymbol{\Pi}^n}(\mathcal{F}) = \rho(C; F_1, F_2)$.

Proof

See the online appendix, Section 12. $\square$

Figure 1 illustrates Corollary 1. The sequence of ordinal distributions has uniformly decreasing X-mesh and Y-mesh, and the partial identification sets for normal marginals clearly converge to the true correlation as $n \rightarrow \infty$.

In Theorem 1, the latent marginals are fixed and the latent copula is unknown. The numerical illustration in Fig. 1 shows that, even with a large number of categories such as ten, the partial identification intervals for latent correlations are rather wide. If our goal is to make the intervals shorter, we will have to add restrictions on the copula. In Section 10 of the online appendix, we conduct a partial identification analysis under the additional assumption that the latent copula is symmetric (Nelsen, 2007, p. 32). Unfortunately, symmetry does not shorten the identification intervals by much. More work is needed to find tractable restrictions on the copula that make the identification intervals shorter.

2. Latent Correlations with One Ordinal Variable

Until now, we have studied the case where we could observe neither $Z_1$ nor $Z_2$. Now we take a look at the case when we are able to observe one of them. That is, we still observe the ordinal X from the discretization model of equation (1), but now we also observe the continuous $Z_2$. We are still interested in the correlation between the latent $Z_1$ and the now observed $Z_2$. Mirroring the fully ordinal case, the latent correlation is identified when $(Z_1, Z_2)$ is bivariate normal, and can be estimated by the polyserial correlation (Olsson et al., 1982). As before, the latent variable has known marginals $F_1, F_2$ but unknown copula C. Again, the latent correlation $\rho$ is not identified, and we find its partial identification set.

Assume that $F_2$ is continuous and strictly increasing, which implies that $V = F_2(Z_2)$ is uniformly distributed. Let $\boldsymbol{\Pi}^\star$ be the cumulative distribution of (X, V), that is, $\boldsymbol{\Pi}^\star_{iv} = P(X \le i, V \le v)$. If C is the copula of Z, we get the relationship

(9) $$\boldsymbol{\Pi}^\star_{iv} = C(\boldsymbol{\Pi}^\star_{i1}, \boldsymbol{\Pi}^\star_{Iv}) = C(\boldsymbol{\Pi}^\star_{i1}, v), \quad 1 \le i \le I-1,\; v \in [0, 1].$$

Whenever C is a copula that satisfies the equation above, we say that C is compatible with $\boldsymbol{\Pi}^\star$. From the results of Tankov (2011), we can derive the maximal and minimal copula bounds for every C satisfying Eq. (9). Using these bounds, we can derive the following result, which generalizes Proposition 3 in Grønneberg et al. (2020). We use the notation $x^+ = \max(x, 0)$ and $x^- = \min(x, 0)$.

Theorem 2

Let $F_1, F_2$ be continuous and strictly increasing with finite variance, and let $\mathcal{F}$ be the set of distributions with marginals $F_1$ and $F_2$. Then the set of latent correlations compatible with $C, F_1, F_2$ is

(10) $$\rho_{\boldsymbol{\Pi}^\star}(\mathcal{F}) = [\rho(W_{\boldsymbol{\Pi}^\star}; F_1, F_2),\; \rho(M_{\boldsymbol{\Pi}^\star}; F_1, F_2)],$$

where

$$\begin{aligned} M_{\boldsymbol{\Pi}^\star}(u, v) &= \min\Bigl(u,\, v,\, \min_{1 \le i \le I-1}\bigl(\boldsymbol{\Pi}^\star_{iv} + (u - \boldsymbol{\Pi}^\star_{i1})^{+}\bigr)\Bigr),\\ W_{\boldsymbol{\Pi}^\star}(u, v) &= \max\Bigl(0,\, u + v - 1,\, \max_{1 \le i \le I-1}\bigl(\boldsymbol{\Pi}^\star_{iv} - (\boldsymbol{\Pi}^\star_{i1} - u)^{+}\bigr)\Bigr). \end{aligned}$$

Proof

See the online appendix, Section 13. $\square$
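For concreteness, a minimal R sketch of the bounding copulas $M_{\boldsymbol{\Pi}^\star}$ and $W_{\boldsymbol{\Pi}^\star}$ evaluated at a single point (u, v). The argument names `cuts` (the values $\boldsymbol{\Pi}^\star_{i1} = P(X \le i)$, $i = 1, \ldots, I-1$) and `pi_iv` (a user-supplied function returning $\boldsymbol{\Pi}^\star_{iv} = P(X \le i, V \le v)$) are our own illustrative interface:

```r
# Bounding copulas of Theorem 2 at a point (u, v).
# cuts:  vector of Pi*_{i1} = P(X <= i), i = 1, ..., I - 1.
# pi_iv: function(i, v) returning Pi*_{iv} = P(X <= i, V <= v).
M_bound <- function(u, v, cuts, pi_iv) {
  inner <- sapply(seq_along(cuts), function(i) pi_iv(i, v) + max(u - cuts[i], 0))
  min(u, v, min(inner))
}
W_bound <- function(u, v, cuts, pi_iv) {
  inner <- sapply(seq_along(cuts), function(i) pi_iv(i, v) - max(cuts[i] - u, 0))
  max(0, u + v - 1, max(inner))
}
```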

Remark 1

To calculate the correlation $\rho(C; F_1, F_2)$, one may use the Höffding (1940) formula,

(11) $$\rho(C; F_1, F_2) = \operatorname{sd}(F_1)^{-1}\operatorname{sd}(F_2)^{-1}\int_{0}^{1}\!\int_{0}^{1}\big[C(u,v) - uv\big]\,\mathrm{d}F_1^{-1}(u)\,\mathrm{d}F_2^{-1}(v),$$

where $\operatorname{sd}(F_1)$ and $\operatorname{sd}(F_2)$ are the standard deviations of $F_1$ and $F_2$.
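
As a rough illustration of Remark 1, the integral in Eq. (11) can be approximated numerically. The sketch below (our own helper, with an assumed name rho_from_copula) substitutes $u = F_1(x)$ and $v = F_2(y)$, so that with standard normal marginals the correlation equals $\int\!\!\int \big[C(\Phi(x), \Phi(y)) - \Phi(x)\Phi(y)\big]\,\mathrm{d}x\,\mathrm{d}y$.

## Approximate rho(C; F1, F2) from Eq. (11) with standard normal marginals,
## using a simple Riemann sum after the substitution u = pnorm(x), v = pnorm(y).
## The function name and its arguments are illustrative, not from any package.
rho_from_copula <- function(C, lower = -8, upper = 8, n = 400) {
  x <- seq(lower, upper, length.out = n)
  h <- x[2] - x[1]
  u <- pnorm(x)
  integrand <- outer(u, u, function(p, q) C(p, q) - p * q)
  sum(integrand) * h^2  # double integral; sd(F1) = sd(F2) = 1 here
}

## Sanity checks: the Fréchet bound copulas give correlations near 1 and -1.
rho_from_copula(function(u, v) pmin(u, v))          # approximately  1
rho_from_copula(function(u, v) pmax(u + v - 1, 0))  # approximately -1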

Now let $(\boldsymbol{\Pi}^n)_{n=1}^\infty$ be a sequence of cumulative probability matrices where $I_n = I \ge 2$ is fixed and $J_n \rightarrow \infty$. In this case we should recover the polyserial identification set of Theorem 2 under reasonable assumptions, as formalized in the following corollary.

Corollary 2

Let $(\boldsymbol{\Pi}^n)_{n=1}^\infty$ be a sequence of cumulative probability matrices compatible with $C$. Let $I$ be fixed for all $n$, let $J_n$ diverge to infinity, and let the $Y$-mesh of $(\boldsymbol{\Pi}^n)_{n=1}^\infty$ decrease uniformly toward 0. Let $\boldsymbol{\Pi}^\star$ have $I$ categories and be compatible with $C$. If $\mathcal{F}$ is the set of distributions with continuous and strictly increasing marginal distributions $F_1, F_2$, then

$$\lim_{n\rightarrow\infty} \rho_{\boldsymbol{\Pi}^n}(\mathcal{F}) = \rho_{\boldsymbol{\Pi}^\star}(\mathcal{F}).$$

Proof

See the online appendix, Section 14. $\square$

Figure 2. Illustration of Theorem 2 and Corollary 2. The black lines are the limits of the identification sets in Corollary 2, the black dashed lines are the limits of identification sets in Theorem 2, and the gray dashed line is the true polychoric correlation.

Theorem 2 and Corollary 2 are illustrated in Fig. 2, using the same setup as in Example 1. The marginals of $Z$ are normal and known to be so; the true copula is bivariate normal, but this is not assumed known; and the true latent correlation is 0.35. The number of categories for $X$ is 4, while the number of categories for $Y$ increases indefinitely and the $Y$-mesh decreases uniformly toward 0. We used the polyserialiden function in the R package polyiden to calculate the polyserial bounds.
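
Since the interface of polyserialiden is not reproduced here, the following self-contained R sketch illustrates a calculation of the same kind under stated assumptions: standard normal marginals, a Gaussian copula with correlation 0.35, and an ordinal $X$ with four categories. The cutpoints a below are illustrative and need not match those of Example 1.

## Polyserial bounds of Theorem 2 for a Gaussian copula with rho = 0.35 and an
## ordinal X with 4 categories (illustrative cutpoints). This is our own
## sketch; the paper's figures were produced with polyiden::polyserialiden.
library(mvtnorm)

rho_true <- 0.35
a <- c(0.25, 0.50, 0.75)                      # cumulative marginals of X
S <- matrix(c(1, rho_true, rho_true, 1), 2)

n <- 200
x <- seq(-6, 6, length.out = n)               # integration grid for both axes
h <- x[2] - x[1]
u <- pnorm(x)

## C(a_i, v) for the Gaussian copula, one column per cutpoint a_i
C_av <- sapply(seq_along(a), function(i)
  sapply(u, function(vv) as.numeric(
    pmvnorm(lower = rep(-Inf, 2), upper = c(qnorm(a[i]), qnorm(vv)), corr = S))))

## Bound copulas of Theorem 2 evaluated on the grid (rows index u, columns index v)
M_grid <- outer(seq_len(n), seq_len(n), Vectorize(function(j, k)
  min(u[j], u[k], min(C_av[k, ] + pmax(u[j] - a, 0)))))
W_grid <- outer(seq_len(n), seq_len(n), Vectorize(function(j, k)
  max(0, u[j] + u[k] - 1, max(C_av[k, ] - pmax(a - u[j], 0)))))

## Höffding formula (11) with standard normal marginals (both sds equal 1)
indep <- outer(u, u)
c(lower = sum(W_grid - indep) * h^2, upper = sum(M_grid - indep) * h^2)

The resulting interval necessarily contains the true value 0.35, since the Gaussian copula is compatible with the constraints; its width depends on the chosen cutpoints a.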

3. An Empirical Example Using Data from the International Personality Item Pool

The R package psychTools (Revelle, 2019) contains the dataset bfi, a small subset of the data presented and analyzed in Revelle, Wilt, and Rosenthal (2010), based on items from the International Personality Item Pool (Goldberg, 1999). The bfi dataset contains 2800 responses to 25 items, organized by the five factors of personality: Agreeableness (A1–A5), Conscientiousness (C1–C5), Extraversion (E1–E5), Neuroticism (N1–N5), and Openness to experience (O1–O5). Each response is graded on a 6-point scale from “Very Inaccurate” to “Very Accurate.” A sample item is A2: “Inquire about others’ well-being.” We flipped the ratings on reverse-coded items and omitted all rows with missing values, leaving 2236 observations.
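
A sketch of this data preparation is given below. The list of reverse-keyed items follows the usual scoring key for the bfi data and should be checked against the key distributed with psychTools; the retained sample size also depends on which columns enter the missingness filter.

## Prepare the bfi items as described above. The reverse-keyed items listed
## here are taken from the usual bfi scoring key and should be verified.
library(psychTools)

items <- bfi[, 1:25]                      # the 25 personality items
reversed <- c("A1", "C4", "C5", "E1", "E2", "O2", "O5")
items[reversed] <- 7 - items[reversed]    # flip reverse-coded 6-point ratings
items <- na.omit(items)                   # drop rows with missing responses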

The polychoric correlations are visualized in Fig. 3. This dataset has been used for illustration in several contexts, for instance in the second empirical example of McNeish (2018), who analyzed the data using a five-factor model estimated via polychoric correlations as well as Pearson correlations.
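
The correlation matrix in Fig. 3 can be reproduced with standard tools; a minimal sketch using the polychoric function from the psych package, together with the items data frame prepared above, is:

## Polychoric correlation estimates for the 25 items (cf. Fig. 3),
## using the prepared data frame `items` from the sketch above.
library(psych)
pc <- polychoric(items)
round(pc$rho[1:5, 1:5], 2)   # inspect the agreeableness block A1-A5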

Figure 3. Polychoric correlation estimates for 25 items from the International Personality Item Pool (Goldberg, 1999).

Polychoric correlations, and models that use them as input, hinge on exact bivariate normality of each pair of coordinates of the latent continuous vector $Z$. As shown theoretically in this paper, and through simulation in Foldnes and Grønneberg (2019b, 2021), polychoric correlations are not robust to latent non-normality. The assumption of joint normality has testable implications, and we applied the parametric bootstrap test of Foldnes and Grønneberg (2019b) using the R package discnorm (Foldnes & Grønneberg, 2020), a test that behaved well in the simulation studies of Foldnes and Grønneberg (2019b, 2021). We tested both multivariate normality of the 25-dimensional random vector and bivariate normality of each pair of variables. The p value for the joint test of multivariate normality was zero within numerical precision. Out of 300 pairs, 184 had a p value for bivariate latent normality that was also zero within numerical precision, 287 had a p value below 5%, and the mean p value was 0.99%. Latent normality is therefore not a tenable assumption, with the possible exception of bivariate latent normality between some pairs of variables.
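
A sketch of the pairwise testing loop follows. We assume here that the parametric bootstrap test in discnorm is exposed as bootTest and returns a p value; the exact function name and arguments should be checked against the installed version of the package.

## Pairwise tests of underlying bivariate normality (assumed interface:
## discnorm::bootTest returning a p value; verify against the package docs).
library(discnorm)

pair_idx <- combn(ncol(items), 2)               # all 300 item pairs
pvals <- apply(pair_idx, 2, function(ix) bootTest(items[, ix], B = 1000))
mean(pvals == 0)      # share of p values at zero within numerical precision
mean(pvals < 0.05)    # share of pairs rejected at the 5% level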

We therefore calculate the lower and upper latent correlation bounds from Theorem 1, assuming marginal but not bivariate normality. The results are visualized in Fig. 4. The bounds are wide for all variables, and the sign of the correlation is unknown in most cases, even within the same factor. Still, we see indications of white regions in the lower bounds (lower bounds near zero) for the agreeableness, conscientiousness, extraversion, and neuroticism items, but not for the openness items. For neuroticism (N1–N5), most of the correlations are positive. There are other pairs with lower bounds near zero, such as the bright region between A5 and E3–E4, where the lower bounds are near zero and the upper bounds are close to one; under the assumption of latent marginal normality, the latent correlations between these items are therefore estimated to be positive.
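
The input to these bounds is, for each item pair, the empirical cumulative probability matrix. A sketch of its construction for the pair A5 and E3 is shown below; we leave the call into the polyiden package itself to its documentation, since we do not reproduce its interface here.

## Empirical cumulative probability matrix for the item pair (A5, E3), the
## input to the Theorem 1 bounds; Pi_hat[i, j] estimates P(X <= i, Y <= j).
joint <- prop.table(table(items$A5, items$E3))           # 6 x 6 joint probabilities
Pi_hat <- t(apply(apply(joint, 2, cumsum), 1, cumsum))   # cumulative in both margins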

Figure 4. Upper (blue) and lower (red) correlation bounds for the items in the International Personality Item Pool (Goldberg, 1999).

4. Conclusion

We have calculated partial identification sets for latent correlations based on the distribution of ordinal data, under the assumption that the marginal distributions of $Z$ are known. The most common numbers of categories are 5 and 7 (Flora & Curran, 2004; Rhemtulla, Brosseau-Liard, & Savalei, 2012), and for these the partial identification sets are rather wide. Merely knowing the latent marginal distributions is usually insufficient, and knowing in addition that the latent copula is symmetric does not help. More knowledge is required in order to obtain informative partial identification sets.

Since the partial identification sets are wide, a psychometrician wishing to estimate latent correlations must know more about the latent distribution than its marginals and the possible symmetry of its copula. For instance, the copula could be known to belong to a certain parametric class, or the psychometrician may know that $Z$ follows a model class, such as a factor model. Such knowledge would reduce the partial identification sets of the model parameters.

Funding

Open Access funding provided by BI Norwegian Business School.

Footnotes

Supplementary Information The online version contains supplementary material available at https://doi.org/10.1007/s11336-022-09898-y.

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

Bartholomew, D. J., Steele, F., Galbraith, J., & Moustaki, I. (2008). Analysis of multivariate social science data. Chapman and Hall/CRC. https://doi.org/10.1201/b15114
Bentler, P. (2006). EQS 6 structural equations program manual. Encino, CA: Multivariate Software.
Bollen, K. A. (1989). Structural equations with latent variables. New York, NY: Wiley. https://doi.org/10.1002/9781118619179
Brown, A., & Croudace, T. J. (2014). Scoring and estimating score precision using multidimensional IRT models. In Handbook of item response theory modeling (pp. 325–351). Routledge. https://doi.org/10.4324/9781315736013
Browne, M. W. (1984). Asymptotically distribution-free methods for the analysis of covariance structures. British Journal of Mathematical and Statistical Psychology, 37(1), 62–83. https://doi.org/10.1111/j.2044-8317.1984.tb00789.x
Carley, H. (2002). Maximum and minimum extensions of finite subcopulas. Communications in Statistics - Theory and Methods, 31(12), 2151–2166. https://doi.org/10.1081/STA-120017218
Christoffersson, A. (1975). Factor analysis of dichotomized variables. Psychometrika, 40(1), 5–32. https://doi.org/10.1007/BF02291477
Denuit, M., & Lambert, P. (2005). Constraints on concordance measures in bivariate discrete data. Journal of Multivariate Analysis, 93(1), 40–57. https://doi.org/10.1016/j.jmva.2004.01.004
Epskamp, S. (2017). Network psychometrics (Doctoral dissertation). Retrieved from https://hdl.handle.net/11245.1/a76273c6-6abc-4cc7-a2e9-3b5f1ae3c29e
Flora, D. B., & Curran, P. J. (2004). An empirical evaluation of alternative methods of estimation for confirmatory factor analysis with ordinal data. Psychological Methods, 9(4), 466–491. https://doi.org/10.1037/1082-989X.9.4.466
Foldnes, N., & Grønneberg, S. (2019a). On identification and non-normal simulation in ordinal covariance and item response models. Psychometrika, 84(4), 1000–1017. https://doi.org/10.1007/s11336-019-09688-z
Foldnes, N., & Grønneberg, S. (2019b). Pernicious polychorics: The impact and detection of underlying non-normality. Structural Equation Modeling, 27(4), 525–543. https://doi.org/10.1080/10705511.2019.1673168
Foldnes, N., & Grønneberg, S. (2020). discnorm: Test for discretized normality in ordinal data [Computer software manual]. Retrieved from https://CRAN.R-project.org/package=discnorm (R package version 0.1.0)
Foldnes, N., & Grønneberg, S. (2021). The sensitivity of structural equation modeling with ordinal data to underlying non-normality and observed distributional forms. Psychological Methods. https://doi.org/10.1037/met0000385
Genest, C., & Nešlehová, J. (2007). A primer on copulas for count data. ASTIN Bulletin: The Journal of the IAA, 37(2), 475–515. https://doi.org/10.1017/S0515036100014963
Genest, C., Nešlehová, J. G., & Rémillard, B. (2014). On the empirical multilinear copula process for count data. Bernoulli, 20(3), 1344–1371. https://doi.org/10.3150/13-BEJ524
Genest, C., Nešlehová, J. G., & Rémillard, B. (2017). Asymptotic behavior of the empirical multilinear copula process under broad conditions. Journal of Multivariate Analysis, 159, 82–110. https://doi.org/10.1016/j.jmva.2017.04.002
Goldberg, L. R. (1999). A broad-bandwidth, public domain, personality inventory measuring the lower-level facets of several five-factor models. In I. Mervielde, I. Deary, F. De Fruyt, & F. Ostendorf (Eds.), Personality psychology in Europe (Vol. 7, pp. 7–28). Tilburg University Press. Retrieved from http://hdl.handle.net/1854/LU-119613
Grønneberg, S., & Foldnes, N. (2022). Factor analyzing ordinal items requires substantive knowledge of response marginals. Psychological Methods. https://doi.org/10.1037/met0000495
Grønneberg, S., Moss, J., & Foldnes, N. (2020). Partial identification of latent correlations with binary data. Psychometrika, 85(4), 1028–1051. https://doi.org/10.1007/s11336-020-09737-y
Höffding, W. (1940). Maßstabinvariante Korrelationstheorie für diskontinuierliche Verteilungen (Unpublished doctoral dissertation). Universität Berlin.
Isvoranu, A.-M., & Epskamp, S. (2021). Continuous and ordered categorical data in network psychometrics: Which estimation method to choose? Deriving guidelines for applied researchers. PsyArXiv. https://doi.org/10.31234/osf.io/mbycn
Johal, S., & Rhemtulla, M. (2021). Comparing estimation methods for psychometric networks with ordinal data. PsyArXiv. https://doi.org/10.31234/osf.io/ej2gn
Jöreskog, K. G. (1994). Structural equation modeling with ordinal variables. In Multivariate analysis and its applications (pp. 297–310). Institute of Mathematical Statistics. https://doi.org/10.1214/lnms/1215463803
Jöreskog, K. G. (2005). Structural equation modeling with ordinal variables using LISREL (Technical report). Lincolnwood, IL: Scientific Software International.
Jöreskog, K. G., & Sörbom, D. (2015). LISREL 9.20 for Windows [Computer software]. Skokie, IL: Scientific Software International.
Kolenikov, S., & Angeles, G. (2009). Socioeconomic status measurement with discrete proxy variables: Is principal component analysis a reliable answer? Review of Income and Wealth, 55(1), 128–165. https://doi.org/10.1111/j.1475-4991.2008.00309.x
Krupskii, P., & Joe, H. (2013). Factor copula models for multivariate data. Journal of Multivariate Analysis, 120, 85–101. https://doi.org/10.1016/j.jmva.2013.05.001
Krupskii, P., & Joe, H. (2015). Structured factor copula models: Theory, inference and computation. Journal of Multivariate Analysis, 138, 53–73. https://doi.org/10.1016/j.jmva.2014.11.002
Li, C.-K., & Tam, B.-S. (1994). A note on extreme correlation matrices. SIAM Journal on Matrix Analysis and Applications, 15(3), 903–908. https://doi.org/10.1137/S0895479892240683
Liu, D., Li, S., Yu, Y., & Moustaki, I. (2021). Assessing partial association between ordinal variables: Quantification, visualization, and hypothesis testing. Journal of the American Statistical Association, 116(534), 955–968. https://doi.org/10.1080/01621459.2020.1796394
Mair, P. (2018). Modern psychometrics with R. Springer. https://doi.org/10.1007/978-3-319-93177-7
Manski, C. F. (2003). Partial identification of probability distributions. Springer Science & Business Media. https://doi.org/10.1007/b97478
Maydeu-Olivares, A. (2006). Limited information estimation and testing of discretized multivariate normal structural models. Psychometrika, 71(1), 57–77. https://doi.org/10.1007/s11336-005-0773-4
McNeish, D. (2018). Thanks coefficient alpha, we'll take it from here. Psychological Methods, 23(3), 412–433. https://doi.org/10.1037/met0000144
Muthén, B. (1978). Contributions to factor analysis of dichotomous variables. Psychometrika, 43(4), 551–560. https://doi.org/10.1007/BF02293813
Muthén, B. (1984). A general structural equation model with dichotomous, ordered categorical, and continuous latent variable indicators. Psychometrika, 49(1), 115–132. https://doi.org/10.1007/BF02294210
Muthén, B., & Muthén, L. (2012). Mplus version 7: User's guide. Muthén & Muthén.
Nelsen, R. B. (2007). An introduction to copulas. Springer Science & Business Media. https://doi.org/10.1007/978-1-4757-3076-0
Nešlehová, J. (2007). On rank correlation measures for non-continuous random variables. Journal of Multivariate Analysis, 98(3), 544–567. https://doi.org/10.1016/j.jmva.2005.11.007
Nikoloulopoulos, A. K., & Joe, H. (2015). Factor copula models for item response data. Psychometrika, 80(1), 126–150. https://doi.org/10.1007/s11336-013-9387-4
Olsson, U. (1979). Maximum likelihood estimation of the polychoric correlation coefficient. Psychometrika, 44(4), 443–460. https://doi.org/10.1007/BF02296207
Olsson, U., Drasgow, F., & Dorans, N. J. (1982). The polyserial correlation coefficient. Psychometrika, 47(3), 337–347. https://doi.org/10.1007/BF02294164
R Core Team. (2020). R: A language and environment for statistical computing [Computer software manual]. Vienna, Austria. Retrieved from http://www.R-project.org/
Revelle, W. (2019). psychTools: Tools to accompany the 'psych' package for psychological research. Evanston, IL. Retrieved from https://CRAN.R-project.org/package=psychTools
Revelle, W., Wilt, J., & Rosenthal, A. (2010). Individual differences in cognition: New methods for examining the personality-cognition link. In Handbook of individual differences in cognition (pp. 27–49). Springer. https://doi.org/10.1007/978-1-4419-1210-7
Revicki, D. A., Chen, W.-H., & Tucker, C. (2014). Developing item banks for patient-reported health outcomes. In Handbook of item response theory modeling (pp. 352–381). Routledge. https://doi.org/10.4324/9781315736013
Rhemtulla, M., Brosseau-Liard, P. É., & Savalei, V. (2012). When can categorical variables be treated as continuous? A comparison of robust continuous and categorical SEM estimation methods under suboptimal conditions. Psychological Methods, 17(3), 354–373. https://doi.org/10.1037/a0029315
Rosseel, Y. (2012). lavaan: An R package for structural equation modeling. Journal of Statistical Software, 48(2), 1–36. https://doi.org/10.18637/jss.v048.i02
Satorra, A. (1989). Alternative test criteria in covariance structure analysis: A unified approach. Psychometrika, 54(1), 131–151. https://doi.org/10.1007/BF02294453
Takane, Y., & De Leeuw, J. (1987). On the relationship between item response theory and factor analysis of discretized variables. Psychometrika. https://doi.org/10.1007/BF02294363
Tamer, E. (2010). Partial identification in econometrics. Annual Review of Economics, 2(1), 167–195. https://doi.org/10.1146/annurev.economics.050708.143401
Tankov, P. (2011). Improved Fréchet bounds and model-free pricing of multi-asset options. Journal of Applied Probability, 48(2), 389–403. https://doi.org/10.1239/jap/1308662634
Van der Linden, W. J. (2017). Handbook of item response theory: Volume 2: Statistical tools. CRC Press. https://doi.org/10.1201/b19166
Wei, Z., & Kim, D. (2017). Subcopula-based measure of asymmetric association for contingency tables. Statistics in Medicine, 36(24), 3875–3894. https://doi.org/10.1002/sim.7399
Zumbo, B. D. (2006). Validity: Foundational issues and statistical methodology. In C. R. Rao & S. Sinharay (Eds.), Handbook of statistics (Vol. 26, pp. 45–79). Elsevier. https://doi.org/10.1016/S0169-7161(06)26003-6