Achievement level is a variable measured with error that can be estimated by means of the Rasch model. Teacher grades also measure achievement level, but they are expressed on a different scale. This paper proposes a method for combining these two scores to obtain a synthetic measure of achievement level, based on the theory developed for regression with covariate measurement error. In particular, the focus is on ordinally scaled grades, using the SIMEX method for measurement error correction. The result is a measure that is comparable across subjects and has smaller measurement error variance. An empirical application illustrates the method.
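The SIMEX idea can be sketched in a few lines: add extra simulated measurement error at increasing variance multiples, track how the naive estimate degrades, and extrapolate the trend back to the no-error case. The sketch below applies it to a plain linear regression slope on simulated data with a known error variance; it is only an illustration of the general mechanism, not the paper's ordinal-grade procedure.

```python
# Minimal SIMEX (simulation-extrapolation) sketch for a linear model with a
# noisy covariate. All data and parameter values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n, beta, sigma_u = 2000, 1.5, 0.8           # true slope; known measurement-error SD
x = rng.normal(size=n)                       # true covariate (e.g., latent ability)
w = x + rng.normal(scale=sigma_u, size=n)    # error-prone observed score
y = beta * x + rng.normal(scale=0.5, size=n)

def ols_slope(w, y):
    # naive OLS slope, attenuated by the measurement error in w
    return np.cov(w, y, bias=True)[0, 1] / np.var(w)

# Step 1: add extra noise at increasing multiples lam of the error variance
# and record the (averaged) naive estimate at each level.
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = []
for lam in lambdas:
    sims = [ols_slope(w + rng.normal(scale=np.sqrt(lam) * sigma_u, size=n), y)
            for _ in range(200)]
    slopes.append(np.mean(sims))

# Step 2: fit a smooth curve in lam and extrapolate back to lam = -1,
# the hypothetical "no measurement error" point.
coef = np.polyfit(lambdas, slopes, deg=2)
beta_simex = np.polyval(coef, -1.0)
```

The quadratic extrapolant used here is only one common choice; SIMEX's accuracy depends on how well the chosen extrapolant family tracks the true attenuation curve.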
A maximum likelihood estimation routine for two-level structural equation models with random slopes for latent covariates is presented. Because the likelihood function does not typically have a closed-form solution, numerical integration over the random effects is required. The routine relies upon a method proposed by du Toit and Cudeck (Psychometrika 74(1):65–82, 2009) for reformulating the likelihood function so that an often large subset of the random effects can be integrated analytically, reducing the computational burden of high-dimensional numerical integration. The method is demonstrated and assessed using a small-scale simulation study and an empirical example.
A general approach for fitting a model to a data matrix by weighted least squares (WLS) is studied. This approach consists of iteratively performing (steps of) existing algorithms for ordinary least squares (OLS) fitting of the same model. The approach is based on minimizing a function that majorizes the WLS loss function. The generality of the approach implies that, for every model for which an OLS fitting algorithm is available, the present approach yields a WLS fitting algorithm. In the special case where the WLS weight matrix is binary, the approach reduces to missing data imputation.
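For the binary-weight special case mentioned at the end, a minimal sketch: fitting a rank-1 model to a matrix with missing cells by alternating an ordinary (unweighted) least-squares step, here a truncated SVD, with imputation of the missing cells from the current fit. The data and the rank-1 model are illustrative, not the paper's general algorithm.

```python
# Binary-weight WLS as iterated OLS plus missing-data imputation.
# Hypothetical noiseless rank-1 data with ~30% of cells missing.
import numpy as np

rng = np.random.default_rng(1)
A = np.outer(rng.normal(size=20), rng.normal(size=8))   # true rank-1 matrix
W = rng.random(A.shape) > 0.3                           # binary weights: True = observed
X = np.where(W, A, 0.0)                                 # observed data, zeros where missing

Z = X.copy()
for _ in range(100):
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)    # OLS step: best rank-1 fit to Z
    M = s[0] * np.outer(U[:, 0], Vt[0])
    Z = np.where(W, X, M)                               # majorization step: impute missing cells

err = np.max(np.abs(M - A))                             # recovery error on all cells
```

Each iteration decreases the WLS loss on the observed cells, which is the majorization argument in miniature: the OLS step is applied to a completed matrix that agrees with the data wherever the weight is one.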
A maximum likelihood estimation routine is presented for a generalized structural equation model that permits a combination of response variables from various distributions (e.g., normal, Poisson, binomial, etc.). The likelihood function does not have a closed-form solution and so must be numerically approximated, which can be computationally demanding for models with several latent variables. However, the dimension of numerical integration can be reduced if one or more of the latent variables do not directly affect any nonnormal endogenous variables. The method is demonstrated using an empirical example, and the full estimation details, including first-order derivatives of the likelihood function, are provided.
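The kind of one-dimensional numerical integration involved can be sketched with Gauss-Hermite quadrature: approximating the marginal likelihood of a Poisson response whose log-rate contains a standard-normal latent variable. The model and parameter values below are illustrative, not the paper's specification.

```python
# Gauss-Hermite approximation of a marginal likelihood
#   P(y) = integral of P(y | u) * phi(u) du,  u ~ N(0, 1),
# for a Poisson response with log link: rate = exp(beta0 + u).
import numpy as np
from scipy.stats import poisson

nodes, weights = np.polynomial.hermite.hermgauss(30)    # physicists' Hermite rule

def marginal_lik(y, beta0):
    u = np.sqrt(2.0) * nodes                            # change of variables for N(0, 1)
    vals = poisson.pmf(y, np.exp(beta0 + u))
    return np.sum(weights * vals) / np.sqrt(np.pi)

lik = marginal_lik(2, 0.0)                              # marginal probability of y = 2
```

With several latent variables this sum becomes a product grid of nodes, which is exactly the exponential cost the abstract's dimension-reduction argument is aimed at.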
The standard tobit or censored regression model is typically utilized for regression analysis when the dependent variable is censored. This model is generalized by developing a conditional mixture, maximum likelihood method for latent class censored regression. The proposed method simultaneously estimates separate regression functions and subject membership in K latent classes or groups given a censored dependent variable for a cross-section of subjects. Maximum likelihood estimates are obtained using an EM algorithm. The proposed method is illustrated via a consumer psychology application.
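The standard (single-class) tobit likelihood that the paper generalizes can be sketched as follows, on simulated data with left-censoring at zero; the latent-class extension would mix K such likelihoods inside an EM loop. All data and parameter values are hypothetical.

```python
# Maximum likelihood for the standard tobit model: y* = b0 + b1*x + eps,
# eps ~ N(0, sigma^2), observed y = max(y*, 0).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 5000
x = rng.normal(size=n)
y_star = 1.0 + 2.0 * x + rng.normal(size=n)   # true (b0, b1, sigma) = (1, 2, 1)
y = np.maximum(y_star, 0.0)                   # left-censoring at zero

def neg_loglik(theta):
    b0, b1, log_s = theta                     # parameterize sigma on the log scale
    s = np.exp(log_s)
    mu = b0 + b1 * x
    obs = y > 0
    ll = norm.logpdf(y[obs], mu[obs], s).sum()    # uncensored observations: density
    ll += norm.logcdf(-mu[~obs] / s).sum()        # censored observations: P(y* <= 0)
    return -ll

fit = minimize(neg_loglik, x0=[0.0, 0.0, 0.0], method="BFGS")
b0_hat, b1_hat, sigma_hat = fit.x[0], fit.x[1], np.exp(fit.x[2])
```

In the latent-class version, the E-step would weight each subject's contribution to K class-specific versions of this log-likelihood by its posterior class probability.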
This paper presents a new stochastic multidimensional scaling procedure for the analysis of three-mode, three-way pick any/J data. The method provides either a vector or ideal-point model to represent the structure in such data, as well as “floating” model specifications (e.g., different vectors or ideal points for different choice settings), and various reparameterization options that allow the coordinates of ideal points, vectors, or stimuli to be functions of specified background variables. A maximum likelihood procedure is utilized to estimate a joint space of row and column objects, as well as a set of weights depicting the third mode of the data. An algorithm using a conjugate gradient method with automatic restarts is developed to estimate the parameters of the models. A series of Monte Carlo analyses are carried out to investigate the performance of the algorithm under diverse data and model specification conditions, examine the statistical properties of the associated test statistic, and test the robustness of the procedure to departures from the independence assumptions. Finally, a consumer psychology application assessing the impact of situational influences on consumers' choice behavior is discussed.
We devise a new statistical methodology called constrained stochastic extended redundancy analysis (CSERA) to examine the comparative impact of various conceptual factors, or drivers, as well as the specific predictor variables that contribute to each driver on designated dependent variable(s). The technical details of the proposed methodology, the maximum likelihood estimation algorithm, and model selection heuristics are discussed. A sports marketing consumer psychology application is provided in a Major League Baseball (MLB) context where the effects of six conceptual drivers of game attendance and their defining predictor variables are estimated. Results compare favorably to those obtained using traditional extended redundancy analysis (ERA).
The psychometric and classification literatures have illustrated the fact that a wide class of discrete or network models (e.g., hierarchical or ultrametric trees) for the analysis of ordinal proximity data are plagued by potential degenerate solutions if estimated using traditional nonmetric procedures (i.e., procedures which optimize a STRESS-based criterion of fit and whose solutions are invariant under a monotone transformation of the input data). This paper proposes a new parametric, maximum likelihood based procedure for estimating ultrametric trees for the analysis of conditional rank order proximity data. We present the technical aspects of the model and the estimation algorithm. Some preliminary Monte Carlo results are discussed. A consumer psychology application is provided examining the similarity of fifteen types of snack/breakfast items. Finally, some directions for future research are provided.
Lord developed an approximation for the bias function for the maximum likelihood estimate in the context of the three-parameter logistic model. Using Taylor's expansion of the likelihood equation, he obtained an equation that includes the conditional expectation, given true ability, of the discrepancy between the maximum likelihood estimate and true ability. All terms of order higher than n^{-1} are ignored, where n indicates the number of items. Lord assumed that all item and individual parameters are bounded, all item parameters are known or well-estimated, and the number of items is reasonably large. In the present paper, an approximation for the bias function of the maximum likelihood estimate of the latent trait, or ability, will be developed using the same assumptions for the more general case where item responses are discrete. This will include the dichotomous response level, for which the three-parameter logistic model has been discussed, the graded response level and the nominal response level. Some observations will be made for both dichotomous and graded response levels.
The vast majority of existing multidimensional scaling (MDS) procedures devised for the analysis of paired comparison preference/choice judgments are typically based on either scalar product (i.e., vector) or unfolding (i.e., ideal-point) models. Such methods tend to ignore many of the essential components of microeconomic theory including convex indifference curves, constrained utility maximization, demand functions, et cetera. This paper presents a new stochastic MDS procedure called MICROSCALE that attempts to operationalize many of these traditional microeconomic concepts. First, we briefly review several existing MDS models that operate on paired comparisons data, noting the particular nature of the utility functions implied by each class of models. These utility assumptions are then directly contrasted to those of microeconomic theory. The new maximum likelihood based procedure, MICROSCALE, is presented, as well as the technical details of the estimation procedure. The results of a Monte Carlo analysis, in which a number of model, data, and error factors are experimentally manipulated to investigate the performance of the algorithm, are provided. Finally, an illustration in consumer psychology concerning a convenience sample of thirty consumers providing paired comparisons judgments for some fourteen brands of over-the-counter analgesics is discussed.
A procedure for computing the power of the likelihood ratio test used in the context of covariance structure analysis is derived. The procedure uses statistics associated with the standard output of the computer programs commonly used and assumes that a specific alternative value of the parameter vector is specified. Using the noncentral Chi-square distribution, the power of the test is approximated by the asymptotic one for a sequence of local alternatives. The procedure is illustrated by an example. A Monte Carlo experiment also shows how good the approximation is for a specific case.
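Assuming the noncentrality parameter has already been obtained from the fitted model under the specified alternative (the values below are illustrative), the power computation itself reduces to a tail probability of the noncentral chi-square:

```python
# Power of the likelihood ratio test via the noncentral chi-square
# approximation. df, alpha, and ncp are illustrative placeholder values;
# in practice ncp comes from the model fitted under the alternative.
from scipy.stats import chi2, ncx2

df, alpha, ncp = 5, 0.05, 10.0              # degrees of freedom, level, noncentrality
crit = chi2.ppf(1 - alpha, df)              # critical value of the central chi-square
power = ncx2.sf(crit, df, ncp)              # P(noncentral chi-square > crit)
```

When ncp = 0 the alternative coincides with the null and the tail probability falls back to alpha, which is a useful sanity check on the computation.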
In very simple test theory models such as the Rasch model, a single parameter is used to represent the ability of any examinee or the difficulty of any item. Simple models such as these provide very important points of departure for more detailed modeling when a substantial amount of data are available, and are themselves of real practical value for small or even medium samples. They can also serve a normative role in test design.
As an alternative to the Rasch model, or the Rasch model with a correction for guessing, a simple model is introduced which characterizes strength of response in terms of the ratio of the ability and difficulty parameters rather than their difference. This model provides a natural account of guessing and has other useful properties. The three models are compared in terms of statistical properties and fits to actual data. The goal of the paper is to widen the range of “minimal” models available to test analysts.
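The contrast between the two response functions can be sketched directly. The ratio form shown here is one reading of "strength of response in terms of the ratio of ability and difficulty parameters" (with both parameters positive); the parameter values are illustrative.

```python
# Rasch response function (difference of parameters on a logistic scale)
# versus a ratio-form response function for positive ability theta and
# difficulty b. Both are illustrative sketches, not fitted models.
import numpy as np

def p_rasch(theta, b):
    # probability of a correct response under the Rasch model
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def p_ratio(theta, b):
    # ratio-form model: response strength theta / (theta + b), theta, b > 0
    return theta / (theta + b)

theta, b = 2.0, 1.0
p1, p2 = p_rasch(theta, b), p_ratio(theta, b)
```

Both functions give probability one half when ability equals difficulty, so the models agree at the matching point and differ in how quickly the probability moves away from it.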
This paper proposes a structural analysis for generalized linear models when some explanatory variables are measured with error and the measurement error variance is a function of the true variables. The focus is on latent variables investigated on the basis of questionnaires and estimated using item response theory models. Latent variable estimates are then treated as observed measures of the true variables. This leads to a two-stage estimation procedure which constitutes an alternative to a joint model for the outcome variable and the responses given to the questionnaire. Simulation studies explore the effect of ignoring the true error structure and the performance of the proposed method. Two illustrative examples concern achievement data of university students. Particular attention is given to the Rasch model.
The rationale and actual procedures of two nonparametric approaches, called the Bivariate P.D.F. Approach and the Conditional P.D.F. Approach, for estimating the operating characteristic of a discrete item response (i.e., the conditional probability, given the latent trait, that the examinee gives that specific response) are introduced and discussed. These methods are distinguished by two features: (a) estimation is made without assuming any mathematical form, and (b) it is based upon a relatively small sample of several hundred to a few thousand examinees.
Some examples of the results obtained by the Simple Sum Procedure and the Differential Weight Procedure of the Conditional P.D.F. Approach are given, using simulated data. The usefulness of these nonparametric methods is also discussed.
Categorical marginal models (CMMs) are flexible tools for modelling dependent or clustered categorical data, when the dependencies themselves are not of interest. A major limitation of maximum likelihood (ML) estimation of CMMs is that the size of the contingency table increases exponentially with the number of variables, so even for a moderate number of variables, say between 10 and 20, ML estimation can become computationally infeasible. An alternative method, which retains the optimal asymptotic efficiency of ML, is maximum empirical likelihood (MEL) estimation. However, we show that MEL tends to break down for large, sparse contingency tables. As a solution, we propose a new method, which we call maximum augmented empirical likelihood (MAEL) estimation and which involves augmentation of the empirical likelihood support with a number of well-chosen cells. Simulation results show good finite sample performance for very large contingency tables.
Techniques are developed for surrounding each of the points in a multidimensional scaling solution with a region which will contain the population point with some level of confidence. Bayesian credibility regions are also discussed. A general theorem is proven which describes the asymptotic distribution of maximum likelihood estimates subject to identifiability constraints. This theorem is applied to a number of models to display asymptotic variance-covariance matrices for coordinate estimates under different rotational constraints. A technique is described for displaying Bayesian conditional credibility regions for any sample size.
This paper develops a maximum likelihood based method for simultaneously performing multidimensional scaling and cluster analysis on two-way dominance or profile data. This MULTICLUS procedure utilizes mixtures of multivariate conditional normal distributions to estimate a joint space of stimulus coordinates and K vectors, one for each cluster or group, in a T-dimensional space. The conditional mixture, maximum likelihood method is introduced together with an E-M algorithm for parameter estimation. A Monte Carlo analysis is presented to investigate the performance of the algorithm as a number of data, parameter, and error factors are experimentally manipulated. Finally, a consumer psychology application is discussed involving consumer expertise/experience with microcomputers.
This paper presents a new procedure called TREEFAM for estimating ultrametric tree structures from proximity data confounded by differential stimulus familiarity. The objective of the proposed TREEFAM procedure is to quantitatively “filter out” the effects of stimulus unfamiliarity in the estimation of an ultrametric tree. A conditional, alternating maximum likelihood procedure is formulated to simultaneously estimate an ultrametric tree, under the unobserved condition of complete stimulus familiarity, and subject-specific parameters capturing the adjustments due to differential unfamiliarity. We demonstrate the performance of the TREEFAM procedure under a variety of alternative conditions via a modest Monte Carlo experimental study. An empirical application provides evidence that TREEFAM outperforms traditional models that ignore the effects of unfamiliarity in terms of superior tree recovery and overall goodness-of-fit.
Applications of item response theory, which depend upon its parameter invariance property, require that parameter estimates be unbiased. A new method, weighted likelihood estimation (WLE), is derived and proved to be less biased than maximum likelihood estimation (MLE) while having the same asymptotic variance and normal distribution. WLE removes the first-order bias term from MLE. Two Monte Carlo studies compare WLE with MLE and Bayesian modal estimation (BME) of ability in conventional tests and tailored tests, assuming the item parameters are known constants. The Monte Carlo studies favor WLE over MLE and BME on several criteria over a wide range of the ability scale.
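For the Rasch model with known item difficulties, Warm's WLE replaces the MLE score equation with the score plus a correction term J/(2I). A minimal sketch on an illustrative five-item response pattern (the difficulties and responses are hypothetical):

```python
# MLE versus weighted likelihood estimation (WLE) of ability under the Rasch
# model with known item difficulties. The WLE adds J/(2I) to the score
# equation, which removes the first-order bias of the MLE.
import numpy as np
from scipy.optimize import brentq

b = np.array([-1.5, -0.5, 0.0, 0.5, 1.5])   # known item difficulties
x = np.array([1, 1, 1, 0, 1])               # observed response pattern (4 of 5 correct)

def score(theta):
    # MLE estimating equation: sum of (observed - expected) responses
    p = 1.0 / (1.0 + np.exp(-(theta - b)))
    return np.sum(x - p)

def wle_equation(theta):
    # WLE estimating equation: score + J / (2 I), with the Rasch-specific
    # forms I = sum(p*q) and J = sum(p*q*(q - p))
    p = 1.0 / (1.0 + np.exp(-(theta - b)))
    q = 1.0 - p
    info = np.sum(p * q)
    j = np.sum(p * q * (q - p))
    return score(theta) + j / (2.0 * info)

theta_mle = brentq(score, -6.0, 6.0)
theta_wle = brentq(wle_equation, -6.0, 6.0)
```

For this mostly-correct pattern the WLE is pulled toward the center of the scale relative to the MLE, which is the direction of the first-order bias removal described above.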