Monotonically convergent algorithms are described for maximizing six (constrained) functions of vectors $x$, or of matrices $X$ with columns $x_1, \ldots, x_r$. These functions are $h_1(x) = \sum_k (x'A_kx)(x'C_kx)^{-1}$; $H_1(X) = \sum_k \operatorname{tr}\,(X'A_kX)(X'C_kX)^{-1}$; $h_1(X) = \sum_k \sum_l (x_l'A_kx_l)(x_l'C_kx_l)^{-1}$, with $X$ constrained to be columnwise orthonormal; $h_2(x) = \sum_k (x'A_kx)^2(x'C_kx)^{-1}$, subject to $x'x = 1$; $H_2(X) = \sum_k \operatorname{tr}\,(X'A_kX)(X'A_kX)'(X'C_kX)^{-1}$, subject to $X'X = I$; and $h_2(X) = \sum_k \sum_l (x_l'A_kx_l)^2(x_l'C_kx_l)^{-1}$, subject to $X'X = I$. In these functions the matrices $C_k$ are assumed to be positive definite; the matrices $A_k$ may be arbitrary square matrices. The general formulation of the functions and the algorithms allows the algorithms to be applied to a variety of problems that arise in multivariate analysis. Several applications of the general algorithms are given. Specifically, algorithms are given for reciprocal principal components analysis, binormamin rotation, generalized discriminant analysis, variants of generalized principal components analysis, simple structure rotation for one of the latter variants, and set component analysis. For most of these methods the algorithms appear to be new; for the others, the existing algorithms turn out to be special cases of the newly derived general algorithms.
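As a minimal illustration of the kind of problem the first function poses, the sketch below maximizes $h_1(x) = \sum_k (x'A_kx)(x'C_kx)^{-1}$ by gradient ascent with a backtracking (Armijo) line search, so the objective never decreases across iterations. This is not the majorization-based algorithm derived in the paper; the function names (`h1`, `maximize_h1`) and all implementation details are illustrative assumptions. It exploits the fact that $h_1$ is invariant to rescaling of $x$ (numerator and denominator are both quadratic in $x$), so the iterate can be renormalized for numerical stability without changing the objective value.

```python
import numpy as np

def h1(x, As, Cs):
    """h1(x) = sum_k (x'A_k x) / (x'C_k x)."""
    return sum((x @ A @ x) / (x @ C @ x) for A, C in zip(As, Cs))

def grad_h1(x, As, Cs):
    """Gradient of h1; A_k arbitrary square, C_k symmetric positive definite."""
    g = np.zeros_like(x)
    for A, C in zip(As, Cs):
        num, den = x @ A @ x, x @ C @ x
        g += (A + A.T) @ x / den - num * (2.0 * (C @ x)) / den ** 2
    return g

def maximize_h1(As, Cs, x0, max_iter=1000, tol=1e-12):
    """Backtracking gradient ascent: h1 never decreases across iterations.

    NOT the paper's algorithm -- a generic monotone ascent sketch.
    h1 is scale-invariant, so renormalizing the iterate is harmless.
    """
    x = x0 / np.linalg.norm(x0)
    f = h1(x, As, Cs)
    for _ in range(max_iter):
        g = grad_h1(x, As, Cs)
        if g @ g < tol:                       # (near-)stationary point
            break
        step, accepted = 1.0, False
        while step > 1e-16:
            cand = x + step * g
            cand /= np.linalg.norm(cand)      # does not change h1(cand)
            f_cand = h1(cand, As, Cs)
            if f_cand >= f + 1e-4 * step * (g @ g):  # Armijo increase test
                accepted = True
                break
            step *= 0.5
        if not accepted:                      # no ascent step found
            break
        x, f = cand, f_cand
        if f_cand - f <= tol:                 # negligible progress
            break
    return x, f

# Example with hypothetical data: arbitrary square A_k, positive definite C_k.
rng = np.random.default_rng(0)
p, K = 5, 3
As = [rng.standard_normal((p, p)) for _ in range(K)]
Ms = [rng.standard_normal((p, p)) for _ in range(K)]
Cs = [M @ M.T + p * np.eye(p) for M in Ms]    # guarantees positive definiteness
x_star, f_star = maximize_h1(As, Cs, rng.standard_normal(p))
print(f"h1 at solution: {f_star:.6f}")
```

The same scale-invariance argument does not carry over to $h_2$, $H_1$, or $H_2$, which is why those functions carry explicit normalization or orthonormality constraints in the abstract; handling them monotonically is precisely what the paper's general algorithms address.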