
On a class of bivariate distributions built of q-ultraspherical polynomials

Published online by Cambridge University Press:  02 December 2024

Paweł J. Szabłowski*
Affiliation:
Faculty of Mathematics and Information Sciences, Warsaw University of Technology, ul Koszykowa 75, 00-662 Warsaw, mazowieckie, Poland ([email protected]) (corresponding author)

Abstract

Our primary result concerns the positivity of specific kernels constructed from the q-ultraspherical polynomials. In other words, it concerns a two-parameter family of bivariate, compactly supported distributions. Moreover, this family has the property that all its conditional moments are polynomials in the conditioning random variable. The significance of this result is evident for those working on distribution theory, orthogonal polynomials, q-series theory, and the so-called quantum polynomials; hence it may attract a limited circle of interested researchers. That is why we put our results into a broader context. We recall the theory of Hilbert–Schmidt operators and the idea of Lancaster expansions (LEs) of bivariate distributions that are absolutely continuous with respect to the product of their marginal distributions. Applications of LEs can be found in mathematical statistics or in the construction of Markov processes with polynomial conditional moments (the most well-known of these processes is the famous Wiener process).

Type
Research Article
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of The Royal Society of Edinburgh

1. Introduction

As stated in the abstract, our main result concerns the positivity of certain bivariate kernels built of the so-called q-ultraspherical polynomials. The proof is long and difficult, involving certain identities from the so-called q-series theory. That is why we will present our results in a broader context that includes Hilbert–Schmidt operators, kernels built of orthogonal polynomials, Markov processes with polynomial conditional moments, and bivariate distributions that are absolutely continuous with respect to the products of their marginals, together with their applications in mathematical statistics.

We will be dealing mostly with expressions of type

(1.1)\begin{equation} K(x,y)=\sum_{j=0}^{\infty}c_{j}a_{j}(x)b_{j}(y)\text{,} \end{equation}

where $\left\{ a_{j}(x)\right\} $ and $\left\{ b_{j}(y)\right\} $ are the sets of real polynomials orthogonal with respect to some finite real measures respectively $d\alpha(x)$ and $d\beta(y)$. Since the terminology found in the literature is somewhat confusing, we will call such expressions kernels, symmetric when the two measures are the same (and consequently families $\left\{ a_{j}(x)\right\} $ and $\left\{ b_{j}(y)\right\} $ are the same) or non-symmetric when the two measures are different.

The most general application of such kernels is to the so-called Hilbert–Schmidt operators considered in functional analysis. Namely, imagine that we have two real (for the sake of simplicity of argument) Hilbert spaces $L_{2} (d\alpha(x))$ and $L_{2}(d\beta(y))$ of functions that are square integrable with respect to $d\alpha$ and $d\beta$, respectively. Let $\left\{ a_{j}(x)\right\} $ and $\left\{ b_{j}(y)\right\} $ be orthonormal bases of these spaces. Let us take a function h, say, from $L_{2}(d\alpha(x))$. It can be presented in the form $h(x) = \sum _{n\geq0}h_{n}a_{n}(x),$ with $\sum_{n\geq0}h_{n}^{2} \lt \infty$. Then

\begin{equation*} f(y)=\int K(x,y)h\left( x\right) d\alpha(x)=\sum_{n\geq0}c_{n}h_{n} b_{n}(y)\text{.} \end{equation*}

We observe that if $\sum_{n\geq0}c_{n}^{2} \lt \infty$, then $f\left( y\right) \in L_{2}(d\beta(y))$. Moreover, $K(x,y)$ is the kernel defining a Hilbert–Schmidt operator with squared Hilbert–Schmidt norm equal to $\sum_{n\geq0}c_{n}^{2}$.

Now, assume that both families of polynomials $\left\{ a_{j}\right\} $ and $\left\{ b_{j}\right\} $ are orthonormal, i.e., $\int a_{j}^{2}d\alpha = \int b_{j}^{2}d\beta = 1$. Then the condition $\sum_{n\geq0}c_{n}^{2} \lt \infty$ implies $\sum_{n\geq0}c_{n}^{2} a_{n}^{2}\left( x\right) \lt \infty$ almost everywhere mod $d\alpha$ and $\sum_{n\geq0}c_{n}^{2}b_{n}^{2}\left( y\right) \lt \infty$ almost everywhere mod $d\beta$. Consequently, for almost all $x\in\operatorname*{supp}\alpha$, the series $\sum_{j=0}^{\infty}c_{j}a_{j}(x)b_{j}(y)$ is convergent in the mean-square sense with respect to $d\beta$ (as a function of $y$), and similarly with x and y interchanged.

The kernel $K(x,y)$ is positive if it is non-negative for almost all x mod $d\alpha$ and y mod $d\beta$. Positive kernels define stationary Markov processes all of whose conditional moments are polynomials in the conditioning random variable. For how this is done, see, e.g., [Reference Szabłowski27], [Reference Szabłowski29], and [Reference Szabłowski30]. A broader introduction and a list of references treating this subject can be found in the recently published Stochastics article [Reference Szabłowski34]. Let us also remark that, following [Reference Szabłowski35], positive kernels, scaled so as to integrate to 1, constitute the totality of bivariate distributions satisfying condition (1.2) presented below whose conditional moments of, say, order n are polynomials of order not exceeding n in the conditioning random variable.

The main problem is to present kernels in compact, simple forms. The process of doing so is called summing the kernels. It turns out to be a very hard problem; only a few kernels have been summed. A sample of positive, summed kernels will be presented in §5.

We will present several compactly supported, bivariate distributions allowing the so-called Lancaster expansion (LE). In the series of articles [Reference Lancaster11], [Reference Lancaster13], [Reference Lancaster12], and [Reference Lancaster14], Lancaster analysed the family of bivariate distributions that allow a special type of expansion of their Radon–Nikodym derivative with respect to the product of their marginal measures. To be more precise, let us assume that $d\mu(x,y)$ is the distribution in question, that $d\alpha(x)$ and $d\beta(y)$ are its marginal distributions, and that the following condition is met

(1.2)\begin{equation} \int\int\left( \frac{\partial^{2}\mu(x,y)}{\partial\alpha(x)\partial\beta (y)}\right) ^{2}d\alpha(x)d\beta(y) \lt \infty\text{,} \end{equation}

where the integration is over the support of the product measure $d\alpha d\beta$. It turns out that then the following expansion is convergent, at least in mean square with respect to the product measure:

(1.3)\begin{equation} d\mu(x,y)=d\alpha(x)d\beta(y)\sum_{j=0}^{\infty}c_{j}a_{j}(x)b_{j}(y)\text{.} \end{equation}

From the theory of orthogonal series, it follows that if condition (1.2) is satisfied and we deal with the expansion (1.3), convergent in mean square, then we must have

\begin{equation*} \sum_{n\geq0}c_{n}^{2} \lt \infty\text{,} \end{equation*}

that is, $\left\{ c_{n}\right\} _{n\geq0}\in l_{2}$, the space of square summable sequences. When equipped with the norm $\left\Vert \mathbf{c}\right\Vert = \left(\sum_{n\geq0}c_{n}^{2}\right)^{1/2}$, where $\mathbf{c} = \left\{ c_{n}\right\} _{n\geq0}$, $l_{2}$ becomes a Banach space.

In the expansion (1.3), $\left\{ a_{j}\right\} $ and $\left\{ b_{j}\right\} $ are orthonormal sequences with respect to $d\alpha$ and $d\beta$, respectively. We will call expansions of the form (1.3) satisfying condition (1.2) LEs. One has to notice that expressions like (1.3), more precisely of the form

\begin{equation*} \sum_{j=0}^{\infty}c_{j}a_{j}(x)b_{j}(y)\text{,} \end{equation*}

where $\left\{ a_{j}\right\} $ and $\left\{ b_{j}\right\} $ are two families of polynomials, are the kernels discussed above.

Obviously, for probabilists, the most interesting kernels are those that are non-negative for certain ranges of x and y. Kernels that are non-negative on some subsets of $\mathbb{R}^{2}$ will be called Lancaster kernels, briefly LKs. By scaling properly, one can use such a kernel to construct a positive, bivariate measure. In the probabilistic context of this article, ‘LE’ will serve as the most precise and suitable term. Probably, the first LE was the following one:

\begin{gather*} \frac{1}{2\pi}\exp(-(x^{2}+y^{2})/2)\sum_{j\geq0}\frac{\rho^{j}}{j!}H_{j} (x)H_{j}(y)=\\ \exp(-(x^{2}-2\rho xy+y^{2})/(2(1-\rho^{2})))/(2\pi\sqrt{1-\rho^{2}})\text{,} \end{gather*}

which is convergent for all $x,y\in\mathbb{R}$ and $\left\vert \rho\right\vert \lt 1$. Above, $H_{j}$ denotes the jth element of the family of the so-called probabilistic Hermite polynomials, described below in §3 (see [Reference Mercer16]). As mentioned earlier, in §5 we will list all such LEs known to the author.

Why are such expansions important? Lancaster himself, a long time ago, pointed out applications in mathematical statistics, hence we will not repeat these arguments. In this article, we will concentrate on applications in distribution theory by indicating bivariate distributions having a simple structure. As shown in [Reference Szabłowski35], a bivariate distribution that allows an LE and satisfies condition (1.2) has the property that all its conditional moments of degree, say, n are polynomials of degree at most n in the conditioning random variable. More precisely, for all random variables (X, Y) having the bivariate distribution given by (1.3), we have for all $n\geq0:$

\begin{equation*} E(a_{n}(X)|Y=y)=c_{n}b_{n}(y)\text{.} \end{equation*}
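
For a concrete illustration of this property (a sketch added here, not part of the original argument), consider the Gaussian LE displayed above: there $a_{n}=b_{n}=H_{n}/\sqrt{n!}$, $c_{n}=\rho^{n}$, and the conditional law of X given Y = y is $N(\rho y,1-\rho^{2})$, so the property reads $E(H_{n}(X)|Y=y)=\rho^{n}H_{n}(y)$. A minimal Python check of this, using Gauss–Hermite quadrature, could look as follows (the numerical parameters are arbitrary choices):

```python
# Numerical sanity check (illustration only): for the bivariate normal with
# correlation rho, E(H_n(X) | Y = y) = rho^n H_n(y), where H_n are the
# probabilists' Hermite polynomials.
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

rho, y, n = 0.6, 0.8, 5
nodes, weights = hermegauss(80)          # weight exp(-x^2/2), weights sum to sqrt(2*pi)

coef = np.zeros(n + 1)                   # coefficients picking out the single polynomial H_n
coef[-1] = 1.0

# X | Y = y  ~  N(rho*y, 1 - rho^2); integrate H_n against this conditional law
x = rho * y + np.sqrt(1.0 - rho**2) * nodes
lhs = np.dot(weights, hermeval(x, coef)) / np.sqrt(2.0 * np.pi)

rhs = rho**n * hermeval(y, coef)
print(lhs, rhs)                          # the two numbers agree up to quadrature error
```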

We have an immediate observation concerning the coefficients $\left\{ c_{n}\right\} _{n\geq0}$ that appear in the definition of an LK. Observe that if we require the LK to be a probability distribution (i.e., to integrate to 1), then the admissible coefficient sequences $\left\{ c_{n}\right\} _{n\geq0}$ form a convex cone in the space $l_{2}$ of square summable sequences.

Other possible applications are in the theory of Markov processes. Namely, recall that every Markov process $\left\{ X_{t}\right\} _{t\in\mathbb{R}}$ is completely defined by two families of measures. The first one is the family of the so-called marginal distributions, i.e., the family (indexed by the time parameter $t$) of one-dimensional distributions of the random variables $X_{t}$. The other family is the family of conditional distributions of $X_{t+\tau}|X_{\tau}=x$, indexed by $t\geq0$, $\tau$, and $x$. If we confine ourselves to stationary Markov processes, then the marginal distributions are all the same and the conditional distributions are indexed only by $t\geq0$ and $x$. One could utilize a symmetric LE almost immediately by finding positive constants $\gamma_{n}$ such that $\exp(-t\gamma_{n}) = c_{n}$, as was done, e.g., in [Reference Szabłowski34], and define a stationary Markov process.
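
As a minimal illustration of this construction (an added sketch; the decay rates $\gamma_{n}=n$ are a hypothetical choice corresponding to the Mehler/Ornstein–Uhlenbeck case $c_{n}(t)=e^{-tn}=\rho^{n}$ with $\rho=e^{-t}$), the semigroup structure shows up in coefficient form as $c_{n}(t)c_{n}(s)=c_{n}(t+s)$:

```python
# Minimal sketch (illustration only): turning a symmetric LE into Markov
# transition coefficients c_n(t) = exp(-t * gamma_n).  With the hypothetical
# choice gamma_n = n this is the Mehler/Ornstein-Uhlenbeck case c_n(t) = rho^n,
# rho = exp(-t); Chapman-Kolmogorov reduces to c_n(t) * c_n(s) = c_n(t + s).
import numpy as np

gamma = np.arange(20)                  # hypothetical decay rates gamma_n = n
def c(t):
    return np.exp(-t * gamma)          # coefficients of the transition kernel at time t

t, s = 0.4, 1.1
assert np.allclose(c(t) * c(s), c(t + s))
print(c(t)[:5])                        # 1, rho, rho^2, ... with rho = exp(-t)
```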

Many examples of LEs stem from the application of polynomials from the so-called Askey–Wilson (AW) family. These polynomials involve notions of the so-called q-series theory, so in the next section we will present basic notions and facts from this theory. The traditional terminology calls the polynomials appearing within q-series theory quantum polynomials; see, e.g., the excellent monograph [Reference Ismail8] of Ismail.

The article is arranged as follows. Section 2 includes the traditional notation used in q-series theory and some general results used in the subsequent sections. Section 3 contains a list of polynomials, mostly from the so-called AW scheme. It is important to present these polynomials and their relationship to the q-ultraspherical ones, which are the main subject of the article. Section 4 is dedicated to the proof of our main result, i.e., summing and proving the positivity of a certain kernel built of q-ultraspherical polynomials. Section 5 lists simple summed kernels known to the author, both symmetric and non-symmetric. Finally, Section 6 contains the longer proofs, requiring tedious calculations, and other auxiliary results from q-series theory.

2. Notation, definition, and some methods of obtaining LE

q is a parameter. We will assume that $-1 \lt q\leq1$ unless otherwise stated. The case q = 1 may not always be considered directly but, sometimes, as a left-hand limit (i.e., $q\longrightarrow1^{-}$). We will point out these cases.

We will use traditional notations of the q-series theory, i.e.,

\begin{equation*} \left[ 0\right] _{q} = 0,~\left[ n\right] _{q} = 1+q+\ldots+q^{n-1} ,\left[ n\right] _{q}! = \prod_{j=1}^{n}\left[ j\right] _{q},\text{with }\left[ 0\right] _{q}! =1\text{,} \end{equation*}
\begin{equation*} \genfrac{[}{]}{0pt}{}{n}{k} _{q}= \begin{cases} \frac{\left[ n\right] _{q}!}{\left[ n-k\right] _{q}!\left[ k\right] _{q}!}, & \mathrm{if }\ n\geq k\geq0\text{;}\\ 0, & \text{otherwise.} \end{cases} \end{equation*}

$\binom{n}{k}$ will denote the ordinary, well-known binomial coefficient.

It is useful to use the so-called q-Pochhammer symbol for $n\geq1$

\begin{equation*} \left( a|q\right) _{n}=\prod_{j=0}^{n-1}\left( 1-aq^{j}\right) ,~~\left( a_{1},a_{2},\ldots,a_{k}|q\right) _{n} = \prod_{j=1} ^{k}\left( a_{j}|q\right) _{n}\text{,} \end{equation*}

with $\left( a|q\right) _{0} = 1$.

Often $\left( a|q\right) _{n}$ as well as $\left( a_{1},a_{2},\ldots ,a_{k}|q\right) _{n}$ will be abbreviated to $\left( a\right) _{n}$ and $\left( a_{1},a_{2},\ldots,a_{k}\right) _{n}$, if it will not cause misunderstanding.

Remark. In the literature, the ordinary Pochhammer symbol, i.e., $a(a+1)\ldots(a+n-1)$, is also in use. We will denote it by $\left( a\right) ^{(n)}$ and call it the ‘rising factorial’. The literature also uses the so-called ‘falling factorial’, equal to $a(a-1)\ldots(a-n+1)$, which we will denote $\left( a\right) _{\left( n\right) }$. Hence, in this article, $\left( a\right) _{n}$ will always mean $\left( a|q\right) _{n}$ as defined above.

We will also use the following symbol $\left\lfloor n\right\rfloor $ to denote the largest integer not exceeding n.

For further reference, we mention the following four formulae from [Reference Koekoek, Lesky and Swarttouw9] (subsections 1.8–1.14), valid for $\left\vert t\right\vert \lt 1$, $\left\vert q\right\vert \lt 1$ (already proved by Euler, see [Reference Andrews, Askey and Roy2], corollary 10.2.2):

(2.1)\begin{align} \frac{1}{(t)_{\infty}} & = \sum_{k\geq0}\frac{t^{k} }{(q)_{k}}\text{, }\frac{1}{(t)_{n+1}}=\sum_{j\geq0} \genfrac{[}{]}{0pt}{}{n+j}{j} _{q}t^{j}\text{,} \end{align}
(2.2)\begin{align} (t)_{\infty} & = \sum_{k\geq0}(-1)^{k}q^{\binom{k}{2} }\frac{t^{k}}{(q)_{k}}\text{, }\left( t\right) _{n}=\sum_{j=0}^{n} \genfrac{[}{]}{0pt}{}{n}{j} _{q}q^{\binom{j}{2}}(-t)^{j}\text{.} \end{align}

It is easy to see that $\left( q\right) _{n}=\left( 1-q\right) ^{n}\left[ n\right] _{q}!$ and that

\begin{equation*} \genfrac{[}{]}{0pt}{}{n}{k} _{q} = \begin{cases} \frac{\left( q\right) _{n}}{\left( q\right) _{n-k}\left( q\right) _{k} }\text{,} & \mathrm{if }\ n\geq k\geq0\text{;}\\ 0\text{,} & \text{otherwise.} \end{cases} \end{equation*}

The above formula is an example where directly setting q = 1 is meaningless; however, the passage to the limit $q\longrightarrow1^{-}$ makes sense.

Notice that, in particular,

(2.3)\begin{equation} \left[ n\right] _{1} = n\text{,}~\left[ n\right] _{1}! = n!\text{,}~ \genfrac{[}{]}{0pt}{}{n}{k} _{1} = \binom{n}{k},~(a)_{1} = 1-a\text{,}~\left( a|1\right) _{n} = \left( 1-a\right) ^{n} \end{equation}

and

(2.4)\begin{equation} \left[ n\right] _{0} = \begin{cases} 1\text{,} & \mathrm{if }\ n\geq1\text{;}\\ 0\text{,} & \mathrm{if }\ n=0\text{.} \end{cases} \text{,}~\left[ n\right] _{0}! = 1\text{,}~ \genfrac{[}{]}{0pt}{}{n}{k} _{0} = 1\text{,}~\left( a|0\right) _{n} = \begin{cases} 1\text{,} & \mathrm{if }\ n=0\text{;}\\ 1-a\text{,} & \mathrm{if }\ n\geq1\text{.} \end{cases} \end{equation}
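
The following minimal Python sketch (added for illustration; the function names are ours) implements the quantities above and spot-checks the special values (2.3) and (2.4):

```python
# Minimal implementations of [n]_q, [n]_q!, the q-binomial coefficient and the
# q-Pochhammer symbol (a|q)_n, with spot checks of the values in (2.3)-(2.4).
from math import comb, isclose

def q_int(n, q):                 # [n]_q = 1 + q + ... + q^(n-1)
    return sum(q**j for j in range(n))

def q_factorial(n, q):           # [n]_q!
    out = 1.0
    for j in range(1, n + 1):
        out *= q_int(j, q)
    return out

def q_binom(n, k, q):            # the q-binomial coefficient [n over k]_q
    if not (0 <= k <= n):
        return 0.0
    return q_factorial(n, q) / (q_factorial(k, q) * q_factorial(n - k, q))

def q_poch(a, q, n):             # (a|q)_n = prod_{j=0}^{n-1} (1 - a q^j)
    out = 1.0
    for j in range(n):
        out *= 1.0 - a * q**j
    return out

# (2.3): at q = 1 everything reduces to the classical quantities
assert q_int(5, 1) == 5 and q_factorial(4, 1) == 24
assert isclose(q_binom(6, 2, 1), comb(6, 2)) and isclose(q_poch(0.3, 1, 4), (1 - 0.3)**4)
# (2.4): at q = 0
assert q_int(3, 0) == 1 and q_binom(5, 2, 0) == 1 and isclose(q_poch(0.3, 0, 4), 1 - 0.3)
print("q-notation checks passed")
```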

i will denote imaginary unit, unless otherwise stated. Let us define also:

(2.5)\begin{align} \left( ae^{i\theta},ae^{-i\theta}\right) _{\infty} & =\prod_{k=0}^{\infty }v\left( x|aq^{k}\right) \text{,} \end{align}
(2.6)\begin{align} \left( te^{i\left( \theta+\phi\right) },te^{i\left( \theta-\phi\right) },te^{-i\left( \theta-\phi\right) },te^{-i\left( \theta+\phi\right) }\right) _{\infty} & =\prod_{k=0}^{\infty}w\left( x,y|tq^{k}\right) \text{,} \end{align}
(2.7)\begin{align} \left( ae^{2i\theta},ae^{-2i\theta}\right) _{\infty} & =\prod _{k=0}^{\infty}l\left( x|aq^{k}\right) \text{,} \end{align}

where,

(2.8)\begin{align} v(x|a) & = 1-2ax+a^{2}\text{,} \end{align}
(2.9)\begin{align} l(x|a) & =(1+a)^{2}-4x^{2}a\text{,} \end{align}
(2.10)\begin{align} w(x,y|a) & =(1-a^{2})^{2}-4xya(1+a^{2})+4a^{2}(x^{2}+y^{2}) \end{align}

and, as usually in the q-series theory, $x = \cos\theta$ and $y=\cos\phi$.

We will use also the following notation:

\begin{equation*} S(q)\overset{df}{=} \begin{cases} \lbrack-2/\sqrt{1-q},2/\sqrt{1-q}], & \mathrm{if }\ \left\vert q\right\vert \lt 1\text{;}\\ \mathbb{R}, & \mathrm{if }\ q=1\text{.} \end{cases} \end{equation*}

2.1. Method of expansion of the ratio of densities

We will use throughout the article the following way of obtaining infinite expansions of the type

\begin{equation*} \sum_{n\geq0}d_{n}p_{n}(x)\text{,} \end{equation*}

that are convergent almost everywhere on some subset of $\mathbb{R}$. Namely, in view of [Reference Szabłowski22], let us consider two measures on $\mathbb{R}$, both having densities, say f and g. Furthermore, suppose that we know that $\int(f(x)/g(x))^{2}g(x)dx$ is finite. Suppose also that we know two families of orthogonal polynomials $\left\{ a_{n}\right\} $ and $\left\{ b_{n}\right\} $ such that the first one is orthogonal with respect to the measure having density f and the other is orthogonal with respect to the measure having density g. Then we know that $f/g$ can be expanded in an infinite series

(2.11)\begin{equation} \sum_{n\geq0}d_{n}b_{n}(x)\text{,} \end{equation}

that is convergent in $L^{2}(\mathbb{R},g)$. We know, in particular, that $\sum_{n\geq0}\left\vert d_{n}\right\vert ^{2} \lt \infty$. If additionally

\begin{equation*} \sum_{n\geq0}\left\vert d_{n}\right\vert ^{2}\log^{2}(n+1) \lt \infty\text{,} \end{equation*}

then by the Rademacher–Menshov theorem, we deduce that the series in question converges not only in $L^{2}$ but also almost everywhere with respect to the measure with density g.

Thus, we will get the condition $\sum_{n\geq0}\left\vert d_{n}\right\vert ^{2} \lt \infty$ satisfied for free. Moreover, in many cases, we will have $\left\vert d_{n}\right\vert ^{2}\leq r^{n}$ for some r < 1. Hence the condition $\sum_{n\geq0}\left\vert d_{n}\right\vert ^{2}\log^{2}(n+1) \lt \infty$ is also naturally satisfied. If one knows the connection coefficients between the families $\left\{ b_{n}\right\} $ and $\left\{ a_{n}\right\} $, i.e., a set of coefficients $\left\{ c_{k,n}\right\} _{n\geq1,0\leq k\leq n}$ satisfying

\begin{equation*} b_{n} (x)= \sum_{k=0}^{n}c_{k,n}a_{k}(x)\text{,} \end{equation*}

then $d_{n} = c_{0,n}/\int b_{n}^{2}(x)g(x)dx$. We will refer to this type of reasoning as the D(ensity) E(xpansion) I(dea) (*,*) (that is, DEI(*,*)), where the first star points to the formula for the connection coefficients and the second star to the formula for $\int b_{n} ^{2}(x)g(x)dx$.

3. Families of polynomials appearing in the article including those forming part of the AW scheme

All families of polynomials listed in this section are described in many positions of the literature, starting from [Reference Askey and Wilson4], [Reference Andrews, Askey and Roy2], and [Reference Ismail8]. However, as noticed by the author in [Reference Szabłowski24], by changing the parameters to complex conjugates and changing the usual range of all variables from $[-1,1]$ to S(q), we obtain polynomials from the AW scheme suitable for probabilistic applications. Recently, in the review article [Reference Szabłowski33], and a few years earlier in [Reference Szabłowski26], the author described and analysed the polynomials of this scheme with conjugate complex parameters. Thus, we will refer to these two articles for details.

The families of orthogonal polynomials will be identified by their three-term recurrences. Usually, the polynomials appearing in such a three-term recurrence will be monic (i.e., having the coefficient of the highest power of the variable equal to 1). The cases when a given three-term recurrence leads to non-monic polynomials will be clearly pointed out. Together with the three-term recurrence, we will mention the measure, usually having a density, that makes the given family of polynomials orthogonal.

In order not to allow the article to become too long, we will mention only basic properties of the polynomials involved. More properties and relationships between the families of polynomials used here can be found in the already mentioned fundamental positions of the literature such as [Reference Askey and Wilson4], [Reference Andrews, Askey and Roy2], [Reference Ismail8], or [Reference Koekoek, Lesky and Swarttouw9]. One must remark that in these positions of the literature, the polynomials of the AW scheme are presented in their basic versions, where the ranges of all variables are confined to the segment $[-1,1]$. As mentioned before, for probabilistic applications the versions where the variables range over S(q) are more useful. Then it is possible to pass with q to 1 and compare the results with the properties of the Hermite polynomials and the Normal distribution, which are the reference points for all distribution comparisons in probability theory.

We recall these families of polynomials for the sake of completeness of the article.

3.1. Chebyshev polynomials

They are of two types, denoted by $\left\{ T_{n}\right\} $ and $\left\{ U_{n}\right\} $ and called, respectively, the first and second kind, satisfying the same three-term recurrence, for $n\geq1$,

\begin{equation*} 2xU_{n}(x)=U_{n+1}(x)+U_{n-1}(x), \end{equation*}

with different initial conditions: $T_{0}(x)=1=U_{0}(x)$, $T_{1}(x)=x$, and $U_{1}(x)=2x$. Obviously, they are not monic. They are orthogonal, respectively, with respect to the arcsine distribution with density $f_{T}(x) = \frac{1}{\pi\sqrt{1-x^{2}}}$ and to the semicircle (Wigner) distribution with density $f_{U}(x) = \frac{2}{\pi }\sqrt{1-x^{2}}$. Besides, we also have the following orthogonality relationships:

\begin{align*} \int_{-1}^{1}T_{n}(x)T_{m}(x)f_{T}(x)dx & = \begin{cases} 0, & \mathrm{if }\ m\neq n\text{;}\\ 1\text{,} & \mathrm{if }\ m=n=0\text{;}\\ 1/2\text{,} & \mathrm{if }\ m=n \gt 0\text{.} \end{cases} \\ \int_{-1}^{1}U_{n}(x)U_{m}(x)f_{U}(x)dx & = \begin{cases} 0\text{,} & \mathrm{if }\ m\neq n\text{;}\\ 1\text{,} & \mathrm{if }\ m=n\text{.} \end{cases} \end{align*}

More about their properties can be found in [Reference Mason and Handscomb15].

3.2. Hermite polynomials

We will consider here only the so-called probabilistic Hermite polynomials namely the ones satisfying the following three-term recurrence:

\begin{equation*} H_{n+1}(x)=xH_{n}(x)-nH_{n-1}(x), \end{equation*}

with initial conditions $H_{0}(x)=1$, $H_{1}(x) = x$. They are monic and orthogonal with respect to the Normal $N(0,1)$ distribution with the well-known density $f_{N}(x) = \frac{1} {\sqrt{2\pi}}\exp(-x^{2}/2)$. They satisfy the following orthogonal relationship:

(3.1)\begin{equation} \int_{-\infty}^{\infty}H_{n}(x)H_{m}(x)f_{N}(x)dx= \begin{cases} 0\text{,} & \mathrm{if }\ m\neq n\text{;}\\ n!\text{,} & \mathrm{if }\ m=n\text{.} \end{cases} \end{equation}

3.3. q-Hermite polynomials

The following three-term recurrence defines the q-Hermite polynomials, which will be denoted $H_{n}(x|q)$:

(3.2)\begin{equation} xH_{n}\left( x|q\right) =H_{n+1}\left( x|q\right) +\left[ n\right] _{q}H_{n-1}\left( x|q\right) \text{,} \end{equation}

for $n\geq1$ with $H_{-1}\left( x|q\right) = 0$, $H_{0}\left( x|q\right) = 1$. Notice that the polynomials $H_{n}(x|q)$ are monic and also that

\begin{equation*} \lim_{q\rightarrow1^{-}}H_{n}(x|q)=H_{n}(x)\text{.} \end{equation*}
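
A minimal numerical sketch (added for illustration) evaluating $H_{n}(x|q)$ directly from the recurrence (3.2) and confirming that at q = 1 it reproduces the probabilistic Hermite polynomials:

```python
# Evaluate the q-Hermite polynomials H_n(x|q) from the three-term recurrence
# (3.2):  x H_n = H_{n+1} + [n]_q H_{n-1},  with H_{-1} = 0, H_0 = 1,
# and check that q = 1 reproduces the probabilists' Hermite polynomials.
from math import isclose

def q_int(n, q):
    return sum(q**j for j in range(n))

def q_hermite(n, x, q):
    h_prev, h = 0.0, 1.0            # H_{-1}, H_0
    for k in range(n):
        h_prev, h = h, x * h - q_int(k, q) * h_prev
    return h

def hermite(n, x):                  # probabilists' Hermite: H_{n+1} = x H_n - n H_{n-1}
    h_prev, h = 0.0, 1.0
    for k in range(n):
        h_prev, h = h, x * h - k * h_prev
    return h

x = 0.73
assert all(isclose(q_hermite(n, x, 1.0), hermite(n, x)) for n in range(10))
print([round(q_hermite(n, x, 0.5), 6) for n in range(5)])
```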

Let us define the following non-negative function:

(3.3)\begin{equation} f_{h}\left( x|q\right) =\frac{2\left( q\right) _{\infty}\sqrt{1-x^{2}} }{\pi}\prod_{k=1}^{\infty}l\left( x|q^{k}\right) \text{,} \end{equation}

with, as before, $l(x|a) = (1+a)^{2}-4x^{2}a$ and let us define a new density

(3.4)\begin{equation} f_{N}\left( x|q\right) = \begin{cases} \sqrt{1-q}f_{h}(x\sqrt{1-q}/2|q)/2\text{,} & \mathrm{if }\ \left\vert q\right\vert \lt 1\text{;}\\ \exp\left( -x^{2}/2\right) /\sqrt{2\pi}\text{,} & \mathrm{if }\ q=1\text{.} \end{cases} \end{equation}

Notice that $f_{N}$ is non-negative for $x\in S(q)$. It turns out that we have the following orthogonal relationship:

(3.5)\begin{equation} \int_{S(q)}H_{n}(x|q)H_{m}(x|q)f_{N}(x|q)dx= \begin{cases} 0\text{,} & \mathrm{if }\ n\neq m\text{;}\\ \left[ n\right] _{q}!\text{,} & \mathrm{if }\ n=m\text{.} \end{cases} \end{equation}

Notice that if $X\sim f_{N}(x|q)$, then, since $H_{1}(x|q) = x$ and $H_{2}(x|q) = x^{2}-1$, we deduce that $EX=0$ and $EX^{2} = 1$.

3.4. Big q-Hermite polynomials

More on these polynomials can be found in [Reference Szabłowski33], [Reference Szabłowski21], or [Reference Szabłowski26]. Here, we will only mention that these polynomials can be defined, for example, by the relationship:

\begin{equation*} H_{n}(x|a,q) =\sum_{j=0}^{n} \genfrac{[}{]}{0pt}{}{n}{j} _{q}q^{\binom{j}{2}}(-a)^{j}H_{n-j}\left( x|q\right) \text{,} \end{equation*}

where $H_{n}$ denotes the continuous q-Hermite polynomial defined above and $a\in(-1,1)$. They satisfy the following three-term recurrence:

\begin{equation*} xH_{n}(x|a,q)=H_{n+1}(x|a,q)+aq^{n}H_{n}(x|a,q)+[n]_{q}H_{n-1}(x|a,q)\text{,} \end{equation*}

with $H_{-1}(x|a,q) = 0$ and $H_{0}(x|a,q) = 1$. It is known, in particular, that the generating function of the polynomials $\left\{ H_{n}(x|q)\right\} $ for $\left\vert q\right\vert \lt 1$ is given by the formula:

\begin{equation*} \sum_{n=0}^{\infty}\frac{t^{n}}{\left[ n\right] _{q}!}H_{n}\left( x|q\right) =\varphi\left( x|t,q\right) \text{,} \end{equation*}

where

(3.6)\begin{equation} \varphi\left( x|t,q\right) = \frac{1}{\prod _{k=0}^{\infty}\left( 1-(1-q)xtq^{k}+(1-q)t^{2}q^{2k}\right) }\text{.} \end{equation}

Notice that it is convergent for t such that $\left\vert t\sqrt {1-q}\right\vert \lt 1$ and $x\in S(q)$. These polynomials satisfy the following orthogonality relationship:

\begin{equation*} \int_{S\left( q\right) }H_{n}\left( x|a,q\right) H_{m}\left( x|a,q\right) f_{bN}\left( x|a,q\right) dx = \left[ n\right] _{q}!\delta_{m,n}\text{,} \end{equation*}

where

\begin{equation*} f_{bN}(x|a,q)=f_{N}(x|q)\varphi(x|a,q). \end{equation*}

There exists one more interesting relationship between q-Hermite and big q-Hermite polynomials. Namely, following the article of Carlitz [Reference Carlitz6], we get

\begin{equation*} H_{n}(x|a,q)\sum_{j\geq0}\frac{a^{j}}{\left[ j\right] _{q}!}H_{j} (x|q)=\sum_{j\geq0}\frac{a^{j}}{\left[ j\right] _{q}!}H_{j+n}(x|q). \end{equation*}

Note that this expansion nicely complements and generalizes the results of proposition 4.11.

3.5. Continuous q-ultraspherical polynomials

These polynomials were first considered by Rogers in 1894 (see [Reference Rogers18], [Reference Rogers19], [Reference Rogers20]). They were defined for $\left\vert x\right\vert \leq1$ by the three-term recurrence given in, e.g., [Reference Koekoek, Lesky and Swarttouw9](14.10.19). We have the celebrated connection coefficient formula for the Rogers polynomials, see [Reference Ismail8] (13.3.1), that will be of use in the sequel, of course after proper rescaling

(3.7)\begin{equation} C_{n}\left( x|\gamma,q\right) = \sum_{k=0} ^{\left\lfloor n/2\right\rfloor }\frac{\beta^{k}\left( \gamma/\beta\right) _{k}\left( \gamma\right) _{n-k}\left( 1-\beta q^{n-2k}\right) } {(q)_{k}\left( \beta q\right) _{n-k}\left( 1-\beta\right) }C_{n-2k}\left( x|\beta,q\right) \text{.} \end{equation}

We will consider polynomials $\left\{ C_{n}\right\} $, with a different scaling of the variable $x\in S(q)$ and parameter $\beta\in\lbrack-1,1]$. We have

(3.8)\begin{equation} R_{n}(x|\beta,q)=\left[ n\right] _{q}!C_{n}(x\sqrt{1-q}/2|\beta ,q)(1-q)^{n/2}\text{,} \end{equation}

Then, their three-term recurrence becomes

(3.9)\begin{equation} \left( 1-\beta q^{n}\right) xR_{n}\left( x|\beta,q\right) =R_{n+1}\left( x|\beta,q\right) +\left( 1-\beta^{2}q^{n-1}\right) \left[ n\right] _{q}R_{n-1}\left( x|\beta,q\right) \text{.} \end{equation}

with $R_{-1}(x|\beta,q) = 0$, $R_{0}(x|\beta ,q) = 1$. Let us define the following density (following [Reference Koekoek, Lesky and Swarttouw9] (14.10.19) after necessary adjustments)

(3.10)\begin{equation} f_{C}(x|\beta,q) = \frac{(\beta^{2})_{\infty}} {(\beta,\beta q)_{\infty}}f_{h}\left( x|q\right) /\prod_{j=0}^{\infty }l\left( x|\beta q^{j}\right) \text{,} \end{equation}

where fh is given by (3.3). Let us modify it by considering

(3.11)\begin{equation} f_{R}\left( x|\beta,q\right) =\sqrt{1-q}f_{C}(x\sqrt{1-q}/2|\beta ,q)/2\text{.} \end{equation}

Then we have the following orthogonal relationship satisfied by polynomials $\left\{ R_{n}\right\} $.

(3.12)\begin{align} & \int_{S(q)}R_{n}\left( x|\beta,q\right) R_{m}\left( x|\beta,q\right) f_{R}\left( x|\beta,q\right) dx\\ & \quad = \begin{cases} 0\text{,} & \mathrm{if }\ m\neq n\text{;}\\ \frac{\left[ n\right] _{q}!(1-\beta)\left( \beta^{2}\right) _{n}}{\left( 1-\beta q^{n}\right) }\text{,} & \mathrm{if }\ m=n\text{.} \end{cases} \nonumber \end{align}

Polynomials $\left\{ R_{n}\right\} $ are not monic. We can easily notice that the coefficient of $x^{n}$ in $R_{n}$ is $(\beta)_{n}$. Hence, by defining a new sequence of polynomials

(3.13)\begin{equation} V_{n}(x|\beta,q) = R_{n}(x|\beta,q)/(\beta)_{n}\text{,} \end{equation}

we get the sequence of monic versions of the polynomials $\left\{ R_{n}\right\} $.

Our main result concerns summing certain kernels built of polynomials $\left\{ R_{n}\right\} $. Therefore, we need the following lemma that exposes the relationships between polynomials $\left\{ R_{n}\right\} $ and $\left\{ H_{n}\right\} $.

Lemma. (1)

(3.14)\begin{align} R_{n}(x|r,q) & =\sum_{k=0}^{\left\lfloor n/2\right\rfloor }\frac{\left[ n\right] _{q}!}{\left[ k\right] _{q}!\left[ n-2k\right] _{q}!} q^{\binom{k}{2}}(-r)^{k}\left( r\right) _{n-k}H_{n-2k}(x|q)\text{,} \end{align}
(3.15)\begin{align} H_{n}(x|q) & =\sum_{k=0}^{\left\lfloor n/2\right\rfloor }\frac{\left[ n\right] _{q}!(1-rq^{n-2k})}{\left[ k\right] _{q}!\left[ n-2k\right] _{q}!(1-r)(1-rq)_{n-k}}r^{k}R_{n-2k}(x|r,q)\text{,} \end{align}

(3.16)\begin{align} & \int_{S(q)}H_{n}(x)R_{m}(x|\beta,q)f_{N}(x|q)dx=\\ & \begin{cases} 0\text{,} & \mathrm{if }\ n \gt m\ \mathrm{ or }\ n+m\ \mathrm{is\ odd;}\\ q^{\binom{(m-n)/2}{2}}\frac{\left[ m\right] _{q}!(-\beta)^{(m-n)/2}}{\left[ (m-n)/2\right] _{q}!}\left( \beta\right) _{(m+n)/2}\text{,} & \mathrm{otherwise,} \end{cases} \nonumber \end{align}

and

(3.17)\begin{align} &\int_{S(q)}H_{n}(x)R_{m}(x|\beta,q)f_{R}(x|\beta,q)dx \notag\\ & \quad = \begin{cases} 0\text{,} & \mathrm{if }\ m \gt n\ \mathrm{ or }\ \left\vert n-m\right\vert\ \mathrm{is\ odd;}\\ \frac{\beta^{(n-m)/2}\left( \beta^{2}\right) _{m}\left[ n\right] _{q} !}{(1-\beta)\left[ (n-m)/2\right] _{q}!\left( \beta q\right) _{(n+m)/2} }\text{,} & \mathrm{otherwise,} \end{cases} \end{align}

where the density $f_{R}$ is given by (3.11) and, moreover, can be presented in one of the following equivalent forms:

(3.18)\begin{gather} f_{R}(x|\beta,q)=(1-\beta)f_{CN}(x|x,\beta,q)=\\ f_{N}(x|q)\frac{\left( \beta^{2}\right) _{\infty}}{\left( \beta\right) _{\infty}\left( \beta q\right) _{\infty}\prod_{j=0}^{\infty}\left( (1+\beta q^{j})^{2}-(1-q)\beta q^{j}x^{2}\right) }=\nonumber\\ (1-\beta)f_{N}(x|q)\sum_{n\geq0}\frac{\beta^{n}H_{n}^{2}(x|q)}{\left[ n\right] _{q}!}=(1-\beta)f_{N}(x|q)\sum_{n\geq0}\frac{\beta^{n}H_{2n} (x|q)}{\left[ n\right] _{q}!\left( \beta\right) _{n+1}}\text{.}\nonumber \end{gather}

We also have the following expansion of $f_{N}/f_{R}$ in orthogonal series in polynomials $\left\{ R_{n}\right\} :$

(3.19)\begin{equation} f_{N}(x|q)=f_{R}(x|\gamma,q)\sum_{n\geq0}(-\gamma)^{n}q^{\binom{n}{2}} \frac{(\gamma)_{n}(1-\gamma q^{2n})}{\left[ n\right] _{q}!(1-\gamma )(\gamma^{2})_{2n}}R_{2n}(x|\gamma,q)\text{.} \end{equation}

(2) We also have the following linearization formulae

(3.20)\begin{gather} R_{n}(x|r,q)R_{m}(x|r,q)\\ =\sum_{k=0}^{\min(n,m)} \genfrac{[}{]}{0pt}{}{m}{k} _{q} \genfrac{[}{]}{0pt}{}{n}{k} _{q}\left[ k\right] _{q}!\frac{\left( r\right) _{m-k}\left( r\right) _{n-k}\left( r\right) _{k}\left( r^{2}\right) _{n+m-k}\left( 1-rq^{n+m-2k}\right) }{\left( 1-r\right) \left( rq\right) _{n+m-k}\left( r^{2}\right) _{m+n-2k}}\times\nonumber\\ R_{n+m-2k}(x|r,q)\text{,}\nonumber \end{gather}
(3.21)\begin{gather} H_{m}(x|q)R_{n}(x|r,q)= \end{gather}
(3.22)\begin{gather} \sum_{s=0}^{\left\lfloor (n+m)/2\right\rfloor } \genfrac{[}{]}{0pt}{}{n}{s} _{q}\left[ s\right] _{q}!H_{n+m-2s}(x|q)\sum_{k=0}^{s} \genfrac{[}{]}{0pt}{}{m}{s-k} _{q} \genfrac{[}{]}{0pt}{}{n-s}{k} _{q}(-r)^{k}q^{\binom{k}{2}}(r)_{n-k}\text{,}\nonumber\\ H_{m}(x|q)R_{n}(x|r,q)=\\ \sum_{u=0}^{\left\lfloor (n+m)/2\right\rfloor }\frac{\left[ n\right] _{q}!\left[ m\right] _{q}!(1-rq^{n+m-2u})}{\left[ u\right] _{q}!\left[ n+m-2u\right] _{q}!(1-r)}R_{n+m-2u}(x|r,q)\sum_{s=0}^{u} \genfrac{[}{]}{0pt}{}{u}{s} _{q}\frac{r^{u-s}}{\left( rq\right) _{n+m-u-s}}\times\nonumber\\ \sum_{k=0}^{s} \genfrac{[}{]}{0pt}{}{s}{k} _{q} \genfrac{[}{]}{0pt}{}{m+m-2s}{m+k-s} _{q}q^{\binom{k}{2}}(-r)^{k}\left( r\right) _{n-k}\text{.}\nonumber \end{gather}

Proof. (1) (3.15) and (3.14) are adaptations of (3.7) with either β = 0 or γ = 0. When we consider β = 0, one has to be careful and notice that

\begin{equation*} \beta^{n}(\gamma/\beta)_{n}=\prod_{j=0}^{n-1}(\beta-\gamma q^{j} )\rightarrow(-\gamma)^{n}q^{\binom{n}{2}}\text{,} \end{equation*}

where the limit is taken when $\beta\rightarrow0$. When rescaling to S(q), we use formula (3.8). Formulae (3.16) and (3.17) follow almost directly from the expansions (3.14) and (3.15), respectively, and the fact that

\begin{equation*} \int_{S(q)}H_{n}(x|q)H_{m}(x|q)f_{N}(x|q)dx=\left[ n\right] _{q}!\delta _{nm}\text{} \end{equation*}

in the first case and (3.12) in the second. Formula (3.18) is given in [Reference Szabłowski31] (proposition 1(3.3)). To get (3.19), we use DEI (3.14 and 3.1).

(2) Again, (3.20) is an adaptation of the well-known formula derived by Rogers himself at the end of the nineteenth century concerning the polynomials $C_{n}$, related to the polynomials $R_{n}$ by formula (3.8). Formula (3.21) appeared in the version for the polynomials h and C in [Reference Szabłowski33] (8.3), but in fact it was proved by Al-Salam and Ismail in [Reference Al-Salam and Ismail1]. Formula (3.22) is obtained directly by inserting (3.15) into (3.21).

3.6. Al-Salam–Chihara polynomials

Al-Salam–Chihara (ASC) polynomials were first defined for $\left\vert x\right\vert \leq1$ with q and two other parameters a and b from the segment $[-1,1]$. We will consider a and b being complex conjugate. Let us define the new parameters ρ and y in the following way: $ab = \rho^{2}$ and $a+b = \rho y\sqrt{1-q}$ with $y\in S\left( q\right) $. Then, the ASC polynomials with these new parameters will be denoted as $P_{n} (x|y,\rho,q)$. The polynomials $\left\{ P_{n}\right\} $, as demonstrated in [Reference Szabłowski24], can be interpreted in a probabilistic manner as conditional expectations. The denotation $P_{n}(x|y,\rho,q)$ reflects this conditional interpretation. It is known (see [Reference Ismail8], [Reference Szabłowski26], or [Reference Szabłowski24]) that they satisfy the following three-term recurrence:

(3.23)\begin{equation} P_{n+1}(x|y,\rho,q)=(x-\rho yq^{n})P_{n}(x|y,\rho,q)-(1-\rho^{2} q^{n-1})[n]_{q}P_{n-1}(x|y,\rho,q)\text{,} \end{equation}

with $P_{-1}(x|y,\rho,q) = 0$ and $P_{0}(x|y,\rho ,q) = 1$.

These polynomials are orthogonal with respect to the measure with the following density:

(3.24)\begin{equation} f_{CN}\left( x|y,\rho,q\right) =f_{N}(x|q)\frac{(\rho^{2})_{\infty} }{W(x,y|\rho,q)}\text{,} \end{equation}

where

(3.25)\begin{equation} W(x,y|\rho,q) = \prod_{k=0}^{\infty}\hat{w}_{q}\left( x,y|\rho q^{k},q\right) \text{,} \end{equation}

and

\begin{align*} \hat{w}_{q}(x,y|\rho,q) & = (1-\rho^{2})^{2} - (1-q)\rho xy(1+\rho^{2}) + \rho ^{2}(1-q)(x^{2}+y^{2}) \\ & = w(x\sqrt{1-q}/2,y\sqrt{1-q}/2|\rho)\text{,} \end{align*}

where w is given by (2.10). Ismail showed that

(3.26)\begin{equation} f_{CN}(x|y,\rho,q)\rightarrow\exp\left( -\frac{(x-\rho y)^{2}}{2(1-\rho^{2} )}\right) /\sqrt{2\pi(1-\rho^{2})}\text{,} \end{equation}

as $q\rightarrow1^{-}$. That is, $f_{CN}$ tends, as $q\rightarrow1^-$, to the density of the normal $N(\rho y,1-\rho^{2})$ distribution. That is why $f_{CN}$ is called the conditional q-Normal density.

The orthogonal relation for these polynomials is the following:

(3.27)\begin{align} \int_{S\left( q\right) }P_{n}(x|y,\rho,q)P_{m}\left( x|y,\rho,q\right) f_{CN}\left( x|y,\rho,q\right) dx= \begin{cases} 0\text{,} & \mathrm{if }\ m\neq n\text{;}\\ \left[ n\right] _{q}!\left( \rho^{2}\right) _{n}\text{,} & \mathrm{if }\ m=n\text{.} \end{cases} \end{align}

Another fascinating property of the distribution $f_{CN}$ is the following Chapman–Kolmogorov property:

(3.28)\begin{equation} \int_{S\left( q\right) }f_{CN}\left( z|y,\rho_{1},q\right) f_{CN}\left( y|x,\rho_{2},q\right) dy=f_{CN}\left( z|x,\rho_{1}\rho_{2},q\right) \text{.} \end{equation}
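
The following Python sketch (an added illustration; truncation levels and sample points are arbitrary choices of ours) implements $f_{N}$ and $f_{CN}$ from (3.3)–(3.4) and (3.24)–(3.25) with truncated infinite products and spot-checks numerically that $f_{CN}(\cdot|y,\rho,q)$ integrates to 1 over S(q) and that the Chapman–Kolmogorov property (3.28) holds at a sample point:

```python
# Numerical spot check (illustration only) of the conditional q-Normal density
# f_CN: it integrates to 1 over S(q) and satisfies the Chapman-Kolmogorov
# property (3.28).  Infinite products are truncated at K terms (q^K is tiny).
import numpy as np

q, K = 0.5, 200

def q_poch_inf(a, q):                            # (a|q)_infinity, truncated at K terms
    return float(np.prod(1.0 - a * q**np.arange(K)))

def f_N(x, q):                                   # q-Normal density, (3.3)-(3.4)
    u = x * np.sqrt(1.0 - q) / 2.0
    if abs(u) >= 1.0:
        return 0.0
    prod = np.prod([(1.0 + q**k)**2 - 4.0 * u**2 * q**k for k in range(1, K)])
    return np.sqrt(1.0 - q) * q_poch_inf(q, q) * np.sqrt(1.0 - u**2) / np.pi * prod

def f_CN(x, y, rho, q):                          # f_CN(x|y, rho, q), (3.24)-(3.25)
    W = np.prod([(1.0 - (rho * q**k)**2)**2
                 - (1.0 - q) * rho * q**k * x * y * (1.0 + (rho * q**k)**2)
                 + (rho * q**k)**2 * (1.0 - q) * (x**2 + y**2) for k in range(K)])
    return f_N(x, q) * q_poch_inf(rho**2, q) / W

def integrate(fun, q, npts=4001):                # simple trapezoid rule over S(q)
    edge = 2.0 / np.sqrt(1.0 - q)
    xs = np.linspace(-edge, edge, npts)
    vals = np.array([fun(x) for x in xs])
    return float(np.sum((vals[1:] + vals[:-1]) / 2.0) * (xs[1] - xs[0]))

# f_CN(.|y, rho, q) should integrate to 1 over S(q)
print(integrate(lambda x: f_CN(x, 0.4, 0.3, q), q))

# Chapman-Kolmogorov property (3.28) at a sample point (z, x) = (0.7, -0.2)
z, x0, r1, r2 = 0.7, -0.2, 0.5, 0.4
lhs = integrate(lambda y: f_CN(z, y, r1, q) * f_CN(y, x0, r2, q), q)
print(lhs, f_CN(z, x0, r1 * r2, q))              # the two numbers should agree
```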

As shown in [Reference Bryc, Matysiak and Szabłowski5], the relationships between the two families of polynomials $\left\{ H_{n}\right\} $ and $\left\{ P_{n}\right\} $ are the following:

(3.29)\begin{align} P_{n}\left( x|y,\rho,q\right) & =\sum_{j=0}^{n} \genfrac{[}{]}{0pt}{}{n}{j} _{q}\rho^{n-j}B_{n-j}\left( y|q\right) H_{j}\left( x|q\right) \text{,} \end{align}
(3.30)\begin{align} H_{n}\left( x|q\right) & =\sum_{j=0}^{n} \genfrac{[}{]}{0pt}{}{n}{j} _{q}\rho^{n-j}H_{n-j}\left( y|q\right) P_{j}\left( x|y,\rho,q\right) \text{,} \end{align}

where polynomials $\left\{ B_{n}\right\} $ satisfy the following three-term recurrence

\begin{equation*} B_{n+1}(x|q)=-xq^{n}B_{n}(x|q)+q^{n-1}\left[ n\right] _{q}B_{n-1} (x|q)\text{,} \end{equation*}

with $B_{-1}(x|q) = 0$, $B_{0}(x|q) = 1.$

It has been noticed in [Reference Szabłowski26] or [Reference Szabłowski21] that the following particular cases are true.

Proposition 3.2. For $n\geq0$, we have

  1. (i) $R_{n}\left( x|0,q\right) = H_{n}\left( x|q\right) $,

  2. (ii) $R_{n}\left( x|q,q\right) = \frac{\left( q\right) _{n}}{(1-q)^{n/2}}U_{n}\left( x\sqrt{1-q}/2\right) $,

  3. (iii) $\lim_{\beta- \gt 1^{-}}\frac{R_{n}\left( x|\beta,q\right) }{\left( \beta\right) _{n}} = 2\frac{T_{n}\left( x\sqrt {1-q}/2\right) }{(1-q)^{n/2}}$,

  4. (iv) $R_{n}(x|\beta,1) = \left( 1-\beta^{2}\right) ^{n/2}H_{n}\left( \sqrt{\frac{1-\beta}{1+\beta}}x\right) $,

  5. (v) $R_{n}(x|\beta,0) = (1-\beta)U_{n}(x/2)-\beta (1-\beta)U_{n-2}(x/2)$.

  6. (vi) $R_{n}(x|\beta,q)=P_{n}(x|x,\beta,q)$, where $\left\{ P_{n}\right\} $ is defined by its three-term recurrence (3.23).
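
The items of proposition 3.2 can be spot-checked numerically using nothing but the three-term recurrences quoted above; the following Python sketch (added for illustration, with arbitrary sample values) does so for items (i), (v), and (vi):

```python
# Numerical spot checks (illustration only) of items (i), (v) and (vi) of
# proposition 3.2, using only the three-term recurrences (3.2), (3.9), (3.23).
from math import isclose

def q_int(n, q):
    return sum(q**j for j in range(n))

def q_hermite(n, x, q):                       # q-Hermite, recurrence (3.2)
    p, c = 0.0, 1.0
    for k in range(n):
        p, c = c, x * c - q_int(k, q) * p
    return c

def R(n, x, beta, q):                         # q-ultraspherical, recurrence (3.9)
    p, c = 0.0, 1.0
    for k in range(n):
        b = (1 - beta**2 * q**(k - 1)) * q_int(k, q) if k >= 1 else 0.0
        p, c = c, (1 - beta * q**k) * x * c - b * p
    return c

def P(n, x, y, rho, q):                       # Al-Salam-Chihara, recurrence (3.23)
    p, c = 0.0, 1.0
    for k in range(n):
        b = (1 - rho**2 * q**(k - 1)) * q_int(k, q) if k >= 1 else 0.0
        p, c = c, (x - rho * y * q**k) * c - b * p
    return c

def cheb_U(n, u):                             # Chebyshev U, taken as zero for negative index
    if n < 0:
        return 0.0
    p, c = 0.0, 1.0
    for _ in range(n):
        p, c = c, 2 * u * c - p
    return c

x, beta, q = 0.9, 0.4, 0.6
for n in range(1, 9):
    assert isclose(R(n, x, 0.0, q), q_hermite(n, x, q))                       # item (i)
    assert isclose(R(n, x, beta, q), P(n, x, x, beta, q))                     # item (vi)
    assert isclose(R(n, x, beta, 0.0),
                   (1 - beta) * cheb_U(n, x / 2)
                   - beta * (1 - beta) * cheb_U(n - 2, x / 2))                # item (v)
print("proposition 3.2 spot checks passed")
```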

Having done all those preparations, we are ready to present our main result.

4. Positive, summable kernel built of q-ultraspherical polynomials

One can find in the literature attempts to sum the kernels for q-ultraspherical polynomials, e.g., [Reference Gasper and Rahman7] or [Reference Rahman and Tariq17]. However, the kernel we will present has a simple sum and depends on two parameters. More precisely, it depends on a symmetric function of two parameters, not only on one, as is usually the case.

Let us denote by $w_{n}(m,r_{1},r_{2},q)$ the following symmetric polynomial of degree 2n and by $\phi_{n}(r_{1},r_{2},q)$ the following rational, symmetric function, both in $r_{1}$ and $r_{2}$, that will often appear in the sequel:

(4.1)\begin{align} w_{n}(m,r_{1},r_{2},q) & =\sum_{s=0}^{n} \genfrac{[}{]}{0pt}{}{n}{s} _{q}r_{1}^{s}\left( q^{m}r_{2}^{2}\right) _{s}r_{2}^{n-s}\left( q^{m} r_{1}^{2}\right) _{n-s}\text{,} \end{align}
(4.2)\begin{align} \phi_{n}(r_{1},r_{2},q) & =\frac{w_{n}(0,r_{1},r_{2},q)}{\left( r_{1} ^{2}r_{2}^{2}\right) _{n}}\text{.} \end{align}

Let us notice immediately that

\begin{equation*} w_{n}(m,r_{1},r_{2},q)=q^{-nm/2}w_{n}(0,r_{1}q^{m/2},r_{2}q^{m/2},q)\text{.} \end{equation*}
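
The scaling identity above follows directly from the definition (4.1); the following short Python sketch (added for illustration, with arbitrary parameter values) verifies it numerically:

```python
# Spot check (illustration only) of the identity
#   w_n(m, r1, r2, q) = q^(-n*m/2) * w_n(0, r1*q^(m/2), r2*q^(m/2), q),
# computed directly from the definition (4.1).
from math import isclose

def q_int(n, q):
    return sum(q**j for j in range(n))

def q_fact(n, q):
    out = 1.0
    for j in range(1, n + 1):
        out *= q_int(j, q)
    return out

def q_binom(n, k, q):
    return q_fact(n, q) / (q_fact(k, q) * q_fact(n - k, q)) if 0 <= k <= n else 0.0

def q_poch(a, q, n):
    out = 1.0
    for j in range(n):
        out *= 1.0 - a * q**j
    return out

def w(n, m, r1, r2, q):                       # definition (4.1)
    return sum(q_binom(n, s, q) * r1**s * q_poch(q**m * r2**2, q, s)
               * r2**(n - s) * q_poch(q**m * r1**2, q, n - s) for s in range(n + 1))

r1, r2, q = 0.35, -0.6, 0.7
for n in range(7):
    for m in range(4):
        lhs = w(n, m, r1, r2, q)
        rhs = q**(-n * m / 2) * w(n, 0, r1 * q**(m / 2), r2 * q**(m / 2), q)
        assert isclose(lhs, rhs, rel_tol=1e-9, abs_tol=1e-12)
print("scaling identity for w_n verified numerically")
```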

Theorem 4.1 The following symmetric bivariate kernel is non-negative on $S(q)\times S(q)$ and for $\left\vert r_{1}\right\vert ,\left\vert r_{2}\right\vert \lt 1:$

\begin{equation*} \sum_{n\geq0}\phi_{n}(r_{1},r_{2},q)\frac{(1-r_{1}r_{2}q^{n})}{\left[ n\right] _{q}!(r_{1}^{2}r_{2}^{2})_{n}(1-r_{1}r_{2})}R_{n}(x|r_{1} r_{2},q)R_{n}(y|r_{1}r_{2},q)\text{,} \end{equation*}

where $\left\{ R_{n}(x|\beta,q)\right\} _{n\geq0}$ are the q-ultraspherical polynomials defined by the three-term recurrence (3.9). The functions $\left\{\phi_n\right\}$ are given by (4.2).

Moreover, we have

(4.3)\begin{gather} f_{R}(x|r_{1}r_{2},q)f_{R}(y|r_{1}r_{2},q)\sum_{n\geq0}\frac{\phi_{n} (r_{1},r_{2},q)(1-r_{1}r_{2}q^{n})}{\left[ n\right] _{q}!(r_{1}^{2}r_{2} ^{2})_{n}(1-r_{1}r_{2})}R_{n}(x|r_{1}r_{2},q)R_{n}(y|r_{1}r_{2},q)\\ =(1-r_{1}r_{2})f_{CN}\left( y|x,r_{1},q\right) f_{CN}\left( x|y,r_{2} ,q\right) \text{}\nonumber \end{gather}

where $f_{CN}\left( x|y,r_{2},q\right) $ denotes the so-called conditional q-Normal distribution, defined by (3.24).

Denoting by $\hat{R}_{n}(x|r_{1}r_{2},q) = R_{n} (x|r_{1}r_{2},q)\sqrt{1-r_{1}r_{2}q^{n}}/\sqrt{\left[ n\right] _{q}!\left( r_{1}^{2}r_{2}^{2}\right) _{n}(1-r_{1}r_{2})}$ the orthonormal version of the polynomials $R_{n}$, we get a more friendly version of our result:

(4.4)\begin{align} & (1-r_{1}r_{2})f_{CN}\left( y|x,r_{1},q\right) f_{CN}\left( x|y,r_{2},q\right)\\ & \quad =f_{R}(x|r_{1}r_{2},q)f_{R}(y|r_{1}r_{2},q)\sum_{n\geq0}\phi_{n} (r_{1},r_{2},q)\hat{R}_{n}(x|r_{1}r_{2},q)\hat{R}_{n}(y|r_{1}r_{2} ,q)\text{.}\nonumber \end{align}
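
The following Python sketch (an added numerical illustration, not part of the proof; truncation levels and sample points are our arbitrary choices) evaluates both sides of (4.3) at sample points of $S(q)\times S(q)$, computing $f_{R}$ via (3.18) as $(1-\beta)f_{CN}(x|x,\beta,q)$ and $R_{n}$ via the recurrence (3.9); if everything is implemented faithfully, the two printed columns should agree:

```python
# Numerical illustration (a sketch, not part of the proof): evaluate both sides
# of (4.3) at a few points of S(q) x S(q).  f_R is obtained from (3.18) as
# (1 - beta) * f_CN(x|x, beta, q); all infinite products/series are truncated.
import numpy as np

q, K, N = 0.5, 200, 80      # K: product truncation, N: series truncation (our choices)

def q_int(n, q):
    return sum(q**j for j in range(n))

def q_fact(n, q):
    out = 1.0
    for j in range(1, n + 1):
        out *= q_int(j, q)
    return out

def q_binom(n, k, q):
    return q_fact(n, q) / (q_fact(k, q) * q_fact(n - k, q)) if 0 <= k <= n else 0.0

def q_poch(a, q, n):
    out = 1.0
    for j in range(n):
        out *= 1.0 - a * q**j
    return out

def q_poch_inf(a, q):
    return float(np.prod(1.0 - a * q**np.arange(K)))

def f_N(x, q):                   # q-Normal density, (3.3)-(3.4)
    u = x * np.sqrt(1.0 - q) / 2.0
    prod = np.prod([(1.0 + q**k)**2 - 4.0 * u**2 * q**k for k in range(1, K)])
    return np.sqrt(1.0 - q) * q_poch_inf(q, q) * np.sqrt(1.0 - u**2) / np.pi * prod

def f_CN(x, y, rho, q):          # conditional q-Normal density f_CN(x|y,rho,q), (3.24)-(3.25)
    W = np.prod([(1.0 - (rho * q**k)**2)**2
                 - (1.0 - q) * rho * q**k * x * y * (1.0 + (rho * q**k)**2)
                 + (rho * q**k)**2 * (1.0 - q) * (x**2 + y**2) for k in range(K)])
    return f_N(x, q) * q_poch_inf(rho**2, q) / W

def f_R(x, beta, q):             # density of the polynomials R_n, via (3.18)
    return (1.0 - beta) * f_CN(x, x, beta, q)

def R(n, x, beta, q):            # q-ultraspherical polynomials, recurrence (3.9)
    p, c = 0.0, 1.0
    for k in range(n):
        b = (1.0 - beta**2 * q**(k - 1)) * q_int(k, q) if k >= 1 else 0.0
        p, c = c, (1.0 - beta * q**k) * x * c - b * p
    return c

def phi(n, r1, r2, q):           # the functions phi_n of (4.1)-(4.2)
    w = sum(q_binom(n, s, q) * r1**s * q_poch(r2**2, q, s)
            * r2**(n - s) * q_poch(r1**2, q, n - s) for s in range(n + 1))
    return w / q_poch(r1**2 * r2**2, q, n)

r1, r2 = 0.45, 0.3
beta = r1 * r2
for x, y in [(0.3, -0.8), (1.1, 0.6)]:
    lhs = f_R(x, beta, q) * f_R(y, beta, q) * sum(
        phi(n, r1, r2, q) * (1.0 - beta * q**n)
        / (q_fact(n, q) * q_poch(beta**2, q, n) * (1.0 - beta))
        * R(n, x, beta, q) * R(n, y, beta, q) for n in range(N))
    rhs = (1.0 - beta) * f_CN(y, x, r1, q) * f_CN(x, y, r2, q)
    print(lhs, rhs)              # should agree up to truncation error
```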

Remark. One of the referees of this article raised the question of convergence in (4.3). Theorem 4.8 states that the convergence is almost uniform on $S(q)\times S(q)$. The main difficulty in proving theorem 4.1 lies not in convergence problems but in the transformation of (4.8) into (4.3). It is done by a series of operations such as changing the order of summation, introducing new variables, and using non-trivial identities; in other words, very tedious, hard algebra.

Remark. Recall that the polynomials $R_{n}$ are closely connected with the classical q-ultraspherical polynomials by formula (3.8). So far, there have been two successful summations of bivariate kernels built of q-ultraspherical polynomials. In [Reference Gasper and Rahman7], and later generalized in [Reference Rahman and Tariq17] (formulae 1.7, 1.8), the sum has the form (adapted to our case): $\sum_{n\geq 0}h_{n}C_{n}(x|r_{1}r_{2},q)C_{n}(y|r_{1}r_{2},q)t^{n}$ with a normalizing sequence $\left\{ h_{n}\right\} $. But by no means can one find t such that $h_{n}t^{n} = \phi_{n}(r_{1},r_{2},q)$. The other, independent summation of a bivariate kernel related to q-ultraspherical polynomials was done in [Reference Koelink and Van der Jeugt10] (theorem 3.3). Again, to adapt it to the situation considered in theorem 4.1, we should take $c = d = c^{\prime} = d^{\prime} = 0$ and $a = r_{1} r_{2}e^{i\theta}$, $b = r_{1}r_{2}e^{-i\theta}$, $a^{\prime} = r_{1}r_{2}e^{i\varphi}$, and $b^{\prime } = r_{1}r_{2}e^{-i\varphi}$ with $x = \cos\theta$ and $y = \cos\varphi$. This is so because we have assertion (vi) of proposition 3.2. But then, since in this case the sequence $\left\{ H_{n}\right\} $ reduces to $1/{\left(q,r_1r_2\right)}_n$, we cannot find t such that $t^{n}/\left( q,r_{1}r_{2}\right) _{n} = \phi_{n} (r_{1},r_{2},q)$ for all n.

Hence, we deduce that our result lies outside the known results and is completely new.

Remark. Now we can apply our results to complement the results of [Reference Szabłowski31]. Let us recall that in that article the following three-dimensional distribution

\begin{equation*} f_{3D}(x,y,z|\rho_{12},\rho_{13},\rho_{23},q)=(1-r)f_{CN}(x|y,\rho _{12},q)f_{CN}(y|z,\rho_{23},q)f_{CN}(z|x,\rho_{13},q), \end{equation*}

where we denoted $r = \rho_{12}\rho_{23}\rho_{13}$, has the property that all its conditional moments are polynomials in the conditioning random variable(s). Hence, it is a compactly supported generalization of the three-dimensional Normal distribution. Let us recall also that the one-dimensional marginals are all the same and equal to $f_{R}(.|r,q)$. Moreover, the two-dimensional marginal distribution of (Y, Z) is equal to $(1-r)f_{CN}(y|z,\rho_{23},q)f_{CN}(z|y,\rho_{12}\rho_{13},q)$, and similarly for the other two two-dimensional marginals. Consequently, we can now expand it in the following way:

(4.5)\begin{align} & (1-r)f_{CN}(y|z,\rho_{23},q)f_{CN}(z|y,\rho_{12}\rho_{13},q)\\ & =f_{R}(z|r,q)f_{R}(y|r,q)\sum_{n\geq0}\phi_{n}(\rho_{23},\rho_{13}\rho _{12},q)\hat{R}_{n}(z|r,q)\hat{R}_{n}(y|r,q).\nonumber \end{align}

As a result, we can simplify the formula for the conditional moment $E(.|Z)$. Namely, we have

(4.6)\begin{equation} E(\hat{R}_{n}(Y|r,q)|Z) = \phi_{n}(\rho_{23},\rho_{13} \rho_{12},q)\hat{R}_{n}(Z|r,q), \end{equation}

as it follows from our main result.

As stated above, all marginal distributions are the same and have the density $f_{R}$. Further, recall that all conditional moments are polynomials in the conditioning random variable of order not exceeding the order of the moment. Hence, we could deduce from the main result of [Reference Szabłowski35] that there should exist an LE of the joint two-dimensional marginal involving the polynomials $R_{n}$ as the ones orthogonal with respect to the one-dimensional marginal distribution. So (4.5) provides this missing LE of the two-dimensional distribution.

Remark. As another application of our result, one could think of constructing a stationary Markov process with marginals $f_{R}$ and transition density given by (4.5). It would be the first application of q-ultraspherical polynomials in the theory of stochastic processes.

Remark. Finally, we have the following two theoretical results, related to one another. Namely, the following expansion is true:

\begin{align*} & f_{R}(x|r_{1}r_{2},q)f_{R}(y|r_{1}r_{2},q)\sum_{n\geq0}\phi_{n}(r_{1} ,r_{2},q)\hat{R}_{n}(x|r_{1}r_{2},q)\hat{R}_{n}(y|r_{1}r_{2},q)\\ & =(1-r_{1}r_{2})f_{N}(x|q)f_{N}(y|q)\sum_{n,m\geq0}\frac{r_{1}^{n}r_{2}^{m} }{\left[ n\right] _{q}!\left[ m\right] _{q}!}H_{n}(x|q)H_{m} (x|q)H_{n}(y|q)H_{m}(y|q), \end{align*}

where $H_{n}$ are the q-Hermite polynomials defined by the three-term recurrence (3.2) and $f_{N}$ is the q-Normal density defined by (3.4). It follows from (4.7), after cancelling out $f_N(y\vert q)$. The convergence, as follows from the Rademacher–Menshov theorem, holds for almost all x and y from S(q), provided of course that $\left\vert r_{1}\right\vert ,\left\vert r_{2}\right\vert \lt 1$.

Further, cancelling out $f_{N}(x|q)f_{N}(y|q)$ on both sides and using (3.4), (4.7), expansion (4.16) with m = 0, and (4.4), we get the identity below. As far as the convergence is concerned, we apply the well-known Rademacher–Menshov theorem and use the fact that the product measure $f_{N}(x|q)\times f_{N}(y|q)$ is absolutely continuous with respect to the Lebesgue measure on $S(q)\times S(q)$:

\begin{align*} &(1-r_{1}r_{2})\left( \sum_{j\geq0}\frac{r_{1}^{j}}{\left[ j\right] _{q} !}H_{j}(x|q)H_{j}(y|q)\right) \left( \sum_{j\geq0}\frac{r_{2}^{j}}{\left[ j\right] _{q}!}H_{j}(x|q)H_{j}(y|q)\right)\\ \quad &\quad =\left( \sum_{k\geq0} \frac{(r_{1}r_{2})^{k}}{\left[ k\right] _{q}!(r_{1}^{2}r_{2}^{2})_{k}} H_{2k}(x|q)\right) \times\left( \sum_{k\geq0}\frac{(r_{1}r_{2})^{k}}{\left[ k\right] _{q}!(r_{1}^{2}r_{2}^{2})_{k}}H_{2k}(y|q)\right) \\ &\quad\quad \times \sum_{n\geq0}\frac{\phi _{n}(r_{1},r_{2},q)(1-r_{1}r_{2}q^{n})}{\left[ n\right] _{q}!(r_{1}^{2} r_{2}^{2})_{n}(1-r_{1}r_{2})}R_{n}(x|r_{1}r_{2},q)R_{n}(y|r_{1}r_{2},q), \end{align*}

for all $x,y\in S(q)$, $\left\vert r_{1}\right\vert ,\left\vert r_{2} \right\vert ,\left\vert q\right\vert \lt 1$. The convergence is almost everywhere on $S\left( q\right) \times S\left( q\right) $ with respect to the product Lebesgue measure.

Corollary. (i) Setting $r_{1} = \rho$ and $r_{2} = 0$, we have the following expansion which is true for all $x,y\in S(q),\left\vert \rho\right\vert \lt 1$, $-1 \lt q\leq1$.

(4.7)\begin{equation} f_{N}(x|q)f_{N}(y|q)\sum_{n\geq0}\frac{\rho^{n}}{\left[ n\right] _{q}!} H_{n}(x|q)H_{n}(y|q)=f_{N}(y|q)f_{CN}(x|y,\rho,q). \end{equation}

In other words, the well-known Poisson–Mehler expansion formula is a particular case of (4.3).

(ii) Let us set $r_{1} = r = -r_{2}$. We have then

\begin{equation*} \phi_{n}(r,-r,q)=\left\{ \begin{array} [c]{ccc} 0 & \textrm{if} & n\ \mathrm{is}\ \mathrm{odd}\\ r^{2k}\left( q|q^{2}\right) _{k}/\left( r^{4}q|q^{2}\right) _{k} & \textrm{if} & n=2k \end{array} \right. , \end{equation*}

and consequently we get

\begin{gather*} (1+r^{2})f_{CN}\left( y|x,r,q\right) f_{CN}\left( x|y,-r,q\right) =\\ f_{R}(x|-r^{2},q)f_{R}(y|-r^{2},q)\sum_{k\geq0}r^{2k}\left( q|q^{2}\right) _{k}/\left( r^{4}q|q^{2}\right) _{k}\hat{R}_{2k}(x|-r^{2},q)\hat{R} _{2k}(y|-r^{2},q). \end{gather*}

(iii) Taking q = 1, we get:

\begin{gather*} \frac{1-r_{1}r_{2}}{2\pi(1+r_{1}r_{2})}\exp\left(-\frac{(1-r_{1}r_{2})x^{2} }{2(1+r_{1}r_{2})}-\frac{(1-r_{1}r_{2})y^{2}}{2(1+r_{1}r_{2})}\right)\\\times \sum_{n\geq0}\frac{1}{n!}\left( \frac{r_{1}+r_{2}}{1+r_{1}r_{2}}\right) ^{n}H_{n}\left( x\sqrt{\frac{1-r_{1}r_{2}}{1+r_{1}r_{2}}}\right) H_{n}\left( y\sqrt{\frac{1-r_{1}r_{2}}{1+r_{1}r_{2}}}\right) \\ =(1-r_{1}r_{2})\exp\left( -\frac{\left( y-r_{1}x\right) ^{2}}{2\left( 1-r_{1}^{2}\right) }-\frac{\left( x-r_{2}y\right) ^{2}}{2\left( 1-r_{2}^{2}\right) }\right) \Big/\left(2\pi\sqrt{(1-r_{1}^{2})(1-r_{2}^{2})}\right). \end{gather*}
Proof.

  1. (i) We use the fact that $R_{n}(x|0,q) = H_{n}(x|q)$ and $f_{R}(x|0,q) = f_{N}(x|q)$, and also that $\sum_{j=0}^{n}\genfrac{[}{]}{0pt}{}{n}{j}_{q}r_{1}^{j}\left( r_{2}^{2}\right) _{j}r_{2}^{n-j}\left( r_{1}^{2}\right) _{n-j} = r_{1}^{n}$ when $r_{2} = 0$.

  2. (ii) We have

    \begin{gather*} \phi_{n}(r,-r,q)=\frac{r^{n}}{\left( r^{4}\right) _{n}}\sum_{j=0} ^{n}(-1)^{n-j} \genfrac{[}{]}{0pt}{}{n}{j} _{q}(r^{2})_{j}\left( r^{2}\right) _{n-j}\\ =\left\{ \begin{array} [c]{ccc} 0 & \mathrm{if} & n\ \text{is\,odd}\\ r^{2k}\left( q|q^{2}\right) _{k}/\left( r^{4}q|q^{2}\right) _{k} & \mathrm{if} & n=2k \end{array} \right. . \end{gather*}
  3. (iii) We use (3.26) with x = y, proposition 3.2, the fact that in this case $\phi_{n}(r_{1},r_{2},1) = (\frac{r_{1}+r_{2}}{1+r_{1}r_{2}})^{n}$, and the fact that the polynomials $H_{n}(\sqrt{\alpha}x)$ are orthogonal with respect to the measure with the density $\sqrt{\alpha}\exp(-\alpha x^{2}/2)/\sqrt{2\pi}$.

Before we present a complicated proof of this theorem, let us formulate and prove some auxiliary results.

Theorem 4.8 For all $x,y\in S(q)$, $r_{1},r_{2}\in(-1,1)$, and $-1 \lt q\leq1$, we have

(4.8)\begin{align} 0 & \leq(1-r_{1}r_{2})f_{CN}\left( y|x,r_{1},q\right) f_{CN}\left( x|y,r_{2},q\right)\\ & =f_{N}(x|q)f_{R}(y|r_{1}r_{2},q)\sum_{n\geq0}\frac{1}{\left[ n\right] _{q}!}H_{n}(x|q)D_{n}(y|r_{1},r_{2},q),\nonumber \end{align}

where

(4.9)\begin{equation} D_{n}(y|r_{1},r_{2},q)=\sum_{j=0}^{n} \genfrac{[}{]}{0pt}{}{n}{j} _{q}r_{1}^{n-j}r_{2}^{j}\left( r_{1}\right) _{j}H_{n-j}(y|q)R_{j} (y|r_{1}r_{2},q)/\left( r_{1}^{2}r_{2}^{2}\right) _{j} \end{equation}

and the convergence is absolute and almost uniform on $S(q)\times S(q)$. We also have

(4.10)\begin{equation} \int_{S(q)}(1-r_{1}r_{2})f_{CN}\left( y|x,r_{1},q\right) f_{CN}\left( x|y,r_{2},q\right) dx=f_{R}(y|r_{1}r_{2},q). \end{equation}

Proof. This theorem is composed of the results that appeared in [Reference Szabłowski24] and, e.g., [Reference Szabłowski31]. Namely, in [Reference Szabłowski24], the following result was proved (theorem 3 (3.4))

(4.11)\begin{align} & \frac{f_{CN}\left( z|x,\rho_{2},q\right) f_{CN}\left( x|y,\rho_{1},q\right) f_{N}\left( y|q\right) }{f_{CN}\left( z|y,\rho _{1}\rho_{2},q\right) f_{N}\left( y|q\right) }\\ & =f_{N}\left( x|q\right) \sum_{j=0}^{\infty}\frac{1}{\left[ j\right] _{q}!}H_{j}\left( x|q\right) C_{j}\left( y,z|\rho_{1},\rho_{2},q\right) ,\nonumber \end{align}

where

\begin{equation*} C_{n}\left( y,z|\rho_{1},\rho_{2},q\right) =\sum_{s=0}^{n} \genfrac{[}{]}{0pt}{}{n}{s} _{q}\rho_{1}^{n-s}\rho_{2}^{s}\left( \rho_{1}^{2}\right) _{s}H_{n-s}\left( y|q\right) P_{s}\left( z|y,\rho_{1}\rho_{2},q\right) /(\rho_{1}^{2}\rho _{2}^{2})_{s}. \end{equation*}

Convergence in (4.11) is absolute and almost uniform on $S(q)\times S(q)\times S(q)$. Now, in [Reference Szabłowski31], it has been noticed (proposition 1(3.1)) that $(1-r)f_{CN}(y|y,r,q) = f_{R}(y|r,q)$, where $f_{R}$ is the density of the measure that makes the polynomials $\left\{ R_{n}\right\} $ orthogonal. Thus, by replacing $\rho_{1}$ and $\rho_{2}$ by $r_{1}$ and $r_{2}$ and identifying y and z, we get (4.8). Let us denote now

\begin{equation*} D_{n}(y|\rho_{1},\rho_{2},q)=C_{n}(y,y|\rho_{1},\rho_{2},q). \end{equation*}

Remark 4.9. $(1-r_{1}r_{2})f_{CN}\left( y|x,r_{1},q\right) f_{CN}\left( x|y,r_{2},q\right) $ is a symmetric function with respect to both x and y as well as $r_{1}$ and $r_{2}$.

To see this, let us refer to the definition of the density $f_{CN}$. Namely, we have

\begin{align*} & (1-r_{1}r_{2})f_{CN}\left( y|x,r_{1},q\right) f_{CN}\left( x|y,r_{2},q\right) \\ & =(1-r_{1}r_{2})\left( r_{1}^{2},r_{2}^{2}\right) _{\infty}f_{N} (x|q)f_{N}(y|q)/(W(x,y|r_{1},q)W(x,y|r_{2},q)) \end{align*}

where $W(x,y|r_{1},q)$ is defined by (3.25).

The next several partial results require very special and tedious calculations that may be of interest only to those who work on q-series theory. To preserve the logic of the arguments leading to the result, we move all such auxiliary results to the last section.

Our following result presents the functions $\left\{ D_{n}\right\} $ given by (4.9) as combinations of the polynomials $\left\{ R_{n}\right\} $ only; in other words, it expands each $D_{n}$ in the basis of $\left\{ R_{n}\right\} $. The proof is conceptually simple but very hard in terms of specialized calculations, which is why we shift it to the last section.

Proposition 4.10. We have for all $n\geq0$, $r_{1},r_{2}\in(-1,1)$ and $-1 \lt q\leq1$:

(4.12)\begin{equation} D_{n}(y|r_{1},r_{2},q)=\sum_{u=0}^{\left\lfloor n/2\right\rfloor } \frac{\left[ n\right] _{q}!(1-r_{1}r_{2}q^{n-2u})(r_{1}r_{2})^{u}}{\left[ u\right] _{q}!\left[ n-2u\right] _{q}!\left( r_{1}r_{2}\right) _{n-u+1} }R_{n-2u}(y|r_{1}r_{2},q)\gamma_{n,u}(r_{1},r_{2},q), \end{equation}

where

(4.13)\begin{align} \gamma_{n,u}(r_{1},r_{2},q) & =\frac{1}{(r_{1}^{2}r_{2}^{2})_{n}}\sum _{m=0}^{u} \genfrac{[}{]}{0pt}{}{u}{m} _{q}\left( r_{2}^{2}\right) _{m}\left( r_{1}r_{2}q^{n-u-m+1}\right) _{m}\\ &\quad \times\left( r_{1}^{2}\right) _{m}\left( r_{1}r_{2}\right) _{m} w_{n-2m}(m,r_{1},r_{2},q),\nonumber \end{align}

is well defined for all $n\geq0$ and $0\leq u\leq\left\lfloor n/2\right\rfloor $.

Proof. The long, tedious proof is shifted to §6.

Before we formulate the next lemma, let us state the following simple result.

Proposition 4.11. For all $x\in S(q)$, $\left\vert r\right\vert \lt 1$, $\left\vert q\right\vert \leq1$, $m\geq0$, we have

(4.14)\begin{equation}(1-r)f_N(x\vert q)\sum_{j\geq0}H_j(x\vert q)H_{j+m}(x\vert q)r^j/{\left[j\right]}_q!=f_R(x\vert r,q)R_m(x\vert r,q)/{\left(r^2\right)}_m=\end{equation}
(4.15)\begin{gather} f_{R}(x|r,q)\sum_{k=0}^{m} \genfrac{[}{]}{0pt}{}{m}{k} _{q}(-r)^{k}q^{\binom{k}{2}}H_{m-k}(x|q)R_{k}(x|r,q)/\left( r^{2}\right) _{k}= \end{gather}
(4.16)\begin{gather} (1-r)f_{N}(x|q)\sum_{s\geq0}\frac{r^{s}}{\left[ s\right] _{q}!(r)_{m+s+1} }H_{2s+m}(x|q). \end{gather}

Proof. We start with the result of [Reference Szabłowski24] (lemma 3i) where we set x = y and make other adjustments and get directly (4.14) and (4.15).

To get (4.16), we must apply the linearization formula for the q-Hermite polynomials and (2.1):

\begin{align*} &\sum_{j\geq0}\frac{r^{j}}{\left[ j\right] _{q}!}H_{j}(x|q)H_{j+m}(x|q)\\& \quad =\sum_{j\geq0}\frac{r^{j}}{\left[ j\right] _{q}!}\sum_{k=0}^{j} \genfrac{[}{]}{0pt}{}{j}{k} _{q} \genfrac{[}{]}{0pt}{}{j+m}{k} _{q}\left[ k\right] _{q}!H_{2j+m-2k}(x|q)\\& \quad =\sum_{j\geq0}\frac{r^{j}}{\left[ j\right] _{q}!}\sum_{s=0}^{j} \genfrac{[}{]}{0pt}{}{j}{s} _{q} \genfrac{[}{]}{0pt}{}{j+m}{m+s} _{q}\left[ j-s\right] _{q}!H_{2s+m}(x|q)\\& \quad =\sum_{s\geq0}\frac{r^{s}}{\left[ s\right] _{q}!}H_{2s+m}(x|q)\sum_{j\geq s}r^{j-s} \genfrac{[}{]}{0pt}{}{j-s+(m+s)}{m+s} _{q}\\& \quad =\sum_{s\geq0}\frac{r^{s}}{\left[ s\right] _{q}!(r)_{m+s+1}}H_{2s+m}(x|q). \end{align*}

Lemma 4.12. We have, for all $n\geq0$:

\begin{gather*} \int_{S(q)}R_{n}(x|r_{1}r_{2},q)(1-r_{1}r_{2})f_{CN}\left( y|x,r_{1} ,q\right) f_{CN}\left( x|y,r_{2},q\right) dx\\ =\phi_{n}(r_{1},r_{2},q)R_{n}(y|r_{1}r_{2},q) \end{gather*}

and similarly for the integral with respect to y.

Further, for all $n\geq0$ and $0\leq k\leq\left\lfloor n/2\right\rfloor $

\begin{equation*} \gamma_{n,k}(r_{1},r_{2},q)=\phi_{n-2k}(r_{1},r_{2},q). \end{equation*}

In particular, we have

\begin{align*} \gamma_{2k,k}(r_{1},r_{2},q) & =1,\\ \gamma_{n,0}(r_{1},r_{2},q) & =\phi_{n}(r_{1},r_{2},q). \end{align*}

Proof. The long, detailed proof by induction is shifted to §6.

Proof. [Proof of theorem 4.1] Now, it is enough to recall the results of, say, [Reference Szabłowski35] to get our expansion.

As a corollary, we get the following nice identities, which seem to be of interest in their own right.

Corollary. (i) For all $r_{1},r_{2},q\in\mathbb{C}$, $n\geq0$, $0\leq u\leq\left\lfloor n/2\right\rfloor $, we have

\begin{gather*} \sum_{m=0}^{u} \genfrac{[}{]}{0pt}{}{u}{m} _{q}\left( r_{2}^{2}\right) _{m}\left( r_{1}r_{2}q^{n-u-m+1}\right) _{m}\left( r_{1}^{2}\right) _{m}\left( r_{1}r_{2}\right) _{m} w_{n-2m}(m,r_{1},r_{2},q)\\ =\left( r_{1}^{2}r_{2}^{2}q^{n-2u}\right) _{2u}w_{n-2u}(0,r_{1},r_{2},q), \end{gather*}

since both sides of the equation are polynomials in $r_{1}$, $r_{2}$, and q (a numerical sanity check of this identity is sketched after the proof below).

(ii) For all $\left\vert r_{1}\right\vert ,\left\vert r_{2}\right\vert \lt 1,q\in(-1,1)$, $n\geq0$

\begin{align*} \left\vert \phi_{n}(r_{1},r_{2},q)\right\vert & \leq1,\\ \sum_{n\geq0}\left\vert \phi_{n}(r_{1},r_{2},q)\right\vert ^{2} & \lt \infty. \end{align*}

Proof. (i) The proof follows directly from the definition of $\phi_{n}$ and its already established properties. (ii) The fact that $\sum_{n\geq0}\left\vert \phi_{n}(r_{1},r_{2},q)\right\vert ^{2} \lt \infty$ follows directly from the fact that we are dealing with a mean-square expansion. The fact that $\left\vert \phi _{n}(r_{1},r_{2},q)\right\vert \leq1$ follows from the probabilistic interpretation of our result. Namely, recall (4.6) with $r_{1} = \rho_{23}$, $r_{2} = \rho_{13} \rho_{12}$ and the fact that $E\hat{R}_{n}(Y|r_{1}r_{2},q) = 0$ and $E\hat{R}_{n}^{2}(Y|r_{1}r_{2},q) = 1$. Now recall the well-known fact that the variance of the conditional expectation of a random variable does not exceed its variance. Consequently, we have

\begin{gather*} 1=E\hat{R}_{n}^{2}(Y|r_{1}r_{2},q)\geq E(E(\hat{R}_{n}(Y|r_{1}r_{2},q)|Z))^{2}=\\ \phi_{n}(r_{1},r_{2},q)^{2}E\hat{R}_{n}^{2}(Z|r_{1}r_{2},q)=\phi_{n}(r_{1} ,r_{2},q)^{2}. \end{gather*}
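
The following Python sketch is a numerical sanity check of part (i) of the corollary (for sample real values of the parameters) and of the first bound in part (ii). It uses the sum representation of $w_{n}(m,r_{1},r_{2},q)$ from (6.1) and the representation $\phi_{n}(r_{1},r_{2},q)=w_{n}(0,r_{1},r_{2},q)/(r_{1}^{2}r_{2}^{2})_{n}$ from lemma 6.2 below; the particular numerical values are arbitrary and the sketch is, of course, no substitute for the proofs.

\begin{verbatim}
# Check of the corollary: part (i) for 0 <= u <= floor(n/2), and |phi_n| <= 1.
def qpoch(a, q, n):                   # (a; q)_n
    p = 1.0
    for i in range(n):
        p *= 1.0 - a * q**i
    return p

def qbin(n, k, q):                    # Gaussian (q-)binomial coefficient
    return qpoch(q, q, n) / (qpoch(q, q, k) * qpoch(q, q, n - k))

def w(n, m, r1, r2, q):               # sum representation taken from (6.1)
    return sum(qbin(n, s, q) * r1**(n - s) * r2**s
               * qpoch(q**m * r1 * r2, q, s) * qpoch(q**m * r1 * r2, q, n - s)
               for s in range(n + 1))

def phi(n, r1, r2, q):                # lemma 6.2 representation of phi_n
    return w(n, 0, r1, r2, q) / qpoch(r1**2 * r2**2, q, n)

r1, r2, q = 0.6, -0.4, 0.3
for n in range(8):
    for u in range(n // 2 + 1):
        lhs = sum(qbin(u, j, q) * qpoch(r2**2, q, j)
                  * qpoch(r1 * r2 * q**(n - u - j + 1), q, j)
                  * qpoch(r1**2, q, j) * qpoch(r1 * r2, q, j)
                  * w(n - 2 * j, j, r1, r2, q)
                  for j in range(u + 1))
        rhs = qpoch(r1**2 * r2**2 * q**(n - 2 * u), q, 2 * u) * w(n - 2 * u, 0, r1, r2, q)
        assert abs(lhs - rhs) < 1e-10, (n, u)
print([abs(phi(n, r1, r2, q)) <= 1 for n in range(10)])   # part (ii), first bound
\end{verbatim}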

5. List of LKs

5.1. Symmetric kernels

Below, we list symmetric LKs that can easily be obtained using the idea, mentioned in §1, of expanding the ratio of two densities. The kernels listed below have simple sums and sometimes lead to smooth, stationary Markov processes.

1. We start with the following LK called the Poisson–Mehler kernel.

(5.1)\begin{equation} f_{CN}(x|y,\rho,q)/f_{N}(x|q)=\sum_{n\geq0}\frac{\rho^{n}}{\left[ n\right] _{q}!}H_{n}(x|q)H_{n}(y|q). \end{equation}

Its justification is DEI (3.30, 3.5). It leads to the so-called q-Ornstein–Uhlenbeck process; for details, see, e.g., [Reference Szabłowski23].

2. One should mention the following particular case of the above formula, namely $q=0$. Then $H_{n} (x|0) = U_{n}(x/2)$ and $\left[ n\right] _{0} ! = 1$, and we finally get, for all $\left\vert x\right\vert ,\left\vert y\right\vert \leq2,\left\vert \rho\right\vert \lt 1$:

(5.2)\begin{equation} \sum_{n\geq0}\rho^{n}U_{n}(x/2)U_{n}(y/2) = \frac {1-\rho^{2}}{(1-\rho^{2})^{2}-\rho(1+\rho^{2})xy+\rho^{2}(x^{2}+y^{2})}. \end{equation}

Recently, in [Reference Szabłowski32], formula (5.2) has been proven by other means together with the following one:

3.

\begin{equation*} \sum_{n\geq0}\rho^{n}T_{n}(x/2)T_{n}(y/2)=\frac{4(1-\rho^{2})-\rho(3+\rho ^{2})xy+2\rho^{2}(x^{2}+y^{2})}{4((1-\rho^{2})^{2}-\rho(1+\rho^{2})xy+\rho ^{2}(x^{2}+y^{2}))}. \end{equation*}
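
As a quick illustration, both (5.2) and the Chebyshev expansion just displayed can be verified numerically. The sketch below relies on scipy's Chebyshev evaluators; the truncation level and the sample values of $x,y,\rho$ are arbitrary admissible choices.

\begin{verbatim}
# Numerical check of (5.2) and of the analogous T_n expansion.
from scipy.special import eval_chebyu, eval_chebyt

rho, x, y, N = 0.45, 0.8, -1.3, 200          # |x|, |y| <= 2, |rho| < 1
D = (1 - rho**2)**2 - rho * (1 + rho**2) * x * y + rho**2 * (x**2 + y**2)

sum_U = sum(rho**n * eval_chebyu(n, x / 2) * eval_chebyu(n, y / 2) for n in range(N))
sum_T = sum(rho**n * eval_chebyt(n, x / 2) * eval_chebyt(n, y / 2) for n in range(N))

print(sum_U, (1 - rho**2) / D)
print(sum_T, (4 * (1 - rho**2) - rho * (3 + rho**2) * x * y
              + 2 * rho**2 * (x**2 + y**2)) / (4 * D))
\end{verbatim}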

4. The following expansion appeared recently in [Reference Szabłowski34] (23):

\begin{gather*} 1+2\sum_{n\geq1}\rho^{n^{2}}T_{n}(x)T_{n}(y)=\\ \frac{1}{2}\left( \theta(\rho,\arccos(x)-\arccos(y)) +\theta (\rho,\arccos(x)+\arccos(y))\right) , \end{gather*}

where θ denotes the Jacobi Theta function.

5. The following expansion appeared recently in [Reference Szabłowski34] (unnamed formula):

\begin{gather*} \frac{4}{\pi^{2}}\sqrt{(1-x^{2})(1-y^{2})}\sum_{n\geq0}\rho^{n(n+2)} U_{n}(x)U_{n}(y)=\\ \frac{1}{\rho\pi^{2}}\left( \theta(\rho,\arccos(x)-\arccos(y)) -\theta(\rho,\arccos(x)+\arccos(y))\right) , \end{gather*}

where, as before, θ denotes the Jacobi Theta function.

6. It is well-known that the following LK is also true

\begin{gather*} (1-\rho)\sum_{n\geq0}\frac{n!}{\Gamma(n+\alpha)}L_{n}^{\alpha}(x)L_{n} ^{\alpha}(y)\rho^{n}\\ =\exp\left( -\rho\frac{x+y}{1-\rho}\right) I_{\alpha-1}\left( \frac {2(xy\rho)^{1/2}}{1-\rho}\right) /(xy\rho)^{(\alpha-1)/2}, \end{gather*}

where $L_{n}^{\alpha}(x)$ denotes the generalized Laguerre polynomial, i.e., the one orthogonal with respect to the measure with density $\exp (-x)x^{\alpha-1}/\Gamma(\alpha)$ for $x\geq0$ and $\alpha\geq1$, and $I_{\alpha -1}(x)$ denotes the modified Bessel function of the first kind. This kernel also leads to a smooth, stationary Markov process, as shown in [Reference Szabłowski34].
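
The kernel above can also be checked numerically. In the sketch below it is assumed that $L_{n}^{\alpha}$ in the normalization adopted here coincides with the standard generalized Laguerre polynomial $L_{n}^{(\alpha-1)}$ as implemented in scipy; this identification, as well as the sample parameter values and the truncation level, is an assumption of the sketch.

\begin{verbatim}
# Numerical check of the Laguerre--Bessel (Hille--Hardy type) kernel.
from math import exp, lgamma
from scipy.special import eval_genlaguerre, iv

alpha, rho, x, y, N = 2.5, 0.4, 1.3, 0.7, 200
# exp(lgamma(n+1) - lgamma(n+alpha)) = n! / Gamma(n+alpha), computed stably
lhs = (1 - rho) * sum(exp(lgamma(n + 1) - lgamma(n + alpha))
                      * eval_genlaguerre(n, alpha - 1, x)
                      * eval_genlaguerre(n, alpha - 1, y) * rho**n
                      for n in range(N))
rhs = (exp(-rho * (x + y) / (1 - rho))
       * iv(alpha - 1, 2 * (x * y * rho)**0.5 / (1 - rho))
       / (x * y * rho)**((alpha - 1) / 2))
print(lhs, rhs)
\end{verbatim}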

Let us remark that, following [Reference Szabłowski34], the kernels mentioned in points 1, 4, 5, and 6 can be used to generate stationary Markov processes with polynomial conditional moments that admit a continuous-path modification.

5.2. Non-symmetric kernels

It has to be underlined that the list of non-symmetric kernels presented below is far from complete. There is a nice article, namely [Reference Askey, Rahman and Suslov3], mentioning more of them. However, they are given in a very complicated form, interesting only to specialists in q-series theory.

1. We start with the following kernel, known earlier, but recently presented in a general setting in [Reference Szabłowski32]:

\begin{equation*} \sum_{n\geq0}\rho^{n}U_{n}(x/2)T_{n}(y/2)=\frac{2(1-\rho^{2})+\rho^{2} y^{2}-\rho xy}{2((1-\rho^{2})^{2}-\rho(1+\rho^{2})xy+\rho^{2}(x^{2}+y^{2}))}. \end{equation*}
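
As with the symmetric Chebyshev kernels, the formula above (in the form displayed here) admits a quick numerical check; the sketch below, with arbitrary admissible sample values, truncates the series and compares it with the right-hand side.

\begin{verbatim}
# Numerical check of the non-symmetric Chebyshev kernel above.
from scipy.special import eval_chebyu, eval_chebyt

rho, x, y, N = 0.4, 1.1, -0.6, 200
D = (1 - rho**2)**2 - rho * (1 + rho**2) * x * y + rho**2 * (x**2 + y**2)
series = sum(rho**n * eval_chebyu(n, x / 2) * eval_chebyt(n, y / 2) for n in range(N))
print(series, (2 * (1 - rho**2) + rho**2 * y**2 - rho * x * y) / (2 * D))
\end{verbatim}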

2. In [Reference Szabłowski26] (3.12), we have a kernel that, after adjusting it to the ranges of $x,y,z$ equal to $S\left( q\right) $ and utilizing the fact that

\begin{gather*} f_{CN}(y|x,\rho_{1},q)f_{CN}(x|z,\rho_{2},q)/(f_{CN}(y|z,\rho_{1}\rho _{2},q)f_{CN}(x|y,\rho_{1},q)) \\ = f_{CN}(z|y,\rho_{2},q)/f_{CN}(z|x,\rho_{2},q), \end{gather*}

we finally get the following non-symmetric kernel:

\begin{equation*} \sum_{j\geq0}\frac{\rho_{2}^{j}}{\left[ j\right] _{q}!\left( \rho_{1} ^{2}\rho_{2}^{2}\right) _{j}}P_{j}\left( z|y,\rho_{1}\rho_{2},q\right) P_{j}\left( x|y,\rho_{1},q\right) =\frac{f_{CN}\left( z|x,\rho _{2},q\right) }{f_{CN}(z|y,\rho_{1}\rho_{2},q)}, \end{equation*}

true for all $x,y,z\in S(q)$, $\left\vert \rho_{1}\right\vert ,\left\vert \rho _{2}\right\vert \lt 1$, and $\left\vert q\right\vert \leq1$, where $P_{j} (x|y,\rho,q)$ and $f_{CN}(x|y,\rho,q)$ are defined respectively by (3.23) and (3.24).

2s. Using the fact that

\begin{equation*} P_{n}(x|y,\rho,1) = (1-\rho^{2})^{n/2}H_{n}\left( (x-\rho y)/\sqrt{1-\rho^{2}}\right) , \end{equation*}

and

\begin{equation*} f_{CN}(x|y,\rho,1) =\frac{1}{\sqrt{2\pi(1-\rho^{2})}}\exp\left( -\frac{\left( x-\rho y\right) ^{2}}{2(1-\rho^{2})}\right) , \end{equation*}

by [Reference Szabłowski33] (8.24) and [Reference Szabłowski33] (8.32) we get the following kernel

\begin{align*} & \sum_{j\geq0}\frac{\rho_{2}^{j}(1-\rho_{1}^{2})^{j/2}}{j!(1-\rho_{1} ^{2}\rho_{2}^{2})^{j/2}}H_{j}\left(\frac{z-\rho_{1}\rho_{2}y}{\sqrt{1-\rho_{1} ^{2}\rho_{2}^{2}}}\right)H_{j}\left(\frac{x-\rho_{1}y}{\sqrt{1-\rho_{1}^{2}}}\right)\\ & \quad =\sqrt{\frac{1-\rho_{1}^{2}\rho_{2}^{2}}{1-\rho_{2}^{2}}}\exp\left( \frac{(z-\rho_{1}\rho_{2}y)^{2}}{2(1-\rho_{1}^{2}\rho_{2}^{2})}-\frac {(z-\rho_{2}x)^{2}}{2(1-\rho_{2}^{2})}\right) . \end{align*}
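
Since this last kernel involves only classical Hermite polynomials, it can be checked directly. The sketch below assumes that $H_{j}$ in the two displayed formulae denotes the probabilists' Hermite polynomial (the $q=1$ case of $H_{j}(\cdot|q)$); this reading, and the sample parameter values, are assumptions of the sketch.

\begin{verbatim}
# Numerical check of the q = 1 (classical Hermite) non-symmetric kernel.
import numpy as np
from numpy.polynomial.hermite_e import hermeval      # probabilists' Hermite He_n
from math import exp, factorial, sqrt

def He(n, x):
    c = np.zeros(n + 1)
    c[n] = 1.0
    return hermeval(x, c)

r1, r2, x, y, z, N = 0.5, -0.3, 0.4, 1.1, -0.7, 80
u = (z - r1 * r2 * y) / sqrt(1 - r1**2 * r2**2)
v = (x - r1 * y) / sqrt(1 - r1**2)
lhs = sum(r2**j * (1 - r1**2)**(j / 2) / (factorial(j) * (1 - r1**2 * r2**2)**(j / 2))
          * He(j, u) * He(j, v) for j in range(N))
rhs = (sqrt((1 - r1**2 * r2**2) / (1 - r2**2))
       * exp((z - r1 * r2 * y)**2 / (2 * (1 - r1**2 * r2**2))
             - (z - r2 * x)**2 / (2 * (1 - r2**2))))
print(lhs, rhs)
\end{verbatim}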

3. The following non-symmetric kernel was presented in [Reference Szabłowski25]. For $\left\vert b\right\vert \gt \left\vert a\right\vert ,\left\vert q\right\vert \lt 1$, $x,y\in S\left( q\right) $

\begin{equation*} \sum_{n\geq0}\frac{a^{n}}{\left[ n\right] _{q}!b^{n}}H_{n}(x|a,q)H_{n} \left( y|b,q\right) = \left( a^{2}/b^{2}\right) _{\infty}\prod_{k=0}^{\infty}\frac{\left( 1-(1-q)xaq^{k}+(1-q)a^{2} q^{2k}\right) }{w\left( x,y,q^{k}a/b|q\right) }, \end{equation*}

where $H_{n}(x|a,q)$ denotes the so-called big q-Hermite polynomials. Now let us change this expansion a bit by renaming its parameters. Let us denote $a/b = \rho$. Then we can recognize that $\prod _{k=0}^{\infty}(1-(1-q)axq^{k}+(1-q)a^{2}q^{2k}) = 1/\varphi(x|a,q)$ and

\begin{equation*} \left( a^{2}/b^{2}\right) _{\infty}\prod_{k=0}^{\infty}\frac{1}{w\left( x,y,q^{k}a/b|q\right) }=f_{CN}(x|y,a/b,q)/f_{N}(x|q). \end{equation*}

See also [Reference Szabłowski28].

6. Complicated proofs and auxiliary facts from q-series theory

We start with a series of auxiliary, simplifying formulae.

Lemma 6.1. The generating function of the sequence $\left\{ w_{n} (0,r_{1},r_{2},q)\right\} $ is the following:

\begin{equation*} \sum_{n\geq0}\frac{t^{n}}{\left( q\right) _{n}}w_{n}(0,r_{1},r_{2} ,q)=\frac{\left( tr_{1}r_{2}^{2}\right) _{\infty}\left( tr_{2}r_{1} ^{2}\right) _{\infty}}{\left( tr_{1}\right) _{\infty}\left( tr_{2}\right) _{\infty}}. \end{equation*}

Consequently, we have, for all $n\geq0$, $r_{1},r_{2}\in(-1,1)$, and $-1 \lt q\leq1$:

(6.1)\begin{align} w_{n}(m,r_{1},r_{2},q) & =q^{-nm/2}w_{n}(0,r_{1}q^{m/2},r_{2}q^{m/2} ,q)\\ & =\sum_{s=0}^{n} \genfrac{[}{]}{0pt}{}{n}{s} _{q}r_{1}^{n-s}r_{2}^{s}\left( q^{m}r_{1}r_{2}\right) _{s}\left( q^{m} r_{1}r_{2}\right) _{n-s}\nonumber \end{align}

and

\begin{equation*} \sum_{n\geq0}\frac{t^{n}}{\left( q\right) _{n}}w_{n}(m,r_{1},r_{2} ,q)=\frac{\left( tq^{m}r_{1}r_{2}^{2}\right) _{\infty}\left( tq^{m} r_{2}r_{1}^{2}\right) _{\infty}}{\left( tr_{1}\right) _{\infty}\left( tr_{2}\right) _{\infty}}. \end{equation*}

Proof. We have

\begin{equation*} \sum_{n\geq0}\frac{t^{n}}{\left( q\right) _{n}}w_{n}(0,r_{1},r_{2} ,q)=\sum_{n\geq0}\sum_{s=0}^{n}\frac{\left( tr_{1}\right) ^{s}}{\left( q\right) _{s}}\left( r_{2}^{2}\right) _{s}\frac{\left( tr_{2}\right) ^{n-s}}{\left( q\right) _{n-s}}\left( r_{1}^{2}\right) _{n-s}. \end{equation*}

Now, changing the order of summation, we get

\begin{align*} \sum_{n\geq0}\frac{t^{n}}{\left( q\right) _{n}}w_{n}(0,r_{1},r_{2},q) & =\sum_{s\geq0}\frac{\left( tr_{1}\right) ^{s}}{\left( q\right) _{s} }\left( r_{2}^{2}\right) _{s}\sum_{n\geq s}\frac{\left( tr_{2}\right) ^{n-s}}{\left( q\right) _{n-s}}\left( r_{1}^{2}\right) _{n-s}\\ & =\frac{\left( tr_{1}r_{2}^{2}\right) _{\infty}\left( tr_{2}r_{1} ^{2}\right) _{\infty}}{\left( tr_{1}\right) _{\infty}\left( tr_{2}\right) _{\infty}}. \end{align*}

Further, we notice that replacing $r_{1},r_{2}$ by $r_{1}q^{m/2},r_{2}q^{m/2}$ and t by $tq^{-m/2}$ leaves the denominator of the above generating function unchanged, that is,

\begin{equation*} \frac{\left( (tq^{-m/2})(r_{1}q^{m/2})(r_{2}q^{m/2})^{2}\right) _{\infty}\left( (tq^{-m/2})(r_{2}q^{m/2})(r_{1}q^{m/2})^{2}\right) _{\infty}}{\left( (tq^{-m/2})(r_{1}q^{m/2})\right) _{\infty}\left( (tq^{-m/2})(r_{2}q^{m/2})\right) _{\infty}}=\frac{\left( tq^{m}r_{1}r_{2}^{2}\right) _{\infty}\left( tq^{m}r_{2}r_{1}^{2}\right) _{\infty}}{\left( tr_{1}\right) _{\infty}\left( tr_{2}\right) _{\infty}}, \end{equation*}

from which (6.1) and the generating function of the sequence $\left\{ w_{n}(m,r_{1},r_{2},q)\right\} $ follow directly.
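
The statements of the lemma are easy to check numerically. The sketch below compares, for sample parameter values, the two sum representations of $w_{n}(0,r_{1},r_{2},q)$ (the one from (6.1) and the one used in the computation above), as well as a truncation of the generating function of $\left\{ w_{n}(m,r_{1},r_{2},q)\right\} $ with its claimed product form; the truncation levels and numerical values are arbitrary.

\begin{verbatim}
# Numerical check of lemma 6.1.
def qpoch(a, q, n):
    p = 1.0
    for i in range(n):
        p *= 1.0 - a * q**i
    return p

def qpoch_inf(a, q, terms=400):       # (a; q)_infinity, truncated product
    return qpoch(a, q, terms)

def qbin(n, k, q):
    return qpoch(q, q, n) / (qpoch(q, q, k) * qpoch(q, q, n - k))

def w(n, m, r1, r2, q):               # sum representation from (6.1)
    return sum(qbin(n, s, q) * r1**(n - s) * r2**s
               * qpoch(q**m * r1 * r2, q, s) * qpoch(q**m * r1 * r2, q, n - s)
               for s in range(n + 1))

def w0_alt(n, r1, r2, q):             # representation used in the proof above
    return sum(qbin(n, s, q) * r1**s * r2**(n - s)
               * qpoch(r2**2, q, s) * qpoch(r1**2, q, n - s)
               for s in range(n + 1))

r1, r2, q, t, m = 0.5, -0.35, 0.4, 0.3, 2
for n in range(8):                    # the two representations of w_n(0,.) agree
    assert abs(w(n, 0, r1, r2, q) - w0_alt(n, r1, r2, q)) < 1e-12
series = sum(t**n / qpoch(q, q, n) * w(n, m, r1, r2, q) for n in range(120))
closed = (qpoch_inf(t * q**m * r1 * r2**2, q) * qpoch_inf(t * q**m * r2 * r1**2, q)
          / (qpoch_inf(t * r1, q) * qpoch_inf(t * r2, q)))
print(series, closed)
\end{verbatim}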

Lemma 6.2. The following identity is true for all $n\geq0$, $r_{1},r_{2} \in(-1,1)$ and $-1 \lt q\leq1$:

\begin{equation*} \sum_{s=0}^{n} \genfrac{[}{]}{0pt}{}{n}{s} _{q}r_{1}^{n-s}r_{2}^{s}\left( r_{1}^{2}\right) _{s}\left( r_{1} r_{2}\right) _{s}/\left( r_{1}^{2}r_{2}^{2}\right) _{s}=\frac{1}{\left( r_{1}^{2}r_{2}^{2}\right) _{n}}w_{n}(0,r_{1},r_{2},q)=\phi_{n}(r_{1} ,r_{2},q). \end{equation*}

Proof. We will prove it by the generating function method. Firstly, we notice that:

\begin{equation*} \left( r_{1}^{2}r_{2}^{2}\right) _{n}\sum_{s=0}^{n} \genfrac{[}{]}{0pt}{}{n}{s} _{q}r_{1}^{n-s}r_{2}^{s}\left( r_{1}^{2}\right) _{s}\frac{\left( r_{1} r_{2}\right) _{s}}{\left( r_{1}^{2}r_{2}^{2}\right) _{s}}=\sum_{s=0}^{n} \genfrac{[}{]}{0pt}{}{n}{s} _{q}r_{1}^{n-s}r_{2}^{s}\left( r_{1}^{2}\right) _{s}\left( r_{1} r_{2}\right) _{s}\left( r_{1}^{2}r_{2}^{2}q^{s}\right) _{n-s}. \end{equation*}

Secondly, we calculate the generating function of the sequence $\left\{ \sum\nolimits_{s=0}^{n} \genfrac{[}{]}{0pt}{}{n}{s}_{q}r_{1}^{n-s}r_{2}^{s}\left( r_{1}^{2}\right) _{s}\left( r_{1} r_{2}\right) _{s}\left( r_{1}^{2}r_{2}^{2}q^{s}\right) _{n-s}\right\} _{n\geq0}$. We have

\begin{align*} & \sum_{n\geq0}\frac{t^{n}}{\left( q\right) _{n}}\sum_{s=0}^{n} \genfrac{[}{]}{0pt}{}{n}{s} _{q}r_{1}^{n-s}r_{2}^{s}\left( r_{1}^{2}\right) _{s}\left( r_{1} r_{2}\right) _{s}\left( r_{1}^{2}r_{2}^{2}q^{s}\right) _{n-s}\\ & \quad =\sum_{s\geq0}\frac{\left( tr_{2}\right) ^{s}}{\left( q\right) _{s} }\left( r_{1}^{2}\right) _{s}\left( r_{1}r_{2}\right) _{s}\sum_{n\geq s}\frac{\left( tr_{1}\right) ^{n-s}}{\left( q\right) _{n-s}}\left( r_{1}^{2}r_{2}^{2}q^{s}\right) _{n-s}\\ & \quad =\sum_{s\geq0}\frac{\left( tr_{2}\right) ^{s}}{\left( q\right) _{s} }\left( r_{1}^{2}\right) _{s}\left( r_{1}r_{2}\right) _{s}\frac{\left( tr_{1}^{3}r_{2}^{2}q^{s}\right) _{\infty}}{\left( tr_{1}\right) _{\infty} }\\ &\quad =\frac{\left( tr_{1}^{3}r_{2}^{2}\right) _{\infty}}{\left( tr_{1}\right) _{\infty}}\sum_{s\geq0}\frac{\left( tr_{2}\right) ^{s} }{\left( q\right) _{s}}\frac{\left( r_{1}^{2}\right) _{s}\left( r_{1}r_{2}\right) _{s}}{\left( tr_{1}^{3}r_{2}^{2}\right) _{s}} =\frac{\left( tr_{1}^{3}r_{2}^{2}\right) _{\infty}}{\left( tr_{1}\right) _{\infty}}~_{2}\phi_{1}\left( \begin{array} [c]{cc} r_{1}^{2} & r_{1}r_{2}\\ tr_{1}^{3}r_{2}^{2} & \end{array} ;q,tr_{2}\right) , \end{align*}

where $_{2}\phi_{1}$ denotes the so-called basic hypergeometric function (see, e.g., [Reference Koekoek, Lesky and Swarttouw9] (1.10.1)) (different from the function defined by (4.2)). Reading its properties, in particular, the so-called reduction formulae, we observe that:

(6.2)\begin{equation} tr_{2}=\frac{tr_{1}^{3}r_{2}^{2}}{r_{1}^{2}r_{1}r_{2}}. \end{equation}

So now we use the so-called q-Gauss sum. It is one of the reduction formulae for $_{2}\phi_{1}$ presented in [Reference Koekoek, Lesky and Swarttouw9]. Namely the one given by (1.13.2) with (6.2) being equivalent to $ab/c = z^{-1}$. We get then

\begin{align*} _{2}\phi_{1}\left( \begin{array} [c]{cc} r_{1}^{2} & r_{1}r_{2}\\ tr_{1}^{3}r_{2}^{2} & \end{array} ;q,tr_{2}\right) =\frac{\left( tr_{1}^{3}r_{2}^{2}/r_{1}^{2}\right) _{\infty}\left( tr_{1}^{3}r_{2}^{2}/r_{1}r_{2}\right) _{\infty}}{\left( tr_{1}^{3}r_{2}^{2}\right) _{\infty}\left( tr_{2}\right) _{\infty}}. \end{align*}

Hence, indeed we have

\begin{equation*} \frac{\left( tr_{1}^{3}r_{2}^{2}\right) _{\infty}}{\left( tr_{1}\right) _{\infty}}~_{2}\phi_{1}\left( \begin{array} [c]{cc} r_{1}^{2} & r_{1}r_{2}\\ tr_{1}^{3}r_{2}^{2} & \end{array} ;q,tr_{2}\right) =\frac{\left( tr_{1}r_{2}^{2}\right) _{\infty}\left( tr_{2}r_{1}^{2}\right) _{\infty}}{\left( tr_{1}\right) _{\infty}\left( tr_{2}\right) _{\infty}}. \end{equation*}

Now it is enough to recall the assertion of lemma 4.1.
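
A short numerical confirmation of lemma 6.2, for sample admissible values of the parameters, may look as follows (it uses the sum representation of $w_{n}(0,r_{1},r_{2},q)$ appearing in the proof of lemma 6.1).

\begin{verbatim}
# Numerical check of lemma 6.2.
def qpoch(a, q, n):
    p = 1.0
    for i in range(n):
        p *= 1.0 - a * q**i
    return p

def qbin(n, k, q):
    return qpoch(q, q, n) / (qpoch(q, q, k) * qpoch(q, q, n - k))

def w0(n, r1, r2, q):                 # w_n(0, r1, r2, q), cf. the proof of lemma 6.1
    return sum(qbin(n, s, q) * r1**s * r2**(n - s)
               * qpoch(r2**2, q, s) * qpoch(r1**2, q, n - s)
               for s in range(n + 1))

r1, r2, q = 0.45, -0.6, 0.35
for n in range(10):
    lhs = sum(qbin(n, s, q) * r1**(n - s) * r2**s
              * qpoch(r1**2, q, s) * qpoch(r1 * r2, q, s) / qpoch(r1**2 * r2**2, q, s)
              for s in range(n + 1))
    rhs = w0(n, r1, r2, q) / qpoch(r1**2 * r2**2, q, n)
    assert abs(lhs - rhs) < 1e-12
print("lemma 6.2 verified numerically for n = 0,...,9")
\end{verbatim}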

Lemma 6.3. The following identity is true for all $m\geq0$, $-1 \lt q\leq1$, and all real a for which the denominators below do not vanish:

\begin{equation*} \sum_{j=0}^{m} \genfrac{[}{]}{0pt}{}{m}{j} _{q}(-1)^{m-j}q^{\binom{m-j}{2}}\frac{\left( aq^{j}\right) _{m}}{1-aq^{j}}= \begin{cases} 1/(1-a)\text{,} & \textrm{if }m=0\text{;}\\ 0, & \textrm{if }m \gt 0\text{.} \end{cases} \end{equation*}

Proof. It is obvious that when m = 0, the identity is true. Hence, let us assume that $m\geq1$. Now we replace the index of summation j by $m-j$ and apply (2.2). We then get

\begin{align*} &\sum_{j=0}^{m} \genfrac{[}{]}{0pt}{}{m}{j} _{q}(-1)^{j}q^{\binom{j}{2}}\frac{\left( aq^{m-j}\right) _{m}}{1-aq^{m-j} }=\sum_{j=0}^{m} \genfrac{[}{]}{0pt}{}{m}{j} _{q}(-1)^{j}q^{\binom{j}{2}}(aq^{m-j+1})_{m-1}\\ & \quad =\sum_{j=0}^{m} \genfrac{[}{]}{0pt}{}{m}{j} _{q}(-1)^{j}q^{\binom{j}{2}}\sum_{k=0}^{m-1} \genfrac{[}{]}{0pt}{}{m-1}{k} _{q}(-aq^{m-j+1})^{k}q^{\binom{k}{2}}\\ & \quad =\sum_{k=0}^{m-1} \genfrac{[}{]}{0pt}{}{m-1}{k} _{q}(-a)^{k}q^{\binom{k}{2}}q^{k(m+1)}\sum_{j=0}^{m} \genfrac{[}{]}{0pt}{}{m}{j} _{q}(-1)^{j}q^{\binom{j}{2}}q^{-kj}\\ & \quad =\sum_{k=0}^{m-1} \genfrac{[}{]}{0pt}{}{m-1}{k} _{q}(-a)^{k}q^{\binom{k}{2}}q^{k(m+1)}(q^{-k})_{m}=0. \end{align*}

This is so since $\left( q^{-k}\right) _{m} = 0$ for every $k = 0,\ldots,m-1$.
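
Again, the identity of lemma 6.3 is easy to confirm numerically, for instance as in the following sketch (the numerical values of q and a are arbitrary admissible choices).

\begin{verbatim}
# Numerical check of lemma 6.3.
def qpoch(a, q, n):
    p = 1.0
    for i in range(n):
        p *= 1.0 - a * q**i
    return p

def qbin(n, k, q):
    return qpoch(q, q, n) / (qpoch(q, q, k) * qpoch(q, q, n - k))

q, a = 0.45, 0.7
for m in range(7):
    s = sum(qbin(m, j, q) * (-1)**(m - j) * q**((m - j) * (m - j - 1) // 2)
            * qpoch(a * q**j, q, m) / (1 - a * q**j)
            for j in range(m + 1))
    target = 1 / (1 - a) if m == 0 else 0.0
    assert abs(s - target) < 1e-10
print("lemma 6.3 verified numerically for m = 0,...,6")
\end{verbatim}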

Lemma 6.4. The following identity is true for all $m,t\geq0$, $r_{1},r_{2} \in(-1,1)$ and $-1 \lt q\leq1$:

(6.3)\begin{equation} \sum_{k=0}^{m} \genfrac{[}{]}{0pt}{}{m}{k} _{q}(-r_{2}^{2})^{k}q^{\binom{k}{2}}\frac{\left( r_{1}^{2}\right) _{t+m+k} }{\left( r_{1}^{2}r_{2}^{2}\right) _{t+m+k}}=\frac{\left( r_{2}^{2}\right) _{m}\left( r_{1}^{2}\right) _{t+m}}{\left( r_{1}^{2}r_{2}^{2}\right) _{t+2m}}. \end{equation}

Proof. Recall that in [Reference Szabłowski24] (lemma 1(ii)), the following identity has been proved for all $-1 \lt q\leq1$, $a,b\in\mathbb{R}$, and $n\geq0$:

(6.4)\begin{align} \sum_{j=0}^{n} \genfrac{[}{]}{0pt}{}{n}{j} _{q}(-b)^{j}q^{\binom{j}{2}}\left( a\right) _{j}\left( abq^{j}\right) _{n-j} = \left( b\right) _{n}. \end{align}

So now, let us transform the identity that we must prove a bit. Namely, after multiplying both sides by $\left( r_{1}^{2}r_{2}^{2}\right) _{t+2m}$ and dividing both sides by $\left( r_{1}^{2}\right) _{t+m}$, we see that the left-hand side of (6.3) becomes

\begin{equation*} \sum_{k=0}^{m} \genfrac{[}{]}{0pt}{}{m}{k} _{q}(-r_{2}^{2})^{k}q^{\binom{k}{2}}\left( r_{1}^{2}q^{t+m}\right) _{k}\left( r_{1}^{2}r_{2}^{2}q^{t+m+k}\right) _{m-k}, \end{equation*}

while the right-hand side becomes $\left( r_{2}^{2}\right) _{m}$.

We apply (6.4) with $n=m$, $a = r_{1}^{2}q^{t+m}$, and $b = r_{2}^{2}$: the above sum equals $\left( r_{2}^{2}\right) _{m}$, which is precisely the transformed right-hand side, and so we get (6.3).
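
Both (6.4) and the identity (6.3) just proved can be confirmed numerically, for example as in the following sketch (with arbitrary admissible sample values of the parameters).

\begin{verbatim}
# Numerical check of (6.4) and of lemma 6.4, i.e. identity (6.3).
def qpoch(a, q, n):
    p = 1.0
    for i in range(n):
        p *= 1.0 - a * q**i
    return p

def qbin(n, k, q):
    return qpoch(q, q, n) / (qpoch(q, q, k) * qpoch(q, q, n - k))

q = 0.3
a, b = 0.8, -0.5                      # sample values for (6.4)
for n in range(8):
    lhs = sum(qbin(n, j, q) * (-b)**j * q**(j * (j - 1) // 2)
              * qpoch(a, q, j) * qpoch(a * b * q**j, q, n - j)
              for j in range(n + 1))
    assert abs(lhs - qpoch(b, q, n)) < 1e-12

r1, r2 = 0.55, -0.4                   # sample values for (6.3)
for m in range(5):
    for t in range(5):
        lhs = sum(qbin(m, k, q) * (-r2**2)**k * q**(k * (k - 1) // 2)
                  * qpoch(r1**2, q, t + m + k) / qpoch(r1**2 * r2**2, q, t + m + k)
                  for k in range(m + 1))
        rhs = (qpoch(r2**2, q, m) * qpoch(r1**2, q, t + m)
               / qpoch(r1**2 * r2**2, q, t + 2 * m))
        assert abs(lhs - rhs) < 1e-12
print("identities (6.4) and (6.3) verified numerically")
\end{verbatim}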

Proof. [Proof of proposition 4.10] We start by inserting (3.22) into (4.9). Then we get

\begin{gather*} D_{n}(y|r_{1},r_{2},q)=\sum_{s=0}^{n} \genfrac{[}{]}{0pt}{}{n}{s} _{q}r_{1}^{n-s}r_{2}^{s}\frac{\left( r_{1}^{2}\right) _{s}}{\left( r_{1}^{2}r_{2}^{2}\right) _{s}}\times\\ \sum_{u=0}^{\left\lfloor n/2\right\rfloor }\frac{\left[ s\right] _{q}!\left[ n-s\right] _{q}!(1-r_{1}r_{2}q^{n-2u})}{\left[ u\right] _{q}!\left[ n-2u\right] _{q}!}R_{n-2u}(y|r_{1}r_{2},q)\times\\ \sum_{m=0}^{u} \genfrac{[}{]}{0pt}{}{u}{m} _{q}\frac{(r_{1}r_{2})^{u-m}}{(1-r_{1}r_{2})\left( r_{1}r_{2}q\right) _{n-m-u}}\sum_{k=0}^{m} \genfrac{[}{]}{0pt}{}{m}{k} _{q} \genfrac{[}{]}{0pt}{}{n-2m}{n+k-s-m} _{q}(-r_{1}r_{2})^{k}q^{\binom{k}{2}}\left( r_{1}r_{2}\right) _{s-k}. \end{gather*}

First, we change the order of summation and get

\begin{gather*} D_{n}(y|r_{1},r_{2},q)=\sum_{u=0}^{\left\lfloor n/2\right\rfloor } \frac{\left[ n\right] _{q}!(1-r_{1}r_{2}q^{n-2u})}{\left[ u\right] _{q}!\left[ n-2u\right] _{q}!\left( r_{1}r_{2}\right) _{n-u+1}} R_{n-2u}(y|r_{1}r_{2},q)\sum_{s=0}^{n}r_{1}^{n-s}r_{2}^{s}\frac{\left( r_{1}^{2}\right) _{s}}{\left( r_{1}^{2}r_{2}^{2}\right) _{s}}\\ \times\sum_{m=0}^{u} \genfrac{[}{]}{0pt}{}{u}{m} _{q}\left( r_{1}r_{2}\right) ^{u-m}\left( r_{1}r_{2}q^{n+1-u-m}\right) _{m}\times\\ \sum_{k=0}^{m} \genfrac{[}{]}{0pt}{}{m}{k} _{q} \genfrac{[}{]}{0pt}{}{n-2m}{n+k-s-m} _{q}(-r_{1}r_{2})^{k}q^{\binom{k}{2}}\left( r_{1}r_{2}\right) _{s-k}. \end{gather*}

On the way, we used the well-known property of the q-Pochhammer symbol (see, e.g., [Reference Koekoek, Lesky and Swarttouw9]):

\begin{equation*} \left( a\right) _{n+m}=\left( a\right) _{n}\left( aq^{n}\right) _{m}. \end{equation*}

Now notice that for $\genfrac{[}{]}{0pt}{}{n-2m}{n-s-(m-k)}_{q}$ to be non-zero, we have to have $n-s\geq m-k$ and $n-2m-n+s+m-k\geq 0$, which leads to $s\geq m+k$. Hence, we have further:

\begin{gather*} D_{n}(y|r_{1},r_{2},q)=\sum_{u=0}^{\left\lfloor n/2\right\rfloor } \frac{\left[ n\right] _{q}!(1-r_{1}r_{2}q^{n-2u})}{\left[ u\right] _{q}!\left[ n-2u\right] _{q}!\left( r_{1}r_{2}\right) _{n-u+1}} R_{n-2u}(y|r_{1}r_{2},q)\times\\ \sum_{m=0}^{u} \genfrac{[}{]}{0pt}{}{u}{m} _{q}(r_{1}r_{2})^{u-m}\left( r_{1}r_{2}q^{n-u-m+1}\right) _{m}\times\\ \sum_{k=0}^{m} \genfrac{[}{]}{0pt}{}{m}{k} _{q}(-r_{1}r_{2})^{k}q^{\binom{k}{2}}\sum_{s=m+k}^{n-(m-k)} \genfrac{[}{]}{0pt}{}{n-2m}{n-s-(m-k)} _{q}r_{1}^{n-s}r_{2}^{s}\frac{\left( r_{1}^{2}\right) _{s}}{\left( r_{1}^{2}r_{2}^{2}\right) _{s}}\left( r_{1}r_{2}\right) _{s-k}\\ =\sum_{u=0}^{\left\lfloor n/2\right\rfloor }\frac{\left[ n\right] _{q}!(1-r_{1}r_{2}q^{n-2u})}{\left[ u\right] _{q}!\left[ n-2u\right] _{q}!\left( r_{1}r_{2}\right) _{n-u+1}}R_{n-2u}(y|r_{1}r_{2},q)\times\\ \sum_{m=0}^{u} \genfrac{[}{]}{0pt}{}{u}{m} _{q}(r_{1}r_{2})^{u-m}\left( r_{1}r_{2}q^{n-u-m+1}\right) _{m}\sum_{k=0}^{m} \genfrac{[}{]}{0pt}{}{m}{k} _{q}(-r_{1}r_{2})^{k}q^{\binom{k}{2}}\\ \times\sum_{t=0}^{n-2m} \genfrac{[}{]}{0pt}{}{n-2m}{t} _{q}r_{1}^{n-t-m-k}r_{2}^{t+m+k}\frac{\left( r_{1}^{2}\right) _{t+m+k} }{\left( r_{1}^{2}r_{2}^{2}\right) _{t+m+k}}\left( r_{1}r_{2}\right) _{t+m}. \end{gather*}

Further, we get

\begin{gather*} D_{n}(y|r_{1},r_{2},q)=\sum_{u=0}^{\left\lfloor n/2\right\rfloor } \frac{\left[ n\right] _{q}!(1-r_{1}r_{2}q^{n-2u})}{\left[ u\right] _{q}!\left[ n-2u\right] _{q}!\left( r_{1}r_{2}\right) _{n-u+1}} R_{n-2u}(y|r_{1}r_{2},q)\times\\ \sum_{m=0}^{u} \genfrac{[}{]}{0pt}{}{u}{m} _{q}(r_{1}r_{2})^{u}\left( r_{1}r_{2}q^{n-u-m+1}\right) _{m}\times\\ \sum_{k=0}^{m} \genfrac{[}{]}{0pt}{}{m}{k} _{q}(-r_{2}^{2})^{k}q^{\binom{k}{2}}\sum_{t=0}^{n-2m} \genfrac{[}{]}{0pt}{}{n-2m}{t} _{q}r_{1}^{n-t-2m}r_{2}^{t}\frac{\left( r_{1}^{2}\right) _{t+m+k}}{\left( r_{1}^{2}r_{2}^{2}\right) _{t+m+k}}\left( r_{1}r_{2}\right) _{t+m}. \end{gather*}

Now, we change the order of summation in the last two sums, and we get

\begin{gather*} D_{n}(y|r_{1},r_{2},q)=\sum_{u=0}^{\left\lfloor n/2\right\rfloor } \frac{\left[ n\right] _{q}!(1-r_{1}r_{2}q^{n-2u})}{\left[ u\right] _{q}!\left[ n-2u\right] _{q}!\left( r_{1}r_{2}\right) _{n-u+1}} R_{n-2u}(y|r_{1}r_{2},q)\times\\ \sum_{m=0}^{u} \genfrac{[}{]}{0pt}{}{u}{m} _{q}(r_{1}r_{2})^{u}\left( r_{1}r_{2}q^{n-u-m+1}\right) _{m}\times\\ \sum_{t=0}^{n-2m} \genfrac{[}{]}{0pt}{}{n-2m}{n-2m-t} _{q}r_{1}^{n-t-2m}r_{2}^{t}\left( r_{1}r_{2}\right) _{t+m}\sum_{k=0}^{m} \genfrac{[}{]}{0pt}{}{m}{k} _{q}(-r_{2}^{2})^{k}q^{\binom{k}{2}}\frac{\left( r_{1}^{2}\right) _{t+m+k} }{\left( r_{1}^{2}r_{2}^{2}\right) _{t+m+k}}. \end{gather*}

Now, notice that after applying lemma 6.4, we get

\begin{gather*} D_{n}(y|r_{1},r_{2},q)=\sum_{u=0}^{\left\lfloor n/2\right\rfloor } \frac{\left[ n\right] _{q}!(1-r_{1}r_{2}q^{n-2u})(r_{1}r_{2})^{u}}{\left[ u\right] _{q}!\left[ n-2u\right] _{q}!\left( r_{1}r_{2}\right) _{n-u+1} }R_{n-2u}(y|r_{1}r_{2},q)\times\\ \sum_{m=0}^{u} \genfrac{[}{]}{0pt}{}{u}{m} _{q}\left( r_{2}^{2}\right) _{m}\left( r_{1}r_{2}q^{n-u-m+1}\right) _{m}\\ \times\sum_{t=0}^{n-2m} \genfrac{[}{]}{0pt}{}{n-2m}{n-2m-t} _{q}r_{1}^{n-t-2m}r_{2}^{t}\left( r_{1}r_{2}\right) _{t+m}\frac{\left( r_{1}^{2}\right) _{t+m}}{\left( r_{1}^{2}r_{2}^{2}\right) _{t+2m}}. \end{gather*}

Notice that

\begin{gather*} \sum_{t=0}^{n-2m} \genfrac{[}{]}{0pt}{}{n-2m}{n-2m-t} _{q}r_{1}^{n-t-2m}r_{2}^{t}\left( r_{1}r_{2}\right) _{t+m}\frac{\left( r_{1}^{2}\right) _{t+m}}{\left( r_{1}^{2}r_{2}^{2}\right) _{t+2m} } =\\ \sum_{s=0}^{n-2m} \genfrac{[}{]}{0pt}{}{n-2m}{s} _{q}r_{1}^{s}r_{2}^{n-2m-s}\left( r_{1}r_{2}\right) _{n-m-s}\frac{\left( r_{1}^{2}\right) _{n-m-s}}{\left( r_{1}^{2}r_{2}^{2}\right) _{n-s}}=\\ \frac{\left( r_{1}^{2}\right) _{m}\left( r_{1}r_{2}\right) _{m}}{\left( r_{1}^{2}r_{2}^{2}\right) _{2m}}\sum_{s=0}^{n-2m} \genfrac{[}{]}{0pt}{}{n-2m}{s} _{q}r_{1}^{s}r_{2}^{n-2m-s}\frac{\left( r_{1}r_{2}q^{m}\right) _{n-2m-s}\left( r_{1}^{2}q^{m}\right) _{n-2m-s}}{\left( r_{1}^{2}r_{2} ^{2}q^{2m}\right) _{n-2m-s}} \end{gather*}

and further that

\begin{gather*} =\frac{\left( r_{1}^{2}\right) _{m}\left( r_{1}r_{2}\right) _{m}}{\left( r_{1}^{2}r_{2}^{2}\right) _{2m}}q^{-m(n-2m)/2}\times\\ \sum_{s=0}^{n-2m} \genfrac{[}{]}{0pt}{}{n-2m}{s} _{q}(q^{m/2}r_{1})^{s}(q^{m/2}r_{2})^{n-2m-s}\frac{\left( r_{1}r_{2} q^{m}\right) _{n-2m-s}\left( r_{1}^{2}q^{m}\right) _{n-2m-s}}{\left( r_{1}^{2}r_{2}^{2}q^{2m}\right) _{n-2m-s}}\\ =\frac{\left( r_{1}^{2}\right) _{m}\left( r_{1}r_{2}\right) _{m}}{\left( r_{1}^{2}r_{2}^{2}\right) _{2m}}\frac{1}{\left( r_{1}^{2}r_{2}^{2} q^{2m}\right) _{n-2m}}\sum_{s=0}^{n-2m} \genfrac{[}{]}{0pt}{}{n-2m}{s} _{q}r_{1}^{s}r_{2}^{n-2m-s}\left( r_{2}^{2}q^{m}\right) _{s}\left( r_{1} ^{2}q^{m}\right) _{n-2m-s}\\ =\frac{\left( r_{1}^{2}\right) _{m}\left( r_{1}r_{2}\right) _{m}}{\left( r_{1}^{2}r_{2}^{2}\right) _{n}}w_{n-2m}(m,r_{1},r_{2},q). \end{gather*}

At the final stage, we used lemma 6.2 and the identity $\left( a\right) _{n+m} = \left( a\right) _{n}\left( aq^{n}\right) _{m}$ multiple times.

Concluding, we have

\begin{gather*} D_{n}(y|r_{1},r_{2},q)=\frac{1}{\left( r_{1}^{2}r_{2}^{2}\right) _{n}} \sum_{u=0}^{\left\lfloor n/2\right\rfloor }\frac{\left[ n\right] _{q}!(1-r_{1}r_{2}q^{n-2u})(r_{1}r_{2})^{u}}{\left[ u\right] _{q}!\left[ n-2u\right] _{q}!\left( r_{1}r_{2}\right) _{n-u+1}}R_{n-2u}(y|r_{1} r_{2},q)\times\\ \sum_{m=0}^{u} \genfrac{[}{]}{0pt}{}{u}{m} _{q}\left( r_{2}^{2}\right) _{m}\left( r_{1}r_{2}q^{n-u-m+1}\right) _{m}\left( r_{1}^{2}\right) _{m}\left( r_{1}r_{2}\right) _{m} w_{n-2m}(m,r_{1},r_{2},q). \end{gather*}

Or after applying the definition of $\gamma_{n,k}(r_{1},r_{2},q)$ given by (4.13), we get

\begin{equation*} D_{n}(y|r_{1},r_{2},q)=\sum_{u=0}^{\left\lfloor n/2\right\rfloor } \frac{\left[ n\right] _{q}!(1-r_{1}r_{2}q^{n-2u})(r_{1}r_{2})^{u}}{\left[ u\right] _{q}!\left[ n-2u\right] _{q}!\left( r_{1}r_{2}\right) _{n-u+1} }R_{n-2u}(y|r_{1}r_{2},q)\gamma_{n,u}(r_{1},r_{2},q). \end{equation*}

Proof. [Proof of lemma 4.12]

Let us denote for brevity $K(x,y) = (1-r_{1}r_{2})f_{CN}\left( y|x,r_{1},q\right) \times f_{CN}\left( x|y,r_{2},q\right) $. As shown in remark 4.9, the function K is symmetric in x and y. Now imagine that we multiply the function K by some function g(x) and integrate the product over S(q) with respect to x; let us call the result h(y). If we instead multiply $K(x,y)$ by g(y) and integrate with respect to y over S(q), we should get h(x).

The proof will be by induction with respect to $s =$ $n-2u$. The induction assumption is the following:

\begin{align*} \gamma_{n,k}(r_{1},r_{2},q) & = \phi_{n-2k}(r_{1} ,r_{2},q)\,\text{and }\\ \int_{S(q)}K(x,y)R_{n}(x|r_{1}r_{2},q)dx & =\phi_{n}(r_{1},r_{2} ,q)R_{n}(y|r_{1}r_{2},q). \end{align*}

So, to start the induction, let us set s = 0. Integrating the right-hand side of (4.8) with respect to x yields $f_{R}(y|r_{1}r_{2},q)$ (by the orthogonality of $\left\{ H_{n}\right\} $, only the n = 0 term survives). Now, integration of the right-hand side of (4.8) with respect to y results in:

\begin{gather*} f_{N}(x|q)\sum_{n\geq0}\frac{H_{2n}(x|q)}{\left[ 2n\right] _{q}!}\int _{S(q)}D_{2n}(y|r_{1},r_{2},q)f_{R}(y|r_{1}r_{2},q)dy=\\ f_{N}(x|q)\sum_{n\geq0}\frac{H_{2n}(x|q)}{\left[ 2n\right] _{q}!} \frac{(1-r_{1}r_{2})(r_{1}r_{2})^{n}\left[ 2n\right] _{q}!}{\left[ n\right] _{q}!(r_{1}r_{2})_{n+1}}\gamma_{2n,n}(r_{1},r_{2},q)\\ =(1-r_{1}r_{2})f_{N}(x|q)\sum_{n\geq0}\frac{H_{2n}(x|q)}{\left[ n\right] _{q}!}\frac{(r_{1}r_{2})^{n}}{(r_{1}r_{2})_{n+1}}\gamma_{2n,n}(r_{1},r_{2},q). \end{gather*}

Hence, using proposition 4.11 (more precisely, (4.16) with m = 0) and the uniqueness of the expansion in orthogonal polynomials, we must have

\begin{equation*} \gamma_{2n,n}(r_{1},r_{2},q)=\phi_{0}(r_{1},r_{2},q)=1, \end{equation*}

for all $n\geq0$.

So now let us take s = m and make the induction assumption that $\gamma_{n,k}\left( r_{1},r_{2},q\right) = $ $\phi_{n-2k}(r_{1},r_{2},q)$ whenever $n-2k \lt m$.

Now let us multiply both sides of (4.8) by $R_{m}(x|r_{1} r_{2},q)$ and integrate over $S\left( q\right) $ with respect to x. Since (3.16) is zero for n > m, on the right-hand side we get

\begin{gather*} f_{R}(y|r_{1}r_{2},q)\sum_{s=0}^{\left\lfloor m/2\right\rfloor }\frac {D_{m-2s}(y|r_{1},r_{2},q)}{\left[ m-2s\right] _{q}!}\int_{S(q)} H_{m-2s}(x|q)R_{m}(x|r_{1}r_{2},q)f_{N}(x|q)dx=\\ f_{R}(y|r_{1}r_{2},q)\sum_{s=0}^{\left\lfloor m/2\right\rfloor }(-r_{1} r_{2})^{s}q^{\binom{s}{2}}\frac{\left[ m\right] _{q}!\left( r_{1} r_{2}\right) _{m-s}}{\left[ s\right] _{q}!}\frac{1}{\left[ m-2s\right] _{q}!}\times\\ \sum_{u=0}^{\left\lfloor m/2\right\rfloor -s}\frac{\left[ m-2s\right] _{q}!(1-r_{1}r_{2}q^{m-2s-2u})(r_{1}r_{2})^{u}}{\left[ u\right] _{q}!\left[ m-2s-2u\right] _{q}!(r_{1}r_{2})_{m-2s-u+1}}R_{m-2s-2u}(y|r_{1}r_{2} ,q)\gamma_{m-2s,u}(r_{1},r_{2},q)=\\ f_{R}(y|r_{1}r_{2},q)\sum_{k=0}^{\left\lfloor m/2\right\rfloor }\frac{\left[ m\right] _{q}!(1-r_{1}r_{2}q^{m-2k})(r_{1}r_{2})^{k}}{\left[ k\right] _{q}!\left[ m-2k\right] _{q}!}R_{m-2k}(y|r_{1}r_{2},q)\gamma_{m-2k,0} (r_{1},r_{2},q)\\ \times\sum_{s=0}^{k} \genfrac{[}{]}{0pt}{}{k}{s} _{q}(-1)^{s}q^{\binom{s}{2}}\frac{\left( r_{1}r_{2}\right) _{m-s}} {(r_{1}r_{2})_{m-k-s+1}}= \end{gather*}
\begin{gather*} f_{R}(y|r_{1}r_{2},q)\sum_{k=0}^{\left\lfloor m/2\right\rfloor }\frac{\left[ m\right] _{q}!(1-r_{1}r_{2}q^{m-2k})(r_{1}r_{2})^{k}}{\left[ k\right] _{q}!\left[ m-2k\right] _{q}!}R_{m-2k}(y|r_{1}r_{2},q)\gamma_{m-2k,0} (r_{1},r_{2},q)\\ \times\sum_{u=0}^{k} \genfrac{[}{]}{0pt}{}{k}{u} _{q}(-1)^{k-u}q^{\binom{k-u}{2}}\frac{\left( r_{1}r_{2}\right) _{m-k+u} }{(r_{1}r_{2})_{m-2k+u+1}}=\\ f_{R}(y|r_{1}r_{2},q)\sum_{k=0}^{\left\lfloor m/2\right\rfloor }\frac{\left[ m\right] _{q}!(1-r_{1}r_{2}q^{m-2k})(r_{1}r_{2})^{k}}{\left[ k\right] _{q}!\left[ m-2k\right] _{q}!}R_{m-2k}(y|r_{1}r_{2},q)\gamma_{m-2k,0} (r_{1},r_{2},q)\\ \times\sum_{u=0}^{k} \genfrac{[}{]}{0pt}{}{k}{u} _{q}(-1)^{k-u}q^{\binom{k-u}{2}}\frac{\left( r_{1}r_{2}q^{m-2k+u}\right) _{k}}{(1-r_{1}r_{2}q^{m-2k+u})}=\\ f_{R}(y|r_{1}r_{2},q)R_{m}(y|r_{1}r_{2},q)\phi_{m}(r_{1},r_{2},q). \end{gather*}

We applied here the induction assumption, the relation $\gamma_{m,0}(r_{1},r_{2},q)=\phi_{m}(r_{1},r_{2},q)$ (which follows directly from (4.13) and lemma 6.2), as well as lemma 6.3 with $a = r_{1}r_{2}q^{m-2k}$.

Now, let us multiply (4.8) by $R_{m}(y|r_{1}r_{2},q)$ and integrate with respect to y over S(q); we then get

\begin{gather*} f_{N}(x|q)\sum_{n\geq0}\frac{1}{\left[ n\right] _{q}!}H_{n}(x|q)\int _{S(q)}D_{n}(y|r_{1},r_{2},q)R_{m}(y|r_{1}r_{2},q)f_{R}(y|r_{1}r_{2},q)dy=\\ f_{N}(x|q)\sum_{u\geq0}\frac{H_{2u+m}(x|q)\left[ 2u+m\right] _{q} !(1-r_{1}r_{2}q^{m})(r_{1}r_{2})^{u}\left[ m\right] _{q}!(r_{1}^{2}r_{2}^{2})_{m} (1-r_{1}r_{2})\gamma_{2u+m,u}(r_{1},r_{2},q)}{\left[ 2u+m\right] _{q}!\left[ u\right] _{q}!\left[ m\right] _{q}!\left( r_{1}r_{2}\right) _{m+u+1}(1-r_{1}r_{2}q^{m})}=\\ (1-r_{1}r_{2})f_{N}(x|q)(r_{1}^{2}r_{2}^{2})_{m}\sum_{u\geq0}\frac {(r_{1}r_{2})^{u}H_{2u+m}(x|q)}{\left[ u\right] _{q}!\left( r_{1}r_{2}\right) _{m+u+1} }\gamma_{2u+m,u}(r_{1},r_{2},q). \end{gather*}

Now, in order for this expression to be equal to $\phi_{m}(r_{1},r_{2},q)R_{m}(x|r_{1} r_{2},q)f_{R}(x|r_{1}r_{2},q)$, as it must be by the symmetry of K and the first part of the proof, in view of proposition 1(3) of [Reference Szabłowski31], we must have

\begin{equation*} \gamma_{2u+m,u}(r_{1},r_{2},q)=\phi_{m}(r_{1},r_{2},q). \end{equation*}

References

Al-Salam, W. A. and Ismail, M. E. H.. q-beta integrals and the q-Hermite polynomials. Pac. J. Math. 135 (1988), 209–221. MR0968609 (90c:33001).
Andrews, G. E., Askey, R. and Roy, R.. Special functions, Encyclopedia of Mathematics and its Applications, 71, (Cambridge University Press, Cambridge, 1999) ISBN: 0-521-62321-9, 0-521-78988-5. MR1688958 (2000g:33001).
Askey, R. A., Rahman, M. and Suslov, S. K.. On a general q-Fourier transformation with nonsymmetric kernels. J. Comput. Appl. Math. 68 (1996), 25–55. MR1418749 (98m:42033).
Askey, R. and Wilson, J.. Some basic hypergeometric orthogonal polynomials that generalize Jacobi polynomials. Mem. Amer. Math. Soc. 54 (1985).
Bryc, W., Matysiak, W. and Szabłowski, P. J.. Probabilistic aspects of Al-Salam-Chihara polynomials. Proc. Amer. Math. Soc. 133 (2005), 1127–1134 (electronic). MR2117214 (2005m:33033).
Carlitz, L.. Generating functions for certain Q-orthogonal polynomials. Collect. Math. 23 (1972), 91–104. MR0316773 (47 #5321).
Gasper, G. and Rahman, M.. Positivity of the Poisson kernel for the continuous q-ultraspherical polynomials. SIAM J. Math. Anal. 14 (1983), 409–420. MR0688587 (84f:33008).
Ismail, M. E. H.. Classical and Quantum Orthogonal Polynomials in One Variable. With two Chapters by Walter Van Assche. With a Foreword by Richard A. Askey. Encyclopedia of Mathematics and its Applications (98), (Cambridge University Press, Cambridge, 2005) ISBN: 978-0-521-78201-2; 0-521-78201-5. MR2191786 (2007f:33001).
Koekoek, R., Lesky, P. A. and Swarttouw, R. F.. Hypergeometric Orthogonal Polynomials and Their q-analogues. With a Foreword by Tom H. Koornwinder. (Springer Monographs in Mathematics, Springer-Verlag, Berlin, 2010) ISBN: 978-3-642-05013-8. (2011e:33029).
Koelink, H. T. and Van der Jeugt, J.. Bilinear generating functions for orthogonal polynomials. Constr. Approx. 15 (1999), 481–497. MR1702811.
Lancaster, H. O.. The structure of bivariate distributions. Ann. Math. Stat. 29 (1958), 719–736.
Lancaster, H. O.. Correlation and complete dependence of random variables. Ann. Math. Stat. 34 (1963), 1315–1321.
Lancaster, H. O.. Correlations and canonical forms of bivariate distributions. Ann. Math. Stat. 34 (1963), 532–538.
Lancaster, H. O.. Joint probability distributions in the Meixner classes. J. Roy. Statist. Soc. Ser. B 37 (1975), 434–443. MR0394971 (52 #15770).
Mason, J. C. and Handscomb, D. C.. Chebyshev polynomials. (Chapman & Hall/CRC, Boca Raton, FL, 2003) ISBN: 0-8493-0355-9. MR1937591.
Mercer, J.. Functions of positive and negative type and their connection with the theory of integral equations. Phil. Trans. R. Soc. A 209 (1909), 415–446.
Rahman, M. and Tariq, Q. M.. Poisson kernel for the associated continuous q-ultraspherical polynomials. Methods Appl. Anal. 4 (1997), 77–90. MR1457206 (98k:33038).
Rogers, L. J.. On the expansion of certain infinite products. Proc. London Math. Soc. 24 (1893), 337–352.
Rogers, L. J.. Second memoir on the expansion of certain infinite products. Proc. London Math. Soc. 25 (1894), 318–343.
Rogers, L. J.. Third memoir on the expansion of certain infinite products. Proc. London Math. Soc. 26 (1895), 15–32.
Szabłowski, P. J.. On the q-Hermite polynomials and their relationship with some other families of orthogonal polynomials. Dem. Math. 66 (2013), 679–708. http://arxiv.org/abs/1101.2875.
Szabłowski, P. J.. Expansions of one density via polynomials orthogonal with respect to the other. J. Math. Anal. Appl. 383 (2011), 35–54. http://arxiv.org/abs/1011.1492.
Szabłowski, P. J.. q-Wiener and $(\alpha,q)$-Ornstein–Uhlenbeck processes. A generalization of known processes. Theor. Probab. Appl. 56 (2011), 742–772. http://arxiv.org/abs/math/0507303.
Szabłowski, P. J.. On the structure and probabilistic interpretation of Askey-Wilson densities and polynomials with complex parameters. J. Funct. Anal. 262 (2011), 635–659. http://arxiv.org/abs/1011.1541.
Szabłowski, P. J.. On summable form of Poisson-Mehler kernel for big q-Hermite and Al-Salam-Chihara polynomials. Infin. Dimens. Anal. Quantum Probab. Relat. Top. 15 (2012), 18. http://arxiv.org/abs/1011.1848.
Szabłowski, P. J.. Befriending Askey–Wilson polynomials. Infin. Dimens. Anal. Quantum Probab. Relat. Top. 17 (2014), (25 pages). http://arxiv.org/abs/1111.0601.
Szabłowski, P. J.. On Markov processes with polynomial conditional moments. Trans. Amer. Math. Soc. 367 (2015), 8487–8519. MR3403063. http://arxiv.org/abs/1210.6055.
Szabłowski, P. J.. Around Poisson-Mehler summation formula. Hacet. J. Math. Stat. 45 (2016), 1729–1742. MR3699734. http://arxiv.org/abs/1108.3024.
Szabłowski, P. J.. On stationary Markov processes with polynomial conditional moments. Stoch. Anal. Appl. 35 (2017), 852–872. MR3686472. http://arxiv.org/abs/1312.4887.
Szabłowski, P. J.. Markov processes, polynomial martingales and orthogonal polynomials. Stochastics 90 (2018), 61–77. MR3750639.
Szabłowski, P. J.. On three dimensional multivariate version of q-normal distribution and probabilistic interpretations of Askey-Wilson, Al-Salam–Chihara and q-ultraspherical polynomials. J. Math. Anal. Appl. 474 (2019), 1021–1035. MR3926153.
Szabłowski, P. J.. Multivariate generating functions involving Chebyshev polynomials and some of its generalizations involving q-Hermite ones. Colloq. Math. 169 (2022), 141–170. https://arxiv.org/abs/1706.00316.
Szabłowski, P. J.. On the families of polynomials forming a part of the Askey-Wilson scheme and their probabilistic applications. Infin. Dimens. Anal. Quantum Probab. Relat. Top. 25 (2022), 57 pp. MR4408180.
Szabłowski, P. J.. On positivity of orthogonal series and its applications in probability. Positivity 26 (2022), 20. https://arxiv.org/abs/2011.02710.
Szabłowski, P. J.. Stationary, Markov, stochastic processes with polynomial conditional moments and continuous paths. Stochastics 96 (2024), 1007–1027.