
Random analytic functions with a prescribed growth rate in the unit disk

Published online by Cambridge University Press:  26 April 2024

Xiang Fang*
Affiliation:
Department of Mathematics, National Central University, Chungli, Taiwan (R.O.C.)
Pham Trong Tien
Affiliation:
Faculty of Mathematics, Mechanics and Informatics, VNU University of Science, Vietnam National University, Hanoi, Vietnam e-mail: [email protected]

Abstract

Let $\mathcal {R}f$ be the randomization of an analytic function over the unit disk in the complex plane

$$ \begin{align*}\mathcal{R} f(z) =\sum_{n=0}^\infty a_n X_n z^n \in H({\mathbb D}), \end{align*} $$
where $f(z)=\sum _{n=0}^\infty a_n z^n \in H({\mathbb D})$ and $(X_n)_{n \geq 0}$ is a standard sequence of independent Bernoulli, Steinhaus, or complex Gaussian random variables. In this paper, we demonstrate that prescribing a polynomial growth rate for random analytic functions over the unit disk leads to rather satisfactory characterizations of those $f \in H({\mathbb D})$ such that ${\mathcal R} f$ admits a given rate almost surely. In particular, we show that the growth rate of the random functions, the growth rate of their Taylor coefficients, and the asymptotic distribution of their zero sets mutually and completely determine each other. Although the problem is purely complex analytic, the key strategy in the proofs is to introduce a class of auxiliary Banach spaces, which facilitate quantitative estimates.

This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of Canadian Mathematical Society

1 Introduction and main results

The study of random analytic functions (RAF) has a long and rich history, with a dominating theme being the distribution of the zero sets or, more generally, that of a-values. In this paper, however, our primary concern lies in the metrical aspect of RAF: how does one measure the size of an RAF?

The first significant result along this line is the Littlewood theorem: let $f(z)=a_0+a_1z+a_2z^2+\cdots \in H^2$ be an element of the Hardy space over the unit disk. Let $\{\varepsilon _n\}_{n\geq 0}$ be a sequence of independent, identically distributed Bernoulli random variables, that is, ${\mathbb P}(\varepsilon _n=1)={\mathbb P}(\varepsilon _n=-1)=\frac {1}{2}$ for all $n\geq 0.$ Littlewood’s theorem, proven in 1930 [17], states that

$$ \begin{align*}\mathcal{R}f(z):= \sum_{n=0}^\infty a_n \varepsilon_n z^n \in H^p\end{align*} $$

almost surely for all $p \ge 2.$ When $f \notin H^2$, for almost every choice of signs, $\mathcal {R} f$ has a radial limit almost nowhere. As a consequence, according to the fundamental theory of the Hardy spaces [5, Theorem 2.2, p. 17], $\mathcal {R} f$ is not in any $H^p$ almost surely. The same holds true for a standard Steinhaus sequence [18, 25] and a standard Gaussian sequence [11, p. 54].

Determining when the random series $\mathcal {R} f$ represents an $H^{\infty }$-function almost surely is much harder, where $H^{\infty }$ denotes the space of bounded analytic functions over the unit disk $\mathbb {D}$. Several partial results since the 1930s, encompassing both necessary conditions and sufficient conditions, have been obtained by noted analysts, including Paley, Zygmund, and Salem [24, 29]. In [2], Billard demonstrated the equivalence between the Bernoulli case and the Steinhaus case. A remarkable characterization was eventually achieved by Marcus and Pisier in 1978 [20] (see also [11, 21]). Their characterization relies on the celebrated Dudley–Fernique theorem.

Despite its intrinsic interest and the active research on various aspects of random analytic functions, such as on the distribution of zeros of canonical Gaussian analytic functions, the study of the metrical properties of RAF has remained largely stagnant in recent decades. An elegant exception to this is a theorem due to Cochran, Shapiro, and Ullrich concerning the Dirichlet space [4]. In 1993, they proved that a Dirichlet function with random signs is almost surely a Dirichlet multiplier. This result has been further generalized by Liu in [19]. More recently, Cheng, Liu, and the first author have proved a Littlewood-type theorem for random Bergman functions [3]. The exploration of this theme has been continued by the current authors, particularly for Fock spaces [7].

In this paper, we address a fundamental aspect in complex analysis, namely, the growth rate of analytic functions, for which a gap appears to exist in the literature: while the growth rate of RAF for entire functions has been fairly well understood since the celebrated work of Littlewood and Offord in 1948 [18], relatively little is known about the growth rate of RAF in the unit disk, where a polynomial rate appears to be the most natural. In the deterministic context, a closely related framework is formulated in Section 4.3 of the monograph [8]. In the present paper, we show that, by imposing a polynomial growth rate, a considerably satisfactory theory can be established for random analytic functions over the unit disk. Our key finding suggests that the following three aspects, when suitably formulated, are mutually determinative:

  • the (polynomial) growth rate;

  • the growth of Taylor coefficients; and

  • the asymptotic distribution of the zero sets.

This phenomenon, which we refer to as “rigidity,” stands in contrast to entire functions, where estimates instead of rigidity are often observed.

Let $H({\mathbb D})$ denote the space of analytic functions over the unit disk ${\mathbb D}$ in the complex plane.

Definition 1.1. A function $f \in H({\mathbb D})$ has a polynomial growth rate if there exists a constant $\alpha> 0$ such that

$$ \begin{align*}\qquad \qquad |f(z)| = O((1 - |z|)^{-\alpha}) \qquad \text{ as } \quad |z| \to 1. \end{align*} $$

In this case, the infimum of such constants $\alpha $ is called the growth rate of f.
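For instance, for each $\alpha > 0$, the function $f(z) = (1-z)^{-\alpha}$ satisfies $|f(z)| \leq (1 - |z|)^{-\alpha}$ on ${\mathbb D}$, with equality along the positive radius, and hence has growth rate exactly $\alpha$ (see Example 3.2 below).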

Let ${\mathbb G}_{<\infty }$ denote the collection of analytic functions with a finite polynomial growth rate. The motivation for our work is the common belief, often found in folklore, that a randomized summation tends to exhibit enhanced regularity. An exemplary illustration of this enhancement in regularity is observed in the randomized p-harmonic series $\sum _{n=1}^\infty \pm \frac {1}{n^p}$, which converges almost surely if and only if $p> \frac {1}{2}$ (indeed, for a deterministic sequence $(c_n)$, the series $\sum_n \pm c_n$ converges almost surely if and only if $\sum_n c_n^2 < \infty$, by Kolmogorov’s three-series theorem), indicating a $\frac {1}{2}$-order improvement in regularity compared to the deterministic p-harmonic series. The 1930 Littlewood theorem aligns with this perspective as the first example involving analytic functions. Consequently, in terms of growth rates, a natural question arises:

Question A How much growth rate improvement can one gain for the random function

$$ \begin{align*}\mathcal{R}f(z) := \sum_{n=0}^\infty \pm a_n z^n, \end{align*} $$

when compared with that of $f(z)=\sum _{n=0}^\infty a_n z^n \in H({\mathbb D})$ ?

We shall see that the rate of growth of $\mathcal {R}f$ is indeed always at most that of f, and the amount of improvement in terms of $\alpha $ is at most $\frac {1}{2}$ , which is sharp. On the other hand, this prompts a more fundamental question:

Question B How to characterize those functions $f \in H({\mathbb D})$ such that the random function ${\mathcal R} f$ belongs to ${\mathbb G}_{<\infty }$ almost surely?

Once we address the aforementioned inquiries (see Theorems C and B below), our investigation shifts toward examining the zero sets of the corresponding $\mathcal {R} f$. The discovery, unexpected at least to us, is that the integrated counting function $N_{{\mathcal R} f}(r)$ exhibits a strong rigidity (Theorem D).

In this paper, we consider the following three types of randomization.

Definition 1.2 A random variable X is called Bernoulli if ${\mathbb P}(X=1) = {\mathbb P}(X=-1) = \frac {1}{2}$, and Steinhaus if it is uniformly distributed on the unit circle; by a standard complex Gaussian, i.e., $N_{\mathbb {C}}(0,1)$, we mean the law of $U+iV$, where $U, V$ are independent, real Gaussian variables with zero mean and $\text {Var}(U)=\text {Var}(V)=\frac {1}{2}$. For X either Bernoulli, Steinhaus, or standard complex Gaussian, a standard X sequence is a sequence of independent, identically distributed X variables, denoted by $(\varepsilon _n)_{n \geq 0}$, $(e^{2\pi i \alpha _n})_{n \geq 0}$, and $(\xi _n)_{n\geq 0}$, respectively. Lastly, a standard random sequence $(X_n)_{n \geq 0}$ refers to either a standard Bernoulli, a standard Steinhaus, or a standard complex Gaussian sequence.

In the rest of this paper, we shall always assume that $(X_n)_{n \geq 0}$ is a standard random sequence if not otherwise indicated, and for such a sequence, we define

$$ \begin{align*}{\mathcal R} f(z): = \sum_{n=0}^{\infty} a_n X_n z^n \quad \text{ for } \quad f(z) = \sum_{n=0}^{\infty}a_n z^n \in H({\mathbb D}). \end{align*} $$

An expert might wonder whether the examination of the Steinhaus and Gaussian cases can be simplified by applying Kahane’s reduction principle to the Bernoulli case [11]. While such an approach may be applicable in certain scenarios, it falls short of attaining the desired level of rigidity, as illustrated, say, by Proposition 2.9.

Let ${\mathcal X}\subset H({\mathbb D})$ be any subspace of analytic functions over ${\mathbb D}$ . We introduce another (deterministic) subspace ${\mathcal X}_*\subset H({\mathbb D})$ , which we call the random symbol space of ${\mathcal X}$ , by

$$ \begin{align*}{\mathcal X}_*: = \{ f \in H({\mathbb D}): {\mathbb P}({\mathcal R} f \in {\mathcal X}) = 1 \}. \end{align*} $$

Clearly, ${\mathcal X}_*$ depends, a priori, on the choice of $(X_n)_{n \geq 0}$.
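To illustrate the notation, Littlewood’s theorem quoted above may be restated as $(H^p)_* = H^2$ for every $p \in (0, \infty)$, and this particular random symbol space is the same for all three standard random sequences.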

As a preparatory step, we show that there exists a well-defined notion of growth rate for random analytic functions.

Lemma A (Existence)

For any $f \in H({\mathbb D})$ , the following statements hold:

  (a) ${\mathbb P}({\mathcal R} f \ \text {has a polynomial growth rate}) \in \{0, 1\}$.

  (b) ${\mathcal R} f$ has a polynomial growth rate almost surely if and only if f has a polynomial growth rate.

  (c) If ${\mathcal R} f $ has a polynomial growth rate almost surely, then there exists a constant $\alpha \in [0, \infty )$ such that the growth rate of ${\mathcal R} f$ is almost surely equal to $\alpha $.

An immediate consequence is that

$$ \begin{align*}\left({\mathbb G}_{< \infty}\right)_* = \bigsqcup_{\alpha \geq 0} \left({\mathbb G}_{\alpha}\right)_*, \end{align*} $$

where $\bigsqcup $ represents the disjoint union, and ${\mathbb G}_{\alpha }$ denotes the collection of analytic functions with growth rate exactly $\alpha $.

Next, we show that a concise characterization of $\left ({\mathbb G}_{\alpha }\right )_*$ , in terms of Taylor coefficients, can be obtained. For this, we need:

Definition 1.3 A sequence of complex numbers $(a_n)_{n\ge 0}$ has a polynomial growth rate if there exists some constant $\alpha> 0$ such that

$$ \begin{align*}\qquad \qquad |a_n| = O(n^{\alpha}) \qquad \text{ as } \qquad n \to \infty. \end{align*} $$

The infimum of such constants $\alpha $ is called the growth rate of $(a_n)_{n\ge 0}$ .

Theorem B (Characterization)

Let $\alpha \in [0, \infty )$ and $f(z) = \sum _{n=0}^{\infty } a_nz^n \in H({\mathbb D})$ . Then the random function ${\mathcal R} f$ has a growth rate $\alpha $ almost surely if and only if the growth rate of the sequence $\left (\sum _{k=0}^n|a_k|^2\right )^{\frac {1}{2}}$ is also $\alpha $ .
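For a simple illustration of Theorem B, take $f(z) = \sum_{n=0}^{\infty} z^n = \frac{1}{1-z}$. Then $\left(\sum_{k=0}^n|a_k|^2\right)^{\frac{1}{2}} = (n+1)^{\frac{1}{2}}$ has growth rate $\frac{1}{2}$, so ${\mathcal R} f$ has growth rate $\frac{1}{2}$ almost surely, although f itself has growth rate $1$ (cf. Example 3.2 below).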

Interestingly, our proof of the above theorem relies heavily on Banach space techniques. For Question A, we show next that the rate of ${\mathcal R} f$ never exceeds that of f, and the amount of improvement in terms of $\alpha $ is at most $\frac {1}{2}$.

Theorem C (Regularity)

Let $\alpha \in [0, \infty )$ .

  (a) If $f \in H({\mathbb D})$ has a growth rate $\alpha $, then the growth rate of ${\mathcal R} f$ belongs to $\left [\max \{\alpha - \frac {1}{2}, 0\}, \alpha \right ]$ almost surely.

  (b) For each $\alpha ' \in \left [\max \{\alpha - \frac {1}{2}, 0\}, \alpha \right ]$, there is a function $f \in H({\mathbb D})$ with growth rate $\alpha $ such that ${\mathcal R} f$ has growth rate $\alpha '$ almost surely.

The above result is one of the two Littlewood-type theorems which we shall prove in Section 3. The other one is Theorem 3.1.

We now proceed to the second characterization of $ \left ({\mathbb G}_{\alpha }\right )_*$. In Section 4, we study the zero sets of $\mathcal {R}f$, a topic with an extensive literature in general; an in-depth analysis for $f \in \left ({\mathbb G}_{\alpha }\right )_*$ might be better conducted in a separate work. In this paper, our focus is on the asymptotic behaviors of the counting function $n_{\mathcal {R}f}(r)$ and the integrated counting function $N_{\mathcal {R}f}(r)$ for $f \in H(\mathbb {D})$. Here, $n_{f}(r)$ denotes the number of zeros, counted with multiplicity, of f within the disk $|z| < r$. We also define

$$ \begin{align*}N_f(r): = \int_0^r \frac{n_f(t)}{t}dt \ \text{ if } f(0) \ne 0 \ \ \text{ or } \ \ N_f(r): = \int_{\frac{1}{2}}^r \frac{n_f(t)}{t}dt \ \text{ if } f(0) = 0. \end{align*} $$
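Recall that, when $f(0) \neq 0$, Jensen’s formula reads

$$ \begin{align*}N_f(r) = \frac{1}{2\pi}\int_0^{2\pi} \log|f(re^{i\theta})| d\theta - \log|f(0)|, \end{align*} $$

an identity which will be used repeatedly in Section 4.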

Our second characterization of $ \left ({\mathbb G}_{\alpha }\right )_*$ is the following:

Theorem D (Rigidity)

Let $\alpha \in [0, \infty )$ and $f \in H({\mathbb D})$ . Then the following statements are equivalent:

  (i) The function f belongs to $({\mathbb G}_{\alpha })_*$.

  (ii) $ \displaystyle \limsup _{r \to 1}\frac {N_{{\mathcal R} f}(r)}{\log \frac {1}{1-r}} = \alpha $ almost surely.

  (iii) $\displaystyle \limsup _{r \to 1}\frac {{\mathbb E}(N_{{\mathcal R} f}(r))}{\log \frac {1}{1-r}} = \alpha $.
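For a concrete instance, the function $f(z) = \frac{1}{1-z}$ belongs to $({\mathbb G}_{1/2})_*$ (see Example 3.2 below), so Theorem D yields $\displaystyle \limsup_{r \to 1}\frac{N_{{\mathcal R} f}(r)}{\log \frac{1}{1-r}} = \frac{1}{2}$ almost surely.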

In summary, we establish that three aspects of ${\mathcal R} f$ are equivalent in a certain sense: the growth of the random function, the growth of its Taylor coefficients, and the distribution of its zero set.

As for the proof of Theorem D, when the random function ${\mathcal R} f$ is induced by a standard complex Gaussian sequence, one has the advantage of utilizing the Edelman–Kostlan formula, from which good knowledge on the expected number of zeros of ${\mathcal R} f$, i.e., on ${\mathbb E}(n_{{\mathcal R} f}(r)),$ follows quickly. This allows us to draw several conclusions on the asymptotic behavior of ${\mathbb E}(n_{{\mathcal R} f}(r))$. This is, however, far from being enough for our purpose. Three other ingredients play important roles in the proof: the estimates obtained by the methods of [4], the new Banach space $H_\psi $ with $\psi (x)=x^\alpha \log (x),$ and an estimate of [22].

1.1 Blaschke condition

Lastly, we examine various convergence exponents related to the Blaschke condition, which is perhaps the most important geometric condition for zero sets in the unit disk [5, Section 2.2]. It is, however, never satisfied by the zero set $(z_n)_{n \geq 1}$ of ${\mathcal R} f$ for any $f \in ({\mathbb G}_{\alpha })_*$ with $\alpha>0$, as an immediate consequence of Corollary 4.3 or [22, Theorem 1.3]; that is,

$$ \begin{align*}\sum_{n=1}^{\infty} (1-|z_n|)=\infty \quad \text{almost surely}. \end{align*} $$

This prompts us to take a closer look at the Blaschke-type conditions, and we introduce four notions of convergence exponent in Section 4.2. Then, as an application of Corollary 4.3, we show that they are indeed the same and equal to one (almost surely) for any $\alpha>0$ (Corollary 4.4).

1.2 Methodology

Most literature on the growth rate of random analytic functions concerns entire functions, and relatively little is known over the unit disk. Moreover, techniques in the literature are usually complex analytic, and the conclusions are often approximate. To obtain sharp results such as the rigidity in Theorem D, in this paper, we devise a rather different route of proof featuring a functional analysis approach. The key strategy in our arguments is to introduce an auxiliary class of Banach spaces of analytic functions, which allows us to obtain quantitative estimates such as those by the methods in [4]. This Banach space approach also allows us to make effective use of entropy integrals, for which we obtain new estimates of independent interest.

The rest of this paper is organized as follows. Section 2 is devoted to Question B. To answer this question, we begin by introducing the Banach spaces $H_\psi $ and analyzing their symbol spaces $(H_\psi )_*$ . Section 3 studies the regularity improvement under randomization, thereby proving Theorem C and answering Question A in particular. The proof of Theorem D is presented in Section 4.1. Section 4.2 defines four convergence exponents related to the classical Blaschke condition, and they are shown to be the same and equal to one when $\alpha>0$ . Finally, the sharpness of various estimates is discussed in Section 4.3.

1.3 Notations

The abbreviation “a.s.” stands for “almost surely.” We assume that all random variables are defined on a probability space $(\Omega , {\mathcal F}, {\mathbb P})$ with expectation denoted by ${\mathbb E}(\cdot )$. Moreover, $A \lesssim B$ (or, $A \gtrsim B$) means that there exists a positive constant C, depending only on the indexes $\alpha , \beta, \ldots $, such that $A \leq CB$ (or, respectively, $A \geq \frac {B}{C}$), and $A \simeq B$ means that both $A \lesssim B$ and $A \gtrsim B$ hold. Lastly, $m(\cdot )$ denotes the Lebesgue measure.

2 The random symbol space $({\mathbb G}_{\alpha })_*$

In this section, we aim to characterize the random symbol space $({\mathbb G}_{\alpha })_*$ using Taylor coefficients. To achieve this, we introduce a family of Banach spaces denoted as $H_\psi $ and study their symbol spaces $(H_\psi )_*$ . Although a complete characterization of $(H_\psi )_*$ remains elusive, we are able to obtain a sufficient condition (Proposition 2.4) and a necessary condition (Proposition 2.5), which are sufficiently close to allow us to derive a precise characterization of $\left ({\mathbb G}_{\alpha }\right )_*$ and, consequently, $\left ({\mathbb G}_{\leq \alpha }\right )_*$ , where ${\mathbb G}_{\leq \alpha }$ denotes the set of analytic functions with a growth rate at most $\alpha $ .

Following [1], by a doubling weight, we mean an increasing function $\psi : [1, \infty ) \to [0, \infty )$ such that $\psi (2x) = O(\psi (x))$ as $x \to \infty $. For each doubling weight $\psi $, we introduce a Banach space by

$$ \begin{align*}H_{\psi}: = \left\{ f \in H({\mathbb D}): \|f\|_{H_{\psi}}: = \sup_{z \in {\mathbb D}} \frac{|f(z)|}{\psi(1/(1 - |z|))} < \infty \right\}. \end{align*} $$

For the standard weight $\psi _{\alpha }(x): = x^{\alpha }$ with $\alpha> 0$, we shall write $H_{\alpha }$ instead of $H_{\psi _{\alpha }}$. Another weight of importance to us (in the proof of Theorem D) is $\psi (x)=x^\alpha \log x$. Such spaces $H_{\psi }$ were first studied by Shields and Williams [31, 32] in a different setting.

Our first set of techniques builds on arguments in [4]. It is worth mentioning that in [4], the authors considered standard real Gaussian variables. Nevertheless, every outcome in that study can be extended to complex Gaussian variables by treating the real and imaginary components separately. The following result, which extends Theorem 8 in [4], will be of repeated use.

Proposition 2.1 Let $\psi $ be a doubling weight and $f \in H({\mathbb D})$ . Then the following statements are equivalent:

  (i) ${\mathcal R} f \in H_{\psi }$ a.s.

  (ii) ${\mathbb E}(\|{\mathcal R} f\|^s_{H_{\psi }}) < \infty $ for some $s> 0$.

  (iii) ${\mathbb E}(\|{\mathcal R} f\|^s_{H_{\psi }}) < \infty $ for any $s> 0$.

Moreover, the quantities $\left ( {\mathbb E}(\|{\mathcal R} f\|^s_{H_{\psi }}) \right )^{\frac {1}{s}}$ are equivalent for all $s> 0$, up to constants depending only on s.

To prove Proposition 2.1, we need an auxiliary lemma which extends [4, Lemma 10]. Here and in what follows, for an analytic function $f(z) = \sum _{n=0}^{\infty }a_nz^n$, let $s_nf(z): = \sum _{k=0}^n a_k z^k$ denote its nth Taylor polynomial.

Lemma 2.2 Let $\psi $ be a doubling weight and $(X_n)_{n \geq 0}$ be a sequence of independent and symmetric random variables. If a random function ${\mathcal R} f(z): = \sum _{n=0}^{\infty } a_n X_n z^n$ belongs to the space $H_{\psi }$ a.s., then its Taylor series $(s_n{\mathcal R} f)_{n \ge 0}$ is a.s. bounded in $H_{\psi }$ , i.e., $\sup _{n \geq 0} \|s_n{\mathcal R} f\|_{H_{\psi }} < \infty $ a.s.

Proof Since the arguments are similar to those of the proof of [4, Lemma 10], here we only outline the key points and indicate the differences. We fix an increasing sequence of positive numbers $r_m \to 1$ as $m \to \infty $. Then, for each $m \geq 1$, the function ${\mathcal R} f_{r_m}(z): = \sum _{n=0}^{\infty } r_m^n a_n X_n z^n$ belongs to $H_{\psi }$ a.s.; moreover, it is not difficult to see that $ \|{\mathcal R} f_{r_m}\|_{H_{\psi }} \to \|{\mathcal R} f\|_{H_{\psi }}$ as $m \to \infty $ and the Taylor polynomials $(s_n {\mathcal R} f_{r_m})_{n \geq 0}$ of ${\mathcal R} f_{r_m}$ converge to ${\mathcal R} f_{r_m}$ in $H_{\psi }$. From this and the A-bounded Marcinkiewicz–Zygmund–Kahane theorem [16, Theorem II.4], we conclude that the Taylor series $(s_n{\mathcal R} f)_{n \ge 0}$ is a.s. bounded in $H_{\psi }$.

Proof of Proposition 2.1

Again, the proof relies on arguments in [4], which the reader may consult for more details, since only the key points are outlined here. (iii) $\Longrightarrow $ (ii) $\Longrightarrow $ (i) is trivial, and (ii) $\Longrightarrow $ (iii) follows from [4, Lemma 11]. It remains to prove (i) $\Longrightarrow $ (ii). If ${\mathcal R} f \in H_{\psi }$ a.s., then, by Lemma 2.2, the Taylor series $(s_n {\mathcal R} f)_{n \geq 0}$ is a.s. bounded in $H_{\psi }$, i.e., ${\mathbb P}(M < \infty ) = 1$, where $M: = \sup _{n \geq 0} \|s_n {\mathcal R} f\|_{H_{\psi }}$. Thus, by [4, Lemma 9], for a small enough constant $\lambda> 0$, one has $ {\mathbb E}\left (\exp \left ( \lambda \|{\mathcal R} f\|_{H_{\psi }}\right ) \right ) \leq {\mathbb E}(\exp (\lambda M)) < \infty , $ and (ii) follows from this and Jensen’s inequality. Moreover, [4, Lemma 11] implies that the quantities $({\mathbb E}(\|{\mathcal R} f\|^s_{H_{\psi }}))^{\frac {1}{s}}$ are equivalent for all $s> 0$, up to a constant depending only on s.

The next set of techniques we shall need are estimates for Dudley–Fernique-type entropy integrals, which perhaps deserve more attention in complex analysis and for which we derive new estimates of independent interest. Recall that the nondecreasing rearrangement of a nonnegative function $\rho : [0,1] \to {\mathbb R}^+$ is defined as $ \overline {\rho }(s): = \sup \{y: m(\{t: \rho (t) < y\}) < s\}. $ For a sequence of complex numbers $(a_n)_{n \geq 0}$, as in [21, p. 8], one defines invariant pseudometrics $\rho (t+s, s) := \rho (t)$ on the unit circle $\mathbb {T}$ by

$$ \begin{align*}\rho(t) : = \left( \sum_{n=0}^{\infty} |a_n|^2 |e^{2\pi n t i} - 1|^2 \right)^{\frac{1}{2}} \ \ \text{ and } \ \ \rho_n(t) : = \left( \sum_{k=0}^{n} |a_k|^2 |e^{2\pi k t i} - 1|^2 \right)^{\frac{1}{2}}. \end{align*} $$

Lemma 2.3 Let $(a_n)_{n \geq 0}$ be a sequence of complex numbers. Then the following holds for every $n \geq 2$ :

$$ \begin{align*}\left(\sum_{k=1}^n|a_k|^2\right)^{\frac{1}{2}} \lesssim \int_0^1 \dfrac{\overline{\rho_n}(t)}{t\sqrt{\log e/t}} dt \lesssim \sqrt{\log n} \left(\sum_{k=1}^n|a_k|^2\right)^{\frac{1}{2}}. \end{align*} $$

Proof Firstly, we observe that $\displaystyle \sup _{1 \leq k \leq n}|a_k| \lesssim \int _0^1 \dfrac {\overline {\rho _n}(t)}{t\sqrt {\log e/t}} dt.$ By a result of Marcus and Pisier for $\overline {\rho _n}$ [21, Theorem 1.2, p. 126],

$$ \begin{align*} \int_0^1 \dfrac{\overline{\rho_n}(t)}{t\sqrt{\log e/t}}dt &\geq \overline{\rho_n}\left(\frac{1}{2}\right) \int_{\frac{1}{2}}^1 \dfrac{dt}{t\sqrt{\log e/t}} \gtrsim \overline{\rho_n}\left(\frac{1}{2}\right) \\ & \gtrsim \left( \sum_{k = 3}^n (a_k^*)^2 \right)^{\frac{1}{2}} = \left( \sum_{k = 1}^n |a_k|^2 -(a_1^*)^2 - (a_2^*)^2\right)^{\frac{1}{2}}, \end{align*} $$

here and throughout the paper, $(a_k^*)_{k \geq 1}$ denotes the nonincreasing rearrangement of the sequence $(|a_k|)_{k \geq 1}$, whenever it exists. Consequently,

$$ \begin{align*} \left(\sum_{k=1}^n|a_k|^2\right)^{\frac{1}{2}} &\lesssim \int_0^1 \dfrac{\overline{\rho_n}(t)}{t\sqrt{\log e/t}}dt + \sup_{1 \leq k \leq n} |a_k| \lesssim \int_0^1 \dfrac{\overline{\rho_n}(t)}{t\sqrt{\log e/t}}dt. \end{align*} $$

Then, by a result of Jain and Marcus [10, Corollary 2.5],

$$ \begin{align*} \int_0^1 \dfrac{\overline{\rho_n}(t)}{t\sqrt{\log e/t}} dt & \leq \left(\int_0^{1/n} + \int_{1/n}^1\right) \dfrac{\rho_n(t)}{t\sqrt{\log e/t}} dt \\ & \leq 2 \pi \int_0^{1/n} \left(\sum_{k=1}^n k^2 |a_k|^2\right)^{\frac{1}{2}} \dfrac{1}{\sqrt{\log e/t}} dt + 4 \sqrt{\log n} \left(\sum_{k=1}^n|a_k|^2\right)^{\frac{1}{2}}, \end{align*} $$

which is dominated by $ \sqrt {\log n}\left (\sum _{k=1}^n|a_k|^2\right )^{\frac {1}{2}}.$ The proof of Lemma 2.3 is now complete.

Proposition 2.4 Let $\psi $ be a doubling weight and $f(z) = \sum _{n=0}^{\infty }a_nz^n \in H({\mathbb D})$ . If

$$ \begin{align*}\int_0^1 \dfrac{\overline{\rho_n}(t)}{t\sqrt{\log e/t}} dt = O\left(\frac{\psi(n)}{\sqrt{\log n}} \right), \end{align*} $$

then $f \in \left (H_{\psi }\right )_*$ .

Proof By [10, Proposition 2], we get

$$ \begin{align*}\left\|\sup_{0 \leq \theta < 2\pi} \left| \sum_{k=0}^n a_k e^{i k \theta} X_k \right| \right\|_{\psi_2} \leq C \left( \left(\sum_{k=0}^n|a_k|^2\right)^{\frac{1}{2}} + \int_0^1 \dfrac{\overline{\rho_n}(t)}{t\sqrt{\log e/t}} dt \right), \end{align*} $$

where, for a random variable x,

$$ \begin{align*}\|x\|_{\psi_2}: = \inf \left\{c> 0: {\mathbb E}\left(\exp\left(\frac{|x|^2}{c^2}\right)\right) \leq 2 \right\} \end{align*} $$

is the Orlicz norm of x (see [12, 28]). Indeed, [10, Proposition 2] treats only the Bernoulli case, but the case of the Steinhaus sequence is essentially the same and the case of the Gaussian sequence is even simpler. Using this, together with the following form of Markov’s inequality

$$ \begin{align*}{\mathbb P}\left(|x|> \|x\|_{\psi_2} \sqrt{t} \right) \leq 2 e^{-t} \quad \text{ for every } t > 0, \end{align*} $$

we get, except on an event with probability at most $\frac {2}{n^2}$ ,

$$ \begin{align*} \sup_{0 \leq \theta < 2\pi} \left| \sum_{k=0}^n a_k e^{i k \theta} X_k \right| \leq C \sqrt{\log n^2} \left( \left(\sum_{k=0}^n|a_k|^2\right)^{\frac{1}{2}} + \int_0^1 \dfrac{\overline{\rho_n}(t)}{t\sqrt{\log e/t}} dt \right), \end{align*} $$

which, by the assumption and Lemma 2.3, implies that

$$ \begin{align*} \sup_{0 \leq \theta < 2\pi} \left| \sum_{k=0}^n a_k e^{i k \theta} X_k \right| \leq C_1 \psi(n), \end{align*} $$

for every $n \in {\mathbb N}$ and for some $C_1> 0$ . Thus, for every $m \in {\mathbb N}$ , we obtain

$$ \begin{align*} & \sup_{z \in {\mathbb D}} \frac{|{\mathcal R} f(z)|}{\psi(1/(1 - |z|))} \\ = & \sup_{0 < r < 1 } \frac{1}{\psi(1/(1 - r))} \sup_{0 \leq \theta < 2\pi} \left| a_0 X_0 + \sum_{n=1}^{\infty} \left(\sum_{k=0}^n a_k e^{i k \theta} X_k - \sum_{k=0}^{n-1} a_k e^{i k \theta} X_k \right) r^{n} \right|\\ = \; & \sup_{0 < r < 1} \frac{1-r}{\psi(1/(1 - r))} \sup_{0 \leq \theta < 2\pi} \left| \sum_{n=0}^{\infty} \left(\sum_{k=0}^n a_k e^{i k \theta} X_k \right) r^{n} \right| \\ \leq \; & \frac{1}{\psi(1)} \sum_{n=0}^{m-1} \left(\sum_{k=0}^n |a_k||X_k| \right) + \sup_{0 < r < 1} \frac{1-r}{\psi(1/(1 - r))} \sum_{n=m}^{\infty} \sup_{0 \leq \theta < 2\pi} \left|\sum_{k=0}^n a_k e^{i k \theta} X_k \right| r^{n} \\ \leq \; & \frac{1}{\psi(1)} \sum_{n=0}^{m-1} \left(\sum_{k=0}^n |a_k||X_k| \right) + C_1 \sup_{0 < r < 1} \frac{1-r}{\psi(1/(1 - r))} \sum_{n=m}^{\infty} \psi(n)r^{n} < \infty \end{align*} $$

on an event with probability at least

$$ \begin{align*}1 - \sum_{n = m}^{\infty}\frac{2}{n^2},\end{align*} $$

where the last inequality above follows from arguments as in [32, Lemma 1] and [1, Lemma 2.1]. The proof is now complete.

Proposition 2.5 Let $\psi $ be a doubling weight and $f(z) = \sum _{n=0}^{\infty }a_nz^n \in H({\mathbb D})$ . If $f \in \left (H_{\psi }\right )_*$ , then

$$ \begin{align*}\left(\sum_{k=0}^n|a_k|^2\right)^{\frac{1}{2}} = O(\psi(n)) \ \text{ and } \ \int_0^1 \dfrac{\overline{\rho_n}(t)}{t\sqrt{\log e/t}} dt = O(\psi(n)). \end{align*} $$

Proof By Lemma 2.2, the Taylor series $(s_n{\mathcal R} f)_{n \geq 0}$ of ${\mathcal R} f$ is a.s. bounded in $H_{\psi }$. As in the proof of Proposition 2.1, by [4, Lemma 9] and Jensen’s inequality, for a small enough constant $\lambda> 0$, we get

$$ \begin{align*}{\mathbb E}\left(\exp\left(\lambda \sup_{n \geq 0} \|s_n {\mathcal R} f\|_{H_{\psi}}\right)\right) < \infty, \quad \text{hence,}\ \quad {\mathbb E}\left( \sup_{n \geq 0} \|s_n{\mathcal R} f\|_{H_{\psi}}\right) < \infty. \end{align*} $$

For each $n \in {\mathbb N}$, using [21, Theorem 1.4, p. 11], we get

$$ \begin{align*}\left(\sum_{k=0}^n|a_k|^2\right)^{\frac{1}{2}} + \int_0^1 \dfrac{\overline{\rho_n}(t)}{t\sqrt{\log e/t}} dt \lesssim {\mathbb E} \left( \sup_{0 \leq \theta < 2\pi} \left| \sum_{k=0}^n a_k e^{i k \theta} X_k \right| \right). \end{align*} $$

Moreover, using the contraction principle [15, Theorem IV.3, p. 136] step by step, together with the fact that $ e \left (1 - \frac {1}{n+1}\right )^k \geq 1 $ for every $k \leq n, $ we get

$$ \begin{align*} \frac{1}{\psi(n+1)}&{\mathbb E} \left( \sup_{0 \leq \theta < 2\pi} \left| \sum_{k=0}^n a_k e^{i k \theta} X_k \right| \right) \\ \leq & \; 2e {\mathbb E} \left( \frac{1}{\psi(n+1)} \sup_{0 \leq \theta < 2\pi} \left| \sum_{k=0}^n a_k e^{i k \theta} \left(1 - \frac{1}{n+1}\right)^k X_k \right| \right) \\ \leq & \; 2e {\mathbb E} \left( \sup_{0 < r < 1} \frac{1}{\psi(1/(1-r))} \sup_{0 \leq \theta < 2\pi} \left| \sum_{k=0}^n a_k e^{i k \theta} r^k X_k \right| \right) \\ \leq & \; 2e {\mathbb E} \left( \sup_n \|s_n {\mathcal R} f\|_{H_{\psi}}\right) < \; \infty. \end{align*} $$

From this and the doubling assumption, it follows that

$$ \begin{align*}\left(\sum_{k=0}^n|a_k|^2\right)^{\frac{1}{2}} = O(\psi(n)) \quad \text{ and } \quad \int_0^1 \dfrac{\overline{\rho_n}(t)}{t\sqrt{\log e/t}} dt = O(\psi(n)). \end{align*} $$

From Lemma 2.3 and Propositions 2.4 and 2.5, we conclude the following for the space $(H_{\alpha })_*$ .

Corollary 2.6 Let $\alpha> 0$ and $f(z) = \sum _{n=0}^{\infty }a_nz^n \in H({\mathbb D})$ .

  (a) If $\left (\sum _{k=0}^n|a_k|^2\right )^{\frac {1}{2}} = O\left (\frac {n^{\alpha }}{\log n} \right )$, then $f \in (H_{\alpha })_*$.

  (b) If $f \in (H_{\alpha })_*$, then

    $$ \begin{align*}\left(\sum_{k=0}^n|a_k|^2\right)^{\frac{1}{2}} = O\left(n^{\alpha} \right) \quad \text{ and } \quad \int_0^{1}\frac{\overline{\rho_n}(t)}{t\sqrt{\log e/t}}dt = O\left(n^{\alpha} \right). \end{align*} $$

Now we are ready to prove Lemma A and Theorem B. First, for any $\alpha> 0$ and $f \in H({\mathbb D})$, by [4, Lemma 4], one has

(2.1) $$ \begin{align} {\mathbb P} ({\mathcal R} f \in H_{\alpha}) \in \{0, 1\}. \end{align} $$

The proofs of Lemma A and Theorem B follow from scrutinizing the following decompositions, with the aid of Corollary 2.6:

(2.2) $$ \begin{align} {\mathbb G}_{< \infty} = \bigsqcup_{\alpha \geq 0} {\mathbb G}_{\alpha} = \bigcup_{\alpha> 0} H_{\alpha}, \qquad \ {\mathbb G}_{0} = \bigcap_{\alpha > 0} H_{\alpha}, \end{align} $$

and

(2.3) $$ \begin{align} {\mathbb G}_{\alpha} = \bigcap_{\beta> \alpha} H_{\beta} \setminus \bigcup_{\gamma < \alpha} H_{\gamma}, \qquad \ {\mathbb G}_{\leq \alpha} = \bigcap_{\beta> \alpha} H_{\beta} = \bigsqcup_{\gamma \leq \alpha} {\mathbb G}_{\gamma}. \end{align} $$

In detail, for Lemma A, we first observe that ${\mathbb P} ({\mathcal R} f \in {\mathbb G}_{<\infty }) \in \{0, 1\}$ follows from (2.1) and (2.2). Now, letting $f(z) = \sum _{n=0}^{\infty } a_n z^n \in H({\mathbb D})$, it is an exercise that f has a polynomial growth rate if and only if the sequence of its Taylor coefficients $(a_n)_{n \geq 0}$ does. Indeed, the direct implication follows from Cauchy’s inequality for the Taylor coefficients, while the other direction follows from [1, Theorem 1.10(a)] and the fact that the sequence $\sum _{k=0}^n|a_k|$ has a polynomial growth rate whenever $(a_n)_{n \geq 0}$ does.

If $f \in ({\mathbb G}_{<\infty })_*$ , then $f \in (H_{\alpha })_*$ for some $\alpha $ . This, together with Corollary 2.6, implies that $|a_n| = O(n^{\alpha })$ . Conversely, if $|a_n| = O(n^{\alpha })$ for some $\alpha> 0$ , then, again by Corollary 2.6(a), $f \in (H_{\beta })_*$ for $\beta> \alpha + \frac {1}{2}$ , hence $f \in ({\mathbb G}_{<\infty })_*$ . Lastly, ${\mathbb P} ({\mathcal R} f \in {\mathbb G}_{<\infty }) = 1$ implies that, for some $\alpha _1\ge 0$ , ${\mathbb P} ({\mathcal R} f \in H_{\alpha }) = 1$ for all $\alpha \geq \alpha _1$ . We denote by $\alpha _0$ the infimum of all such constants $\alpha _1$ . Then one checks that, by (2.3), $f \in ({\mathbb G}_{\alpha _0})_*$ . This yields Lemma A.

As for Theorem B, we will verify the following claim: Let $\alpha \in [0, \infty )$ and $f(z) = \sum _{n=0}^{\infty }a_nz^n \in H({\mathbb D})$ . Then $f \in ({\mathbb G}_{\alpha })_*$ (or, $f \in ({\mathbb G}_{\leq \alpha })_*$ ) if and only if the sequence $\left (\sum _{k=0}^n|a_k|^2\right )^{\frac {1}{2}}$ has the growth rate $\alpha $ (or, respectively, a growth rate at most $\alpha $ ). It suffices to consider $\alpha> 0$ . The case $\alpha = 0$ is similar and indeed simpler. Let $A_n: = \left (\sum _{k=0}^n|a_k|^2\right )^{\frac {1}{2}}$ . By (2.1) and (2.3), together with Corollary 2.6, $f \in ({\mathbb G}_{\alpha })_*$ if and only if $ A_n = O(n^{\beta }) $ and $ A_n \neq O(n^{\gamma })$ for all $ \beta> \alpha $ and $ \gamma < \alpha $ , i.e., $(A_n)_{n \geq 0}$ has the growth rate $\alpha $ . Similarly, using (2.3) and Corollary 2.6, we can check that $f \in ({\mathbb G}_{\leq \alpha })_*$ if and only if $A_n = O(n^{\beta })$ for all $ \beta> \alpha $ , i.e., $(A_n)_{n \geq 0}$ has a growth rate at most $\alpha $ .

We end this section with two classes of examples of general interest: lacunary series and those with monotone coefficients. A side result is that the random symbol space $(H_\psi )_*$ induced by a general normal weight $\psi $ does depend on the choice of the randomization sequence $(X_n)_{n \geq 0}$. This stands in contrast with $(H^{\infty })_*$, which is the same for any standard random sequence [11, Theorem 7, p. 231]. Later, we shall use these examples to illustrate the sharpness of Theorem C, Theorem D, and Corollary 4.3.

We say that a sequence $(b_k)_{k \geq 1}$ has a polynomial growth rate with respect to a sequence $(n_k)_{k \geq 1}$ if there exists a number $\alpha> 0 $ such that $ |b_k| = O(n_k^{\alpha }) $ as $ k \to \infty .$ In this case, the infimum of such constants $\alpha $ is called the growth rate of $(b_k)_{k \geq 1}$ with respect to $(n_k)_{k \geq 1}$ .

Proposition 2.7 Let $\alpha \in [0, \infty )$ and $f(z) = \sum _{k=1}^{\infty }b_k z^{n_k} \in H({\mathbb D})$ with a Hadamard lacunary sequence $(n_k)_{k \geq 1}$ , i.e., $\inf _{k\geq 1} n_{k+1}/n_k> 1$ . Then $f \in ({\mathbb G}_{\alpha })_*$ (or, $f \in ({\mathbb G}_{\leq \alpha })_*$ ) if and only if $(b_k)_{k \geq 1}$ has the growth rate $\alpha $ (or, respectively, a growth rate at most $\alpha $ ) with respect to $(n_k)_{k \geq 1}$ .

The proof is a straightforward application of Theorem B.
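For instance, for $f(z) = \sum_{k=1}^{\infty} 2^{\alpha k} z^{2^k}$ with $\alpha > 0$, one has $b_k = n_k^{\alpha}$ with $n_k = 2^k$, so $(b_k)_{k \geq 1}$ has the growth rate $\alpha$ with respect to $(n_k)_{k \geq 1}$, and Proposition 2.7 gives $f \in ({\mathbb G}_{\alpha})_*$.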

Recall that, according to [31, p. 291] and [35, p. 152], an increasing continuous function $\psi : [1, \infty ) \to [1, \infty )$ is called a normal weight if there are constants $0 < \alpha < \beta $ and $x_0> 0$ such that

$$ \begin{align*}&\frac{\psi(x)}{x^{\alpha}} \ \text{ is increasing on } (x_0, \infty) \ \text{ and } \ x^{\alpha} = o(\psi(x)),\ x \to \infty;\\&\frac{\psi(x)}{x^{\beta}} \ \text{ is decreasing on } (x_0, \infty) \ \text{ and } \ \psi(x) = o(x^{\beta}),\ x \to \infty. \end{align*} $$

By [32, Remark 1], normal weights are doubling weights.

Lemma 2.8 Let $(X_k)_{k\geq 1}$ be a standard complex Gaussian sequence and $(c_k)_{k\geq 1}$ a sequence of positive numbers. Then $|X_k| = O(c_k)$ a.s. if and only if there is a constant $a> 0$ such that $\sum _{k=1}^{\infty } \exp \left (-a c_k^2\right ) < \infty .$

Proof For each $n, k \in {\mathbb N}$ , let us consider the following events:

$$ \begin{align*}E_{n, k}: = \left\{|X_k| < n c_k \right\}, \quad E_n = \liminf_{k\to \infty}E_{n,k}, \end{align*} $$

and

$$ \begin{align*}E = \left\{\limsup_{k\to \infty}c_k^{-1}|X_k| < \infty\right\}. \end{align*} $$

It is clear that

$$ \begin{align*}E_n \subset E_{n+1} \quad \text{ and } \quad E = \bigcup_{n \geq 1} E_n. \end{align*} $$

In addition, since $\{E_{n,k}, k \geq 1\}$ are independent, by an application of the (second) Borel–Cantelli lemma, we get

$$ \begin{align*}{\mathbb P}(E_n^c) = {\mathbb P}\left(\limsup_{k \to \infty}E_{n,k}^c\right) = 0 \quad \text{ if and only if } \quad \sum_{k\geq 1}{\mathbb P}(E_{n,k}^c) < \infty, \end{align*} $$

where

$$ \begin{align*}{\mathbb P}(E_{n,k}^c) = \frac{1}{\pi}\int_{|z| \geq nc_k}e^{-|z|^2}dA(z) = e^{-n^2 c_k^2}. \end{align*} $$

From this, the conclusion follows by taking n sufficiently large.
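To illustrate Lemma 2.8, take $c_k = \sqrt{\log k}$ for $k \geq 2$: then $\sum_{k \geq 2} \exp(-a c_k^2) = \sum_{k \geq 2} k^{-a} < \infty$ for any $a > 1$, so $|X_k| = O(\sqrt{\log k})$ a.s.; on the other hand, for any bounded sequence $(c_k)_{k \geq 1}$, the series in Lemma 2.8 diverges for every $a > 0$, so a standard complex Gaussian sequence is a.s. unbounded.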

Proposition 2.9 Let $\psi $ be a normal weight, and $f(z) = \sum _{k=1}^{\infty }b_k z^{n_k} \in H({\mathbb D})$ with a Hadamard lacunary sequence $(n_k)_{k \geq 1}$ .

  (a) If $(X_k)_{k \geq 1}$ is a standard Bernoulli or Steinhaus sequence, then $f \in (H_{\psi })_*$ if and only if $|b_k| = O(\psi (n_k))$.

  (b) If $(X_k)_{k \geq 1}$ is a standard complex Gaussian sequence, then $f \in (H_{\psi })_*$ if and only if there is a constant $a> 0$ such that

    (2.4) $$ \begin{align} \sum_{k=1}^{\infty} \exp \left( - a|b_k|^{-2}\psi^2(n_k) \right) < \infty. \end{align} $$

Proof By [35, Theorem 2.3], ${\mathcal R} f(z) = \sum _{k=1}^{\infty }b_k X_k z^{n_k} \in H_{\psi }$ a.s. if and only if $|b_kX_k| = O(\psi (n_k))$ a.s., which immediately implies the assertion in Part (a) and, together with Lemma 2.8, yields Part (b).
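In particular, the Gaussian case is strictly more restrictive: taking $|b_k| = \psi(n_k)$, Part (a) shows that $f \in (H_{\psi})_*$ for the Bernoulli and Steinhaus randomizations, whereas every term of the series in (2.4) equals $e^{-a}$, so (2.4) fails and $f \notin (H_{\psi})_*$ for the Gaussian randomization. This confirms the dependence of $(H_{\psi})_*$ on the choice of $(X_n)_{n \geq 0}$ noted earlier in this section.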

Proposition 2.10 Let $\alpha \in [0,\infty )$ and $f(z) = \sum _{n=0}^{\infty }a_n z^{n} \in H({\mathbb D})$ with a monotone sequence $(|a_n|)_{n \ge 0}$ . Then $f \in ({\mathbb G}_{\alpha })_*$ (or, $f \in ({\mathbb G}_{\leq \alpha })_*$ ) if and only if $(a_n\sqrt {n})_{n \geq 0}$ has the growth rate $\alpha $ (or, respectively, a growth rate at most $\alpha $ ).

The proof follows from Theorem B, together with some elementary manipulation of coefficients, and is hence skipped.

3 Littlewood-type theorems

In this section, we present further applications of Corollary 2.6. Namely, we prove two Littlewood-type theorems that address the improvement of regularity through randomization, with the first one (Theorem C) for ${\mathbb G}_{\alpha }$ and the second one (Theorem 3.1) for $H_{\alpha }$ .

Proof of Theorem C

(a) By Lemma A, we may assume that f has a growth rate $\alpha $ and ${\mathcal R} f$ a growth rate $\alpha _0$ almost surely. By (2.3) and [1, Theorem 1.8(a)], $ \left (\sum _{k=0}^n |a_k|^2\right )^{\frac {1}{2}} = O(n^{\beta }) $ for every $\beta> \alpha $, which, together with Corollary 2.6(a), implies that ${\mathcal R} f \in H_{\beta }$ a.s. for every $\beta> \alpha $; hence, $\alpha _0 \leq \alpha $. To show $\alpha _0 \geq \max \{\alpha - \frac {1}{2}, 0\}$, it suffices to assume $\alpha> \frac {1}{2}$. For every $\gamma> \alpha _0$, ${\mathcal R} f \in H_{\gamma }$ a.s., and hence, by Corollary 2.6(b), $\left (\sum _{k=0}^n |a_k|^2\right )^{\frac {1}{2}} = O(n^{\gamma })$, which by the Cauchy–Schwarz inequality implies that $ \sum _{k=0}^n |a_k| = O(n^{\gamma + \frac {1}{2}})$. This, together with [1, Theorem 1.10(a)], implies that the function $g(z): = \sum _{n=0}^{\infty } |a_n| z^n$ belongs to $H_{\gamma + 1/2}$. Thus, $f \in H_{\gamma + 1/2}$, and hence the growth rate $\alpha $ of f is no greater than $\alpha _0 + \frac {1}{2}$.

(b) For each $\alpha ' \in [\max \{\alpha - \frac {1}{2},0\}, \alpha ]$ , we take a sequence $(n_k)_{k \ge 1}$ as follows:

$$ \begin{align*}n_0: = 0, \ \ n_1 \in {\mathbb N}, \ \ n_{k}: = [x_{k}] \ \ \text{ with } \ \ x_{k} - x_{k}^{\frac{1}{2} - \alpha + \alpha'} = n_{k-1} \ \ \ \text{ for } \ k \geq 2. \end{align*} $$

We define $ f(z) := \sum _{k=1}^{\infty } b_k z^{n_k}$ with $ b_k := n_k^{\alpha } - n_{k-1}^{\alpha }. $ By [1, Theorem 1.10(a)], f has a growth rate $\alpha $. Moreover, one checks that $ \left (\sum _{j=1}^k b_j^2\right )^{\frac {1}{2}} \simeq n_k^{\alpha '}. $ Now, an application of Theorem B completes the proof.

In view of Theorem C, for simplicity, we denote $\alpha _*: = \max \{\alpha - \frac {1}{2},0\}$ . Then,

$$ \begin{align*}{\mathbb G}_{\alpha} \subset \bigsqcup_{\alpha_* \leq \beta \leq \alpha} \left({\mathbb G}_{\beta}\right)_* \ \text{ and } \ {\mathbb G}_{\alpha} \cap ({\mathbb G}_{\beta})_* \neq \emptyset \ \text{ for every } \beta \in [\alpha_*, \alpha]. \end{align*} $$

This implies that $ {\mathbb G}_{\alpha } \cap ({\mathbb G}_{\beta })_* \neq \emptyset $ if and only if $ \beta \in [\alpha _*, \alpha ].$

Naturally, one may wonder what happens in terms of ${\mathbb G}_{\leq \alpha }$ , and we claim

$$\begin{align*}{\mathbb G}_{\leq \alpha_1} \subset ({\mathbb G}_{\leq \alpha_2})_*\ \text{if and only if}\ \alpha_1 \leq \alpha_2. \end{align*}$$

Indeed, if $\alpha _1 \leq \alpha _2$, then the conclusion follows readily from (2.3), Corollary 2.6(a), and [1, Theorem 1.8(a)]. For $\alpha _1> \alpha _2$, we take a Hadamard lacunary function $f_0(z):= \sum _{k=1}^{\infty } b_k z^{n_k}$ with a growth rate $\alpha _1$. By [35, Theorem 2.3], $(b_k)_{k \geq 1}$ has the growth rate $\alpha _1$ with respect to $(n_k)_{k \geq 1}$. By Proposition 2.7, ${\mathcal R} f_0$ has the growth rate $\alpha _1$ a.s.; in particular, $f_0 \notin ({\mathbb G}_{\leq \alpha _2})_*$.

On the other hand, the consideration of $H_\alpha $ yields an interesting comparison; indeed, it exhibits a loss of regularity.

Theorem 3.1 Let $\alpha _1, \alpha _2 \in (0,\infty )$ . Then $H_{\alpha _1} \subset (H_{\alpha _2})_*$ if and only if $\alpha _1 < \alpha _2$ .

Proof If $\alpha _1 < \alpha _2$, then the conclusion follows from Corollary 2.6(a) and [1, Theorem 1.8(a)]. For $\alpha _1> \alpha _2$, the Hadamard lacunary function $f_0(z) := \sum _{k=1}^{\infty } n_k^{\alpha _1} z^{n_k}$ belongs to $H_{\alpha _1}$ but not to $(H_{\alpha _2})_*$, by [35, Theorem 2.3] and Proposition 2.9, respectively. The main case is when $\alpha _1 = \alpha _2 = \alpha $, for which we construct a function $f_0 \in H_{\alpha }$, but $f_0 \notin (H_{\alpha })_*$. By [26, Theorem 1], there exists a sequence $(\alpha _n)_{n \geq 1}$ in $\{-1, 1\}$ such that the polynomials $ P_n(z) := \sum _{j=1}^n \alpha _j z^j $ satisfy

$$ \begin{align*}\sup_{z \in {\mathbb D}} |P_n(z)| \leq 5\sqrt{n}.\end{align*} $$

Let $ f_0(z) := \sum _{n=1}^{\infty }\alpha _n n^{\alpha -\frac {1}{2}} z^n $ and $P_0(z): = 0$ . For any $n \geq 1$ , one has

$$ \begin{align*} \sup_{z \in {\mathbb D}} \left|s_nf_0(z)\right| & = \sup_{z \in {\mathbb D}} \left|\sum_{k=1}^n k^{\alpha-\frac{1}{2}} \left(P_k(z) - P_{k-1}(z)\right)\right| \\ & \leq \sup_{z \in {\mathbb D}} \left( \sum_{k=1}^{n-1} \left|P_k(z) \left(k^{\alpha-\frac{1}{2}} - (k+1)^{\alpha-\frac{1}{2}}\right) \right| + n^{\alpha - \frac{1}{2}} |P_n(z)| \right) \lesssim n^{\alpha}, \end{align*} $$

which, by [1, Theorem 1.4], implies that $f_0 \in H_{\alpha }$. Now, for this function, using [21, Theorem 1.2, p. 126], we have

$$ \begin{align*} \int_0^1 \dfrac{\overline{\rho_{2n}}(t)}{t\sqrt{\log e/t}}dt &\geq \overline{\rho_{2n}}\left(\frac{1}{n}\right) \int_{\frac{1}{n}}^1 \dfrac{dt}{t\sqrt{\log e/t}} \gtrsim \overline{\rho_{2n}}\left(\frac{1}{n}\right) \sqrt{\log n} \\ & \gtrsim \left( \sum_{k = n+1}^{2n} (a_k^*)^2 \right)^{\frac{1}{2}} \sqrt{\log n} \gtrsim n^{\alpha} \sqrt{\log n}, \end{align*} $$

where $(a_k^*)_{k=1}^{2n}$ is the nonincreasing rearrangement of the sequence $\left (k^{\alpha - \frac {1}{2}}\right )_{k=1}^{2n}$ . This, together with Corollary 2.6(b), implies that $f_0 \notin (H_{\alpha })_*$ .

We end this section by constructing examples of analytic functions f such that f has a growth rate $\alpha $ and the rate of ${\mathcal R} f$ is $\alpha $ or $\alpha - \frac {1}{2}$ almost surely.

Proposition 3.2 Let $(n_k)_{k \geq 1}$ be a sequence such that $\log k = o(\log n_k)$. Then, for every function $f(z) = \sum _{k=1}^{\infty }b_k z^{n_k}$ with a growth rate $\alpha $, its randomization ${\mathcal R} f$ also has the growth rate $\alpha $ almost surely.

Proof By Lemma A and Theorem C, the almost sure growth rate $\alpha _0$ of ${\mathcal R} f$ is at most $\alpha $. By contradiction, we assume that $\alpha _0 < \alpha $. Then, by Corollary 2.6(b), there exists $\gamma \in (\alpha _0, \alpha )$ such that $ \left (\sum _{j=1}^k |b_j|^2\right )^{\frac {1}{2}} = O(n_k^{\gamma }). $ Then by our hypothesis and the Cauchy–Schwarz inequality, we get $ \sum _{j=1}^k |b_j| = O(n_k^{\gamma '}) $ for every $\gamma ' \in (\gamma , \alpha )$. Thus, by [1, Theorem 1.10(a)], the function $g(z): = \sum _{k=1}^{\infty }|b_k| z^{n_k}$ belongs to $H_{\gamma '}$, which implies that $f \in H_{\gamma '}$. This contradiction completes the proof.

Proposition 3.3 Let $\alpha \geq \frac {1}{2}$ . For every function $f(z) = \sum _{n=0}^{\infty }a_n z^{n}$ with a growth rate $\alpha $ , where $(a_n)_{n \geq 0}$ is a monotone sequence of real numbers, its randomization ${\mathcal R} f$ has the growth rate $\alpha -\frac {1}{2}$ almost surely.

Proof Without loss of generality, we suppose that $a_n \geq 0$ for all n. As above, by contradiction, we assume that ${\mathcal R} f$ has a growth rate $\alpha _0$ a.s. with $\alpha _0> \alpha - \frac {1}{2}$. Then, by Proposition 2.10, $a_n \neq O(n^{\gamma - \frac {1}{2}})$ for every $\gamma \in (\alpha - \frac {1}{2}, \alpha _0)$. On the other hand, $f \in H_{\gamma + \frac {1}{2}}$ for any $\gamma \in (\alpha - \frac {1}{2}, \alpha _0)$, which, by [1, Theorem 1.10(a)], implies that $ \sum _{k=0}^n a_k = O\left (n^{\gamma + \frac {1}{2}}\right ). $ Now, using the monotonicity of $(a_n)_{n \geq 0}$, we get $a_n = O\left (n^{\gamma - \frac {1}{2}}\right )$, which is a contradiction.

From the above two propositions, we immediately get the following concrete examples.

Example 3.1 Let $(n_k)_{k \geq 1}$ be a Hadamard lacunary sequence. For every function $f(z) = \sum _{k=1}^{\infty } b_k z^{n_k}$ with a growth rate $\alpha $ , its randomization ${\mathcal R} f$ also has the growth rate $\alpha $ almost surely.

Example 3.2 For every $\alpha \geq \frac {1}{2}$ , the function $f(z) = \frac {1}{(1 - z)^{\alpha }}$ has the growth rate $\alpha $ , but its randomization ${\mathcal R} f$ has the growth rate $\alpha - \frac {1}{2}$ almost surely.
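Indeed, writing $\frac{1}{(1-z)^{\alpha}} = \sum_{n=0}^{\infty} \binom{n+\alpha-1}{n} z^n$, the coefficients $a_n = \binom{n+\alpha-1}{n}$ form a monotone positive sequence with $a_n \simeq n^{\alpha-1}$ by Stirling's formula; hence $(a_n\sqrt{n})_{n \geq 0}$ has growth rate $\alpha - \frac{1}{2}$, and the claim follows from Proposition 2.10 (or Proposition 3.3).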

Remark 3.3 The function $f_0$ constructed in the proof of Theorem 3.1 satisfies $f_0 \in H_{\alpha }$ and $f_0 \notin (H_{\alpha })_*$ . Thus, the growth rate of $f_0$ is no greater than $\alpha $ and the growth rate of ${\mathcal R} f_0$ is a.s. no less than $\alpha $ . Therefore, by Theorem C, the growth rate of $f_0$ is $\alpha $ and ${\mathcal R} f_0$ has the growth rate $\alpha $ almost surely. From this, we can draw the following conclusions:

  • The condition $\log k = o(\log n_k)$ is sufficient for analytic functions $f(z) = \sum _{k=1}^{\infty }b_k z^{n_k}$ to preserve their growth rate $\alpha $ under randomization, but it is not a necessary condition.

  • The monotone property of the sequence $(a_n)_{n \geq 0} \subset {\mathbb R}$ is essential in Proposition 3.3.

4 Zero sets

This section concerns the second main topic in the present paper, namely, the zero sets of ${\mathcal R} f$ when $f \in ({\mathbb G}_{\alpha })_*$ . The rigidity of $N_{{\mathcal R} f}(r)$ (Theorem D) is proved in Section 4.1. Then, in Section 4.2, we explore to what extent the rigidity fails for $n_{{\mathcal R} f}(r)$ , and, as an application of the estimates we obtain, we introduce four Blaschke-type exponents and show that they are always the same and equal to one when $\alpha>0$ . The bulk of Section 4.3 is devoted to the analysis of an example (Example 4.3) in order to illustrate the sharpness of various estimates in this section.

4.1 Proof of Theorem D

Together with Lemma A and Theorem B, the proof follows from the following two lemmas, which are of independent interest.

Lemma 4.1 Let $\alpha> 0$ and $f(z) = \sum _{n=0}^{\infty }a_nz^n \in H({\mathbb D})$ such that

$$ \begin{align*}\left(\sum_{k=0}^n |a_k|^2\right)^{\frac{1}{2}} = O(n^{\alpha}). \end{align*} $$

Then the following estimates hold:

  (a) $\displaystyle \limsup _{r \to 1} \frac {N_{{\mathcal R} f}(r)}{\log \frac {1}{1-r}} \leq \alpha $ a.s. and $\displaystyle \limsup _{r \to 1} \frac {{\mathbb E}(N_{{\mathcal R} f}(r))}{\log \frac {1}{1-r}} \leq \alpha $.

  (b) $\displaystyle \limsup _{r \to 1} \frac {n_{{\mathcal R} f}(r)}{\frac {1}{1-r}\log \frac {1}{1-r}} \leq \alpha $ a.s. and $\displaystyle \limsup _{r \to 1} \frac {{\mathbb E}(n_{{\mathcal R} f}(r))}{\frac {1}{1-r}\log \frac {1}{1-r}} \leq \alpha $.

Lemma 4.2 Let $\alpha> 0$ and $f(z) = \sum _{n=0}^{\infty }a_nz^n \in H({\mathbb D})$ such that

$$ \begin{align*}\left(\sum_{k=0}^n |a_k|^2\right)^{\frac{1}{2}} \neq O(n^{\alpha}). \end{align*} $$

Then the following estimates hold:

  (a) $\displaystyle \limsup _{r \to 1} \frac {N_{{\mathcal R} f}(r)}{\log \frac {1}{1-r}} \geq \alpha $ a.s. and $\displaystyle \limsup _{r \to 1} \frac {{\mathbb E}(N_{{\mathcal R} f}(r))}{\log \frac {1}{1-r}} \geq \alpha $.

  (b) $\displaystyle \limsup _{r \to 1} \frac {n_{{\mathcal R} f}(r)}{\frac {1}{1-r}} \geq \alpha $ a.s. and $\displaystyle \limsup _{r \to 1} \frac {{\mathbb E}(n_{{\mathcal R} f}(r))}{\frac {1}{1-r}} \geq \alpha $.

Proof of Lemma 4.1

We may assume $|a_0| = 1$ since only a little modification is needed to take care of the general case. By Lemma 2.3 and Proposition 2.4, ${\mathcal R} f \in H_{\psi }$ a.s. with

$$ \begin{align*}\psi(x): = x^{\alpha} \log x,\end{align*} $$

hence $\|{\mathcal R} f\|_{H_{\psi }} < + \infty $ a.s. Jensen’s formula yields that almost surely for every $r \in (0,1)$ ,

(4.1) $$ \begin{align} N_{{\mathcal R} f}(r) \leq \log \|{\mathcal R} f\|_{H_{\psi}} + \log\psi \left(\frac{1}{1-r} \right) - \log|X_0| < \infty. \end{align} $$

In addition, by Jensen’s inequality, for every $r \in (0,1)$ ,

(4.2) $$ \begin{align} {\mathbb E}(N_{{\mathcal R} f}(r)) \leq \log {\mathbb E}(\|{\mathcal R} f\|_{H_{\psi}}) + \log\psi \left(\frac{1}{1-r} \right) - {\mathbb E}(\log|X_0|). \end{align} $$

Now an application of Proposition 2.1 yields Part (a).

For Part (b), by (4.1), for each $\lambda \in (0,1)$ and $r \in (0,1)$ , we have

$$ \begin{align*} n_{{\mathcal R} f}(r) \log \frac{1 - \lambda (1-r)}{r} & \leq \int_r^{1 - \lambda (1-r)} \frac{n_{{\mathcal R} f}(t)}{t}dt \\ & \leq \log \|{\mathcal R} f\|_{H_{\psi}} + \log\psi \left(\frac{1}{\lambda(1-r)} \right) - \log|X_0| \ \ \text{a.s.}, \end{align*} $$

which yields the first half of Part (b) by arguments similar to the above. The case of ${\mathbb E}(n_{{\mathcal R} f}(r))$ follows in an analogous manner.

Here and in what follows, for an analytic function $f(z) = \sum _{n=0}^{\infty }a_nz^n$ , let

$$ \begin{align*}\sigma^2_{f}(r) := {\mathbb E}(|{\mathcal R} f(z)|^2) = \sum_{n\geq 0}|a_n|^2 r^{2n}, \qquad |z| = r. \end{align*} $$

Proof of Lemma 4.2

We still assume $|a_0| = 1$ . Jensen’s formula yields that

(4.3) $$ \begin{align} \qquad{\mathbb E}(N_{{\mathcal R} f}(r)) = \log \sigma_{f}(r) + {\mathbb E}\left(\frac{1}{2\pi}\int_0^{2\pi} \log |\widehat{{\mathcal R} f}_r(\theta)|d\theta\right) - {\mathbb E}(\log|X_0|), \end{align} $$

where

$$ \begin{align*}\widehat{{\mathcal R} f}_r(\theta): = \frac{{\mathcal R} f(r e^{i\theta})}{\sigma_{f}(r)} = \sum_{n\geq 0} \widehat{a}_n(r) X_n e^{i n\theta} \quad \text{ with } \quad \widehat{a}_n(r): = \frac{|a_n|r^n}{\sigma_{f}(r)} \end{align*} $$

is a random Fourier series satisfying the condition $\sum _{n\geq 0} |\widehat {a}_n(r)|^2 = 1$. By [22, Corollary 1.2] for the Bernoulli case, [27, 33, 34] for the Steinhaus case, and a basic fact for the Gaussian case (see also [23, Section 1.1]), there exists a constant $C> 0$ such that

(4.4) $$ \begin{align} \qquad \quad {\mathbb E}\left(\frac{1}{2\pi}\int_0^{2\pi} \left| \log |\widehat{{\mathcal R} f}_r(\theta)| \right| d\theta\right) \leq C \quad \text{ for all } \ r \in (0,1). \end{align} $$

On the other hand, from the assumption and [1, Theorem 1.10(a)], it follows that the function $g(z): = \sum _{n=0}^{\infty } |a_n|^2 z^{2n}$ does not belong to the space $H_{2\alpha }$, which implies that $ \limsup _{r \to 1} \frac {\log \sigma _f(r)}{\log \frac {1}{1-r}} \geq \alpha , $ hence $ \limsup _{r \to 1} \frac {{\mathbb E}(N_{{\mathcal R} f}(r))}{\log \frac {1}{1-r}} \geq \alpha , $ and, by L’Hôpital’s rule, $ \limsup _{r \to 1} \frac {{\mathbb E}(n_{{\mathcal R} f}(r))}{\frac {1}{1-r}} \geq \alpha. $ Next, we take a sequence $r_n \uparrow 1$ such that

$$ \begin{align*}\limsup_{n \to \infty} \frac{\log \sigma_f(r_n)}{\log\frac{1}{1-r_n}} \geq \alpha \quad \text{ and } \quad n^2 = o\left(\log\frac{1}{1-r_n}\right). \end{align*} $$

For each $n \in {\mathbb N}$ , by (4.4),

$$ \begin{align*} {\mathbb P}\left(\frac{1}{2\pi}\int_0^{2\pi} \left| \log |\widehat{{\mathcal R} f}_{r_n}(\theta)| \right| d\theta> n^2\right) \leq \frac{C}{n^2}. \end{align*} $$

Now, an application of the Borel–Cantelli lemma yields that, for almost every standard random sequence $(X_n)_{n \geq 0}$ , there is a number $n_0 \in {\mathbb N}$ such that, for every $n \geq n_0$ ,

$$ \begin{align*}\frac{1}{2\pi}\int_0^{2\pi} \left| \log |\widehat{{\mathcal R} f}_{r_{n}}(\theta)| \right| d\theta \leq n^2, \end{align*} $$

which, together with Jensen’s formula, implies that

$$ \begin{align*} N_{{\mathcal R} f}(r_{n}) &\geq \log \sigma_{f}(r_{n}) - \frac{1}{2\pi}\int_0^{2\pi}\left| \log |\widehat{{\mathcal R} f}_{r_{n}}(\theta)| \right| d\theta - \log |X_0| \\ & \geq \log \sigma_{f}(r_{n}) - n^2 - \log |X_0|. \end{align*} $$

Thus, the proof is complete by our choice of $r_n$ and by the L'Hôpital-type argument again.
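For the reader's convenience, we sketch the L'Hôpital-type step invoked twice above; it is a monotonicity argument rather than the calculus rule itself, and $\beta$ and $r_0$ below are auxiliary parameters. If $n_{{\mathcal R} f}(r)(1-r) \leq \beta$ for all $r \in [r_0, 1)$ , then

$$ \begin{align*}N_{{\mathcal R} f}(r) \leq N_{{\mathcal R} f}(r_0) + \int_{r_0}^r \frac{\beta}{t(1-t)}\,dt = \beta \log \frac{1}{1-r} + O(1) \quad \text{ as } \ r \to 1, \end{align*} $$

so that $\limsup _{r \to 1} \frac{N_{{\mathcal R} f}(r)}{\log \frac{1}{1-r}} \leq \beta$ . Contrapositively, $\limsup _{r \to 1} \frac{N_{{\mathcal R} f}(r)}{\log \frac{1}{1-r}} \geq \alpha$ forces $\limsup _{r \to 1} n_{{\mathcal R} f}(r)(1-r) \geq \alpha$ ; the same computation applies verbatim to ${\mathbb E}(N_{{\mathcal R} f}(r))$ and ${\mathbb E}(n_{{\mathcal R} f}(r))$ .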

4.2 Blaschke-type exponents

As expected, the rigidity phenomenon fails for $n_{{\mathcal R} f}(r)$ , and in this subsection, we first explore the extent to which it fails. The following result follows from Theorem B and Lemmas 4.1 and 4.2.

Corollary 4.3 Let $\alpha \in [0, \infty )$ and $f \in ({\mathbb G}_{\alpha })_*$ . Then the following estimates hold:

  (a) $\displaystyle \limsup _{r \to 1} \frac {n_{{\mathcal R} f}(r)}{\frac {1}{1-r}\log \frac {1}{1-r}} \leq \alpha $ a.s. and $\displaystyle \limsup _{r \to 1} \frac {{\mathbb E}(n_{{\mathcal R} f}(r))}{\frac {1}{1-r}\log \frac {1}{1-r}} \leq \alpha $ .

  (b) $\displaystyle \limsup _{r \to 1} \frac {n_{{\mathcal R} f}(r)}{\frac {1}{1-r}} \geq \alpha $ a.s. and $\displaystyle \limsup _{r \to 1} \frac {{\mathbb E}(n_{{\mathcal R} f}(r))}{\frac {1}{1-r}} \geq \alpha $ .

The above estimates turn out to be quite sharp, and this is addressed in Section 4.3. For now, as an application of these estimates, we recall the Blaschke condition, which is perhaps the best known geometric condition for zero sets $(z_n)_{n \geq 1}$ of analytic functions in the unit disk:

$$ \begin{align*}\sum_{n=1}^{\infty} (1-|z_n|)<\infty. \end{align*} $$

By Corollary 4.3, as well as [Reference Nazarov, Nishry and Sodin22, Theorem 1.3], however, this condition always fails for the zero sets of ${\mathcal R} f$ for any $f \in ({\mathbb G}_\alpha )_*$ whenever $\alpha> 0$ . A finer look at Blaschke-type conditions is hence needed. For this purpose, we introduce the following four exponents, which are clearly reminiscent of the convergence exponents of zero sets of entire functions (see, e.g., [Reference Levin14, p. 17]).

Definition 4.1 Given a sequence $(z_n)_{n \geq 1} \subset {\mathbb D}$ , its Blaschke exponent and polynomial order are defined as

$$ \begin{align*}\lambda := \lambda((z_n)_{n \geq 1}): = \inf \left\{\gamma> 0: \sum_{n=1}^{\infty} (1 - |z_n|)^{\gamma} < \infty \right\} \end{align*} $$

and, respectively,

$$ \begin{align*}\rho: = \rho((z_n)_{n \geq 1}) := \limsup_{r \to 1} \frac{\log n(r)}{\log \frac{1}{1-r}}, \end{align*} $$

where $n(r)$ is the counting function of $(z_n)_{n \geq 1}$ with multiplicity.

For simplicity, we shall write $\lambda (f)$ and $\rho (f)$ instead of $\lambda ((z_n)_{n \geq 1})$ and $\rho ((z_n)_{n \geq 1})$ , respectively, if $(z_n)_{n \geq 1}$ is the zero sequence of $f \in H({\mathbb D}).$

Definition 4.2 Given a random sequence $(z_n)_{n \geq 1} \subset {\mathbb D}$ , its expected Blaschke exponent and expected polynomial order are defined as

$$ \begin{align*}\lambda_E := \lambda_E((z_n)_{n \geq 1}):= \inf \left\{\gamma> 0: {\mathbb E}\left(\sum_{n=1}^{\infty} (1 - |z_n|)^{\gamma}\right) < \infty \right\}, \end{align*} $$

and, respectively,

$$ \begin{align*}\rho_E:= \rho_E((z_n)_{n \geq 1}): = \limsup_{r \to 1} \frac{\log {\mathbb E}(n(r))}{\log \frac{1}{1-r}}. \end{align*} $$

We shall use the notations $\lambda _E({\mathcal R} f)$ and $\rho _E({\mathcal R} f)$ , whose meaning is clear.

Corollary 4.4 For any $f \in ({\mathbb G}_{\alpha })_*$ with $\alpha> 0$ , one has

$$ \begin{align*}\lambda({\mathcal R} f) = \rho({\mathcal R} f) = 1 \ \text{ a.s. and } \ \lambda_E({\mathcal R} f) = \rho_E({\mathcal R} f) = 1. \end{align*} $$

This corollary follows from Corollary 4.3 and the elementary properties of the four exponents recorded in the following lemma: indeed, Part (a) of Corollary 4.3 yields $\rho ({\mathcal R} f) \leq 1$ a.s. and $\rho _E({\mathcal R} f) \leq 1$ , Part (b) (with $\alpha> 0$ ) yields the reverse inequalities, and Lemma 4.5 then transfers the equalities $\rho ({\mathcal R} f) = \rho _E({\mathcal R} f) = 1$ to the Blaschke exponents.

Lemma 4.5 The following statements hold:

  (a) For every sequence $(z_n)_{n \geq 1} \subset {\mathbb D}$ , its Blaschke exponent $\lambda $ and polynomial order $\rho $ are equal.

  (b) For every random sequence $(z_n)_{n \geq 1} \subset {\mathbb D}$ , its expected Blaschke exponent $\lambda _E$ and expected polynomial order $\rho _E$ are equal.

The proof of this lemma is similar to certain familiar arguments on the convergence exponents of zero sets of entire functions in [Reference Levin13, Reference Levin14]. For the reader’s convenience, we outline two key points below:

  • Using the argument in [Reference Levin14, Lemma 1, p. 17], we prove that the series $\sum _{n=1}^{\infty }(1 - |z_n|)^{\gamma }$ converges (or, the expectation ${\mathbb E}(\sum _{n=1}^{\infty }(1 - |z_n|)^{\gamma })$ is finite) if and only if the integral $\int _0^1 (1-t)^{\gamma - 1} n(t)dt$ (or, respectively, $\int _0^1 (1-t)^{\gamma - 1} {\mathbb E}(n(t))dt$ ) converges; see the identity displayed after this outline.

  • Following this and the argument in [Reference Levin14, Lemma 2, p. 18], we get $\lambda ((z_n)_{n \geq 1}) = \rho ((z_n)_{n \geq 1})$ and $\lambda _E((z_n)_{n \geq 1}) = \rho _E((z_n)_{n \geq 1})$ .
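The first point rests on the following identity, which we record for convenience; it is a routine application of Tonelli's theorem, with $n(t)$ the counting function of $(z_n)_{n \geq 1}$ as above:

$$ \begin{align*}\sum_{n=1}^{\infty} (1 - |z_n|)^{\gamma} = \sum_{n=1}^{\infty} \int_{|z_n|}^1 \gamma (1-t)^{\gamma - 1}\,dt = \gamma \int_0^1 (1-t)^{\gamma - 1} n(t)\,dt, \end{align*} $$

where both sides may be simultaneously infinite; taking expectations and applying Tonelli once more yields the version for ${\mathbb E}(n(t))$ .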

4.3 Sharpness

In this last subsection, we explore the possibility of replacing the limsup in Theorem D by a limit. This is possible when $\sigma _f(r)$ is of regular growth. Indeed, by taking the sequence $r_n: = 1 - e^{-n^3}$ , so that $n^2 = o\left(\log \frac{1}{1-r_n}\right)$ , and repeating the arguments in the proof of Lemma 4.2, one gets the following:

Corollary 4.6 Let $\alpha \in [0, \infty )$ and let $f(z) = \sum _{n=0}^{\infty }a_nz^n \in ({\mathbb G}_{\alpha })_*$ be such that

$$ \begin{align*}\lim_{r \to 1} \frac{\log \sigma_f(r)}{\log \frac{1}{1-r}} = \alpha.\end{align*} $$

Then

$$ \begin{align*}\lim_{r \to 1}\frac{N_{{\mathcal R} f}(r)}{\log \frac{1}{1-r}} = \alpha \ \text{ a.s. } \ \text{ and } \ \lim_{r \to 1}\frac{{\mathbb E}(N_{{\mathcal R} f}(r))}{\log \frac{1}{1-r}} = \alpha. \end{align*} $$

Next, we construct an example to show how badly the conclusion of Corollary 4.6 fails when $\sigma _f(r)$ is of irregular growth. Incidentally, this example establishes the sharpness of the upper estimates in Corollary 4.3.

Example 4.3 For any $\alpha> 0$ , there exists a function $f \in ({\mathbb G}_{\alpha })_*$ such that

  (a) $\displaystyle \limsup _{r \to 1} \frac {n_{{\mathcal R} f}(r)}{\frac {1}{1-r}\log \frac {1}{1-r}} = \alpha \ \text{ a.s. } \text{ and } \ \limsup _{r \to 1} \frac {{\mathbb E}(n_{{\mathcal R} f}(r))}{\frac {1}{1-r}\log \frac {1}{1-r}} = \alpha; $

  (b) $\displaystyle \liminf _{r \to 1} \frac {n_{{\mathcal R} f}(r)}{\frac {1}{1-r}\log \frac {1}{1-r}} = 0 \ \text{ a.s. } \text{ and } \ \liminf _{r \to 1} \frac {{\mathbb E}(n_{{\mathcal R} f}(r))}{\frac {1}{1-r}\log \frac {1}{1-r}} = 0; $

  (c) $\displaystyle \liminf _{r \to 1} \frac {N_{{\mathcal R} f}(r)}{\log \frac {1}{1-r}} = 0 \ \text{ a.s. } \text{ and } \ \liminf _{r \to 1} \frac {{\mathbb E}(N_{{\mathcal R} f}(r))}{\log \frac {1}{1-r}} = 0. $

Proof We take an increasing sequence of integers $(m_k)_{k \geq 1}$ with $m_1 = 3$ and

(4.5) $$ \begin{align} m_{k+1}> \left(m_{k} 2^{m_{k}}\right)^{1 + \frac{1}{\alpha}}, \end{align} $$

and hence

(4.6) $$ \begin{align} m_{k+1}> (k+1)^{\frac{4}{\alpha}} m_{k} 2^{m_{k}}. \end{align} $$

Put

$$ \begin{align*}n_k: = m_k 2^{m_k}, \ \ a_k: = \frac{n_k^{\alpha}}{k^2}, \ \ \text{ and } \ \ f(z): = \sum_{k=1}^{\infty} a_k z^{n_k}. \end{align*} $$

Then, by Proposition 2.7, $f \in ({\mathbb G}_{\alpha })_*$ .

(a) Put $r_k: = 2^{-\alpha 2^{-m_k}}$ . Then, letting $k \to \infty $ , we get

(4.7) $$ \begin{align} \frac{1}{1 - r_k} \log \frac{1}{1 - r_k} \sim \frac{m_k 2^{m_k}}{\alpha}. \end{align} $$
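To verify (4.7), and for later use, we record the routine computation $r_k^{n_k} = 2^{-\alpha 2^{-m_k} m_k 2^{m_k}} = 2^{-\alpha m_k}$ , together with

$$ \begin{align*}1 - r_k = 1 - e^{-\alpha 2^{-m_k} \log 2} \sim \alpha \log 2 \cdot 2^{-m_k}, \quad \text{ so that } \quad \frac{1}{1 - r_k} \log \frac{1}{1 - r_k} \sim \frac{2^{m_k}}{\alpha \log 2} \cdot m_k \log 2 = \frac{m_k 2^{m_k}}{\alpha}, \end{align*} $$

both asymptotics being taken as $k \to \infty $ .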

Let $A_k$ be the event that

$$ \begin{align*}\left| a_k X_k \left(r_ke^{i\theta}\right)^{n_k} \right|> \left| \sum_{j \neq k} a_j X_j \left(r_ke^{i\theta}\right)^{n_j} \right| \quad \text{ for every } \ \theta \in [0, 2\pi]. \end{align*} $$

By Rouché’s theorem, $n_{{\mathcal R} f}(r_k) = n_k$ on $A_k$ . Hence, ${\mathbb E}(n_{{\mathcal R} f}(r_k)) \geq n_k {\mathbb P}(A_k)$ for each $k \in {\mathbb N}$ . We claim that ${\mathbb P}(A_k) \to 1$ as $k \to \infty $ . This, together with (4.7), implies

$$ \begin{align*}\limsup_{k \to \infty}\frac{{\mathbb E}(n_{{\mathcal R} f}(r_k))}{\frac{1}{1 - r_k} \log \frac{1}{1 - r_k}} \geq \limsup_{k \to \infty} \frac{n_k {\mathbb P}(A_k)}{\frac{1}{1 - r_k} \log \frac{1}{1 - r_k}} = \alpha, \end{align*} $$

which, in turn, implies the assertion for ${\mathbb E}(n_{{\mathcal R} f}(r))$ . On the other hand, putting $ A := \limsup _{k \to \infty } A_k, $ we get ${\mathbb P}(A) = 1$ , since ${\mathbb P}(A) \geq \limsup _{k \to \infty } {\mathbb P}(A_k) = 1$ . For every realization of the standard random sequence $(X_k)_{k \geq 1}$ in $A$ , there is a subsequence $(k_j)_{j \geq 1}$ of integers such that $(X_k)_{k \geq 1} \in A_{k_j}$ for every $j \in {\mathbb N}$ . Then

$$ \begin{align*}\limsup_{j \to \infty}\frac{n_{{\mathcal R} f}(r_{k_j})}{\frac{1}{1 - r_{k_j}} \log \frac{1}{1 - r_{k_j}}} = \limsup_{j \to \infty} \frac{n_{k_j}}{\frac{1}{1 - r_{k_j}} \log \frac{1}{1 - r_{k_j}}} = \alpha, \end{align*} $$

which implies the assertion for $n_{{\mathcal R} f}(r)$ .

It remains to prove the claim. For sufficiently large k, by (4.6), we get

$$ \begin{align*} a_k r_k^{n_k} = \frac{m_k^{\alpha}}{k^2} \ \ \text{ and } \ \ \sum_{j \neq k} j^2 a_j r_k^{n_j} & \leq (k-1)n_{k-1}^{\alpha} + \sum_{j> k} m_j^{\alpha} \left( 2^{\alpha (1 - 2^{m_j - m_k})}\right)^{m_j}, \end{align*} $$

which is at most $(k-1)n_{k-1}^{\alpha } + 1$ . Since (4.6) also gives $m_k^{\alpha }> k^4 n_{k-1}^{\alpha }$ , it follows that

(4.8) $$ \begin{align} \frac{\sum_{j \neq k} j^2 a_j r_k^{n_j}}{a_k r_k^{n_k}} < \frac{2}{k} \end{align} $$

for sufficiently large k. Next, we assume that $(X_k)_{k \geq 1}$ is a standard complex Gaussian sequence since the other two cases are easier. For each $k \geq 1$ and $j \neq k$ , put

$$ \begin{align*}\widetilde{X_k}: = a_k X_k \left(r_ke^{i\theta}\right)^{n_k} \ \ \text{ and } \ \ \ \widetilde{X_{k,j}}: = 2j^2 a_j X_j \left(r_ke^{i\theta}\right)^{n_j}. \end{align*} $$

Here, $\widetilde {X_k}$ and $\widetilde {X_{k,j}}$ are independent complex Gaussian random variables with zero mean and variances

$$ \begin{align*}\sigma_k = a_k r_k^{n_k} \ \ \text{ and, respectively, } \ \ \sigma_{k, j} = 2j^2 a_j r_k^{n_j}. \end{align*} $$

Letting $A_{k, j}$ be the event that $\left |\widetilde {X_k}\right |> \left |\widetilde {X_{k,j}}\right |$ , we get

$$ \begin{align*}\bigcap_{j \neq k}A_{k, j} \subset A_k, \quad \text{ and hence, } \quad 1 - {\mathbb P}(A_k) \leq \sum_{j \neq k} (1 - {\mathbb P}(A_{k,j})). \end{align*} $$

Moreover, for each $j \neq k$ ,

$$ \begin{align*}1- {\mathbb P}(A_{k,j}) = 1- \frac{2}{\pi} \int_{\frac{\sigma_{k, j}}{\sigma_k}}^{+\infty} \frac{dt}{1+t^2} = \frac{2}{\pi} \arctan \frac{\sigma_{k, j}}{\sigma_k} \leq \frac{2\sigma_{k, j}}{\pi\sigma_k}, \end{align*} $$

which together with (4.8) implies that

$$ \begin{align*}1 - {\mathbb P}(A_k) \leq \frac{2}{\pi} \frac{\sum_{j \neq k} \sigma_{k, j}}{\sigma_k} \to 0 \quad \text{ as } \quad k \to \infty. \end{align*} $$

(b) For every $\alpha ' < \alpha $ , we put $r^{\prime }_k: = 2^{-\alpha ' 2^{-m_k}}.$ As above, we have

$$ \begin{align*}\frac{1}{1 - r^{\prime}_k} \log \frac{1}{1 - r^{\prime}_k} \sim \frac{m_k 2^{m_k}}{\alpha'}. \end{align*} $$

Let $A^{\prime }_k$ be the event that

$$ \begin{align*}\left| a_k X_k \left(r^{\prime}_ke^{i\theta}\right)^{n_k} \right|> \left| \sum_{j \neq k} a_j X_j \left(r^{\prime}_ke^{i\theta}\right)^{n_j} \right| \quad \text{ for every } \theta \in [0, 2\pi]. \end{align*} $$

Again, Rouché’s theorem yields that $n_{{\mathcal R} f}(r^{\prime }_k) = n_k$ on $A^{\prime }_k$ for each $k \in {\mathbb N}$ . Moreover, for sufficiently large k,

(4.9) $$ \begin{align} \frac{\sum_{j \neq k} j^2 a_j (r^{\prime}_k)^{n_j}}{a_k (r^{\prime}_k)^{n_k}} < \frac{2}{k}. \end{align} $$

Using arguments similar to those above, we see that ${\mathbb P}(A^{\prime }_k) \to 1$ .

Now, we consider the limit for $n_{{\mathcal R} f}(r)$ . Putting $A_{[\alpha ']} = \limsup _{k \to \infty } A^{\prime }_k$ , we get ${\mathbb P}(A_{[\alpha ']}) = 1$ and

$$ \begin{align*}\liminf_{r \to 1} \frac{n_{{\mathcal R} f}(r)}{\frac{1}{1-r}\log \frac{1}{1-r}} \leq \alpha' \ \ \text{ on } \ \ A_{[\alpha']}. \end{align*} $$

Then, putting $A_{[0]}: = \limsup _{m \to \infty } A_{[1/m]}$ , we get ${\mathbb P}(A_{[0]}) = 1$ and

$$ \begin{align*}\liminf_{r \to 1} \frac{n_{{\mathcal R} f}(r)}{\frac{1}{1-r}\log \frac{1}{1-r}} = 0 \ \ \text{ on } \ \ A_{[0]}. \end{align*} $$

To treat ${\mathbb E}(n_{{\mathcal R} f}(r))$ , we separate into two cases. If $(X_k)_{k \geq 1}$ is either a standard Bernoulli or a standard Steinhaus sequence, then $|X_j| = 1$ for every $j$ , so (4.9) shows that, for sufficiently large $k$ , the event $A_k'$ is the whole space $\Omega $ ; hence $n_{{\mathcal R} f}(r^{\prime }_k) = n_k$ on $\Omega $ and ${\mathbb E}(n_{{\mathcal R} f}(r^{\prime }_k)) = n_k$ , which implies that

$$ \begin{align*}\liminf_{r \to 1} \frac{{\mathbb E}(n_{{\mathcal R} f}(r))}{\frac{1}{1-r}\log \frac{1}{1-r}} \leq \limsup_{k \to \infty}\frac{{\mathbb E}(n_{{\mathcal R} f}(r^{\prime}_{k}))}{\frac{1}{1 - r^{\prime}_{k}} \log \frac{1}{1 - r^{\prime}_{k}}} = \alpha'. \end{align*} $$

Now we assume that $(X_k)_{k \geq 1}$ is a standard complex Gaussian sequence. In this case, we may use the Kac formula as in [Reference Edelman and Kostlan6, Theorem 8.2] to get

$$ \begin{align*}{\mathbb E}(n_{{\mathcal R} f}(r)) = r \frac{d}{dr} \log \sigma_f(r) = \frac{\sum_{k=1}^{\infty} n_k a_k^2 r^{2n_k}}{\sum_{k=1}^{\infty} a_k^2 r^{2n_k}}. \end{align*} $$

Similarly to the above, for sufficiently large k,

$$ \begin{align*}\sum_{j \neq k} n_j a_j^2 (r^{\prime}_k)^{2n_j} \leq n_{k-1}^{2\alpha+1} + 1. \end{align*} $$

Thus,

$$ \begin{align*}{\mathbb E}(n_{{\mathcal R} f}(r_k')) \leq n_k + \frac{k^4(n_{k-1}^{2\alpha+1} + 1)}{m_k^{2\alpha}}. \end{align*} $$
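The last estimate follows by keeping only the $k$ th term in the denominator of the Kac quotient (a routine check): since $a_k (r^{\prime }_k)^{n_k} = m_k^{\alpha } 2^{(\alpha - \alpha ') m_k}/k^2 \geq m_k^{\alpha }/k^2$ , we have

$$ \begin{align*}{\mathbb E}(n_{{\mathcal R} f}(r^{\prime}_k)) \leq n_k \cdot \frac{a_k^2 (r^{\prime}_k)^{2n_k}}{\sum_{j} a_j^2 (r^{\prime}_k)^{2n_j}} + \frac{\sum_{j \neq k} n_j a_j^2 (r^{\prime}_k)^{2n_j}}{a_k^2 (r^{\prime}_k)^{2n_k}} \leq n_k + \frac{k^4 (n_{k-1}^{2\alpha+1} + 1)}{m_k^{2\alpha}}. \end{align*} $$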

From this and (4.5), it follows that

$$ \begin{align*}\liminf_{r \to 1} \frac{{\mathbb E}(n_{{\mathcal R} f}(r))}{\frac{1}{1-r}\log \frac{1}{1-r}} \leq \limsup_{k \to \infty}\frac{{\mathbb E}(n_{{\mathcal R} f}(r^{\prime}_{k}))}{\frac{1}{1 - r^{\prime}_{k}} \log \frac{1}{1 - r^{\prime}_{k}}} \leq \alpha'. \end{align*} $$

(c) For sufficiently large k, by (4.9),

$$ \begin{align*}\log \sigma_f(r_k') \leq \alpha \log m_k + (\alpha - \alpha')m_k \log 2. \end{align*} $$
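In more detail (a routine verification): since $\sigma _f^2(r) \leq \big (\sum _{j=1}^{\infty } a_j r^{n_j}\big )^2$ , the estimate (4.9) and the identity $a_k (r^{\prime }_k)^{n_k} = m_k^{\alpha } 2^{(\alpha - \alpha ') m_k}/k^2$ yield

$$ \begin{align*}\sigma_f(r^{\prime}_k) \leq \left(1 + \frac{2}{k}\right) a_k (r^{\prime}_k)^{n_k} = \left(1 + \frac{2}{k}\right) \frac{m_k^{\alpha}\, 2^{(\alpha - \alpha')m_k}}{k^2}, \end{align*} $$

and taking logarithms absorbs the factors $1 + \frac{2}{k}$ and $k^{-2}$ for sufficiently large $k$ .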

Repeating the arguments in the proof of Lemma 4.2 for the sequence $(r_k')_{k \geq 1}$ instead of $(r_n)_{n \geq 1}$ , we get

$$ \begin{align*}\liminf_{r \to 1} \frac{{\mathbb E}(N_{{\mathcal R} f}(r))}{\log \frac{1}{1-r}} \leq \limsup_{k \to \infty}\frac{\log \sigma_f(r_k') + C - {\mathbb E}(\log|X_0|)}{\log \frac{1}{1 - r^{\prime}_{k}}} \leq \alpha - \alpha' \end{align*} $$

and

$$ \begin{align*}\liminf_{r \to 1} \frac{N_{{\mathcal R} f}(r)}{\log \frac{1}{1-r}} \leq \limsup_{k \to \infty} \frac{\log \sigma_f(r^{\prime}_{k}) + k^2 - \log |X_0|}{\log \frac{1}{1 - r^{\prime}_{k}}} \leq \alpha - \alpha' \ \text{a.s.} \end{align*} $$

This completes the proof since $\alpha ' <\alpha $ is arbitrary.

Now, to illustrate the sharpness of the lower bounds in Corollary 4.3, we construct an example in the case of a standard complex Gaussian sequence. We do not know whether this sharpness persists for the Bernoulli and Steinhaus cases.

Example 4.4 Let $\alpha> 0$ and $(X_n)_{n \ge 0}$ be a standard complex Gaussian sequence. There exists a function f in $({\mathbb G}_{\alpha })_*$ such that $ \lim _{r \to 1} \frac {{\mathbb E}(n_{{\mathcal R} f}(r))}{\frac {1}{1-r}} = \alpha. $ Indeed, we consider $ f(z): = \sum _{n=0}^{\infty }a_nz^n $ with

$$ \begin{align*}a_n: = \sqrt{\frac{\Gamma(n+2\alpha)}{\Gamma(2\alpha)\Gamma(n+1)}}. \end{align*} $$

Then $ \sum _{k=0}^n a_k^{2} \simeq n^{2\alpha }, $ hence $f \in ({\mathbb G}_{\alpha })_*$ by Theorem B. In this case, $ \sigma ^2_f(r) = \frac {1}{(1-r^2)^{2\alpha }}. $ By the Kac formula as improved by Edelman and Kostlan in [Reference Edelman and Kostlan6, Theorem 8.2], $ {\mathbb E}(n_{{\mathcal R} f}(r)) = r\frac {d}{dr} \log \sigma _f(r) = \frac {2\alpha r^2}{1-r^2},$ from which the desired conclusion follows.
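For the reader's convenience, the computations behind Example 4.4 are the following standard binomial-series identities:

$$ \begin{align*}\sigma_f^2(r) = \sum_{n=0}^{\infty} \frac{\Gamma(n+2\alpha)}{\Gamma(2\alpha)\Gamma(n+1)}\, r^{2n} = \frac{1}{(1-r^2)^{2\alpha}}, \qquad r \frac{d}{dr} \log \sigma_f(r) = r \cdot \frac{2\alpha r}{1-r^2} = \frac{2\alpha r^2}{1-r^2}, \end{align*} $$

so that ${\mathbb E}(n_{{\mathcal R} f}(r)) \cdot (1-r) = \frac{2\alpha r^2}{1+r} \to \alpha $ as $r \to 1$ , as claimed.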

We end this paper with two remarks on the zero sets of ${\mathcal R} f$ when $f \in (H_{\alpha })_*$ and those of $f$ when $f \in H_{\alpha }.$

Remark 4.5 By Lemma 4.1 and Corollary 2.6, it follows that

$$ \begin{align*}\limsup_{r \to 1} \frac{n_{{\mathcal R} f}(r)}{\frac{1}{1-r}\log \frac{1}{1-r}} \leq \alpha \ \text{ a.s. } \text{ and } \ \limsup_{r \to 1} \frac{{\mathbb E}(n_{{\mathcal R} f}(r))}{\frac{1}{1-r}\log \frac{1}{1-r}} \leq \alpha \end{align*} $$

for every $f \in (H_{\alpha })_*$ with $\alpha> 0$ . Moreover, by Proposition 2.9, the function f constructed in Example 4.3 belongs to $(H_{\alpha })_*$ .

Remark 4.6 For every function $f \in H_{\alpha }$ , $\left (\sum _{k=0}^n|a_k|^2\right )^{\frac {1}{2}} = O(n^{\alpha })$ by [Reference Bennett, Stegenga and Timoney1, Theorem 1.8(a)], which, together with the arguments in the proof of Lemma 4.1, implies that

$$ \begin{align*}\limsup_{r \to 1} \frac{n_{f}(r)}{\frac{1}{1-r}\log \frac{1}{1-r}} \leq \alpha. \end{align*} $$

For the function f constructed in Example 4.3, we have

$$ \begin{align*}f \in H_{\alpha} \ \ \text{ and } \ \ \limsup_{r \to 1} \frac{n_{f}(r)}{\frac{1}{1-r}\log \frac{1}{1-r}} = \alpha. \end{align*} $$

This improves results of Shapiro and Shields in [Reference Shapiro and Shields30, Theorems 5 and 6].

Acknowledgement

We thank the referee for a meticulous reading, which greatly improved the presentation of this paper.

Footnotes

X. Fang is supported by NSTC of Taiwan (Grant No. 112-2115-M-008-010-MY2).

References

Bennett, G., Stegenga, D. A., and Timoney, R. M., Coefficients of Bloch and Lipschitz functions. Ill. J. Math. 25(1981), 520–531.
Billard, P., Séries de Fourier aléatoirement bornées, continues, uniformément convergentes. Stud. Math. 22(1963), 309–329.
Cheng, G., Fang, X., and Liu, C., A Littlewood-type theorem for random Bergman functions. Int. Math. Res. Not. 2022(2022), 11056–11091.
Cochran, W. G., Shapiro, J. H., and Ullrich, D. C., Random Dirichlet functions: multipliers and smoothness. Canad. J. Math. 45(1993), 255–268.
Duren, P. L., Theory of $H^p$ spaces, Academic Press, New York, 1970.
Edelman, A. and Kostlan, E., How many zeros of a random polynomial are real? Bull. Amer. Math. Soc. 32(1995), 1–37.
Fang, X. and Pham Trong Tien, Two problems on random analytic functions in Fock spaces. Canad. J. Math. 75(2023), no. 4, 1176–1198.
Gao, F., A characterization of random Bloch functions. J. Math. Anal. Appl. 252(2000), 959–966.
Hedenmalm, H., Korenblum, B., and Zhu, K., Theory of Bergman spaces, Graduate Texts in Mathematics, 199, Springer, New York, 2000.
Jain, N. C. and Marcus, M. B., Sufficient conditions for the continuity of stationary Gaussian processes and applications to random series of functions. Ann. Inst. Fourier 24(1974), 117–141.
Kahane, J.-P., Some random series of functions, 2nd ed., Cambridge Studies in Advanced Mathematics, 5, Cambridge University Press, Cambridge, 1985.
Krasnoselski, M. A. and Ruticki, J. B., Convex functions and Orlicz spaces, P. Noordhoff Ltd., Groningen, 1961. Translated from the first Russian edition by Leo F. Boron.
Levin, B. Y., Distribution of zeros of entire functions, Translations of Mathematical Monographs, 5, American Mathematical Society, Providence, RI, 1964.
Levin, B. Y., Lectures on entire functions, Translations of Mathematical Monographs, 150, American Mathematical Society, Providence, RI, 1996.
Li, D. and Queffélec, H., Introduction to Banach spaces: analysis and probability, Vol. 1, Cambridge Studies in Advanced Mathematics, 166, Cambridge University Press, Cambridge, 2018.
Li, D. and Queffélec, H., Introduction to Banach spaces: analysis and probability, Vol. 2, Cambridge Studies in Advanced Mathematics, 167, Cambridge University Press, Cambridge, 2018.
Littlewood, J. E., On mean values of power series (II). J. Lond. Math. Soc. 5(1930), 179–182.
Littlewood, J. E. and Offord, A. C., On the distribution of zeros and $a$-values of a random integral function (II). Ann. Math. 49(1948), 885–952.
Liu, C., Multipliers for Dirichlet type spaces by randomization. Banach J. Math. Anal. 14(2020), 935–949.
Marcus, M. B. and Pisier, G., Necessary and sufficient conditions for the uniform convergence of random trigonometric series, Lecture Notes Series, 50, Matematisk Institut, Aarhus Universitet, Aarhus, 1978.
Marcus, M. B. and Pisier, G., Random Fourier series with applications to harmonic analysis, Princeton University Press, Princeton, NJ, 1981.
Nazarov, F., Nishry, A., and Sodin, M., Log-integrability of Rademacher Fourier series with applications to random analytic functions. St. Petersburg Math. J. 25(2014), 467–494.
Nazarov, F., Nishry, A., and Sodin, M., Distribution of zeros of Rademacher Taylor series. Ann. Fac. Sci. Toulouse Math. 25(2016), 759–784.
Offord, A. C., The distribution of zeros of power series whose coefficients are independent random variables. Indian J. Math. 9(1967), 175–196.
Paley, R. E. A. C. and Zygmund, A., On some series of functions (1). Proc. Cambridge Philos. Soc. 26(1930), 337–357.
Paley, R. E. A. C. and Zygmund, A., On some series of functions (2). Proc. Cambridge Philos. Soc. 26(1930), 458–474.
Rao, M. M. and Ren, Z. D., Theory of Orlicz spaces, Monographs and Textbooks in Pure and Applied Mathematics, 146, Marcel Dekker, Inc., New York, 1991.
Rudin, W., Some theorems on Fourier coefficients. Proc. Amer. Math. Soc. 10(1959), 855–859.
Salem, R. and Zygmund, A., Some properties of trigonometric series whose terms have random signs. Acta Math. 91(1954), 245–301.
Shapiro, H. S. and Shields, A. L., On the zeros of functions with finite Dirichlet integral and some related function spaces. Math. Z. 80(1962), 217–229.
Shields, A. L. and Williams, D. L., Bounded projections, duality and multipliers in spaces of analytic functions. Trans. Amer. Math. Soc. 162(1971), 287–302.
Shields, A. L. and Williams, D. L., Bounded projections, duality and multipliers in spaces of harmonic functions. J. Reine Angew. Math. 299/300(1978), 256–279.
Ullrich, D. C., An extension of the Kahane–Khinchine inequality in a Banach space. Israel J. Math. 62(1988), 56–62.
Ullrich, D. C., Khinchin’s inequality and the zeros of Bloch functions. Duke Math. J. 57(1988), 519–535.
Yang, C. and Xu, W., Spaces with normal weights and Hadamard gap series. Arch. Math. 96(2011), 151–160.