
Distribution and moments of the error term in the lattice point counting problem for three-dimensional Cygan–Korányi balls

Published online by Cambridge University Press:  19 May 2023

Yoav A. Gath*
Affiliation:
Centre for Mathematical Sciences, Wilberforce Road, Cambridge CB3 0WA, UK ([email protected])

Abstract

We study fluctuations of the error term for the number of integer lattice points lying inside a three-dimensional Cygan–Korányi ball of large radius. We prove that the error term, suitably normalized, has a limiting value distribution which is absolutely continuous, and we provide estimates for the decay rate of the corresponding density on the real line. In addition, we establish the existence of all moments for the normalized error term, and we prove that these are given by the moments of the corresponding density.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
Copyright © The Author(s), 2023. Published by Cambridge University Press on behalf of The Royal Society of Edinburgh

1. Introduction, notation and statement of results

1.1 Introduction

Given an integer $q\geq 1$, let

\[ N_{q}(x)=\big|\big\{\mathbf{z}\in\mathbb{Z}^{2q+1}:|\mathbf{z}|_{{\scriptscriptstyle Cyg}}\leq x\big\}\big| \]

be the counting function for the number of integer lattice points which lie inside a $(2q+1)$-dimensional Cygan–Korányi ball of large radius $x>0$, where $|\,|_{{\scriptscriptstyle Cyg}}$ is the Cygan–Korányi norm defined by

\[ |\mathbf{u}|_{{\scriptscriptstyle Cyg}}=\Big(\Big(u^{2}_{1}+\cdots+u^{2}_{2q}\Big)^{2}+u^{2}_{2q+1}\Big)^{1/4}. \]

The problem of estimating $N_{q}(x)$ arises naturally from the homogeneous structure imposed on the $q$-th Heisenberg group $\mathbb {H}_{q}$ (when realized over $\mathbb {R}^{2q+1}$), namely, we have

\[ N_{q}(x)=\big|\mathbb{Z}^{2q+1}\cap\delta_{x}\mathcal{B}\big|, \]

where $\delta _{x}\mathbf {u}=(xu_{{\scriptscriptstyle 1}},\,\ldots,\, xu_{{\scriptscriptstyle 2q}},\,x^{2}u_{{\scriptscriptstyle 2q+1}})$ with $x>0$ are the Heisenberg dilations, and $\mathcal {B}=\big \{\mathbf {u}\in \mathbb {R}^{2q+1}:|\mathbf {u}|_{{\scriptscriptstyle Cyg}}\leq 1\big \}$ is the unit ball with respect to the Cygan–Korányi norm (see [Reference Garg, Nevo and Taylor4, Reference Gath6] for more details). It is clear that $N_{q}(x)$ will grow for large $x$ like $\textit {vol}(\mathcal {B})x^{2q+2}$, where $\textit {vol}(\cdot )$ is the Euclidean volume, and we shall be interested in the error term resulting from this approximation. In particular, in the present paper, we investigate the nature in which $N_{1}(x)$ fluctuates around its expected value $\textit {vol}(\mathcal {B})x^{4}$.
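For $q=1$, the volume admits a closed form: slicing $\mathcal {B}$ at height $u_{3}$ gives a disc of radius $(1-u_{3}^{2})^{1/4}$, whence $\textit {vol}(\mathcal {B})=\int _{-1}^{1}\pi \sqrt {1-u_{3}^{2}}\,{\rm d}u_{3}=\pi ^{2}/2$. The following Python sketch (ours, not part of the paper's argument; all function names are our own) counts $N_{1}(x)$ by direct enumeration and checks the leading-order growth $\textit {vol}(\mathcal {B})x^{4}$:

```python
import math

def N1(x: int) -> int:
    """Count z in Z^3 with (z1^2 + z2^2)^2 + z3^2 <= x^4 (Cygan-Koranyi ball, q = 1)."""
    x4 = x ** 4
    total = 0
    for z1 in range(-x, x + 1):
        for z2 in range(-x, x + 1):
            s = z1 * z1 + z2 * z2
            if s * s > x4:          # then no z3 can satisfy the inequality
                continue
            # admissible z3 satisfy z3^2 <= x^4 - s^2: exactly 2*floor(sqrt(.)) + 1 values
            total += 2 * math.isqrt(x4 - s * s) + 1
    return total

vol_B = math.pi ** 2 / 2            # vol(B) for q = 1, by the slicing computation above
x = 10
print(N1(x), vol_B * x ** 4)        # relative error is O(x^{-2}), since kappa_1 = 2
```

The exact floor via `math.isqrt` avoids floating-point boundary errors when $x^{4}-s^{2}$ is a perfect square.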

Definition Let $q\geq 1$ be an integer. For $x>0$, define

\[ \mathcal{E}_{q}(x)=N_{q}(x)-\textit{vol}\big(\mathcal{B}\big)x^{2q+2}, \]

and set $\kappa _{q}=\sup \big \{\alpha >0:\big |\mathcal {E}_{q}(x)\big |\ll x^{2q+2-\alpha }\big \}$.

We shall refer to the lattice point counting problem for $(2q+1)$-dimensional Cygan–Korányi balls as the problem of determining the value of $\kappa _{q}$. This problem was first considered by Garg et al. [Reference Garg, Nevo and Taylor4], who established the lower bound $\kappa _{q}\geq 2$ for all integers $q\geq 1$. Before we proceed to state our main results, we recall what is known for this lattice point counting problem.

For $q=1$, the lower bound of Garg et al. was proven by the author to be sharp, that is $\kappa _{1}=2$ (see [Reference Gath5], theorem 1.1, and note that a different normalization is used for the exponent of the error term). Thus, the lattice point counting problem for three-dimensional Cygan–Korányi balls is settled.

The behaviour of the error term $\mathcal {E}_{q}(x)$ in the lattice point counting problem for $(2q+1)$-dimensional Cygan–Korányi balls with $q>1$ is of an entirely different nature compared to the case $q=1$, and is closely related to the behaviour of the error term in the Gauss circle problem. In the higher-dimensional case $q\geq 3$, the best result available to date is $|\mathcal {E}_{q}(x)|\ll x^{2q-2/3}$, proved by the author ([Reference Gath6], theorem 1), and we also have ([Reference Gath6], theorem 3) the $\Omega$-result $\mathcal {E}_{q}(x)=\Omega (x^{2q-1}(\log {x})^{1/4}(\log {\log {x}})^{1/8})$. It follows that $\frac {8}{3}\leq \kappa _{q}\leq 3$. As for the conjectural value of $\kappa _{q}$ in the case $q\geq 3$, it is known ([Reference Gath6], theorem 2) that $\mathcal {E}_{q}(x)$ has order of magnitude $x^{2q-1}$ in mean-square, which leads to the conjecture that $\kappa _{q}=3$. The case $q=2$ marks somewhat of a transition point between $q=1$ and the higher-dimensional case $q\geq 3$, at which the error term changes its behaviour. It is known (unpublished) that $\mathcal {E}_{2}(x)$ has order of magnitude bounded by $x^{3}\log ^{3}{x}$ in mean square, leading to the conjectural value $\kappa _{2}=3$.

To see how the two cases $q=1$ and $q>1$ differ, let $r_{{\scriptscriptstyle Cyg}}(n;q)=\big |\big \{\mathbf {z}\in \mathbb {Z}^{2q+1}:|\mathbf {z}|^{4}_{{\scriptscriptstyle Cyg}}=n\big \}\big |$, so that

\[ N_{q}(x)=\sum_{n\,\leq\,x^{4}}r_{{\scriptscriptstyle Cyg}}(n;q). \]

The arithmetical function $r_{{\scriptscriptstyle Cyg}}(n;q)$ exhibits a dichotomous behaviour depending on whether $q=1$ or $q>1$, and this in turn dictates the behaviour of the error term $\mathcal {E}_{q}(x)$ as we now explain. The main point to notice is that $r_{{\scriptscriptstyle Cyg}}(n;q)$ may be expressed as

\[ r_{{\scriptscriptstyle Cyg}}(n;q)=\sum_{m^{2}+\ell^{2}=n}r_{2q}(m), \]

where $r_{{\scriptscriptstyle 2q}}(m)$ is the counting function for the number of representations of the integer $m$ as a sum of $2q$ squares. Using classical results on the representation of integers by positive-definite binary quadratic forms, it can be shown that $r_{{\scriptscriptstyle Cyg}}(n;q)\asymp n^{\frac {q-1}{2}}r_{{\scriptscriptstyle 2}}(n)$ for $q\geq 3$, while for $q=2$, $r_{{\scriptscriptstyle Cyg}}(n;2)$ obeys a similar growth rate with slight variation, namely $n^{1/2}r_{{\scriptscriptstyle 2}}(n)/\log {\log {3n}}\ll r_{{\scriptscriptstyle Cyg}}(n;2)\ll n^{1/2}r_{{\scriptscriptstyle 2}}(n)\log {\log {3n}}$. Ignoring this slight variation, it follows that $r_{{\scriptscriptstyle Cyg}}(n;q)$ grows roughly like $n^{\frac {q-1}{2}}r_{{\scriptscriptstyle 2}}(n)$ for $q>1$. This estimate together with partial summation implies that

\[ N_{q}(x)\asymp x^{2q-2}\sum_{n\,\leq\,x^{4}}r_{{\scriptscriptstyle2}}(n)\quad;\quad q>1. \]

By expanding both sides above into main term and error term, one expects that $\mathcal {E}_{q}(x)$ for $q>1$ grows, for large $x>0$, roughly like $x^{2q-2}E(x^{4})$, where $E(y)=\sum _{n\leq y}r_{{\scriptscriptstyle 2}}(n)-\pi y$ is the error term in the Gauss circle problem.
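The convolution identity displayed above can be verified numerically for small $n$; the sketch below (ours, illustrative only) does this for $q=1$, comparing a direct enumeration with the sum $\sum _{m^{2}+\ell ^{2}=n}r_{{\scriptscriptstyle 2}}(m)$ over $m\geq 0$, $\ell \in \mathbb {Z}$:

```python
import math

def r2(m: int) -> int:
    """Representations of m as a^2 + b^2 with (a, b) in Z^2, by brute force."""
    b = math.isqrt(m)
    return sum(1 for a in range(-b, b + 1) for c in range(-b, b + 1)
               if a * a + c * c == m)

def r_cyg_direct(n: int) -> int:
    """|{z in Z^3 : (z1^2 + z2^2)^2 + z3^2 = n}| by direct enumeration (q = 1)."""
    count = 0
    B = math.isqrt(math.isqrt(n))               # |z1|, |z2| <= n^(1/4)
    for z1 in range(-B, B + 1):
        for z2 in range(-B, B + 1):
            rem = n - (z1 * z1 + z2 * z2) ** 2
            if rem < 0:
                continue
            z3 = math.isqrt(rem)
            if z3 * z3 == rem:
                count += 1 if z3 == 0 else 2    # z3 and -z3, counted once if z3 = 0
    return count

def r_cyg_conv(n: int) -> int:
    """Sum of r2(m) over m >= 0, l in Z with m^2 + l^2 = n."""
    total = 0
    for l in range(-math.isqrt(n), math.isqrt(n) + 1):
        rem = n - l * l
        m = math.isqrt(rem)
        if m * m == rem:
            total += r2(m)
    return total

print(all(r_cyg_direct(n) == r_cyg_conv(n) for n in range(300)))  # expected: True
```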

This is in sharp contrast to what happens in the case $q=1$, where to begin with one notes that $r_{{\scriptscriptstyle Cyg}}(n;1)$ cannot be effectively estimated (in the same manner as was done for $q>1$) in terms of $r_{{\scriptscriptstyle 2}}(n)$. As we shall see however, $\mathcal {E}_{1}(x)$ can be modelled by means of the smoothed error term $R(y)=\sum _{n\leq y}r_{{\scriptscriptstyle 2}}(n)(1-n/y)^{1/2}-\frac {2\pi }{3}y$, and we have (see § 2.2, proof of proposition 2.1) that $\mathcal {E}_{1}(x)$ grows for large $x>0$ roughly like $x^{2}R(x^{2})$.

We now turn to the main objective of the current paper, which concerns the nature in which $N_{1}(x)$ fluctuates around its expected value $\textit {vol}(\mathcal {B})x^{4}$. Our interest in understanding these fluctuations is motivated by trying to quantify (in the measure-theoretic sense) the set of $x$s for which $\pm \mathcal {E}_{1}(x)$ can be large relative to $x^{2}$ (recall that $\kappa _{1}=2$ unconditionally). For instance, theorem 1.2 of [Reference Gath5] guarantees that the sets $\{x>0: \mathcal {E}_{1}(x)/x^{2}>a\}$ are unbounded for any real number $a$, and it is natural to ask what can be said about their relative measure.

To that end, let $\widehat {\mathcal {E}}_{1}(x)=\mathcal {E}_{1}(x)/x^{2}$ be the suitably normalized error term, and for $X>0$ let ${\rm d}\nu _{{\scriptscriptstyle X,1}}$ be the distribution given by

\[ \int\limits_{\mathcal{I}}{\rm d}\nu_{{\scriptscriptstyle X,1}}(\alpha)=\frac{1}{X}\textit{meas}\big\{X< x<2X:\widehat{\mathcal{E}}_{1}(x)\in\mathcal{I}\big\}\,, \]

where $\mathcal {I}$ is an interval (not necessarily finite) on the real line. Our goal will be to establish the weak convergence of the distributions ${\rm d}\nu _{{\scriptscriptstyle X,1}}$, as $X\to \infty$, to an absolutely continuous distribution ${\rm d}\nu _{{\scriptscriptstyle 1}}(\alpha )=\mathcal {P}_{1}(\alpha ){\rm d}\alpha$, and obtain decay estimates for its defining density $\mathcal {P}_{1}(\alpha )$. In order to put the results of this paper in the proper context, we quote the corresponding results in the higher-dimensional case $q\geq 3$ ([Reference Gath7], theorems 1 and 3) so that they can be compared with the ones appearing here, highlighting once more the distinction between the case $q=1$ and $q\geq 3$.

Theorem 1.1 [Reference Gath7]

Let $q\geq 3$ be an integer, and let $\widehat {\mathcal {E}}_{q}(x)=\mathcal {E}_{q}(x)/x^{2q-1}$ be the suitably normalized error term. Then there exists a probability density $\mathcal {P}_{q}(\alpha )$ such that, for any (piecewise)-continuous function $\mathcal {F}$ satisfying the growth condition $|\mathcal {F}(\alpha )|\ll \alpha ^{2}$, we have

\[ \lim\limits_{X\to\infty}\frac{1}{X}\int\limits_{ X}^{2X}\mathcal{F}\big(\widehat{\mathcal{E}}_{q}(x)\big){\rm d}x=\int\limits_{-\infty}^{\infty}\mathcal{F}(\alpha)\mathcal{P}_{q}(\alpha){\rm d}\alpha. \]

The density $\mathcal {P}_{q}(\alpha )$ can be extended to the whole complex plane $\mathbb {C}$ as an entire function of $\alpha$, which satisfies for any non-negative integer $j\geq 0$, and any $\alpha \in \mathbb {R}$, $|\alpha |$ sufficiently large in terms of $j$ and $q$, the decay estimate

\[ \big|\mathcal{P}^{(j)}_{q}(\alpha)\big|\leq\exp{\Big(-|\alpha|^{4-\beta/\log\log{|\alpha|}}\Big)}, \]

where $\beta >0$ is an absolute constant.

We are going to establish an analogue of the above result in the case $q=1$, where the suitable normalization of the error term is given by $\widehat {\mathcal {E}}_{1}(x)=\mathcal {E}_{1}(x)/x^{2}$. We shall see once more the distinction between the case $q=1$ and the higher-dimensional case $q\geq 3$, this time with respect to the probability density $\mathcal {P}_{q}(\alpha )$.

1.2 Statement of the main results

Theorem 1.2 Let $\widehat {\mathcal {E}}_{1}(x)=\mathcal {E}_{1}(x)/x^{2}$ be the suitably normalized error term. Then $\widehat {\mathcal {E}}_{1}(x)$ has a limiting value distribution, in the sense that there exists a probability density $\mathcal {P}_{1}(\alpha )$ such that, for any (piecewise)-continuous function $\mathcal {F}$ of polynomial growth, we have

(1.1)\begin{equation} \lim\limits_{X\to\infty}\frac{1}{X}\int\limits_{X}^{2X}\mathcal{F}\big(\widehat{\mathcal{E}}_{1}(x)\big){\rm d}x=\int\limits_{-\infty}^{\infty}\mathcal{F}(\alpha)\mathcal{P}_{1}(\alpha){\rm d}\alpha. \end{equation}

The density $\mathcal {P}_{1}(\alpha )$ satisfies for any non-negative integer $j\geq 0$, and any $\alpha \in \mathbb {R}$, $|\alpha |$ sufficiently large in terms of $j$, the decay estimate

(1.2)\begin{equation} \big|\mathcal{P}^{(j)}_{1}(\alpha)\big|\leq\exp{\left(-\frac{\pi}{2}|\alpha|\exp{\big(\rho|\alpha|\big)}\right)}, \end{equation}

where $\rho >0$ is an absolute constant.

Remark 1 In the particular case where $\mathcal {F}(\alpha )=\alpha ^{j}$ with $j\geq 1$ an integer, we have

\[ \lim\limits_{X\to\infty}\frac{1}{X}\int\limits_{ X}^{2X}\widehat{\mathcal{E}}^{j}_{1}(x){\rm d}x=\int\limits_{-\infty}^{\infty}\alpha^{j}\mathcal{P}_{1}(\alpha){\rm d}\alpha\,, \]

thereby establishing the existence of all moments (see the remark following proposition 5.1 in § 5 regarding quantitative estimates).

Remark 2 Note that the decay estimates (1.2) for the probability density in the case $q=1$ are much stronger than the corresponding ones in the higher-dimensional case $q\geq 3$ as stated in the introduction. Also, note that whereas for $q\geq 3$ the density $\mathcal {P}_{q}(\alpha )$ extends to the whole complex plane as an entire function of $\alpha$, and in particular is supported on all of the real line, theorem 1.2 makes no such claim in the case $q=1$.

Our next result gives a closed-form expression for all the integral moments of the density $\mathcal {P}_{1}(\alpha )$.

Theorem 1.3 Let $j\geq 1$ be an integer. Then the $j$-th integral moment of $\mathcal {P}_{1}(\alpha )$ is given by

(1.3)\begin{equation} \int\limits_{-\infty}^{\infty}\alpha^{j}\mathcal{P}_{1}(\alpha){\rm d}\alpha=\sum_{s=1}^{j}\underset{\,\,\ell_{1},\,\ldots\,,\ell_{s}\geq1}{\sum_{\ell_{1}+\cdots+\ell_{s}=j}}\,\,\frac{j!}{\ell_{1}!\cdots\ell_{s}!}\underset{\,\,\,m_{s}>\cdots>m_{1}}{\sum_{m_{1},\,\ldots\,,m_{s}=1}^{\infty}}\prod_{i=1}^{s}\Xi(m_{i},\ell_{i}), \end{equation}

where the series on the right-hand side (RHS) of (1.3) converges absolutely. For integers $m,\,\ell \geq 1$, the term $\Xi (m,\,\ell )$ is given by

(1.4)\begin{equation} \Xi(m,\ell)=({-}1)^{\ell}\left(\frac{1}{\sqrt{2}\pi}\right)^{\ell}\frac{\mu^{2}(m)}{m^{\ell}}\sum_{\textit{e}_{1},\ldots,\textit{e}_{\ell}=\pm1}\underset{\textit{e}_{1}k_{1}+\cdots+\textit{e}_{\ell}k_{\ell}=0}{\sum_{k_{1},\ldots,k_{\ell}=1}^{\infty}}\prod_{i=1}^{\ell}\frac{r_{{\scriptscriptstyle2}}\big(mk^{2}_{i}\big)}{k_{i}^{2}}, \end{equation}

where $\mu (\cdot )$ is the Möbius function, and $r_{{\scriptscriptstyle 2}}(\cdot )$ is the counting function for the number of representations of an integer as the sum of two squares. For $\ell =1$ the sum in (1.4) is void, so by definition $\Xi (m,\,1)=0$. In particular, it follows that $\int _{-\infty }^{\infty }\alpha \mathcal {P}_{1}(\alpha ){\rm d}\alpha =0$, and $\int _{-\infty }^{\infty }\alpha ^{j}\mathcal {P}_{1}(\alpha ){\rm d}\alpha <0$ for $j\equiv 1\,(2)$ greater than one.

Remark 3 There is a further distinction between the case $q=1$, and the higher-dimensional case $q\geq 3$ in the following aspect. For $q=1$, $\mathcal {P}_{1}(\alpha )$ is the probability density corresponding to the random series $\sum _{m=1}^{\infty }\phi _{{\scriptscriptstyle 1,m}}(X_{m})$, where the $X_{m}$ are independent random variables uniformly distributed on the segment $[0,\,1]$, and the $\phi _{{\scriptscriptstyle 1,m}}(t)$ are real-valued continuous functions, periodic of period $1$, given by (see § 3)

\[ \phi_{{\scriptscriptstyle1,m}}(t)={-}\frac{\sqrt{2}}{\pi}\frac{\mu^{2}(m)}{m}\sum_{k=1}^{\infty}\frac{r_{{\scriptscriptstyle2}}\big(mk^{2}\big)}{k^{2}}\cos{(2\pi kt)}\,. \]

This is in sharp contrast to the higher-dimensional case $q\geq 3$, where the corresponding functions $\phi _{{\scriptscriptstyle q,m}}(t)$ (see [Reference Gath7], § 2 theorem 4) are aperiodic. Also, the presence of the factor $1/m$ in $\phi _{{\scriptscriptstyle 1,m}}(t)$, as opposed to $1/m^{3/4}$ in $\phi _{{\scriptscriptstyle q,m}}(t)$ for $q\geq 3$, is the reason why we obtain the much stronger decay estimates (1.2) compared to the higher-dimensional case $q\geq 3$.
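As a concrete illustration (ours, not part of the paper's argument), the series defining $\phi _{{\scriptscriptstyle 1,m}}(t)$ can be evaluated numerically by truncating the $k$-sum at a cutoff $K$; the cutoff and all names below are our own choices, and $r_{{\scriptscriptstyle 2}}$ is computed via Jacobi's classical divisor formula $r_{{\scriptscriptstyle 2}}(n)=4(d_{1}(n)-d_{3}(n))$:

```python
import math

def r2(n: int) -> int:
    """Jacobi: r2(n) = 4 * (#divisors of n = 1 mod 4  -  #divisors = 3 mod 4)."""
    if n == 0:
        return 1
    diff, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            for e in {d, n // d}:       # a set, so a square divisor is counted once
                if e % 4 == 1:
                    diff += 1
                elif e % 4 == 3:
                    diff -= 1
        d += 1
    return 4 * diff

def squarefree(m: int) -> bool:
    """mu^2(m) = 1 exactly when m is squarefree."""
    d = 2
    while d * d <= m:
        if m % (d * d) == 0:
            return False
        d += 1
    return True

def phi1(m: int, t: float, K: int = 300) -> float:
    """phi_{1,m}(t) with the k-sum truncated at a (hypothetical) cutoff K."""
    if not squarefree(m):
        return 0.0                      # the factor mu^2(m) kills non-squarefree m
    return -(math.sqrt(2) / math.pi) / m * sum(
        r2(m * k * k) / (k * k) * math.cos(2 * math.pi * k * t)
        for k in range(1, K + 1))
```

Periodicity $\phi _{{\scriptscriptstyle 1,m}}(t+1)=\phi _{{\scriptscriptstyle 1,m}}(t)$ holds exactly for every truncation, since each summand has period $1$; the tail of the $k$-sum is $O(K^{-1+\epsilon })$.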

Remark 4 Let us remark that one may view the density $\mathcal {P}_{1}(\alpha )$ as belonging to a certain family of densities considered by Yuk-Kam Lau and Kai-Man Tsang, which have been shown to admit moment expansions of similar form (see [Reference Lau and Tsang11], theorem 1). We also refer the reader to [Reference Bleher1, Reference Bleher, Cheng, Dyson and Lebowitz2, Reference Wintner16] for analogous results on lattice point statistics in the Euclidean setting.

Notation and conventions. The following notation will occur repeatedly throughout this paper. We use the Vinogradov asymptotic notation $\ll$, as well as the Big-$O$ notation. For positive quantities $X,\,Y>0$ we write $X\asymp Y$ to mean that $X\ll Y\ll X$. In addition, we define

\begin{align*} & \textrm{(1)}\quad r_{{\scriptscriptstyle2}}(m)=\sum_{a^{2}+b^{2}=m}1\quad;\quad\text{where the representation runs over }a,b\in\mathbb{Z}\,.\\ & \textrm{(2)}\quad\mu(m)=\left\{ \begin{array}{ll} 1 & ;\, m=1\\ ({-}1)^{\ell} & ;\, \text{if }m=p_{{\scriptscriptstyle1}}\cdots p_{{\scriptscriptstyle\ell}},\,\text{with } p_{{\scriptscriptstyle1}},\ldots,p_{{\scriptscriptstyle\ell}}\,\text{distinct primes}\\ 0 & ;\,\text{otherwise} \end{array} \right.\\ & \textrm{(3)}\quad\psi(t)=t-[t]-1/2,\quad \text{where }[t]=\text{max}\{m\in\mathbb{Z}: m\leq t\}. \end{align*}
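For experimentation, (1)–(3) admit short brute-force implementations (ours, illustrative only; a divisor-based $r_{{\scriptscriptstyle 2}}$ would be faster for large arguments):

```python
import math

def r2(m: int) -> int:
    """(1): number of pairs (a, b) in Z^2 with a^2 + b^2 = m."""
    b = math.isqrt(m)
    return sum(1 for a in range(-b, b + 1) for c in range(-b, b + 1)
               if a * a + c * c == m)

def mu(m: int) -> int:
    """(2): the Moebius function, by trial division."""
    result, p = 1, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:      # p^2 divides the original m
                return 0
            result = -result
        p += 1
    if m > 1:                   # one remaining prime factor
        result = -result
    return result

def psi(t: float) -> float:
    """(3): the sawtooth function psi(t) = t - [t] - 1/2."""
    return t - math.floor(t) - 0.5
```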

2. A Voronoï-type series expansion for $\mathcal {E}_{1}(x)/x^{2}$

In this section, we develop a Voronoï-type series expansion for $\mathcal {E}_{1}(x)/x^{2}$. The main result we shall set out to prove is the following.

Proposition 2.1 Let $X>0$ be large. Then for $X\leq x\leq 2X$ we have

(2.1)\begin{equation} \mathcal{E}_{1}(x)/x^{2}={-}\frac{\sqrt{2}}{\pi}\sum_{m\,\leq\,X^{2}}\frac{r_{{\scriptscriptstyle 2}}(m)}{m}\cos{\big(2\pi\sqrt{m}x\big)}-2x^{{-}2}T\big(x^{2}\big)+O_{\epsilon}\big(X^{{-}1+\epsilon}\big), \end{equation}

for any $\epsilon >0$, where for $Y>0$, $T(Y)$ is given by

\[ T(Y)=\sum_{0\,\leq\,m\,\leq\,Y}r_{{\scriptscriptstyle2}}(m)\psi\Big(\big(Y^{2}-m^{2}\big)^{1/2}\Big). \]

Remark 5 It is not difficult to show that $\big |x^{-2}T(x^{2})\big |\ll x^{-\theta }$ for some $\theta >0$, and so one may replace this term by its bound, which simplifies (2.1). However, we have chosen to retain the term $x^{-2}T(x^{2})$, as we are going to show later on (see § 3.1) that its average order is much smaller.

The proof of proposition 2.1 will be given in § 2.2. We shall first need to establish several results regarding weighted integer lattice points in Euclidean circles.

2.1 Weighted integer lattice points in Euclidean circles

We begin this subsection by proving lemma 2.2, which will then be combined with lemma 2.3 to prove proposition 2.1.

Lemma 2.2 For $Y>0,$ let

\[ R(Y)=\sum_{0\,\leq\,m\,\leq\,Y}r_{{\scriptscriptstyle2}}(m)\left(1-\frac{m}{Y}\right)^{1/2}-\frac{2\pi}{3}Y. \]

Then

(2.2)\begin{equation} R(Y)={-}\frac{1}{2\pi}\sum_{m\,\leq\,M}\frac{r_{{\scriptscriptstyle 2}}(m)}{m}\cos{\big(2\pi\sqrt{mY}\,\big)}+O_{\epsilon}\big(Y^{{-}1/2+\epsilon}\big), \end{equation}

for any $\epsilon >0,$ where $M$ is any real number which satisfies $M\asymp Y$.

Proof. Let $Y>0$ be large, $T\asymp Y$ a parameter to be specified later, and set $\delta =\frac {1}{\log {Y}}$. Write $\phi$ for the continuous function on $\mathbb {R}_{>0}$ defined by $\phi (t)=(1-t)^{1/2}$ if $t\in (0,\,1]$, and $\phi (t)=0$ otherwise. We have

(2.3)\begin{equation} \begin{aligned} & \sum_{0\,\leq\,m\,\leq\,Y}r_{{\scriptscriptstyle2}}(m)\left(1-\frac{m}{Y}\right)^{1/2}=1+\sum_{m=1}^{\infty}r_{{\scriptscriptstyle2}}(m)\phi\left(\frac{m}{Y}\right)=1+\frac{1}{2\pi i}\int\limits_{1+\delta-i\infty}^{1+\delta+i\infty}Z(s)\check{\phi}(s)Y^{s}{\rm d}s\\ & \quad=1+\frac{1}{2\pi i}\int\limits_{1+\delta-iT}^{1+\delta+iT}Z(s)\check{\phi}(s)Y^{s}{\rm d}s+O\left(YT^{{-}3/2}\sum_{m=1}^{\infty}\frac{r_{{\scriptscriptstyle2}}(m)}{m^{1+\delta}}\left(1+\text{min}\bigg\{T,\frac{1}{|\log{\frac{Y}{m}}|}\bigg\}\right)\right)\\ & \quad=1+\frac{1}{2\pi i}\int\limits_{1+\delta-iY}^{1+\delta+iY}Z(s)\check{\phi}(s)Y^{s}{\rm d}s+O_{\epsilon}\big(Y^{{-}1/2+\epsilon}\big), \end{aligned} \end{equation}

where $\check {\phi }(s)=\frac {\Gamma (s)\Gamma (3/2)}{\Gamma (s+3/2)}$ is the Mellin transform of $\phi$, and $Z(s)=\sum _{m=1}^{\infty }r_{{\scriptscriptstyle 2}}(m)m^{-s}$ with $\Re (s)>1$. In estimating (2.3), we have made use of Stirling's asymptotic formula for the gamma function (see [Reference Iwaniec and Kowalski10], A.4 (5.113))

(2.4)\begin{equation} \Gamma(\sigma+it)=\sqrt{2\pi}(it)^{\sigma-\frac{1}{2}}e^{-\frac{\pi}{2}|t|}\left(\frac{|t|}{e}\right)^{it}\left(1+O\left(\frac{1}{|t|}\right)\right), \end{equation}

valid uniformly for $\alpha <\sigma <\beta$ with any fixed $\alpha,\,\beta \in \mathbb {R}$, provided $|t|$ is large enough in terms of $\alpha$ and $\beta$.

The Zeta function $Z(s)$, initially defined for $\Re (s)>1$, admits an analytic continuation to the entire complex plane, except at $s=1$ where it has a simple pole with residue $\pi$, and satisfies the functional equation [Reference Epstein3]

(2.5)\begin{equation} \pi^{{-}s}\Gamma(s)Z(s)=\pi^{-(1-s)}\Gamma(1-s)Z(1-s)\,. \end{equation}

Now, $s(s-1)Z(s)\check {\phi }(s)Y^{s}$ is regular in the strip $-\delta \leq \Re (s)\leq 1+\delta$, and by Stirling's asymptotic formula (2.4) together with the functional equation (2.5), we obtain the bounds

(2.6)\begin{equation} \begin{aligned} & \big|s(s-1)Z(s)\check{\phi}(s)Y^{s}\big|\ll Y(\log{Y})\big(1+|s|\big)^{1/2};\quad\Re(s)=1+\delta\\ & \big|s(s-1)Z(s)\check{\phi}(s)Y^{s}\big|\ll (\log{Y})\big(1+|s|\big)^{3/2+2\delta};\quad\Re(s)={-}\delta. \end{aligned} \end{equation}

On recalling that $T\asymp Y$, it follows from the Phragmén–Lindelöf principle that

(2.7)\begin{equation} \begin{aligned} \big|Z(s)\check{\phi}(s)Y^{s}\big| & \ll(\log{Y}) T^{\delta-1/2}\left(\frac{Y}{T}\right)^{\sigma}\\ & \ll Y^{{-}1/2}\log{Y}\quad;\qquad-\delta\leq\sigma=\Re(s)\leq1+\delta,\quad|\Im(s)|=T. \end{aligned} \end{equation}

Moving the line of integration to $\Re (s)=-\delta$, and using (2.7), we have by the theorem of residues

(2.8)\begin{equation} \begin{aligned} \frac{1}{2\pi i}\int\limits_{1+\delta-iY}^{1+\delta+iY}Z(s)\check{\phi}(s)Y^{s}{\rm d}s & =\bigg\{\underset{s=1}{\text{Res}}+\underset{s=0}{\text{Res}}\bigg\}Z(s)\check{\phi}(s)Y^{s}+\frac{1}{2\pi i}\int\limits_{-\delta-iY}^{-\delta+iY}Z(s)\check{\phi}(s)Y^{s}{\rm d}s\\ & \qquad + O\big(Y^{{-}1/2}\log{Y}\big)\\ & \quad=\frac{2\pi}{3}Y-1+\frac{1}{2\pi i}\int\limits_{-\delta-iY}^{-\delta+iY}Z(s)\check{\phi}(s)Y^{s}{\rm d}s\\ & \qquad +O\big(Y^{{-}1/2}\log{Y}\big). \end{aligned} \end{equation}

Inserting (2.8) into the RHS of (2.3), and applying the functional equation (2.5), we arrive at

(2.9)\begin{equation} R(Y)=\frac{1}{2\sqrt{\pi}}\sum_{m=1}^{\infty}\frac{r_{{\scriptscriptstyle2}}(m)}{m}{\textrm J}_{m}+O_{\epsilon}\big(Y^{{-}1/2+\epsilon}\big), \end{equation}

where

(2.10)\begin{equation} {\textrm J}_{m}=\frac{1}{2\pi i}\int\limits_{-\delta-iT}^{-\delta+iT}\frac{\Gamma(1-s)\big(\pi^{2}mY\big)^{s}}{\Gamma(s+3/2)}{\rm d}s. \end{equation}

Now, let $M>0$ be a real number which satisfies $M\asymp Y$. We then specify the parameter $T$ by making the choice $T=\pi \sqrt {MY}$. Clearly, we have $T\asymp Y$. We are going to estimate ${\textrm J}_{m}$ separately for $m\leq M$ and $m>M$.

Suppose first that $m>M$. We have

(2.11)\begin{equation} \begin{aligned} \bigg|\frac{\Gamma(1-s)\big(\pi^{2}mY\big)^{s}}{\Gamma(s+3/2)}\bigg| & \ll T^{{-}1/2}\big(mT^{{-}2}\pi^{2}Y\big)^{\sigma}\\ & =T^{{-}1/2}\left(\frac{m}{M}\right)^{\sigma}\\ & \ll Y^{{-}1/2}m^{-\delta};\quad-\frac{1}{4}\leq\sigma=\Re(s)\leq{-}\delta,\quad|\Im(s)|=T. \end{aligned} \end{equation}

Moving the line of integration to $\Re (s)=-\frac {1}{4}$, and using (2.11), we obtain

(2.12)\begin{equation} {\textrm J}_{m}=\frac{1}{2\pi i}\int\limits_{-\frac{1}{4}-iT}^{-\frac{1}{4}+iT}\frac{\Gamma(1-s)\big(\pi^{2}mY\big)^{s}}{\Gamma(s+3/2)}{\rm d}s+O\big(Y^{{-}1/2}m^{-\delta}\big). \end{equation}

By Stirling's asymptotic formula (2.4) it follows that

(2.13)\begin{equation} \big|{\textrm J}_{m}\big|\ll\big(mY\big)^{{-}1/4}\bigg\{1+\bigg|\int\limits_{1}^{T}\exp{\big(if_{{\scriptscriptstyle m}}(t)\big)}{\rm d}t\bigg|\,\bigg\}+Y^{{-}1/2}m^{-\delta}, \end{equation}

where $f_{{\scriptscriptstyle m}}(t)=-2t\log {t}+2t +t\log {(\pi ^{2}mY)}$. Trivial integration and integration by parts give

(2.14)\begin{equation} \bigg|\int\limits_{1}^{T}\exp{\big(if_{{\scriptscriptstyle m}}(t)\big)}{\rm d}t\bigg|\ll\text{min}\bigg\{T,\frac{1}{\log{\frac{m}{M}}}\bigg\}. \end{equation}

Inserting (2.14) into (2.13), and then summing over all $m>M$, we obtain

(2.15)\begin{equation} \begin{aligned} \sum_{m>M}\frac{r_{{\scriptscriptstyle2}}(m)}{m}\big|{\textrm J}_{m}\big| & \ll Y^{{-}1/4}\sum_{m>M}\frac{r_{{\scriptscriptstyle2}}(m)}{m^{5/4}}\left(1+\text{min}\bigg\{T,\frac{1}{\log{\frac{m}{M}}}\bigg\}\right)+Y^{{-}1/2}\log{Y}\\ & \ll_{\epsilon} Y^{{-}1/2+\epsilon}. \end{aligned} \end{equation}

Inserting (2.15) into (2.9), we arrive at

(2.16)\begin{equation} R(Y)=\frac{1}{2\sqrt{\pi}}\sum_{m\leq M}\frac{r_{{\scriptscriptstyle2}}(m)}{m}{\textrm J}_{m}+O_{\epsilon}\big(Y^{{-}1/2+\epsilon}\big). \end{equation}

It remains to estimate ${\textrm J}_{m}$ for $m\leq M$. We have

(2.17)\begin{equation} \begin{aligned} \bigg|\frac{\Gamma(1-s)\big(\pi^{2}mY\big)^{s}}{\Gamma(s+3/2)}\bigg| & \ll T^{{-}1/2}\big(mT^{{-}2}\pi^{2}Y\big)^{\sigma}\\ & =T^{{-}1/2}\left(\frac{m}{M}\right)^{\sigma}\\ & \ll Y^{{-}1/2}\quad;\qquad-\delta\leq\sigma=\Re(s)\leq1-\delta,\quad|\Im(s)|=T. \end{aligned} \end{equation}

Moving the line of integration to $\Re (s)=1-\delta$, and using (2.17), we obtain

(2.18)\begin{equation} {\textrm J}_{m}=\frac{1}{2\pi i}\int\limits_{1-\delta-iT}^{1-\delta+iT}\frac{\Gamma(1-s)\big(\pi^{2}mY\big)^{s}}{\Gamma(s+3/2)}{\rm d}s+O\big(Y^{{-}1/2}\big). \end{equation}

Extending the integral all the way to $\pm \infty$, by Stirling's asymptotic formula (2.4) we have

(2.19)\begin{equation} \begin{aligned} & \frac{1}{2\pi i}\int\limits_{1-\delta-iT}^{1-\delta+iT}\frac{\Gamma(1-s)\big(\pi^{2}mY\big)^{s}}{\Gamma(s+3/2)}{\rm d}s\\ & \quad=\frac{1}{2\pi i}\int\limits_{1-\delta-i\infty}^{1-\delta+i\infty}\frac{\Gamma(1-s)\big(\pi^{2}mY\big)^{s}}{\Gamma(s+3/2)}{\rm d}s\\ & \qquad +O\left(Y^{2}\bigg|\int\limits_{T}^{\infty}t^{{-}5/2+2\delta}\exp{\big(if_{{\scriptscriptstyle m}}(t)\big)}{\rm d}t\bigg|+Y^{{-}1/2}\right), \end{aligned} \end{equation}

where $f_{{\scriptscriptstyle m}}(t)$ is defined as before. Trivial integration and integration by parts give

(2.20)\begin{equation} \bigg|\int\limits_{T}^{\infty}t^{{-}5/2+2\delta}\exp{\big(if_{{\scriptscriptstyle m}}(t)\big)}{\rm d}t\bigg|\ll T^{{-}5/2}\text{min}\bigg\{T,\frac{1}{\log{\frac{M}{m}}}\bigg\}. \end{equation}

Inserting (2.19) into the RHS of (2.18), we obtain by (2.20)

(2.21)\begin{equation} {\textrm J}_{m}=\frac{1}{2\pi i}\int\limits_{1-\delta-i\infty}^{1-\delta+i\infty}\frac{\Gamma(1-s)\big(\pi^{2}mY\big)^{s}}{\Gamma(s+3/2)}{\rm d}s+O\left(Y^{{-}1/2}\left(1+\text{min}\bigg\{T,\frac{1}{\log{\frac{M}{m}}}\bigg\}\right)\right). \end{equation}

Moving the line of integration in (2.21) to $\Re (s)=N+1/2$ with $N\geq 1$ an integer, and then letting $N\to \infty$, we have by the theorem of residues

(2.22)\begin{equation} \frac{1}{2\pi i}\int\limits_{1-\delta-i\infty}^{1-\delta+i\infty}\frac{\Gamma(1-s)\big(\pi^{2}mY\big)^{s}}{\Gamma(s+3/2)}{\rm d}s=\big(\pi\sqrt{mY}\,\big)^{1/2}\mathcal{J}_{{\scriptscriptstyle3/2}}\big(2\pi\sqrt{mY}\,\big), \end{equation}

where for $\nu >0$, the Bessel function $\mathcal {J}_{{\scriptscriptstyle \nu }}$ of order $\nu$ is defined by

\[ \mathcal{J}_{\nu}(y)=\sum_{k=0}^{\infty}\frac{({-}1)^{k}}{k!\Gamma(k+1+\nu)}\left(\frac{y}{2}\right)^{\nu+2k}. \]

We have the following asymptotic estimate for the Bessel function (see [Reference Iwaniec9], B.4 (B.35)). For fixed $\nu >0$,

(2.23)\begin{equation} \mathcal{J}_{\nu}(y)=\left(\frac{2}{\pi y}\right)^{1/2}\cos{\left(y-\frac{1}{2}\nu\pi-\frac{1}{4}\pi\right)}+O\left(\frac{1}{y^{3/2}}\right)\,,\text{ as } y\to\infty. \end{equation}
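For $\nu =3/2$ both the series definition and the asymptotic (2.23) are easy to check numerically; at half-integer order one also has the elementary closed form $\mathcal {J}_{3/2}(y)=\sqrt {2/(\pi y)}\,\big (\sin {y}/y-\cos {y}\big )$, used below as a cross-check. The sketch and its truncation cutoff are ours, not part of the paper:

```python
import math

def J_series(nu: float, y: float, K: int = 60) -> float:
    """Bessel J_nu(y) from the defining power series, truncated at K terms."""
    return sum((-1) ** k / (math.factorial(k) * math.gamma(k + 1 + nu))
               * (y / 2) ** (nu + 2 * k) for k in range(K))

def J32_closed(y: float) -> float:
    """Exact closed form at half-integer order nu = 3/2."""
    return math.sqrt(2 / (math.pi * y)) * (math.sin(y) / y - math.cos(y))

def J32_asym(y: float) -> float:
    """Leading term of (2.23) with nu = 3/2: cos(y - 3pi/4 - pi/4) = -cos(y)."""
    return -math.sqrt(2 / (math.pi * y)) * math.cos(y)
```

The difference `J32_closed(y) - J32_asym(y)` equals $\sqrt {2/(\pi y)}\,\sin (y)/y=O(y^{-3/2})$, matching the error term in (2.23). (The power series suffers heavy cancellation in floating point for large $y$, so the comparison is only reliable for moderate $y$.)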

Inserting (2.22) into the RHS of (2.21), we have by (2.23)

(2.24)\begin{equation} {\textrm J}_{m}={-}\frac{1}{\sqrt{\pi}}\cos{\big(2\pi\sqrt{mY}\,\big)}+O\left(Y^{{-}1/2}\left(1+\text{min}\bigg\{T,\frac{1}{\log{\frac{M}{m}}}\bigg\}\right)\right). \end{equation}

Summing over all $m\leq M$, we obtain

(2.25)\begin{equation} \begin{aligned} & \sum_{m\leq M}\frac{r_{{\scriptscriptstyle2}}(m)}{m}{\textrm J}_{m}\\ & \quad={-}\frac{1}{\sqrt{\pi}}\sum_{m\leq M}\frac{r_{{\scriptscriptstyle2}}(m)}{m}\cos{\big(2\pi\sqrt{mY}\,\big)}\\ & \qquad +O\left(Y^{{-}1/2}\sum_{m\leq M}\frac{r_{{\scriptscriptstyle2}}(m)}{m}\left(1+\text{min}\bigg\{T,\frac{1}{\log{\frac{M}{m}}}\bigg\}\right)\right)\\ & \quad={-}\frac{1}{\sqrt{\pi}}\sum_{m\leq M}\frac{r_{{\scriptscriptstyle2}}(m)}{m}\cos{\big(2\pi\sqrt{mY}\,\big)}+O_{\epsilon}\big(Y^{{-}1/2+\epsilon}\big). \end{aligned} \end{equation}

Finally, inserting (2.25) into the RHS of (2.16), we arrive at

(2.26)\begin{equation} R(Y)={-}\frac{1}{2\pi}\sum_{m\,\leq\,M}\frac{r_{{\scriptscriptstyle 2}}(m)}{m}\cos{\big(2\pi\sqrt{mY}\,\big)}+O_{\epsilon}\big(Y^{{-}1/2+\epsilon}\big). \end{equation}

This concludes the proof.
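Lemma 2.2 lends itself to a direct numerical check (ours, illustrative only; the test point $Y$, the cutoff $M$, and the tolerance are arbitrary choices, since the implied constant in the $O_{\epsilon }$-term is not specified):

```python
import math

def r2(n: int) -> int:
    """r2 via Jacobi's divisor formula, with r2(0) = 1 by convention."""
    if n == 0:
        return 1
    diff, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            for e in {d, n // d}:       # set: square divisors counted once
                if e % 4 == 1:
                    diff += 1
                elif e % 4 == 3:
                    diff -= 1
        d += 1
    return 4 * diff

def R(Y: float) -> float:
    """The smoothed error R(Y) of lemma 2.2, computed directly from its definition."""
    main = sum(r2(m) * math.sqrt(1 - m / Y) for m in range(int(Y) + 1))
    return main - (2 * math.pi / 3) * Y

def cosine_series(Y: float, M: int) -> float:
    """The truncated cosine series on the RHS of (2.2)."""
    return -sum(r2(m) / m * math.cos(2 * math.pi * math.sqrt(m * Y))
                for m in range(1, M + 1)) / (2 * math.pi)

Y = 1000.5                              # a non-integral test point (our choice)
print(R(Y), cosine_series(Y, 1000))     # agree up to O(Y^{-1/2 + eps})
```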

We need an additional result regarding weighted integer lattice points in Euclidean circles. First, we make the following definition.

Definition For $Y>0$ and $n\geq 1$ an integer, define

\[ S_{{\scriptscriptstyle n}}(Y)=\frac{({-}1)^{n}}{n!}f^{(n)}\big(2Y\big)\sum_{0\,\leq\,m\,\leq\,Y}r_{{\scriptscriptstyle 2}}(m)\big(Y-m\big)^{n+1/2}, \]

with $f(y)=\sqrt {y}$, and let

\[ R_{{\scriptscriptstyle n}}(Y)=S_{{\scriptscriptstyle n}}(Y)-c_{{\scriptscriptstyle n}}Y^{2}, \]

where $c_{{\scriptscriptstyle 0}}=\frac {2^{3/2}\pi }{3}$, $c_{{\scriptscriptstyle 1}}=-\frac {\pi }{5\sqrt {2}}$ and $c_{{\scriptscriptstyle n}}=-\pi \frac {\prod _{k=1}^{n-1}(1-\frac {1}{2k})}{\sqrt {2}\,2^{n}n(n+\frac {3}{2})}$ for $n\geq 2$.

We quote the following result (see [Reference Gath5], lemma $2.1$). Here, we shall only need that part of the lemma which concerns the case $n\geq 1$. The case $n=0$ will be treated by lemma 2.2 as we shall see later.

Lemma 2.3 [Reference Gath5]

For $n\geq 1$ an integer, the error term $R_{{\scriptscriptstyle n}}(Y)$ satisfies the bound

(2.27)\begin{equation} \big|R_{{\scriptscriptstyle n}}(Y)\big|\ll2^{{-}n}Y^{1/2}, \end{equation}

where the implied constant is absolute.

Remark 6 The proof of lemma 2.3 goes along the same lines as the proof of lemma 2.2, and is in fact much simpler in this case. Moreover, one can show that the upper-bound estimate (2.27) is sharp for $n=1$. For $n\geq 2$, the estimates are no longer sharp, but they will more than suffice for our needs.

2.2 A decomposition identity for $N_{1}(x)$ and proof of proposition 2.1

We now have everything we need for the proof of proposition 2.1. Before presenting it, we establish the following decomposition identity for $N_{1}(x)$, proved in lemma 2.4 below.

Lemma 2.4 Let $x>0$. We have

(2.28)\begin{equation} N_{1}(x)=2\sum_{n=0}^{\infty}S_{{\scriptscriptstyle n}}\big(x^{2}\big)-2T\big(x^{2}\big), \end{equation}

where $T(x^{2})$ is defined as in proposition 2.1.

Proof. The first step is to execute the lattice point count as follows. By the definition of the Cygan–Korányi norm, we have

(2.29)\begin{equation} \begin{aligned} N_{1}(x) & =\sum_{m^{2}+n^{2}\leq\,x^{4}}r_{{\scriptscriptstyle2}}(m)\\ & =2\sum_{0\,\leq\,m\,\leq\,x^{2}}r_{{\scriptscriptstyle 2}}(m)\big(x^{4}-m^{2}\big)^{1/2}-2\sum_{0\,\leq\,m\,\leq\,x^{2}}r_{{\scriptscriptstyle2}}(m)\psi\Big(\big(x^{4}-m^{2}\big)^{1/2}\Big)\\ & =2\sum_{0\,\leq\,m\,\leq\,x^{2}}r_{{\scriptscriptstyle 2}}(m)\big(x^{4}-m^{2}\big)^{1/2}-2T\big(x^{2}\big)\,. \end{aligned} \end{equation}

Next, we decompose the first sum using the following procedure. For $0\leq m\leq x^{2}$ an integer, we use Taylor expansion to write

\[ \big(x^{4}-m^{2}\big)^{1/2}=\sum_{n=0}^{\infty}\frac{({-}1)^{n}}{n!}f^{(n)}\big(2x^{2}\big)\big(x^{2}-m\big)^{n+1/2}, \]

with $f(y)=\sqrt {y}$. Multiplying the above identity by $r_{{\scriptscriptstyle 2}}(m)$, and then summing over the range $0\leq m\leq x^{2}$, we have

(2.30)\begin{equation} \begin{aligned} \sum_{0\,\leq\,m\,\leq\,x^{2}}r_{{\scriptscriptstyle 2}}(m)\big(x^{4}-m^{2}\big)^{1/2} & =\sum_{n=0}^{\infty}\frac{({-}1)^{n}}{n!}f^{(n)}\big(2x^{2}\big)\sum_{0\,\leq\,m\,\leq\,x^{2}}r_{{\scriptscriptstyle 2}}(m)\big(x^{2}-m\big)^{n+1/2}\\ & =\sum_{n=0}^{\infty}S_{{\scriptscriptstyle n}}\big(x^{2}\big)\,. \end{aligned} \end{equation}

Inserting (2.30) into the RHS of (2.29), we obtain

(2.31)\begin{equation} N_{1}(x)=2\sum_{n=0}^{\infty}S_{{\scriptscriptstyle n}}\big(x^{2}\big)-2T\big(x^{2}\big). \end{equation}

This concludes the proof.
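The counting step (2.29) lends itself to a quick numerical sanity check. The following sketch (illustrative only, not part of the argument) compares a brute-force enumeration of the integer points in a three-dimensional Cygan–Korányi ball with the right-hand side of (2.29); here $\psi(t)=t-\lfloor t\rfloor-\tfrac{1}{2}$, so that for $\sqrt{A}\notin\mathbb{Z}$ the count of integers $n$ with $n^{2}\leq A$ is $2\sqrt{A}-2\psi(\sqrt{A})$.

```python
import math

def r2(m):
    # r_2(m): number of representations m = a^2 + b^2 with (a, b) in Z^2
    if m == 0:
        return 1
    B = int(math.isqrt(m))
    return sum(1 for a in range(-B, B + 1) for b in range(-B, B + 1)
               if a * a + b * b == m)

def psi(t):
    # sawtooth function psi(t) = t - floor(t) - 1/2
    return t - math.floor(t) - 0.5

x = 2.5             # chosen so that x^4 - m^2 is never a perfect square
X4 = x ** 4         # = 39.0625

# brute-force N_1(x): z in Z^3 with ((z1^2+z2^2)^2 + z3^2)^(1/4) <= x
B1 = int(x) + 1
B3 = int(x * x) + 1
brute = sum(1 for z1 in range(-B1, B1 + 1) for z2 in range(-B1, B1 + 1)
            for z3 in range(-B3, B3 + 1)
            if (z1 * z1 + z2 * z2) ** 2 + z3 * z3 <= X4)

# right-hand side of (2.29): 2 sum r2(m) sqrt(x^4 - m^2) - 2 T(x^2)
M = int(x * x)      # m ranges over 0 <= m <= x^2
formula = sum(2 * r2(m) * math.sqrt(X4 - m * m)
              - 2 * r2(m) * psi(math.sqrt(X4 - m * m))
              for m in range(M + 1))

print(brute, formula)  # the two counts coincide
```

The two values agree exactly (up to floating-point rounding), as they must, since for each $m$ the inner count over $z_{3}$ is $2\lfloor\sqrt{x^{4}-m^{2}}\rfloor+1$.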

We now turn to the proof of proposition 2.1.

Proof of proposition 2.1.

Let $X>0$ be large, and suppose that $X< x<2X$. By the upper-bound estimate (2.27) in lemma 2.3, it follows from lemma 2.4 that

(2.32)\begin{equation} \begin{aligned} N_{1}(x) & =2\sum_{n=0}^{\infty}S_{{\scriptscriptstyle n}}\big(x^{2}\big)-2T\big(x^{2}\big)\\ & =2\sum_{n=0}^{\infty}\Big\{c_{{\scriptscriptstyle n}}x^{4}+R_{{\scriptscriptstyle n}}\big(x^{2}\big)\Big\}-2T\big(x^{2}\big)\\ & =2cx^{4}+2R_{{\scriptscriptstyle0}}\big(x^{2}\big)+2\sum_{n=1}^{\infty}R_{{\scriptscriptstyle n}}\big(x^{2}\big)-2T\big(x^{2}\big)\\ & =2cx^{4}+2R_{{\scriptscriptstyle0}}\big(x^{2}\big)-2T\big(x^{2}\big)+O(x), \end{aligned} \end{equation}

with $c=\sum _{n=0}^{\infty }c_{{\scriptscriptstyle n}}$, where the infinite sum clearly converges absolutely. By the definition of $R_{{\scriptscriptstyle 0}}(Y)$, it is easily verified that

(2.33)\begin{equation} \begin{aligned} R_{{\scriptscriptstyle0}}(Y) & =2^{1/2}Y\Bigg\{\,\sum_{0\,\leq\,m\,\leq\,Y}r_{{\scriptscriptstyle 2}}(m)\left(1-\frac{m}{Y}\right)^{1/2}-\frac{2\pi}{3}Y\Bigg\}\\ & =2^{1/2}YR(Y). \end{aligned} \end{equation}

Inserting (2.33) into the RHS of (2.32), we find that

(2.34)\begin{equation} N_{1}(x)=2cx^{4}+2^{3/2}x^{2}R\big(x^{2}\big)-2T\big(x^{2}\big)+O(x). \end{equation}

We claim that $2c=\textit {vol}(\mathcal {B})$. To see this, first note that by (2.2) in lemma 2.2 we have the bound $|R(x^{2})|\ll \log {x}$, and since $|T(x^{2})|\ll x^{2}$, we obtain by (2.34) that $N_{1}(x)=2cx^{4}+O(x^{2}\log {x})$. Since $N_{1}(x)\sim \textit {vol}(\mathcal {B})x^{4}$ as $x\to \infty$, we conclude that $2c=\textit {vol}(\mathcal {B})$.

Subtracting $\textit {vol}(\mathcal {B})x^{4}$ from both sides of (2.34), and then dividing throughout by $x^{2}$, we obtain, upon choosing $M=X^{2}$ in (2.2) of lemma 2.2,

(2.35)\begin{equation} \mathcal{E}_{1}(x)/x^{2}={-}\frac{\sqrt{2}}{\pi}\sum_{m\,\leq\,X^{2}}\frac{r_{{\scriptscriptstyle 2}}(m)}{m}\cos{\big(2\pi\sqrt{m}x\big)}-2x^{{-}2}T\big(x^{2}\big)+O_{\epsilon}\big(X^{{-}1+\epsilon}\big). \end{equation}

This concludes the proof of proposition 2.1.
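For orientation, proposition 2.1 can be probed numerically at a moderate radius. The sketch below takes for granted that $\textit{vol}(\mathcal{B})=\pi^{2}/2$ for the three-dimensional Cygan–Korányi unit ball (a cylindrical-coordinates computation, assumed here solely for the purposes of this check) and verifies that the normalized error $\mathcal{E}_{1}(x)/x^{2}$ is of bounded size.

```python
import math

x = 15.5                       # a moderate, non-degenerate radius
X2, X4 = x * x, x ** 4

# brute-force N_1(x): z in Z^3 with ((z1^2+z2^2)^2 + z3^2)^(1/4) <= x
B1 = int(x) + 1
B3 = int(X2) + 1
N1 = sum(1 for z1 in range(-B1, B1 + 1) for z2 in range(-B1, B1 + 1)
         for z3 in range(-B3, B3 + 1)
         if (z1 * z1 + z2 * z2) ** 2 + z3 * z3 <= X4)

# vol(B) = pi^2/2 for the 3-dimensional Cygan-Koranyi unit ball
# (cylindrical-coordinates computation; an assumption used only for this check)
vol = math.pi ** 2 / 2
err = (N1 - vol * X4) / X2     # the normalized error term E_1(x)/x^2
print(N1, err)                 # err is of bounded size, in line with (2.34)
```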

3. Almost periodicity

In this section, we show that the normalized error term $\mathcal {E}_{1}(x)/x^{2}$ can be approximated, in a suitable mean-value sense, by certain oscillating series. From this point onward, we shall use the notation $\widehat {\mathcal {E}}_{1}(x)=\mathcal {E}_{1}(x)/x^{2}$. The main result we shall set out to prove is the following.

Proposition 3.1 We have

(3.1)\begin{equation} \lim_{M\to\infty}\limsup_{X\to\infty}\frac{1}{X}\int\limits_{X}^{2X}\Big|\widehat{\mathcal{E}}_{1}(x)-\sum_{m\,\leq\,M}\phi_{{\scriptscriptstyle 1,m}}\big(\sqrt{m}x\big)\Big|{\rm d}x=0, \end{equation}

where $\phi _{{\scriptscriptstyle 1,1}}(t),\, \phi _{{\scriptscriptstyle 1,2}}(t),\,\ldots$ are real-valued continuous functions, periodic of period $1$, given by

\[ \phi_{{\scriptscriptstyle1,m}}(t)={-}\frac{\sqrt{2}}{\pi}\frac{\mu^{2}(m)}{m}\sum_{k=1}^{\infty}\frac{r_{{\scriptscriptstyle2}}\big(mk^{2}\big)}{k^{2}}\cos{(2\pi kt)}. \]

The proof of proposition 3.1 will be given in § 3.2. Our first task will be to deal with the remainder term $x^{-2}T(x^{2})$ appearing in the approximate expression (2.1).

3.1 Bounding the remainder term

This subsection is devoted to proving the following lemma.

Lemma 3.2 We have

(3.2)\begin{equation} \frac{1}{X}\int\limits_{X}^{2X}T^{2}\big(x^{2}\big){\rm d}x\ll X^{2}\big(\log{X}\big)^{4}. \end{equation}

Before commencing with the proof, we need the following result on trigonometric approximation for the $\psi$ function (see [Reference Vaaler13]).

Vaaler's Lemma (VL) ([Reference Vaaler13])

Let $H\geq 1$. Then there exist trigonometric polynomials

\begin{align*} & {\rm (1)}\qquad\psi_{{\scriptscriptstyle H}}(\omega)=\sum_{1\,\leq\,h\,\leq\,H}\nu(h)\sin{(2\pi h\omega)}\\ & {\rm (2)}\qquad\psi^{{\ast}}_{{\scriptscriptstyle H}}(\omega)=\sum_{1\,\leq\,h\,\leq\,H}\nu^{{\ast}}(h)\cos{(2\pi h\omega)}, \end{align*}

with real coefficients satisfying $|\nu (h)|,\,|\nu ^{\ast }(h)|\ll 1/h$, such that

\[ \big|\psi(\omega)-\psi_{{\scriptscriptstyle H}}(\omega)\big|\leq\psi^{{\ast}}_{{\scriptscriptstyle H}}(\omega)+\frac{1}{2[H]+2}. \]
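Vaaler's coefficients $\nu(h)$, $\nu^{\ast}(h)$ are given by an explicit but somewhat involved construction, which we do not reproduce here. For a rough feel of the approximation, the sketch below uses the classical truncated Fourier series of $\psi$ in place of Vaaler's polynomial; its coefficients have the same shape $|\nu(h)|\ll 1/h$, though the error control is weaker than in Vaaler's construction.

```python
import math

def psi(w):
    # sawtooth function psi(w) = w - floor(w) - 1/2
    return w - math.floor(w) - 0.5

def psi_H(w, H):
    # classical truncated Fourier series of psi -- a crude stand-in,
    # NOT Vaaler's optimized polynomial, but with coefficients of the
    # same shape |nu(h)| << 1/h
    return -sum(math.sin(2 * math.pi * h * w) / (math.pi * h)
                for h in range(1, H + 1))

H, w = 1000, 0.3
err = abs(psi(w) - psi_H(w, H))
print(err)  # small away from the jump points of psi
```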

We now turn to the proof of lemma 3.2.

Proof of lemma 3.2.

Let $X>0$ be large. By Vaaler's Lemma with $H=X$, we have for $x$ in the range $X< x<2X$

(3.3)\begin{equation} \big|T\big(x^{2}\big)\big|\ll\sum_{1\,\leq\,h\,\leq\,X}\frac{1}{h}\bigg|\sum_{0\,\leq\,m\,\leq\,x^{2}}r_{{\scriptscriptstyle2}}(m)\exp{\Big(2\pi ih\big(x^{4}-m^{2}\big)^{1/2}\Big)}\bigg|+X. \end{equation}

Applying the Cauchy–Schwarz inequality, we obtain

(3.4)\begin{equation} T^{2}\big(x^{2}\big)\ll(\log{X})\sum_{1\,\leq\,h\,\leq\,X}\frac{1}{h}\big|S_{h}\big(x^{2}\big)\big|^{2}+X^{2}, \end{equation}

where for $Y>0$ and $h\geq 1$ an integer, $S_{h}(Y)$ is given by

\[ S_{h}(Y)=\sum_{0\,\leq\,m\,\leq\,Y}r_{{\scriptscriptstyle2}}(m)\exp{\Big(2\pi ih\big(Y^{2}-m^{2}\big)^{1/2}\Big)}. \]

Fix an integer $1\leq h\leq X$. Making a change of variable, we have

(3.5)\begin{equation} \begin{aligned} \frac{1}{X}\int\limits_{X}^{2X}\big|S_{h}\big(x^{2}\big)\big|^{2}{\rm d}x & \ll\frac{1}{X^{4}}\int\limits_{X^{4}}^{16X^{4}}\big|S_{h}\big(\sqrt{x}\,\big)\big|^{2}{\rm d}x\\ & =\sum_{0\,\leq\,m,\,n\,\leq\,4X^{2}}r_{{\scriptscriptstyle2}}(m)r_{{\scriptscriptstyle2}}(n){\textrm I}_{h}(m,n), \end{aligned} \end{equation}

where ${\textrm I}_{h}(m,\,n)$ is given by

(3.6)\begin{equation} {\textrm I}_{h}(m,n)=\frac{1}{X^{4}}\int\limits_{\text{max}\{m^{2},\,n^{2},\,X^{4}\}}^{16X^{4}}\exp{\Big(2\pi ih\Big\{\big(x-m^{2}\big)^{1/2}-\big(x-n^{2}\big)^{1/2}\Big\}\Big)}{\rm d}x. \end{equation}

We have the estimate

(3.7)\begin{equation} \big|{\textrm I}_{h}(m,n)\big|\ll\left\{ \begin{array}{@{}ll} 1 & ;m=n \\ \dfrac{X^{2}}{h|m^{2}-n^{2}|} & ;m\neq n. \end{array} \right. \end{equation}

Inserting (3.7) into the RHS of (3.5), we obtain

(3.8)\begin{equation} \begin{aligned} \frac{1}{X}\int\limits_{X}^{2X}\big|S_{h}\big(x^{2}\big)\big|^{2}{\rm d}x & \ll\sum_{0\,\leq\,m\,\leq\,4X^{2}}r^{2}_{{\scriptscriptstyle2}}(m)+X^{2}h^{{-}1}\sum_{0\,\leq\,m\neq n\,\leq\,4X^{2}}\frac{r_{{\scriptscriptstyle2}}(m)r_{{\scriptscriptstyle2}}(n)}{\big|m^{2}-n^{2}\big|}\\ & \ll X^{2}\log{X}+X^{2}h^{{-}1}\big(\log{X}\big)^{3}\,. \end{aligned} \end{equation}

Integrating both sides of (3.4) and using (3.8), we find that

(3.9)\begin{equation} \frac{1}{X}\int\limits_{X}^{2X}T^{2}\big(x^{2}\big){\rm d}x\ll X^{2}\big(\log{X}\big)^{4}. \end{equation}

This concludes the proof.

We end this subsection by quoting the following result (see [Reference Montgomery and Vaughan12]), which will be needed in subsequent sections of the paper.

Hilbert's inequality (HI) ([Reference Montgomery and Vaughan12])

Let $(\alpha (\lambda ))_{\lambda \in \Lambda }$ and $(\beta (\lambda ))_{\lambda \in \Lambda }$ be two sequences of complex numbers indexed by a finite set $\Lambda$ of real numbers. Then

\[ \bigg|\,\underset{\lambda\neq\nu}{\sum_{\lambda,\nu\in\Lambda}}\frac{\alpha(\lambda)\overline{\beta(\nu)}}{\lambda-\nu}\bigg|\ll\left(\sum_{\lambda\in\Lambda}|\alpha(\lambda)|^{2}\delta_{\lambda}^{{-}1}\right)^{1/2}\left(\sum_{\lambda\in\Lambda}|\beta(\lambda)|^{2}\delta_{\lambda}^{{-}1}\right)^{1/2} \]

where $\delta _{\lambda }=\underset {\nu \neq \lambda }{\underset {\nu \in \Lambda }{\textit {min}}}\,|\lambda -\nu |$, and the implied constant is absolute.
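As a toy illustration of HI, here is a sketch with hypothetical data (the set $\Lambda$ and the coefficient sequences below are made up for the example); the check uses the constant $3\pi/2$ from Montgomery–Vaughan's sharp form of the inequality, though for our purposes only the existence of some absolute constant matters.

```python
import math

# hypothetical data: a well-spaced set Lambda and two real coefficient sequences
lam = [1.0, 2.5, 4.0, 7.0, 11.0]
alpha = [1.0, -2.0, 3.0, 0.5, 1.0]
beta = [2.0, 1.0, -1.0, 1.0, 3.0]

# delta_lambda = distance from lambda to the nearest other point of Lambda
delta = [min(abs(l - m) for m in lam if m != l) for l in lam]

n = len(lam)
lhs = abs(sum(alpha[i] * beta[j] / (lam[i] - lam[j])
              for i in range(n) for j in range(n) if i != j))
rhs = (math.sqrt(sum(a * a / d for a, d in zip(alpha, delta)))
       * math.sqrt(sum(b * b / d for b, d in zip(beta, delta))))

print(lhs, rhs)  # lhs is dominated by an absolute constant times rhs
```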

3.2 Proof of proposition 3.1

We now turn to the proof of proposition 3.1.

Proof of proposition 3.1.

Fix an integer $M\geq 1$, and let $X>M^{1/2}$ be large. In the range $X< x<2X$, we have by proposition 2.1

(3.10)\begin{equation} \begin{aligned} \Big|\widehat{\mathcal{E}}_{1}(x) & -\sum_{m\,\leq\,M}\phi_{{\scriptscriptstyle 1,m}}\big(\sqrt{m}x\big)\Big|\\ & \ll_{\epsilon}\big|W_{{\scriptscriptstyle M,X^{2}}}(x)\big|+x^{{-}2}\big|T\big(x^{2}\big)\big|+\sum_{m\,\leq\,M}\frac{1}{m}\sum_{k>X/\sqrt{m}}\frac{r_{{\scriptscriptstyle2}}\big(mk^{2}\big)}{k^{2}}+X^{{-}1+\epsilon}\\ & \ll_{\epsilon}\big|W_{{\scriptscriptstyle M,X^{2}}}(x)\big|+x^{{-}2}\big|T\big(x^{2}\big)\big|+M^{1/2}X^{{-}1+\epsilon}, \end{aligned} \end{equation}

where $W_{{\scriptscriptstyle M,X^{2}}}(x)$ is given by

(3.11)\begin{equation} W_{{\scriptscriptstyle M,X^{2}}}(x)=\sum_{m\,\leq\,X^{2}}\mathbf{a}(m)\exp{\big(2\pi i\sqrt{m}x\big)}, \end{equation}

and

\[ \mathbf{a}(m)=\left\{ \begin{array}{@{}ll} r_{{\scriptscriptstyle2}}(m)/m & ;m=\ell k^{2}\text{ with } \ell>M\text{ square-free} \\ 0 & ;\text{otherwise}. \end{array} \right. \]

Integrating both sides of (3.10), we have by lemma 3.2 and the Cauchy–Schwarz inequality

(3.12)\begin{equation} \begin{aligned} \frac{1}{X}\int\limits_{X}^{2X}\Big|\widehat{\mathcal{E}}_{1}(x)-\sum_{m\,\leq\,M}\phi_{{\scriptscriptstyle 1,m}}\big(\sqrt{m}x\big)\Big|{\rm d}x & \ll_{\epsilon}\frac{1}{X}\int\limits_{X}^{2X}\big|W_{{\scriptscriptstyle M,X^{2}}}(x)\big|{\rm d}x\\ & \quad +X^{{-}1}\big(\log{X}\big)^{2}+M^{1/2}X^{{-}1+\epsilon}\\ & \ll_{\epsilon}\frac{1}{X}\int\limits_{X}^{2X}\big|W_{{\scriptscriptstyle M,X^{2}}}(x)\big|{\rm d}x+M^{1/2}X^{{-}1+\epsilon}. \end{aligned} \end{equation}

It remains to estimate the first term appearing on the RHS of (3.12). We have

(3.13)\begin{equation} \begin{aligned} \frac{1}{X}\int\limits_{X}^{2X}\big|W_{{\scriptscriptstyle M,X^{2}}} & (x)\big|^{2}{\rm d}x =\sum_{m\,\leq\,X^{2}}\mathbf{a}^{2}(m)\\ & +\frac{1}{2\pi iX}\bigg\{\sum_{m\neq n\,\leq\,X^{2}}\frac{\mathbf{a}_{{\scriptscriptstyle2X}}(m)\overline{\mathbf{a}_{{\scriptscriptstyle2X}}(n)}}{\sqrt{m}-\sqrt{n}}-\sum_{m\neq n\,\leq\,X^{2}}\frac{\mathbf{a}_{{\scriptscriptstyle X}}(m)\overline{\mathbf{a}_{{\scriptscriptstyle X}}(n)}}{\sqrt{m}-\sqrt{n}}\bigg\}, \end{aligned} \end{equation}

where for $\gamma >0$ and $m\geq 1$ an integer, we define $\mathbf {a}_{{\scriptscriptstyle \gamma }}(m)=\mathbf {a}(m)\exp {(2\pi i\sqrt {m}\gamma )}$. We first estimate the off-diagonal terms. By Hilbert's inequality, we have for $\gamma =X,\, 2X$

(3.14)\begin{equation} \begin{aligned} \bigg|\,\,\sum_{m\neq n\,\leq\,X^{2}}\frac{\mathbf{a}_{{\scriptscriptstyle\gamma}}(m)\overline{\mathbf{a}_{{\scriptscriptstyle\gamma}}(n)}}{\sqrt{m}-\sqrt{n}}\,\,\bigg|\ll\sum_{m\,\leq\,X^{2}}\mathbf{a}^{2}(m)m^{1/2} & \ll\sum_{m > M}r^{2}_{{\scriptscriptstyle2}}(m)m^{{-}3/2}\\ & \ll M^{{-}1/2}\log{2M}. \end{aligned} \end{equation}

Inserting (3.14) into the RHS of (3.13), and recalling that $X>M^{1/2}$, we find that

(3.15)\begin{equation} \begin{aligned} \frac{1}{X}\int\limits_{X}^{2X}\big|W_{{\scriptscriptstyle M,X^{2}}}(x)\big|^{2}{\rm d}x & \ll\sum_{m > M}r^{2}_{{\scriptscriptstyle2}}(m)m^{{-}2}+M^{{-}1}\log{2M}\\ & \ll M^{{-}1}\log{2M}. \end{aligned} \end{equation}

Inserting (3.15) into the RHS of (3.12), applying the Cauchy–Schwarz inequality and then taking the $\limsup$, we arrive at

(3.16)\begin{equation} \limsup_{X\to\infty}\frac{1}{X}\int\limits_{X}^{2X}\Big|\widehat{\mathcal{E}}_{1}(x)-\sum_{m\,\leq\,M}\phi_{{\scriptscriptstyle 1,m}}\big(\sqrt{m}x\big)\Big|{\rm d}x\ll M^{{-}1/2}\big(\log{2M}\big)^{1/2}. \end{equation}

Finally, letting $M\to \infty$ in (3.16) concludes the proof.

4. The probability density

Having proved proposition 3.1 in the last section, in this section we turn to the construction of the probability density $\mathcal {P}_{1}(\alpha )$. We begin by making the following definition.

Definition For $\alpha \in \mathbb {C}$ and $M\geq 1$ an integer, define

\[ \mathfrak{M}_{X}(\alpha;M)=\frac{1}{X}\int\limits_{X}^{2X}\exp{\left(2\pi i\alpha\sum_{m\,\leq\,M}\phi_{{\scriptscriptstyle 1,m}}\big(\sqrt{m}x\big)\right)}{\rm d}x, \]

and let

\[ \mathfrak{M}(\alpha;M)=\prod_{m\,\leq\,M}\Phi_{{\scriptscriptstyle1,m}}(\alpha)\quad;\quad\Phi_{{\scriptscriptstyle1,m}}(\alpha)=\int\limits_{0}^{1}\exp{\big(2\pi i\alpha\phi_{{\scriptscriptstyle 1,m}}(t)\big)}{\rm d}t\,. \]

We quote the following result (see [Reference Heath-Brown8], lemma 2.3) which will be needed in subsequent sections of the paper.

Lemma 4.1 [Reference Heath-Brown8]

Suppose that $\mathbf {b}_{{\scriptscriptstyle 1}}(t),\, \mathbf {b}_{{\scriptscriptstyle 2}}(t),\,\ldots,\, \mathbf {b}_{{\scriptscriptstyle k}}(t)$ are continuous functions from $\mathbb {R}$ to $\mathbb {C}$, periodic of period $1$, and that $\gamma _{{\scriptscriptstyle 1}},\, \gamma _{{\scriptscriptstyle 2}},\,\ldots,\, \gamma _{{\scriptscriptstyle k}}$ are positive real numbers which are linearly independent over $\mathbb {Q}$. Then

(4.1)\begin{equation} \lim_{X\to\infty}\frac{1}{X}\int\limits_{X}^{2X}\prod_{i\,\leq\,k}\mathbf{b}_{{\scriptscriptstyle i}}(\gamma_{{\scriptscriptstyle i}}x){\rm d}x=\prod_{i\,\leq\,k}\int\limits_{0}^{1}\mathbf{b}_{{\scriptscriptstyle i}}(t){\rm d}t. \end{equation}
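For instance, taking $k=2$, $\mathbf{b}_{1}(t)=\mathbf{b}_{2}(t)=\cos(2\pi t)$ and the frequencies $\gamma_{1}=\sqrt{2}$, $\gamma_{2}=\sqrt{3}$, which are linearly independent over $\mathbb{Q}$, both sides of (4.1) vanish; the closed-form computation below (a sketch, for illustration only) confirms the decay of the left-hand side.

```python
import math

g1, g2 = math.sqrt(2.0), math.sqrt(3.0)  # linearly independent over Q

def mean_product(X):
    # (1/X) int_X^{2X} cos(2 pi g1 x) cos(2 pi g2 x) dx, in closed form,
    # via cos A cos B = (cos(A - B) + cos(A + B)) / 2
    def term(g):
        return (math.sin(4 * math.pi * g * X)
                - math.sin(2 * math.pi * g * X)) / (2 * math.pi * g)
    return (term(g1 - g2) + term(g1 + g2)) / (2 * X)

val = mean_product(1e4)
print(val)  # tends to 0, the product of the two mean values, as X grows
```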

We now state the main result of this section.

Proposition 4.2 (I) We have

(4.2)\begin{equation} \lim_{X\to\infty}\mathfrak{M}_{X}(\alpha;M)=\mathfrak{M}(\alpha;M)\,. \end{equation}

Moreover, if we let

\[ \Phi_{{\scriptscriptstyle1}}(\alpha)=\prod_{m=1}^{\infty}\Phi_{{\scriptscriptstyle1,m}}(\alpha), \]

then $\Phi _{{\scriptscriptstyle 1}}(\alpha )$ defines an entire function of $\alpha$, where the infinite product converges absolutely and uniformly on any compact subset of the plane. For large $|\alpha |$, $\alpha =\sigma +i\tau$, $\Phi _{{\scriptscriptstyle 1}}(\alpha )$ satisfies the bound

(4.3)\begin{equation} \big|\Phi_{{\scriptscriptstyle1}}(\alpha)\big|\leq\exp{\left(-\frac{\pi^{2}}{2}\big(C_{{\scriptscriptstyle1}}^{{-}1}\sigma^{2}-C_{{\scriptscriptstyle1}}\tau^{2}\big)|\alpha|^{{-}1/\theta(|\alpha|)}\log{|\alpha|}+C_{{\scriptscriptstyle1}}|\tau|\log{|\alpha|}\right)}, \end{equation}

where $C_{{\scriptscriptstyle 1}}>1$ is an absolute constant, and $\theta (x)=1-c/\log \log {x}$ with $c>0$ an absolute constant.

(II) For $x\in \mathbb {R}$, let $\mathcal {P}_{1}(x)=\widehat {\Phi }_{{\scriptscriptstyle 1}}(x)$ be the Fourier transform of $\Phi _{{\scriptscriptstyle 1}}$. Then $\mathcal {P}_{1}(x)$ defines a probability density which satisfies for any non-negative integer $j\geq 0$ and any $x\in \mathbb {R}$, $|x|$ sufficiently large in terms of $j$, the bound

(4.4)\begin{equation} \big|\mathcal{P}^{(j)}_{1}(x)\big|\leq\exp{\left(-\frac{\pi}{2}|x|\exp{\big(\rho|x|\big)}\right)}\quad;\quad\rho=\frac{\pi}{5C_{{\scriptscriptstyle1}}}. \end{equation}

Proof. We begin with the proof of $\textit {(I)}$. Let $\alpha \in \mathbb {C}$ and $M\geq 1$ an integer. We are going to apply lemma 4.1 with $\mathbf {b}_{{\scriptscriptstyle m}}(t)=\exp {(2\pi i\alpha \phi _{{\scriptscriptstyle 1,m}}(t))}$, and frequencies $\gamma _{{\scriptscriptstyle m}}=\sqrt {m}$. The elements of the set $\mathscr {B}=\{\sqrt {m}:|\mu (m)|=1\}$ are linearly independent over $\mathbb {Q}$, and since $\phi _{{\scriptscriptstyle 1,m}}(t)\equiv 0$ whenever $\sqrt {m}\notin \mathscr {B}$ (i.e. whenever $m$ is not square-free), we see that the conditions of lemma 4.1 are satisfied, and thus (4.2) holds.

Now, let us show that $\Phi _{{\scriptscriptstyle 1}}(\alpha )$ defines an entire function of $\alpha$. First, we note the following. By the definition of $\phi _{{\scriptscriptstyle 1,m}}(t)$, we have

(4.5)\begin{equation} \begin{aligned} \int\limits_{0}^{1}\phi_{{\scriptscriptstyle 1,m}}(t){\rm d}t & ={-}\frac{\sqrt{2}}{\pi}\frac{\mu^{2}(m)}{m}\sum_{k=1}^{\infty}\frac{r_{{\scriptscriptstyle2}}\big(mk^{2}\big)}{k^{2}}\int\limits_{0}^{1}\cos{(2\pi kt)}{\rm d}t\\ & =0, \end{aligned} \end{equation}

and we also have the uniform bound

(4.6)\begin{equation} \begin{aligned} \big|\phi_{{\scriptscriptstyle 1,m}}(t)\big| & \leq\frac{\sqrt{2}}{\pi}\frac{\mu^{2}(m)}{m}\sum_{k=1}^{\infty}\frac{r_{{\scriptscriptstyle2}}\big(mk^{2}\big)}{k^{2}}\\ & \leq2^{5/2}\pi^{{-}1}\mu^{2}(m)\frac{r_{{\scriptscriptstyle2}}(m)}{m}\sum_{k=1}^{\infty}\frac{r_{{\scriptscriptstyle2}}\big(k^{2}\big)}{k^{2}}\leq\mathfrak{a}\,\mu^{2}(m)\frac{r_{{\scriptscriptstyle2}}(m)}{m}, \end{aligned} \end{equation}

for some absolute constant $\mathfrak {a}>0$. By (4.5) and (4.6), we obtain

(4.7)\begin{equation} \begin{aligned} \Phi_{{\scriptscriptstyle1,m}}(\alpha) & =\int\limits_{0}^{1}\exp{\big(2\pi i\alpha\phi_{{\scriptscriptstyle 1,m}}(t)\big)}{\rm d}t\\ & =1+O\left(\bigg(|\alpha|\frac{r_{{\scriptscriptstyle2}}(m)}{m}\bigg)^{2}\right), \end{aligned} \end{equation}

whenever, say, $m\geq |\alpha |^{2}$. Since $\sum _{m=1}^{\infty }r^{2}_{{\scriptscriptstyle 2}}(m)/m^{2}<\infty$, it follows from (4.7) that the infinite product $\prod _{m=1}^{\infty }\Phi _{{\scriptscriptstyle 1,m}}(\alpha )$ converges absolutely and uniformly on any compact subset of the plane, and so $\Phi _{{\scriptscriptstyle 1}}(\alpha )$ defines an entire function of $\alpha$.

We now estimate $\Phi _{{\scriptscriptstyle 1}}(\alpha )$ for large $|\alpha |$, $\alpha =\sigma +i\tau$. Let $c>0$ be an absolute constant such that (see [Reference Wigert15])

(4.8)\begin{equation} r_{{\scriptscriptstyle2}}(m)\leq m^{c/\log\log{m}}\quad:\quad m>2, \end{equation}

and for real $x>\textit {e}$ we write $\theta (x)=1-c/\log \log {x}$. Let $\epsilon >0$ be a small absolute constant which will be specified later, and set

\[ \ell=\ell(\alpha)=\Big[\Big(\epsilon^{{-}1}|\alpha|\Big)^{1/\theta(|\alpha|)}\Big]+1. \]

We are going to estimate the infinite product $\prod _{m=1}^{\infty }\Phi _{{\scriptscriptstyle 1,m}}(\alpha )$ separately for $m<\ell$ and $m\geq \ell$. In what follows, we assume that $|\alpha |$ is sufficiently large in terms of $c$ and $\epsilon$. The product over $m<\ell$ is estimated trivially by using the upper bound (4.6), leading to

(4.9)\begin{equation} \begin{aligned} \Big|\prod_{m<\ell}\Phi_{{\scriptscriptstyle1,m}}(\alpha)\Big| & \leq\exp{\left(2\pi\mathfrak{a}|\tau|\sum_{m<\ell}\mu^{2}(m)\frac{r_{{\scriptscriptstyle2}}(m)}{m}\right)}\\ & \leq\exp{\Big(\mathfrak{b}|\tau|\log{|\alpha|}\Big)}, \end{aligned} \end{equation}

for some absolute constant $\mathfrak {b}>0$.

Suppose now that $m\geq \ell$. By (4.8) we have

(4.10)\begin{equation} \frac{r_{{\scriptscriptstyle2}}(m)}{m}|\alpha|\leq m^{-\theta(m)}|\alpha|\leq\big(\epsilon^{{-}1}|\alpha|\big)^{-\frac{\theta(m)}{\theta(|\alpha|)}}|\alpha|\leq\epsilon. \end{equation}

It follows from (4.5), (4.6) and (4.10) that

(4.11)\begin{equation} \Phi_{{\scriptscriptstyle1,m}}(\alpha)=1-\frac{(2\pi\alpha)^{2}}{2}\int\limits_{0}^{1}\phi^{2}_{{\scriptscriptstyle 1,m}}(t){\rm d}t+\mathcal{R}_{{\scriptscriptstyle m}}(\alpha), \end{equation}

where the remainder term $\mathcal {R}_{{\scriptscriptstyle m}}(\alpha )$ satisfies the bound

(4.12)\begin{equation} \big|\mathcal{R}_{{\scriptscriptstyle m}}(\alpha)\big|\leq\bigg\{\frac{2}{3}\pi\mathfrak{a}\epsilon\exp{\big(2\pi\mathfrak{a}\epsilon\big)}\bigg\}\frac{(2\pi|\alpha|)^{2}}{2}\int\limits_{0}^{1}\phi^{2}_{{\scriptscriptstyle 1,m}}(t){\rm d}t. \end{equation}

At this point we specify $\epsilon$: choose $0<\epsilon <\frac {1}{64}$ such that

\[ 2\pi\mathfrak{a}\epsilon^{1/2}\exp{\big(2\pi\mathfrak{a}\epsilon\big)}\leq1. \]

With this choice of $\epsilon$, we have

(4.13)\begin{equation} \big|\mathcal{R}_{{\scriptscriptstyle m}}(\alpha)\big|\leq\epsilon^{1/2}\frac{(2\pi|\alpha|)^{2}}{2}\int\limits_{0}^{1}\phi^{2}_{{\scriptscriptstyle 1,m}}(t){\rm d}t, \end{equation}

and we also note that $\big |\Phi _{{\scriptscriptstyle 1,m}}(\alpha )-1\big |\leq \epsilon <\frac {1}{2}$. Thus, on rewriting (4.11) in the form

(4.14)\begin{equation} \Phi_{{\scriptscriptstyle1,m}}(\alpha)=\exp{\Bigg(-\frac{(2\pi\alpha)^{2}}{2}\int\limits_{0}^{1}\phi^{2}_{{\scriptscriptstyle 1,m}}(t){\rm d}t+\widetilde{\mathcal{R}}_{{\scriptscriptstyle m}}(\alpha)\Bigg)}, \end{equation}

it follows from (4.13) that the remainder term $\widetilde {\mathcal {R}}_{{\scriptscriptstyle m}}(\alpha )$ satisfies the bound

(4.15)\begin{equation} \big|\widetilde{\mathcal{R}}_{{\scriptscriptstyle m}}(\alpha)\big|\leq\epsilon^{1/2}(2\pi|\alpha|)^{2}\int\limits_{0}^{1}\phi^{2}_{{\scriptscriptstyle 1,m}}(t){\rm d}t. \end{equation}

From (4.14) and (4.15), we obtain

(4.16)\begin{equation} \prod_{m\geq\ell}\big|\Phi_{{\scriptscriptstyle1,m}}(\alpha)\big|\leq\exp{\Bigg(-\frac{\pi^{2}}{2}\big(3\sigma^{2}-5\tau^{2}\big)\sum_{m\,\geq\,\ell}\,\int\limits_{0}^{1}\phi^{2}_{{\scriptscriptstyle 1,m}}(t){\rm d}t\Bigg)}. \end{equation}

Now, by (4.6) we have

(4.17)\begin{equation} \sum_{m\,\geq\,\ell}\,\int\limits_{0}^{1}\phi^{2}_{{\scriptscriptstyle 1,m}}(t){\rm d}t\leq\mathfrak{a}^{2}\sum_{m\,\geq\,\ell}\mu^{2}(m)\frac{r^{2}_{{\scriptscriptstyle2}}(m)}{m^{2}}, \end{equation}

and since

(4.18)\begin{equation} \begin{aligned} \int\limits_{0}^{1}\phi^{2}_{{\scriptscriptstyle 1,m}}(t){\rm d}t & =\pi^{{-}2}\frac{\mu^{2}(m)}{m^{2}}\sum_{k=1}^{\infty}\frac{r^{2}_{{\scriptscriptstyle2}}\big(mk^{2}\big)}{k^{4}}\\ & \geq\pi^{{-}2}\mu^{2}(m)\frac{r^{2}_{{\scriptscriptstyle2}}(m)}{m^{2}}, \end{aligned} \end{equation}

we also have the lower bound

(4.19)\begin{equation} \sum_{m\,\geq\,\ell}\,\int\limits_{0}^{1}\phi^{2}_{{\scriptscriptstyle 1,m}}(t){\rm d}t\geq\pi^{{-}2}\sum_{m\,\geq\,\ell}\mu^{2}(m)\frac{r^{2}_{{\scriptscriptstyle2}}(m)}{m^{2}}. \end{equation}
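The first equality in (4.18) is simply Parseval's identity for the Fourier series defining $\phi_{{\scriptscriptstyle 1,m}}$. As a numerical sanity check for $m=1$ (truncating the $k$-sum; illustrative only):

```python
import math

def r2(m):
    # r_2(m): number of ways to write m = a^2 + b^2 with (a, b) in Z^2
    B = int(math.isqrt(m))
    return sum(1 for a in range(-B, B + 1) for b in range(-B, B + 1)
               if a * a + b * b == m)

m, K = 1, 40                 # m = 1 is square-free; truncate the k-sum at K
coef = [r2(m * k * k) / (k * k) for k in range(1, K + 1)]

def phi(t):
    # truncated phi_{1,1}(t) = -(sqrt(2)/pi) sum_k r2(k^2)/k^2 cos(2 pi k t)
    return -(math.sqrt(2) / math.pi) * sum(
        c * math.cos(2 * math.pi * k * t) for k, c in enumerate(coef, start=1))

# midpoint-rule quadrature of int_0^1 phi^2 dt (exact for this trig polynomial)
N = 4000
quad = sum(phi((i + 0.5) / N) ** 2 for i in range(N)) / N

# right-hand side of (4.18), truncated at the same K
parseval = sum(r2(m * k * k) ** 2 / k ** 4
               for k in range(1, K + 1)) / (math.pi ** 2 * m ** 2)

print(quad, parseval)  # the two values agree up to floating-point error
```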

Since

(4.20)\begin{equation} \sum_{m\,\leq\,Y}\mu^{2}(m)r^{2}_{{\scriptscriptstyle2}}(m)\sim\mathfrak{h}Y\log{Y}, \end{equation}

as $Y\to \infty$, where $\mathfrak {h}>0$ is an absolute constant, we obtain by partial summation

(4.21)\begin{equation} \big(3\sigma^{2}-5\tau^{2}\big)\sum_{m\,\geq\,\ell}\,\int\limits_{0}^{1}\phi^{2}_{{\scriptscriptstyle 1,m}}(t){\rm d}t\geq\big(A\sigma^{2}-B\tau^{2}\big)|\alpha|^{{-}1/\theta(|\alpha|)}\log{|\alpha|}, \end{equation}

for some absolute constants $A,\,B>0$. Inserting (4.21) into the RHS of (4.16) we arrive at

(4.22)\begin{equation} \prod_{m\geq\ell}\big|\Phi_{{\scriptscriptstyle1,m}}(\alpha)\big|\leq\exp{\left(-\frac{\pi^{2}}{2}\big(A\sigma^{2}-B\tau^{2}\big)|\alpha|^{{-}1/\theta(|\alpha|)}\log{|\alpha|}\right)}. \end{equation}

Finally, setting $C_{{\scriptscriptstyle 1}}=1+\text {max}\{A,\, A^{-1},\, B,\, \mathfrak {b}\}$, we deduce from (4.9) and (4.22)

(4.23)\begin{equation} \big|\Phi_{{\scriptscriptstyle1}}(\alpha)\big|\leq\exp{\left(-\frac{\pi^{2}}{2}\big(C_{{\scriptscriptstyle1}}^{{-}1}\sigma^{2}-C_{{\scriptscriptstyle1}}\tau^{2}\big)|\alpha|^{{-}1/\theta(|\alpha|)}\log{|\alpha|}+C_{{\scriptscriptstyle1}}|\tau|\log{|\alpha|}\right)}. \end{equation}

This completes the proof of $\textit {(I)}$.

We now turn to the proof of $\textit {(II)}$. By the definition of the Fourier transform, we have for real $x$

(4.24)\begin{equation} \mathcal{P}_{1}(x)=\int\limits_{-\infty}^{\infty}\Phi_{{\scriptscriptstyle1}}(\sigma)\exp{\big({-}2\pi ix\sigma\big)}{\rm d}\sigma, \end{equation}

and it follows from the decay estimate (4.23) that $\mathcal {P}_{1}(x)$ is of class $C^{\infty }$. Let us estimate $|\mathcal {P}^{(j)}_{1}(x)|$ for real $x$ with $|x|$ sufficiently large in terms of $j$. Let $\tau =\tau _{x}$, $|\tau |$ large, be a real number depending on $x$, to be determined later, which satisfies $\text {sgn}(\tau )=-\text {sgn}(x)$. By Cauchy's theorem and the decay estimate (4.23), we have

(4.25)\begin{equation} \begin{aligned} \mathcal{P}^{(j)}_{1}(x) & =\big({-}2\pi i\big)^{j}\int\limits_{-\infty}^{\infty}\sigma^{j}\Phi_{{\scriptscriptstyle1}}(\sigma)\exp{\big({-}2\pi ix\sigma\big)}{\rm d}\sigma\\ & =\big({-}2\pi i\big)^{j}\exp{\big({-}2\pi |x||\tau|\big)}\int\limits_{-\infty}^{\infty}\big(\sigma+i\tau\big)^{j}\Phi_{{\scriptscriptstyle1}}(\sigma+i\tau)\exp{\big({-}2\pi ix\sigma\big)}{\rm d}\sigma. \end{aligned} \end{equation}

It follows that

(4.26)\begin{equation} \big|\mathcal{P}^{(j)}_{1}(x)\big|\leq\big(2\pi C_{{\scriptscriptstyle1}}|\tau|\big)^{j+1}\exp{\big({-}2\pi |x||\tau|\big)}\int\limits_{-\infty}^{\infty}\big(\sigma^{2}+1\big)^{j/2}\big|\Phi_{{\scriptscriptstyle1}}\big(C_{{\scriptscriptstyle1}}\tau\sigma+i\tau\big)\big|{\rm d}\sigma. \end{equation}

We decompose the range of integration in (4.26) as follows:

(4.27)\begin{equation} \begin{aligned} \int\limits_{-\infty}^{\infty}\big(\sigma^{2}+1\big)^{j/2}\big|\Phi_{{\scriptscriptstyle1}}\big(C_{{\scriptscriptstyle1}}\tau\sigma+i\tau\big)\big|{\rm d}\sigma & =\int\limits_{|\sigma|\leq\sqrt{2}}\ldots{\rm d}\sigma+\int\limits_{|\sigma|>\sqrt{2}}\ldots{\rm d}\sigma\\ & ={\textrm L}_{1}+{\textrm L}_{2}. \end{aligned} \end{equation}

In what follows, we assume that $|\tau |$ is sufficiently large in terms of $j$. In the range $|\sigma |\leq \sqrt {2}$, we have by (4.23)

(4.28)\begin{equation} \big|\Phi_{{\scriptscriptstyle1}}\big(C_{{\scriptscriptstyle1}}\tau\sigma+i\tau\big)\big|\leq\exp{\big(4C_{{\scriptscriptstyle1}}|\tau|\log{|\tau|}\big)}. \end{equation}

From (4.28), it follows that ${\textrm L}_{1}$ satisfies the bound

(4.29)\begin{equation} {\textrm L}_{1}\leq\exp{\big(5C_{{\scriptscriptstyle1}}|\tau|\log{|\tau|}\big)}. \end{equation}

Referring to (4.23) once again, we have in the range $|\sigma |>\sqrt {2}$

(4.30)\begin{align} & \big|\Phi_{{\scriptscriptstyle1}}\big(C_{{\scriptscriptstyle1}}\tau\sigma+i\tau\big)\big|\nonumber\\ & \quad \leq\exp{\left({-}C_{{\scriptscriptstyle1}}|\tau|\bigg\{\,\bigg|\left(\frac{\pi^{2}}{16C^{2}_{{\scriptscriptstyle1}}}\right)^{\frac{\theta(|\tau|)}{2\theta(|\tau|)-1}}|\tau|^{\frac{\theta(|\tau|)-1}{2\theta(|\tau|)-1}}\sigma\bigg|^{\frac{2\theta(|\tau|)-1}{\theta(|\tau|)}}-1\bigg\}\log{|C_{{\scriptscriptstyle1}}\tau\sigma+i\tau|}\right)} \end{align}

From (4.30), it follows that

(4.31)\begin{equation} {\textrm L}_{2}\leq2^{j/2}{\tilde \tau}^{j+1}\int\limits_{-\infty}^{\infty}|\sigma|^{j}\exp\left({-}C_{{\scriptscriptstyle1}}|\tau|\Big\{|\sigma|^{\frac{2\theta(|\tau|)-1}{\theta(|\tau|)}}-1\Big\}\log{|C_{{\scriptscriptstyle1}}\tau{\tilde \tau}\sigma+i\tau|}\right){\rm d}\sigma, \end{equation}

where ${\tilde \tau}$ is given by

\[ {\tilde \tau}=\left(\frac{16C^{2}_{{\scriptscriptstyle1}}}{\pi^{2}}\right)^{\frac{\theta(|\tau|)}{2\theta(|\tau|)-1}}|\tau|^{\frac{1-\theta(|\tau|)}{2\theta(|\tau|)-1}}. \]

We decompose the range of integration in (4.31) as follows:

(4.32)\begin{equation} \begin{aligned} & \int\limits_{-\infty}^{\infty}|\sigma|^{j}\exp\left({-}C_{{\scriptscriptstyle1}}|\tau|\Big\{|\sigma|^{\frac{2\theta(|\tau|)-1}{\theta(|\tau|)}}-1\Big\}\log{|C_{{\scriptscriptstyle1}}\tau{\tilde \tau}\sigma+i\tau|}\right){\rm d}\sigma\\ & \quad =\int\limits_{|\sigma|\leq u}\ldots{\rm d}\sigma+\int\limits_{|\sigma|>u}\ldots{\rm d}\sigma ={\textrm L}_{3}+{\textrm L}_{4}, \end{aligned} \end{equation}

where $u=2^{\frac {\theta (|\tau |)}{2\theta (|\tau |)-1}}$. In the range $|\sigma |\leq u$ we estimate trivially, obtaining

(4.33)\begin{equation} {\textrm L}_{3}\leq\exp{\big(3C_{{\scriptscriptstyle1}}|\tau|\log{|\tau|}\big)}. \end{equation}

In the range $|\sigma |>u$ we have $|\sigma |^{\frac {2\theta (|\tau |)-1}{\theta (|\tau |)}}-1\geq \frac {1}{2}|\sigma |^{\frac {2\theta (|\tau |)-1}{\theta (|\tau |)}}\geq \frac {1}{2}|\sigma |^{1/2}$. It follows that

(4.34)\begin{equation} {\textrm L}_{4}\leq\int\limits_{-\infty}^{\infty}|\sigma|^{j}\exp\big(-|\sigma|^{1/2}\,\big){\rm d}\sigma. \end{equation}

Combining (4.33) and (4.34), we see that ${\textrm L}_{2}$ satisfies the bound

(4.35)\begin{equation} {\textrm L}_{2}\leq2^{j/2+1}{\tilde \tau}^{j+1}\exp{\big(3C_{{\scriptscriptstyle1}}|\tau|\log{|\tau|}\big)}. \end{equation}

From (4.29) and (4.35) we find that ${\textrm L}_{1}$ dominates. By (4.26) and (4.27) we arrive at

(4.36)\begin{equation} \big|\mathcal{P}^{(j)}_{1}(x)\big|\leq\big(4\pi C_{{\scriptscriptstyle1}}|\tau|\big)^{j+1}\exp{\left({-}2\pi|\tau|\bigg\{|x|-\frac{5C_{{\scriptscriptstyle1}}}{2\pi}\log{|\tau|}\bigg\}\right)}. \end{equation}

Finally, we specify $\tau$. We choose

\[ \tau={-}\text{sgn}(x)\exp{\big(\rho|x|\big)}\quad;\quad\rho=\frac{\pi}{5C_{{\scriptscriptstyle1}}}. \]

With this choice, we have the bound (recall that $|x|$ is assumed to be large in terms of $j$)

(4.37)\begin{equation} \begin{aligned} \big|\mathcal{P}^{(j)}_{1}(x)\big| & \leq\big(4\pi C_{{\scriptscriptstyle1}}\big)^{j+1}\exp{\left(-\pi|x|\exp{\big(\rho|x|\big)}+(j+1)\rho|x|\right)}\\ & \leq\exp{\left(-\frac{\pi}{2}|x|\exp{\big(\rho|x|\big)}\right)}. \end{aligned} \end{equation}

It remains to show that $\mathcal {P}_{1}(x)$ defines a probability density. This will be a consequence of the proof of theorem 1.1. This concludes the proof of proposition 4.2.

5. Power moment estimates

Having constructed the probability density $\mathcal {P}_{1}(\alpha )$ in the previous section, our final task, before turning to the proof of the main results of this paper, is to establish the existence of all moments of the normalized error term $\widehat {\mathcal {E}}_{1}(x)$. The main result we shall set out to prove is the following.

Proposition 5.1 Let $j\geq 1$ be an integer. Then the $j$-th power moment of $\widehat {\mathcal {E}}_{1}(x)$ is given by

(5.1)\begin{equation} \lim\limits_{X\to\infty}\frac{1}{X}\int\limits_{ X}^{2X}\widehat{\mathcal{E}}^{j}_{1}(x){\rm d}x=\sum_{s=1}^{j}\underset{\,\,\ell_{1},\,\ldots\,,\ell_{s}\geq1}{\sum_{\ell_{1}+\cdots+\ell_{s}=j}}\,\,\frac{j!}{\ell_{1}!\cdots\ell_{s}!}\underset{\,\,\,m_{s}>\cdots>m_{1}}{\sum_{m_{1},\,\ldots\,,m_{s}=1}^{\infty}}\prod_{i=1}^{s}\Xi(m_{i},\ell_{i}), \end{equation}

where the series on the RHS of (5.1) converges absolutely, and for integers $m,\,\ell \geq 1$, the term $\Xi (m,\,\ell )$ is given as in (1.4).

Remark 7 As we shall see later on, the RHS of (5.1) is simply $\int _{-\infty }^{\infty }\alpha ^{j}\mathcal {P}_{1}(\alpha ){\rm d}\alpha$. Also, our proof of proposition 5.1 in fact yields (5.1) in a quantitative form, namely, we obtain $\frac {1}{X}\int _{ X}^{2X}\widehat {\mathcal {E}}^{j}_{1}(x){\rm d}x=\int _{-\infty }^{\infty }\alpha ^{j}\mathcal {P}_{1}(\alpha ){\rm d}\alpha +\mathrm {R}_{j}(X)$ with an explicit decay estimate for the remainder term $\mathrm {R}_{j}(X)$. However, as our sole focus here is on establishing the existence of the limit given on the left-hand side (LHS) of (5.1), proposition 5.1 will suffice for our needs.

Proof. We split the proof into three cases, depending on whether $j=1$, $j=2$ or $j\geq 3$.

Case 1. $j=1$. Since by definition, $\Xi (m,\,1)=0$ for any integer $m\geq 1$, we need to show that $\frac {1}{X}\int _{X}^{2X}\widehat {\mathcal {E}}_{1}(x){\rm d}x\to 0$ as $X\to \infty$. By proposition 2.1 and lemma 3.2, we have

(5.2)\begin{equation} \frac{1}{X}\int\limits_{ X}^{2X}\widehat{\mathcal{E}}_{1}(x){\rm d}x={-}\frac{\sqrt{2}}{\pi}\sum_{m\,\leq\,X^{2}}\frac{r_{{\scriptscriptstyle 2}}(m)}{m}\frac{1}{X}\int\limits_{ X}^{2X}\cos{\big(2\pi\sqrt{m}x\big)}{\rm d}x+O_{\epsilon}\big(X^{{-}1+\epsilon}\big)\,. \end{equation}

It follows that

(5.3)\begin{equation} \begin{aligned} \bigg|\frac{1}{X}\int\limits_{ X}^{2X}\widehat{\mathcal{E}}_{1}(x){\rm d}x\bigg| & \ll_{\epsilon}\frac{1}{X}\sum_{m=1}^{\infty}\frac{r_{{\scriptscriptstyle 2}}(m)}{m^{3/2}}+X^{{-}1+\epsilon}\\ & \ll_{\epsilon} X^{{-}1+\epsilon}. \end{aligned} \end{equation}

This proves (5.1) in the case where $j=1$.

Case 2. $j=2$. By proposition 2.1 and lemma 3.2, we have

(5.4)\begin{align} \frac{1}{X}\int\limits_{ X}^{2X}\widehat{\mathcal{E}}^{2}_{1}(x){\rm d}x& =\frac{2}{\pi^{2}}\sum_{m,\,n\,\leq\,X^{2}}\frac{r_{{\scriptscriptstyle 2}}(m)}{m}\frac{r_{{\scriptscriptstyle 2}}(n)}{n}\frac{1}{X}\nonumber\\ & \quad\times\int\limits_{ X}^{2X}\cos{\big(2\pi\sqrt{m}x\big)}\cos{\big(2\pi\sqrt{n}x\big)}{\rm d}x+O_{\epsilon}\big(X^{{-}1+\epsilon}\big). \end{align}

Now, we have

(5.5)\begin{equation} \begin{aligned} & \frac{1}{X}\int\limits_{ X}^{2X}\cos{\big(2\pi\sqrt{m}x\big)}\cos{\big(2\pi\sqrt{n}x\big)}{\rm d}x\nonumber\\ & \quad =\frac{1}{2}{\mathbb 1}_{m=n}+\frac{1}{4\pi X}{\mathbb 1}_{m\neq n}\frac{\sin{\big(2\pi(\sqrt{m}-\sqrt{n}\,)x\big)}}{\sqrt{m}-\sqrt{n}}\Bigg|^{x=2X}_{x=X}\\ & \qquad+ O\left(\frac{1}{X(mn)^{1/4}}\right)\\ & \quad =\frac{1}{2}{\mathbb 1}_{m=n}+\frac{1}{4\pi X}{\mathbb 1}_{m\neq n}{\textrm K}_{(\sqrt{m},\sqrt{n}\,)}+O\left(\frac{1}{X(mn)^{1/4}}\right), \end{aligned} \end{equation}

say. Inserting (5.5) into the RHS of (5.4), we obtain

(5.6)\begin{align} \frac{1}{X}\int\limits_{ X}^{2X}\widehat{\mathcal{E}}^{2}_{1}(x){\rm d}x& =\frac{1}{\pi^{2}}\sum_{m\,\leq\,X^{2}}\frac{r^{2}_{{\scriptscriptstyle 2}}(m)}{m^{2}}\nonumber\\ & \quad +\frac{1}{2\pi^{3}X}\sum_{m\neq n\,\leq\,X^{2}}\frac{r_{{\scriptscriptstyle 2}}(m)}{m}\frac{r_{{\scriptscriptstyle 2}}(n)}{n}{\textrm K}_{(\sqrt{m},\sqrt{n}\,)}+O_{\epsilon}\big(X^{{-}1+\epsilon}\big). \end{align}

To estimate the off-diagonal terms, we use the identity $\sin {t}=\frac {1}{2i}(\exp {(it)}-\exp {(-it)})$ and then apply Hilbert's inequality, obtaining

(5.7)\begin{equation} \bigg|\sum_{m\neq n\,\leq\,X^{2}}\frac{r_{{\scriptscriptstyle 2}}(m)}{m}\frac{r_{{\scriptscriptstyle 2}}(n)}{n}{\textrm K}_{(\sqrt{m},\sqrt{n}\,)}\bigg|\ll\sum_{m=1}^{\infty}r^{2}_{{\scriptscriptstyle2}}(m)m^{{-}3/2}. \end{equation}

Inserting (5.7) into the RHS of (5.6), we arrive at

(5.8)\begin{equation} \begin{aligned} \frac{1}{X}\int\limits_{ X}^{2X}\widehat{\mathcal{E}}^{2}_{1}(x){\rm d}x & =\frac{1}{\pi^{2}}\sum_{m\,\leq\,X^{2}}\frac{r^{2}_{{\scriptscriptstyle 2}}(m)}{m^{2}}+O_{\epsilon}\big(X^{{-}1+\epsilon}\big)\\ & =\frac{1}{\pi^{2}}\sum_{m=1}^{\infty}\frac{r^{2}_{{\scriptscriptstyle 2}}(m)}{m^{2}}+O_{\epsilon}\big(X^{{-}1+\epsilon}\big). \end{aligned} \end{equation}

Recalling that $\Xi (m,\,1)=0$, it follows that the RHS of (5.1) in the case where $j=2$ is given by

(5.9)\begin{equation} \begin{aligned} \sum_{m=1}^{\infty}\Xi(m,2) & =\frac{1}{2\pi^{2}}\sum_{m=1}^{\infty}\frac{\mu^{2}(m)}{m^{2}}\sum_{\textit{e}_{1},\textit{e}_{2}=\pm1}\underset{\textit{e}_{1}k_{1}+\textit{e}_{2}k_{2}=0}{\sum_{k_{1}, k_{2}=1}^{\infty}}\prod_{i=1}^{2}\frac{r_{{\scriptscriptstyle2}}\big(mk^{2}_{i}\big)}{k_{i}^{2}}\\ & =\frac{1}{\pi^{2}}\sum_{m,k=1}^{\infty}\mu^{2}(m)\frac{r^{2}_{{\scriptscriptstyle2}}\big(mk^{2}\big)}{(mk^{2})^{2}}=\frac{1}{\pi^{2}}\sum_{m=1}^{\infty}\frac{r^{2}_{{\scriptscriptstyle 2}}(m)}{m^{2}}. \end{aligned} \end{equation}

This proves (5.1) in the case where $j=2$.
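
The last step in (5.9) is the bijective re-indexing $n=mk^{2}$ with $m$ square-free: every integer $n\geq1$ factors uniquely in this way, so the double series rearranges into $\sum_{n}r^{2}_{2}(n)/n^{2}$. A quick numerical sanity check of this rearrangement (our own illustration; here we take $r_{2}(n)$ to be the number of representations $n=a^{2}+b^{2}$ with $(a,b)\in\mathbb{Z}^{2}$, which is an assumption about the notation on our part):

```python
import math

def r2(n):
    # number of representations n = a^2 + b^2 with (a, b) in Z^2
    total = 0
    for a in range(-math.isqrt(n), math.isqrt(n) + 1):
        b2 = n - a * a
        b = math.isqrt(b2)
        if b * b == b2:
            total += 2 if b > 0 else 1
    return total

def is_squarefree(m):
    d = 2
    while d * d <= m:
        if m % (d * d) == 0:
            return False
        d += 1
    return True

N = 2000
# direct truncated sum over all n <= N
direct = sum(r2(n) ** 2 / n ** 2 for n in range(1, N + 1))
# the same terms re-indexed as n = m * k^2 with m square-free
reindexed = sum(
    r2(m * k * k) ** 2 / (m * k * k) ** 2
    for m in range(1, N + 1) if is_squarefree(m)
    for k in range(1, math.isqrt(N // m) + 1)
)
assert abs(direct - reindexed) < 1e-9
```

Both sums run over exactly the same finite set of terms, so they agree up to floating-point rounding; this is the finite analogue of the rearrangement used in (5.9).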

Case 3. $j\geq 3$. In what follows, all implied constants in the Big $O$ notation are allowed to depend on $j$. By proposition 2.1 and lemma 3.2, together with the trivial bound $|x^{-2}T(x^{2})|\ll 1$, we have

(5.10)\begin{equation} \frac{1}{X}\int\limits_{ X}^{2X}\widehat{\mathcal{E}}^{j}_{1}(x){\rm d}x=\left(-\frac{\sqrt{2}}{\pi}\right)^{j}\frac{1}{X}\int\limits_{ X}^{2X}\left(\,\sum_{m\,\leq\,X^{2}}\frac{r_{{\scriptscriptstyle 2}}(m)}{m}\cos{\big(2\pi\sqrt{m}x\big)}\right)^{j}{\rm d}x\!+O_{\epsilon}\big(X^{{-}1\!+\!\epsilon}\big). \end{equation}

Let $1\leq Y\leq X^{2}$ be a large parameter to be determined later. In the range $X< x<2X$, we have

(5.11)\begin{equation} \begin{aligned} & \Bigg(\sum_{m\leq X^{2}}\frac{r_{{\scriptscriptstyle 2}}(m)}{m}\cos{\big(2\pi\sqrt{m}x\big)}\Bigg)^{j}=\Bigg(\,\sum_{m\,\leq\,Y}\frac{r_{{\scriptscriptstyle 2}}(m)}{m}\cos{\big(2\pi\sqrt{m}x\big)}\Bigg)^{j}\\ & +O\left((\log{X})^{j-1}\bigg|\sum_{Y < m\,\leq\,X^{2}}\frac{r_{{\scriptscriptstyle 2}}(m)}{m}\exp{\big(2\pi i\sqrt{m}x\big)}\bigg|\,\right). \end{aligned} \end{equation}

We first estimate the mean-square of the second summand appearing on the RHS of (5.11). We have

(5.12)\begin{equation} \begin{aligned} & \frac{1}{X}\int\limits_{ X}^{2X}\bigg|\sum_{Y < m\,\leq\,X^{2}}\frac{r_{{\scriptscriptstyle 2}}(m)}{m}\exp{\big(2\pi i\sqrt{m}x\big)}\bigg|^{2}{\rm d}x =\sum_{Y < m\,\leq\,X^{2}}\frac{r^{2}_{{\scriptscriptstyle 2}}(m)}{m^{2}}\\ & \quad +\frac{1}{2\pi iX}\sum_{Y < m\neq n\,\leq\,X^{2}}\frac{r_{{\scriptscriptstyle 2}}(m)}{m}\frac{r_{{\scriptscriptstyle 2}}(n)}{n}\frac{\exp{\big(2\pi i(\sqrt{m}-\sqrt{n}\,)x\big)}}{\sqrt{m}-\sqrt{n}}\bigg|^{x=2X}_{x=X}. \end{aligned} \end{equation}

Applying Hilbert's inequality, we find that

(5.13)\begin{equation} \begin{aligned} \Bigg|\,\sum_{Y < m\neq n\,\leq\,X^{2}}\frac{r_{{\scriptscriptstyle 2}}(m)}{m}\frac{r_{{\scriptscriptstyle 2}}(n)}{n}\frac{\exp{\big(2\pi i(\sqrt{m}-\sqrt{n}\,)x\big)}}{\sqrt{m}-\sqrt{n}}\bigg|^{x=2X}_{x=X}\,\Bigg| & \ll\sum_{m > Y}r^{2}_{{\scriptscriptstyle2}}(m)m^{{-}3/2}\\ & \ll Y^{{-}1/2}\log{Y}. \end{aligned} \end{equation}

Inserting (5.13) into the RHS of (5.12), we obtain

(5.14)\begin{equation} \begin{aligned} \frac{1}{X}\int\limits_{ X}^{2X}\bigg|\sum_{Y < m\,\leq\,X^{2}}\frac{r_{{\scriptscriptstyle 2}}(m)}{m}\exp{\big(2\pi i\sqrt{m}x\big)}\bigg|^{2}{\rm d}x & =\sum_{Y < m\,\leq\,X^{2}}\frac{r^{2}_{{\scriptscriptstyle 2}}(m)}{m^{2}}+O\big(X^{{-}1}Y^{{-}1/2}\log{Y}\big)\\ & \ll Y^{{-}1}\log{Y}+X^{{-}1}Y^{{-}1/2}\log{Y}\\ & \ll Y^{{-}1}\log{Y}. \end{aligned} \end{equation}

Integrating both sides of (5.11), we have by (5.10), (5.14) and the Cauchy–Schwarz inequality

(5.15)\begin{equation} \begin{aligned} \frac{1}{X}\int\limits_{ X}^{2X}\widehat{\mathcal{E}}^{j}_{1}(x){\rm d}x & =\left(-\frac{\sqrt{2}}{\pi}\right)^{j}\frac{1}{X}\int\limits_{ X}^{2X}\left(\,\sum_{m\,\leq\,Y}\frac{r_{{\scriptscriptstyle 2}}(m)}{m}\cos{\big(2\pi\sqrt{m}x\big)}\right)^{j}{\rm d}x+O_{\epsilon}\big(X^{\epsilon}Y^{{-}1/2}\big)\\ & =\left(\frac{-1}{\sqrt{2}\pi}\right)^{j}\sum_{\textit{e}_{1},\ldots,\textit{e}_{j}=\pm1}\sum_{m_{1},\ldots,m_{j}\leq Y}\prod_{i=1}^{j}\frac{r_{{\scriptscriptstyle 2}}(m_{i})}{m_{i}}\frac{1}{X}\\ & \quad\times\int\limits_{ X}^{2X}\cos{\left(2\pi\Big(\sum_{i=1}^{j}\textit{e}_{i}\sqrt{m_{i}}\Big)x\right)}{\rm d}x + O_{\epsilon}\big(X^{\epsilon}Y^{{-}1/2}\big). \end{aligned} \end{equation}

Before we proceed to evaluate the RHS of (5.15), we need the following result (see [14], § 2, lemma 2.2). Let $\textit {e}_{1},\,\ldots,\,\textit {e}_{j}=\pm 1$, and suppose that $m_{1},\,\ldots,\,m_{j}\leq Y$ are integers. Then

(5.16)\begin{equation} \sum_{i=1}^{j}\textit{e}_{i}\sqrt{m_{i}}\neq0\Longrightarrow\bigg|\sum_{i=1}^{j}\textit{e}_{i}\sqrt{m_{i}}\bigg|\gg Y^{1/2-2^{j-2}}, \end{equation}

where the implied constant depends only on $j$. It follows from (5.16) that

(5.17)\begin{equation} \frac{1}{X}\int\limits_{ X}^{2X}\cos{\left(2\pi\Big(\,\sum_{i=1}^{j}\textit{e}_{i}\sqrt{m_{i}}\Big)x\right)}{\rm d}x=\left\{ \begin{array}{ll} 1 & ;\sum_{i=1}^{j}\textit{e}_{i}\sqrt{m_{i}}=0 \\ O\big(X^{{-}1}Y^{2^{j-2}-1/2}\big) & ; \sum_{i=1}^{j}\textit{e}_{i}\sqrt{m_{i}}\neq0 . \end{array} \right. \end{equation}

Estimate (5.17) in the case where $\sum _{i=1}^{j}\textit {e}_{i}\sqrt {m_{i}}\neq 0$ is somewhat wasteful, but nevertheless it will suffice for us. Inserting (5.17) into the RHS of (5.15), we obtain

(5.18)\begin{align} \frac{1}{X}\int\limits_{ X}^{2X}\widehat{\mathcal{E}}^{j}_{1}(x){\rm d}x& =\left(\frac{-1}{\sqrt{2}\pi}\right)^{j}\sum_{\textit{e}_{1},\ldots,\textit{e}_{j}=\pm1}\,\,\underset{\sum_{i=1}^{j}\textit{e}_{i}\sqrt{m_{i}}=0}{\sum_{m_{1},\ldots,m_{j}\,\leq\,Y}}\prod_{i=1}^{j}\frac{r_{{\scriptscriptstyle 2}}(m_{i})}{m_{i}}\nonumber\\ & \quad +O_{\epsilon}\Big(X^{\epsilon}Y^{{-}1/2}\Big\{1+X^{{-}1}Y^{2^{j-2}}\Big\}\Big). \end{align}
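
For $j=3$, the separation bound (5.16) reads $|\textit{e}_{1}\sqrt{m_{1}}+\textit{e}_{2}\sqrt{m_{2}}+\textit{e}_{3}\sqrt{m_{3}}|\gg Y^{-3/2}$, and the classical argument of multiplying by the conjugate sign patterns in fact yields the explicit constant $(27Y^{3/2})^{-1}$. The sketch below (our own illustration, not part of the proof) verifies this over all triples $m_{i}\leq Y$ for a small $Y$, detecting exact vanishing via squarefree kernels:

```python
import math
from itertools import product

def squarefree_kernel(m):
    # write m = sf * k^2 with sf squarefree; return (sf, k)
    sf, k, d = m, 1, 2
    while d * d <= sf:
        while sf % (d * d) == 0:
            sf //= d * d
            k *= d
        d += 1
    return sf, k

Y = 30
kernels = {m: squarefree_kernel(m) for m in range(1, Y + 1)}
roots = {m: math.sqrt(m) for m in range(1, Y + 1)}

min_gap = float("inf")
for ms in product(range(1, Y + 1), repeat=3):
    for es in product((1, -1), repeat=3):
        # exact test for e1*sqrt(m1) + e2*sqrt(m2) + e3*sqrt(m3) = 0:
        # group by squarefree kernel; the sum vanishes iff the integer
        # coefficient of each sqrt(sf) cancels (linear independence of
        # square roots of distinct squarefree numbers over Q)
        coeff = {}
        for e, m in zip(es, ms):
            sf, k = kernels[m]
            coeff[sf] = coeff.get(sf, 0) + e * k
        if any(v != 0 for v in coeff.values()):
            gap = abs(sum(e * roots[m] for e, m in zip(es, ms)))
            min_gap = min(min_gap, gap)

# conjugation argument: every nonzero value is >= 1/(27 * Y^{3/2})
assert min_gap >= 1 / (27 * Y ** 1.5)
```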

It remains to estimate the first summand appearing on the RHS of (5.18). Let $\textit {e}_{1},\,\ldots,\,\textit {e}_{j}=\pm 1$. We have

(5.19)\begin{equation} \underset{\sum_{i=1}^{j}\textit{e}_{i}\sqrt{m_{i}}=0}{\sum_{m_{1},\ldots,m_{j}\,\leq\,Y}}\prod_{i=1}^{j}\frac{r_{{\scriptscriptstyle 2}}(m_{i})}{m_{i}}=\sum_{m_{1},\ldots,m_{j}\,\leq\,Y}\prod_{i=1}^{j}\frac{\mu^{2}(m_{i})}{m_{i}}\,\,\underset{\sum_{i=1}^{j}\textit{e}_{i}k_{i}\sqrt{m_{i}}=0}{\sum_{k_{1}\,\leq\,\sqrt{Y/m_{1}},\ldots,k_{j}\,\leq\,\sqrt{Y/m_{j}}}}\,\,\prod_{i=1}^{j}\frac{r_{{\scriptscriptstyle 2}}\big(m_{i}k^{2}_{i}\big)}{k^{2}_{i}}\,. \end{equation}

Now, since the elements of the set $\mathscr {B}=\{\sqrt {m}:|\mu (m)|=1\}$ are linearly independent over $\mathbb {Q}$, the relation $\sum _{i=1}^{j}\textit {e}_{i}k_{i}\sqrt {m_{i}}=0$ with $m_{1},\,\ldots,\,m_{j}$ square-free is equivalent to the system of relations $\sum _{i\in S_{\ell }}\textit {e}_{i}k_{i}=0$ for $1\leq \ell \leq s$, where $\biguplus _{\ell =1}^{s} S_{\ell }=\{1,\,\ldots,\,j\}$ is the partition which groups together the indices $i$ sharing a common value of $m_{i}$. Multiplying (5.19) by $(-1/\sqrt {2}\pi )^{j}$, summing over all $\textit {e}_{1},\,\ldots,\,\textit {e}_{j}=\pm 1$ and rearranging the terms in ascending order, it follows that

(5.20)\begin{equation} \begin{aligned} \left(\frac{-1}{\sqrt{2}\pi}\right)^{j}\sum_{\textit{e}_{1},\ldots,\textit{e}_{j}=\pm1} & \underset{\sum_{i=1}^{j}\textit{e}_{i}\sqrt{m_{i}}=0}{\sum_{m_{1},\ldots,m_{j}\,\leq\,Y}}\prod_{i=1}^{j}\frac{r_{{\scriptscriptstyle 2}}(m_{i})}{m_{i}}\\ & =\sum_{s=1}^{j}\underset{\ell_{1},\ldots,\ell_{s}\geq1}{\sum_{\ell_{1}+\cdots+\ell_{s}=j}}\frac{j!}{\ell_{1}!\cdots\ell_{s}!}\sum_{m_{1}<\cdots< m_{s}\leq Y}\prod_{i=1}^{s}\Xi\big(m_{i},\ell_{i};\sqrt{Y/m_{i}}\,\big), \end{aligned} \end{equation}

where for $y\geq 1$, and integers $m,\,\ell \geq 1$, the term $\Xi (m,\,\ell ;y)$ is given by

(5.21)\begin{equation} \Xi(m,\ell;y)=({-}1)^{\ell}\left(\frac{1}{\sqrt{2}\pi}\right)^{\ell}\frac{\mu^{2}(m)}{m^{\ell}}\sum_{\textit{e}_{1},\ldots,\textit{e}_{\ell}=\pm1}\underset{\textit{e}_{1}k_{1}+\cdots+\textit{e}_{\ell}k_{\ell}=0}{\sum_{k_{1},\ldots,k_{\ell}\leq y}}\prod_{i=1}^{\ell}\frac{r_{{\scriptscriptstyle2}}\big(mk^{2}_{i}\big)}{k_{i}^{2}}. \end{equation}

Denoting by $\tau (\cdot )$ the divisor function, we have for $m$ square-free

(5.22)\begin{equation} \sum_{k > y}\frac{r_{{\scriptscriptstyle2}}\big(mk^{2}\big)}{k^{2}}\ll r_{{\scriptscriptstyle2}}(m)\sum_{k > y}\frac{\tau^{2}(k)}{k^{2}}\ll r_{{\scriptscriptstyle2}}(m)y^{{-}1}\big(\log{2y}\big)^{3}. \end{equation}
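
The tail bound in (5.22) reflects the fact that $\tau^{2}(k)$ has average order $\asymp\log^{3}k$, so that $\sum_{k>y}\tau^{2}(k)/k^{2}\asymp y^{-1}\log^{3}y$ by partial summation. A numeric illustration (our own; the infinite tail is truncated at $K=10^{5}$, which only perturbs the values slightly):

```python
import math

K = 10 ** 5
# divisor function tau(n) for n <= K via a sieve
tau = [0] * (K + 1)
for d in range(1, K + 1):
    for n in range(d, K + 1, d):
        tau[n] += 1

def tail(y):
    # sum_{y < k <= K} tau(k)^2 / k^2, a proxy for the infinite tail
    return sum(tau[k] ** 2 / (k * k) for k in range(y + 1, K + 1))

for y in (10, 100, 1000):
    # the normalized tail stays bounded, consistent with (5.22)
    assert tail(y) * y / math.log(2 * y) ** 3 < 5
```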

Using (5.22) repeatedly, it follows that

(5.23)\begin{equation} \Xi(m,\ell;y)=\Xi(m,\ell)+O\left(\frac{\mu^{2}(m)r^{\ell}_{{\scriptscriptstyle2}}(m)}{m^{\ell}}y^{{-}1}\big(\log{2y}\big)^{3}\right), \end{equation}

where $\Xi (m,\,\ell )$ is given as in (1.4). Using (5.23) repeatedly, and noting that $\Xi (m,\,1;y)=0$, we have by (5.20)

(5.24)\begin{equation} \begin{aligned} & \left(\frac{-1}{\sqrt{2}\pi}\right)^{j}\sum_{\textit{e}_{1},\ldots,\textit{e}_{j}=\pm1}\underset{\sum_{i=1}^{j}\textit{e}_{i}\sqrt{m_{i}}=0}{\sum_{m_{1},\ldots,m_{j}\,\leq\,Y}}\prod_{i=1}^{j}\frac{r_{{\scriptscriptstyle 2}}(m_{i})}{m_{i}}\\ & =\sum_{s=1}^{j}\underset{\ell_{1},\ldots,\ell_{s}\geq1}{\sum_{\ell_{1}+\cdots+\ell_{s}=j}}\frac{j!}{\ell_{1}!\cdots\ell_{s}!}\sum_{m_{1}<\cdots< m_{s}\leq Y}\prod_{i=1}^{s}\Xi(m_{i},\ell_{i})+O\big(Y^{{-}1/2}(\log{Y})^{3}\big)\\ & =\sum_{s=1}^{j}\underset{\ell_{1},\ldots,\ell_{s}\geq1}{\sum_{\ell_{1}+\cdots+\ell_{s}=j}}\frac{j!}{\ell_{1}!\cdots\ell_{s}!}\underset{m_{s}>\cdots>m_{1}}{\sum_{m_{1},\ldots\,,m_{s}=1}^{\infty}}\prod_{i=1}^{s}\Xi(m_{i},\ell_{i})+O\big(Y^{{-}1/2}(\log{Y})^{3}\big), \end{aligned} \end{equation}

where the series appearing on the RHS of (5.24) converges absolutely. Finally, inserting (5.24) into the RHS of (5.18) and making the choice $Y=X^{2^{2-j}}$, we arrive at

(5.25)\begin{equation} \frac{1}{X}\int\limits_{ X}^{2X}\widehat{\mathcal{E}}^{j}_{1}(x){\rm d}x=\sum_{s=1}^{j}\underset{\ell_{1},\ldots\,,\ell_{s}\geq1}{\sum_{\ell_{1}+\cdots+\ell_{s}=j}}\,\,\frac{j!}{\ell_{1}!\cdots\ell_{s}!}\underset{m_{s}>\cdots>m_{1}}{\sum_{m_{1},\ldots,m_{s}=1}^{\infty}}\prod_{i=1}^{s}\Xi(m_{i},\ell_{i})+O_{\epsilon}\Big(X^{-\eta_{j}+\epsilon}\Big), \end{equation}

where $\eta _{j}=2^{1-j}$. This settles the proof in the case where $j\geq 3$. The proof of proposition 5.1 is therefore complete.

6. Proof of the main results: theorems 1.1 and 1.3

Collecting the results from the previous sections, we are now in a position to present the proof of the main results. We begin with the proof of theorem 1.1.

Proof of theorem 1.1

We shall first prove (1.1) in the particular case where $\mathcal {F}\in C^{\infty }_{0}(\mathbb {R})$, that is, $\mathcal {F}$ is an infinitely differentiable function having compact support.

To that end, let $\mathcal {F}$ be a test function as above, and note that the assumptions on $\mathcal {F}$ imply that $|\mathcal {F}(w)-\mathcal {F}(y)|\leq c_{\mathcal {F}}|w-y|$ for all $w,\,y$, where $c_{\mathcal {F}}>0$ is some constant which depends on $\mathcal {F}$. It follows that for any integer $M\geq 1$ and any $X>0$, we have

(6.1)\begin{equation} \frac{1}{X}\int\limits_{ X}^{2X}\mathcal{F}\big(\widehat{\mathcal{E}}_{1}(x)\big){\rm d}x=\frac{1}{X}\int\limits_{ X}^{2X}\mathcal{F}\Big(\sum_{m\leq M}\phi_{{\scriptscriptstyle 1,m}}\big(\sqrt{m}x\big)\Big){\rm d}x+\mathscr{E}_{\mathcal{F}}\big(X,M\big), \end{equation}

where $\phi _{{\scriptscriptstyle 1,m}}(t)$ is defined as in proposition 3.1, and the remainder term $\mathscr {E}_{\mathcal {F}}(X,\,M)$ satisfies the bound

(6.2)\begin{equation} \big|\mathscr{E}_{\mathcal{F}}\big(X,M\big)\big|\leq c_{\mathcal{F}}\,\frac{1}{X}\int\limits_{X}^{2X}\Big|\widehat{\mathcal{E}}_{1}(x)-\sum_{m\leq M}\phi_{{\scriptscriptstyle 1,m}}\big(\sqrt{m}x\big)\Big|{\rm d}x. \end{equation}

By proposition 3.1 it follows that

(6.3)\begin{equation} \lim_{M\to\infty}\limsup_{X\to\infty}\big|\mathscr{E}_{\mathcal{F}}\big(X,M\big)\big|=0. \end{equation}

Denoting by $\widehat {\mathcal {F}}$ the Fourier transform of $\mathcal {F}$, the assumptions on the test function $\mathcal {F}$ allow us to write

(6.4)\begin{equation} \frac{1}{X}\int\limits_{ X}^{2X}\mathcal{F}\Big(\sum_{m\,\leq\,M}\phi_{{\scriptscriptstyle 1,m}}\big(\sqrt{m}x\big)\Big){\rm d}x=\int\limits_{-\infty}^{\infty}\widehat{\mathcal{F}}(\alpha)\mathfrak{M}_{X}(\alpha;M){\rm d}\alpha, \end{equation}

where $\mathfrak {M}_{X}(\alpha ;M)$ is defined at the beginning of § 4. Letting $X\to \infty$ in (6.4), we have by (4.2) in proposition 4.2 together with an application of Lebesgue's dominated convergence theorem

(6.5)\begin{equation} \lim_{X\to\infty}\frac{1}{X}\int\limits_{ X}^{2X}\mathcal{F}\Big(\sum_{m\,\leq\,M}\phi_{{\scriptscriptstyle 1,m}}\big(\sqrt{m}x\big)\Big){\rm d}x=\int\limits_{-\infty}^{\infty}\widehat{\mathcal{F}}(\alpha)\mathfrak{M}(\alpha;M){\rm d}\alpha, \end{equation}

where $\mathfrak {M}(\alpha ;M)=\prod _{m\,\leq \,M}\Phi _{{\scriptscriptstyle 1,m}}(\alpha )$, and $\Phi _{{\scriptscriptstyle 1,m}}(\alpha )$ is defined at the beginning of § 4. Letting $M\to \infty$ in (6.5), and recalling that $\mathcal {P}_{1}(\alpha )=\widehat {\Phi }_{{\scriptscriptstyle 1}}(\alpha )$ where $\Phi _{{\scriptscriptstyle 1}}(\alpha )=\prod _{m=1}^{\infty }\Phi _{{\scriptscriptstyle 1,m}}(\alpha )$, we have by Lebesgue's dominated convergence theorem

(6.6)\begin{equation} \begin{aligned} \lim_{M\to\infty}\lim_{X\to\infty}\frac{1}{X}\int\limits_{ X}^{2X}\mathcal{F}\Big(\sum_{m\,\leq\,M}\phi_{{\scriptscriptstyle 1,m}}\big(\sqrt{m}x\big)\Big){\rm d}x & =\int\limits_{-\infty}^{\infty}\widehat{\mathcal{F}}(\alpha)\Phi_{{\scriptscriptstyle1}}(\alpha){\rm d}\alpha\\ & =\int\limits_{-\infty}^{\infty}\mathcal{F}(\alpha)\mathcal{P}_{1}(\alpha){\rm d}\alpha, \end{aligned} \end{equation}

where in the second equality we made use of Parseval's theorem, which is justified by the decay estimate (4.4) for $\mathcal {P}^{(j)}_{1}(\alpha )$ with $\alpha$ real stated in proposition 4.2. It follows from (6.4), (6.5) and (6.6) that

(6.7)\begin{equation} \frac{1}{X}\int\limits_{ X}^{2X}\mathcal{F}\Big(\sum_{m\,\leq\,M}\phi_{{\scriptscriptstyle 1,m}}\big(\sqrt{m}x\big)\Big){\rm d}x=\int\limits_{-\infty}^{\infty}\mathcal{F}(\alpha)\mathcal{P}_{1}(\alpha){\rm d}\alpha+\mathscr{E}^{\flat}_{\mathcal{F}}\big(X,M\big), \end{equation}

where the remainder term $\mathscr {E}^{\flat }_{\mathcal {F}}(X,\,M)$ satisfies

(6.8)\begin{equation} \lim_{M\to\infty}\lim_{X\to\infty}\big|\mathscr{E}^{\flat}_{\mathcal{F}}\big(X,M\big)\big|=0. \end{equation}

Inserting (6.7) into the RHS of (6.1), we deduce from (6.3) and (6.8) that

(6.9)\begin{equation} \begin{aligned} & \limsup_{X\to\infty}\bigg|\frac{1}{X}\int\limits_{ X}^{2X}\mathcal{F}\big(\widehat{\mathcal{E}}_{1}(x)\big){\rm d}x-\int\limits_{-\infty}^{\infty}\mathcal{F}(\alpha)\mathcal{P}_{1}(\alpha){\rm d}\alpha\bigg|\\ & \quad\leq\lim_{M\to\infty}\lim_{X\to\infty}\big|\mathscr{E}^{\flat}_{\mathcal{F}}\big(X,M\big)\big|+\lim_{M\to\infty}\limsup_{X\to\infty}\big|\mathscr{E}_{\mathcal{F}}\big(X,M\big)\big|=0. \end{aligned} \end{equation}

We conclude that

(6.10)\begin{equation} \lim\limits_{X\to\infty}\frac{1}{X}\int\limits_{X}^{2X}\mathcal{F}\big(\widehat{\mathcal{E}}_{1}(x)\big){\rm d}x=\int\limits_{-\infty}^{\infty}\mathcal{F}(\alpha)\mathcal{P}_{1}(\alpha){\rm d}\alpha, \end{equation}

whenever $\mathcal {F}\in C^{\infty }_{0}(\mathbb {R})$.

The result (6.10) extends easily to include the class $C_{0}(\mathbb {R})$ of continuous functions with compact support. To see this, fix a smooth bump function $\varphi (y)\geq 0$ supported in $[-1,\,1]$ having total mass $1$, and for an integer $n\geq 1$ let $\varphi _{n}(y)=n\varphi (ny)$. Given $\mathcal {F}\in C_{0}(\mathbb {R})$, let $\mathcal {F}_{n}=\mathcal {F}\star \varphi _{n}\in C^{\infty }_{0}(\mathbb {R})$, where $\star$ denotes the Euclidean convolution operator. We then have

(6.11)\begin{equation} \begin{aligned} & \bigg|\frac{1}{X}\int\limits_{X}^{2X}\mathcal{F}\big(\widehat{\mathcal{E}}_{1}(x)\big){\rm d}x-\int\limits_{-\infty}^{\infty}\mathcal{F}(\alpha)\mathcal{P}_{1}(\alpha){\rm d}\alpha\bigg|\\ & \quad\leq\bigg|\frac{1}{X}\int\limits_{X}^{2X}\mathcal{F}_{n}\big(\widehat{\mathcal{E}}_{1}(x)\big){\rm d}x-\int\limits_{-\infty}^{\infty}\mathcal{F}(\alpha)\mathcal{P}_{1}(\alpha){\rm d}\alpha\bigg|+\max_{y\in\mathbb{R}}\big|\mathcal{F}(y)-\mathcal{F}_{n}(y)\big|. \end{aligned} \end{equation}

Since $\max_{y\in\mathbb{R}}\big|\mathcal{F}(y)-\mathcal{F}_{n}(y)\big|\to0$ as $n\to \infty$, and

(6.12)\begin{equation} \begin{aligned} \int\limits_{-\infty}^{\infty}\mathcal{F}(\alpha)\mathcal{P}_{1}(\alpha){\rm d}\alpha & =\lim_{n\to\infty}\int\limits_{-\infty}^{\infty}\mathcal{F}_{n}(\alpha)\mathcal{P}_{1}(\alpha){\rm d}\alpha\\ & =\lim_{n\to\infty}\lim\limits_{X\to\infty}\frac{1}{X}\int\limits_{X}^{2X}\mathcal{F}_{n}\big(\widehat{\mathcal{E}}_{1}(x)\big){\rm d}x, \end{aligned} \end{equation}

it follows that (6.10) holds whenever $\mathcal {F}\in C_{0}(\mathbb {R})$.
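
The smoothing step above uses only the standard fact that if $\mathcal{F}$ is Lipschitz with constant $c$, then $\max_{y}|\mathcal{F}(y)-\mathcal{F}_{n}(y)|\leq c/n$, since $\varphi_{n}$ is a probability density supported in $[-1/n,1/n]$. A discretized illustration of this convergence (our own sketch; hat function for $\mathcal{F}$, the standard smooth bump for $\varphi$, Riemann sums for the convolution):

```python
import math

def phi(y):
    # standard smooth bump supported in (-1, 1), not yet normalized
    return math.exp(-1.0 / (1.0 - y * y)) if abs(y) < 1 else 0.0

h = 1e-3
ts = [i * h for i in range(-1000, 1001)]
mass = sum(phi(t) for t in ts) * h   # normalizing constant

def F(y):
    # hat function: Lipschitz with constant 1, supported in [-1, 1]
    return max(0.0, 1.0 - abs(y))

def F_n(y, n):
    # Riemann sum for (F * phi_n)(y) after substituting t -> t/n
    return sum(F(y - t / n) * phi(t) for t in ts) * h / mass

def sup_err(n):
    # sup of |F - F_n| sampled on a grid covering the support
    ys = [i * 0.01 for i in range(-150, 151)]
    return max(abs(F(y) - F_n(y, n)) for y in ys)

e5, e20 = sup_err(5), sup_err(20)
assert e20 < e5                  # the approximation improves with n
assert e20 <= 1 / 20 + 1e-2      # Lipschitz bound, up to grid error
```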

Let us now consider the general case in which $\mathcal {F}$ is a continuous function of polynomial growth, say $|\mathcal {F}(\alpha )|\ll |\alpha |^{j}$ for all sufficiently large $|\alpha |$, where $j\geq 1$ is some integer. Let $\psi \in C^{\infty }_{0}(\mathbb {R})$ satisfy $0\leq \psi (y)\leq 1$, $\psi (y)=1$ for $|y|\leq 1$, and set $\psi _{n}(y)=\psi (y/n)$. Define $\mathcal {F}_{n}(y)=\mathcal {F}(y)\psi _{n}(y)\in C_{0}(\mathbb {R})$. For $n$ sufficiently large we have by proposition 5.1

(6.13)\begin{equation} \begin{aligned} \frac{1}{X}\int\limits_{X}^{2X}\mathcal{F}\big(\widehat{\mathcal{E}}_{1}(x)\big){\rm d}x & =\frac{1}{X}\int\limits_{X}^{2X}\mathcal{F}_{n}\big(\widehat{\mathcal{E}}_{1}(x)\big){\rm d}x+O\Bigg(\frac{1}{n^{j}}\frac{1}{X}\int\limits_{ X}^{2X}\widehat{\mathcal{E}}^{2j}_{1}(x){\rm d}x\Bigg)\\ & =\frac{1}{X}\int\limits_{X}^{2X}\mathcal{F}_{n}\big(\widehat{\mathcal{E}}_{1}(x)\big){\rm d}x+O\left(\frac{1}{n^{j}}\right), \end{aligned} \end{equation}

where the implied constant depends only on $j$ and the implicit constant appearing in the relation $|\mathcal {F}(\alpha )|\ll |\alpha |^{j}$. Since $\mathcal {F}_{n}\to \mathcal {F}$ pointwise as $n\to \infty$, it follows from the rapid decay of $\mathcal {P}_{1}(\alpha )$ that

(6.14)\begin{equation} \begin{aligned} \int\limits_{-\infty}^{\infty}\mathcal{F}(\alpha)\mathcal{P}_{1}(\alpha){\rm d}\alpha & =\lim_{n\to\infty}\int\limits_{-\infty}^{\infty}\mathcal{F}_{n}(\alpha)\mathcal{P}_{1}(\alpha){\rm d}\alpha\\ & =\lim_{n\to\infty}\lim\limits_{X\to\infty}\frac{1}{X}\int\limits_{X}^{2X}\mathcal{F}_{n}\big(\widehat{\mathcal{E}}_{1}(x)\big){\rm d}x\,. \end{aligned} \end{equation}

We conclude from (6.13) and (6.14) that $\frac {1}{X}\int _{X}^{2X}\mathcal {F}(\widehat {\mathcal {E}}_{1}(x)){\rm d}x\to \int _{-\infty }^{\infty }\mathcal {F}(\alpha )\mathcal {P}_{1}(\alpha ){\rm d}\alpha$ as $X\to \infty$. It follows that (6.10) holds for all continuous functions of polynomial growth. The extension to include the class of (piecewise)-continuous functions of polynomial growth is now straightforward, and so (1.1) is proved.

The decay estimates (1.2) stated in theorem 1.1 have already been proved in proposition 4.2, and it remains to show that $\mathcal {P}_{1}(\alpha )$ defines a probability density. To that end, we note that the LHS of (6.10) is real and non-negative whenever $\mathcal {F}$ is. Since $\mathcal {P}_{1}(\alpha )$ is continuous, by choosing a suitable test function $\mathcal {F}$ in (6.10), we conclude that $\mathcal {P}_{1}(\alpha )\geq 0$ for real $\alpha$. Taking $\mathcal {F}\equiv 1$ in (6.10), we find $\int _{-\infty }^{\infty }\mathcal {P}_{1}(\alpha ){\rm d}\alpha =1$. The proof of theorem 1.1 is therefore complete.

The proof of theorem 1.3 is now immediate.

Proof of theorem 1.3

In the particular case where $\mathcal {F}(\alpha )=\alpha ^{j}$ with $j\geq 1$ an integer, we have by theorem 1.1

(6.15)\begin{equation} \lim\limits_{X\to\infty}\frac{1}{X}\int\limits_{ X}^{2X}\widehat{\mathcal{E}}^{j}_{1}(x){\rm d}x=\int\limits_{-\infty}^{\infty}\alpha^{j}\mathcal{P}_{1}(\alpha){\rm d}\alpha. \end{equation}

It immediately follows from (5.1) in proposition 5.1 that

(6.16)\begin{equation} \int\limits_{-\infty}^{\infty}\alpha^{j}\mathcal{P}_{1}(\alpha){\rm d}\alpha=\sum_{s=1}^{j}\underset{\,\,\ell_{1},\,\ldots\,,\ell_{s}\geq1}{\sum_{\ell_{1}+\cdots+\ell_{s}=j}}\,\,\frac{j!}{\ell_{1}!\cdots\ell_{s}!}\underset{\,\,\,m_{s}>\cdots>m_{1}}{\sum_{m_{1},\,\ldots\,,m_{s}=1}^{\infty}}\prod_{i=1}^{s}\Xi(m_{i},\ell_{i}), \end{equation}

where the series on the RHS of (6.16) converges absolutely, and for integers $m,\,\ell \geq 1$, the term $\Xi (m,\,\ell )$ is given as in (1.4).

By definition $\Xi (m,\,1)=0$, and so it follows from (6.16) that ${\int _{-\infty }^{\infty }\alpha \mathcal {P}_{1}(\alpha ){\rm d}\alpha =0}$. Suppose now that $j\geq 3$ is an integer which satisfies $j\equiv 1\,(2)$, and consider a summand

(6.17)\begin{equation} \underset{m_{s}>\cdots>m_{1}}{\sum_{m_{1},\ldots,m_{s}=1}^{\infty}}\prod_{i=1}^{s}\Xi(m_{i},\ell_{i}), \end{equation}

with $\ell _{1}+\cdots +\ell _{s}=j$. We may clearly assume that $\ell _{1},\,\ldots,\,\ell _{s}\geq 2$, for otherwise (6.17) vanishes. As $j$ is odd, it follows that $|\{1\leq i\leq s: \ell _{i}\equiv 1\,(2)\}|$ is also odd. Since $\Xi (m,\,\ell )<0$ for $\ell >1$ odd, and $\Xi (m,\,\ell )>0$ for $\ell$ even, it follows that (6.17) is strictly negative. We conclude that $\int _{-\infty }^{\infty }\alpha ^{j}\mathcal {P}_{1}(\alpha ){\rm d}\alpha <0$. The proof of theorem 1.3 is therefore complete.

Acknowledgements

I would like to thank Professor Amos Nevo for his support and encouragement throughout the writing of this paper. I would also like to thank Professor Zeév Rudnick for a stimulating discussion on this fascinating subject, which subsequently led to the writing of this paper.

References

1. Bleher, P. M. On the distribution of the number of lattice points inside a family of convex ovals. Duke Math. J. 67 (1992), 461–481.
2. Bleher, P. M., Cheng, Z., Dyson, F. J. and Lebowitz, J. L. Distribution of the error term for the number of lattice points inside a shifted circle. Comm. Math. Phys. 154 (1993), 433–469.
3. Epstein, P. Zur Theorie allgemeiner Zetafunctionen. Math. Ann. 56 (1903), 615–644.
4. Garg, R., Nevo, A. and Taylor, K. The lattice point counting problem on the Heisenberg groups. Ann. Inst. Fourier 63 (2015), 2199–2233.
5. Gath, Y. A. The solution of the sphere problem for the Heisenberg group. J. Ramanujan Math. Soc. 35 (2020), 149–157.
6. Gath, Y. A. On an analogue of the Gauss circle problem for the Heisenberg groups. Ann. Sc. Norm. Super. Pisa Cl. Sci. XXIII (2022), 645–717.
7. Gath, Y. A. On the distribution of the number of lattice points in norm balls on the Heisenberg groups. Q. J. Math. 73 (2022), 885–935.
8. Heath-Brown, D. R. The distribution and moments of the error term in the Dirichlet divisor problem. Acta Arith. 60 (1992), 389–415.
9. Iwaniec, H. Spectral methods of automorphic forms. Amer. Math. Soc., Providence, RI; Rev. Mat. Iberoam., Madrid, 2002.
10. Iwaniec, H. and Kowalski, E. Analytic number theory. Colloquium Publications 53, Amer. Math. Soc., Providence, RI, 2004.
11. Lau, Y. K. and Tsang, K. M. Moments of the probability density functions of error terms in divisor problems. Proc. Amer. Math. Soc. 133 (2005), 1283–1290.
12. Montgomery, H. L. and Vaughan, R. C. Hilbert's inequality. J. Lond. Math. Soc. (2) 8 (1974), 73–82.
13. Vaaler, J. D. Some extremal functions in Fourier analysis. Bull. Amer. Math. Soc. (N.S.) 12 (1985), 183–216.
14. Wenguang, Z. On higher-power moments of $\Delta (x)$ (II). Acta Arith. 114 (2004), 35–54.
15. Wigert, S. Sur l'ordre de grandeur du nombre des diviseurs d'un entier. Ark. Mat. 3 (1906–1907), 1–9.
16. Wintner, A. On the lattice problem of Gauss. Amer. J. Math. 63 (1941), 619–627.