
Smooth integers and de Bruijn's approximation $\Lambda$

Published online by Cambridge University Press:  31 October 2023

Ofir Gorodetsky*
Affiliation:
Mathematical Institute, University of Oxford, Oxford OX2 6GG, UK ([email protected])

Abstract

This paper is concerned with the relationship between $y$-smooth integers and de Bruijn's approximation $\Lambda (x,\,y)$. Under the Riemann hypothesis, Saias proved that the count of $y$-smooth integers up to $x$, $\Psi (x,\,y)$, is asymptotic to $\Lambda (x,\,y)$ when $y \ge (\log x)^{2+\varepsilon }$. We extend the range to $y \ge (\log x)^{3/2+\varepsilon }$ by introducing a correction factor that takes into account the contributions of zeta zeros and prime powers. We use this correction term to uncover a lower-order term in the asymptotics of $\Psi (x,\,y)/\Lambda (x,\,y)$. The term relates to the error term in the prime number theorem, and implies that large positive (resp. negative) values of $\sum _{n \le y} \Lambda (n)-y$ lead to large positive (resp. negative) values of $\Psi (x,\,y)-\Lambda (x,\,y)$, and vice versa. Under the Linear Independence hypothesis, we show a Chebyshev-type bias in $\Psi (x,\,y)-\Lambda (x,\,y)$.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
Copyright © The Author(s), 2023. Published by Cambridge University Press on behalf of The Royal Society of Edinburgh

1. Introduction

A positive integer is called $y$-smooth if each of its prime factors does not exceed $y$. We denote the number of $y$-smooth integers not exceeding $x$ by $\Psi (x,\,y)$. We assume throughout $x \ge y \ge 2$. Let $\rho \colon [0,\,\infty ) \to (0,\,\infty )$ be the Dickman function, defined as $\rho (t)=1$ for $t \in [0,\,1]$ and via the delay differential equation $t \rho '(t) =-\rho (t-1)$ for $t>1$. Dickman [7] showed that

(1.1)\begin{equation} \Psi(x,y) \sim x \rho ( \log x/\log y) \quad (x \to \infty) \end{equation}

holds when $y \ge x^{\varepsilon }$. For this reason, it is useful to introduce

\[ u:=\log x/\log y. \]
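As a concrete illustration of (1.1) (a numerical sketch in Python, not part of the paper; the function names `dickman_rho` and `smooth_count` are ours), $\rho$ can be computed from the delay differential equation and compared with a brute-force count of smooth integers:

```python
import math

def dickman_rho(u, n=20000):
    """Approximate the Dickman function by integrating t*rho'(t) = -rho(t-1)
    on a uniform grid over [0, u], with rho = 1 on [0, 1]."""
    if u <= 1:
        return 1.0
    h = u / n
    grid = [1.0] * (n + 1)  # grid[i] approximates rho(i*h)

    def integrand(s):
        # rho(s - 1)/s, with rho(s - 1) linearly interpolated from the grid
        j = s - 1.0
        if j <= 0:
            return 1.0 / s
        k = int(j / h)
        frac = j / h - k
        return (grid[k] * (1 - frac) + grid[min(k + 1, n)] * frac) / s

    for i in range(1, n + 1):
        t = i * h
        if t > 1:
            grid[i] = grid[i - 1] - h * (integrand(t - h) + integrand(t)) / 2
    return grid[n]

def smooth_count(x, y):
    """Brute-force Psi(x, y): integers n <= x with all prime factors <= y."""
    count = 0
    for m in range(1, int(x) + 1):
        n = m
        for p in range(2, int(y) + 1):
            while n % p == 0:
                n //= p
        count += n == 1
    return count
```

For instance $\rho(2)=1-\log 2\approx 0.3069$, and for $x=10^4$, $y=100$ (so $u=2$) the ratio $\Psi(x,y)/x$ exceeds $x\rho(2)/x$ by an amount of the order predicted by the secondary term in (1.2) below.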

De Bruijn [3, Eqs. (1.3), (4.6)] showed that

(1.2)\begin{equation} \Psi(x,y)- x\rho(u) \sim (1-\gamma)\frac{x\rho(u-1)}{\log x}>0 \end{equation}

when $x \to \infty$ and $(\log x)/2 >\log y > (\log x)^{5/8}$. Here and later $\gamma$ is the Euler–Mascheroni constant. As we see, there is no arithmetic information in the leading behaviour of the error term $\Psi (x,\,y) - x \rho (u)$, and in particular it does not oscillate. Moreover, the error term is large: the saving that (1.2) gives over the main term is merely $\asymp \log (u+1)/\log y$ [3, p. 56].

This raises the question: what is the correct main term for $\Psi (x,\,y)$, one that leads to a small and arithmetically rich error term? De Bruijn [3, Eq. (2.9)] introduced a refinement of $\rho$, often denoted $\lambda _y$:

\[ \lambda_y(u) := \int_{0}^{\infty} \rho\left(u-\frac{\log t}{\log y}\right)d\left(\frac{\lfloor t \rfloor}{t}\right) = \int_{\mathbb{R}} \rho(u-v)d\left(\frac{\lfloor y^v \rfloor}{y^v}\right) \]

if $y^u \notin \mathbb {Z}$; otherwise $\lambda _y(u)=\lambda _y(u+)$ (one has $\lambda _y(u)= \lambda _y(u-)+O(1/x)$ if $y^u \in \mathbb {Z}$ [3, p. 54]). The count $\Psi (x,\,y)$ should be compared to

\[ \Lambda(x,y):= x \lambda_y(u). \]

We refer the reader to de Bruijn's original paper for the motivation for this definition. In particular, $\Lambda$ satisfies the following continuous variant of Buchstab's identity:

\[ \Lambda(x,y)=\Lambda(x,z)-\int_{y}^{z}\Lambda\left(\frac{x}{t},t\right)\frac{{\rm d}t}{\log t} \]

for $y \le z$, to be compared with $\Psi (x,\,y)=\Psi (x,\,z)-\sum _{y < p \le z}\Psi (x/p,\,p)$. De Bruijn proved [3, Eq. (1.4)]

(1.3)\begin{equation} \Lambda(x,y)=x\rho(u)\left(1+O_{\varepsilon}\left(\frac{\log(u+1)}{\log y}\right)\right) \end{equation}

holds for $\log y > \sqrt {\log x}$. Saias [17, Lem. 4] improved the range to $y \ge (\log x)^{1+\varepsilon }$. De Bruijn and Saias also provided an asymptotic series expansion for $\lambda _y(u)$ in (roughly) powers of $\log (u+1)/\log y$. Hildebrand and Tenenbaum [14, Lem. 3.1] showed that

(1.4)\begin{equation} \Lambda(x,y) \asymp_{\varepsilon} x\rho(u) \end{equation}

for $y \ge (\log x)^{1+\varepsilon }$. Implicit in the proof of proposition 4.1 of de la Bretèche and Tenenbaum [5] is the estimate

(1.5)\begin{equation} \Lambda(x,y) = x \rho(u) K\left( - \frac{\xi(u)}{\log y}\right) \left(1 + O_{\varepsilon}\left( \frac{1}{\log x}\right)\right), \quad K(t):=\frac{t\zeta(t+1)}{t+1}, \end{equation}

for $y \ge (\log x)^{1+\varepsilon }$ where $\zeta$ is the Riemann zeta function and $\xi \colon [1,\,\infty ) \to [0,\,\infty )$ is defined via

\[ e^{\xi(u)}=1+u\xi(u). \]
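The saddle point $\xi(u)$ has no closed form but is easy to compute by Newton's method (a sketch, not code from the paper; the starting point is a heuristic of ours):

```python
import math

def xi(u):
    """Positive solution of e^xi = 1 + u*xi for u > 1 (xi(1) = 0), by Newton."""
    if u <= 1.0:
        return 0.0
    # start to the right of the minimum of e^x - 1 - u*x, located at x = log u
    x = max(1.0, math.log(u * max(math.log(u), 1.0) + 1.0))
    for _ in range(100):
        step = (math.exp(x) - 1.0 - u * x) / (math.exp(x) - u)
        x -= step
        if abs(step) < 1e-14:
            break
    return x
```

One finds, e.g., $\xi(10) \approx 3.61$, consistent with the growth $\xi(u) = \log u + \log\log u + O(\log\log u/\log u)$ recorded in lemma 2.1 below.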

We include as an appendix a proof in English of (1.5). The function $K$ originates in de Bruijn's work [3, Eq. (2.8)]. Evidently, $K(0)=1$ and $\lim _{t \to -1^+} K(t)= \infty$. Moreover, $K$ is strictly decreasing in $(-1,\,0]$ [9].

Suppose $\pi (x)=\mathrm {Li}(x)(1+O(\exp (-(\log x)^{a})))$ for some $a \in (0,\,1)$. Saias [17, Thm.], improving on de Bruijn [3], proved that

(1.6)\begin{equation} \Psi(x,y) = \Lambda(x,y)(1 + O_{\varepsilon}(\exp(-(\log y)^{a-\varepsilon}))) \end{equation}

holds in the range $\log y \ge (\log \log x)^{{1}/{a}+\varepsilon }$. By the Vinogradov–Korobov zero-free region, we may take $a=3/5$. Saias writes without proof [17, p. 81] that under the Riemann hypothesis (RH) his methods give

(1.7)\begin{equation} \Psi(x,y) = \Lambda(x,y) (1+O_{\varepsilon}(y^{\varepsilon-1/2}\log x)) \end{equation}

in the range $y \ge (\log x)^{2+\varepsilon }$, which recovers a conditional result of Hildebrand [11].
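The discrete Buchstab identity $\Psi(x,y)=\Psi(x,z)-\sum_{y<p\le z}\Psi(x/p,p)$ quoted above is exact: each $z$-smooth but not $y$-smooth $n\le x$ factors uniquely as $n=pm$ with largest prime factor $p\in(y,z]$ and $m$ a $p$-smooth integer $\le x/p$. A brute-force computation confirms this (a sketch, not part of the paper):

```python
def smooth_count(x, y):
    """Brute-force Psi(x, y): integers n <= x with all prime factors <= y."""
    count = 0
    for m in range(1, int(x) + 1):
        n = m
        for p in range(2, int(y) + 1):
            while n % p == 0:
                n //= p
        count += n == 1
    return count

def primes_between(a, b):
    """Primes p with a < p <= b, by trial division."""
    return [p for p in range(max(2, int(a) + 1), int(b) + 1)
            if all(p % d for d in range(2, int(p ** 0.5) + 1))]

x, y, z = 5000, 7, 31
lhs = smooth_count(x, y)
rhs = smooth_count(x, z) - sum(smooth_count(x // p, p) for p in primes_between(y, z))
assert lhs == rhs  # the identity is exact, not merely asymptotic
```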

1.1 The function $G$

Define the entire function $I(s)=\int _{0}^{s} \tfrac {e^v-1}{v}\,{\rm d}v$. As shown in [14, Lem. 2.6], the Laplace transform of $\rho$ is

(1.8)\begin{equation} \hat{\rho}(s) := \int_{0}^{\infty} e^{{-}sv}\rho(v)\,{\rm d}v = \exp( \gamma + I({-}s)) \end{equation}

for all $s \in \mathbb {C}$. In [9] we studied in detail the ratio

\[ G(s,y) := \zeta(s,y) / F(s,y) \]

where

\[ \zeta(s,y):=\prod_{p \le y} (1-p^{{-}s})^{{-}1}=\sum_{n \text{ is }y\text{-smooth}} n^{{-}s} \quad (\Re s >0) \]

is the partial zeta function and

(1.9)\begin{equation} F(s,y):= \hat{\rho}((s-1)\log y)\zeta(s)(s-1)\log y. \end{equation}

The function $G(s,\,y)$ is defined for $\Re s>0$ such that $\zeta (s) \neq 0$. Informally, $G$ carries information about the ratio $\Psi (x,\,y)/\Lambda (x,\,y)$, since $s\mapsto \zeta (s,\,y)/s$ is the Mellin transform of $x\mapsto \Psi (x,\,y)$ while $s\mapsto F(s,\,y)/s$ is the Mellin transform of $x\mapsto \Lambda (x,\,y)$ [3, p. 54]. As in [9], it is essential to write $G$ as $G_1 G_2$ where

\begin{align*} \log G_1(s,y) & = \sum_{n \le y} \frac{\Lambda(n)}{n^{s}\log n}-(\log (\zeta(s)(s-1))+\log \log y+\gamma+ I((1-s)\log y)),\\ \log G_2(s,y) & = \sum_{k \ge 2} \sum_{y^{1/k} < p \le y} \frac{p^{{-}ks}}{k}. \end{align*}

We assume $\log \zeta (s)$ is chosen to be real when $s>1$.
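The factor $G_2$ is explicit: $\log G_2$ is a sum over prime powers $p^k>y$ with $p\le y$ and $k\ge 2$, dominated by the $k=2$ (prime squares) term once $\Re s > 1/2$. A numerical sketch (not part of the paper; function names are ours):

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [p for p in range(2, n + 1) if sieve[p]]

def log_G2_term(s, y, k, primes):
    """Contribution of exponent k: sum of p^{-ks}/k over p <= y with p^k > y."""
    logy = math.log(y)
    return sum(p ** (-k * s) / k for p in primes if k * math.log(p) > logy)

def log_G2(s, y, kmax=60):
    """log G_2(s, y) for real s > 0; the k-sum converges geometrically."""
    primes = primes_up_to(y)
    return sum(log_G2_term(s, y, k, primes) for k in range(2, kmax + 1))
```

For example, at $s=3/4$ and $y=20000$ the tail $k\ge 3$ contributes well under a third of the $k=2$ term.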

1.2 Main results

Let $\psi (y)=\sum _{n \le y}\Lambda (n)$ and

(1.10)\begin{equation} \beta:=1-\frac{\xi(u)}{\log y}. \end{equation}

Theorem 1.1 Assume RH. Fix $\varepsilon \in (0,\,1)$. Suppose that $x \ge C_{\varepsilon}$ and $x^{1-\varepsilon } \ge y \ge (\log x)^{2+\varepsilon }$. Then

(1.11)\begin{equation} \Psi(x,y) = \Lambda(x,y) G(\beta,y) \left( 1 + O_{\varepsilon}\left( \frac{\log (u+1)}{y \log y} \left( |\psi(y)-y| + y^{{1}/{2}}\right)\right)\right). \end{equation}

The following theorem gives an asymptotic formula for $\Psi (x,\,y)$ for $y$ smaller than $(\log x)^{2}$.

Theorem 1.2 Assume RH. Fix $\varepsilon \in (0,\,1/3)$. Suppose that $x \ge C_{\varepsilon}$ and $(\log x)^{3} \ge y \ge (\log x)^{4/3+\varepsilon }$. Then

(1.12)\begin{equation} \Psi(x,y) = \Lambda(x,y) G(\beta,y) \left( 1 + O_{\varepsilon}\left( \frac{(\log y)^3}{y^{{1}/{2}}} + \frac{(\log x)^3 (\log y)^3}{y^2}\right)\right). \end{equation}

If $y \le (\log x)^{2-\varepsilon }$ then the error term can be improved to $O_{\varepsilon } ((\log x)^3/(y^2 \log y))$.

Theorems 1.1 and 1.2, proved in § 4, show that

\[ \Psi(x,y) \sim \Lambda(x,y) G(\beta,y) \]

holds when $y/((\log x)^{3/2}(\log \log x)^{-1/2}) \to \infty$. This range is shown to be optimal in Theorem 2.14 of [9]. The same theorem also supplies an alternative proof of theorem 1.2 when $y \le (\log x)^{2-\varepsilon }$ (the proof can be adapted to cover $(\log x)^{2-\varepsilon } \le y \le (\log x)^3$ as well).

Hildebrand showed that RH is equivalent to $\Psi (x,\,y) \asymp _{\varepsilon } x\rho (u)$ for $y \ge (\log x)^{2+\varepsilon }$ [11]. He conjectured that $\Psi (x,\,y)$ is not of size $\asymp x\rho (u)$ when $y \le (\log x)^{2-\varepsilon }$ [12]. This was recently confirmed by the author [9]. It also follows (under RH) from theorem 1.2, since $\Lambda (x,\,y) \asymp _{\varepsilon } x\rho (u)$ for $y \ge (\log x)^{1+\varepsilon }$ while (under RH) $G(\beta,\,y) \to \infty$ when $y \le (\log x)^{2-\varepsilon }$ and $x \to \infty$ (this follows from the estimates for $G$ in [9]; see § 2).

Theorems 1.1 and 1.2 and their proofs have their origin in our work in the polynomial setting [10], where $\Psi (x,\,y)$ corresponds to the number of $m$-smooth polynomials of degree $n$ over a finite field, while $\Lambda (x,\,y)$ is analogous to the number of $m$-smooth permutations of $S_n$ (multiplied by $q^n/n!$). In that setting, the analogue of $G_1(s,\,y)$ is identically $1$ (the relevant zeta function has no zeros) which makes the analysis unconditional.

1.3 Applications: sign changes and biases

From theorem 1.1 we deduce in § 2.2 the following

Corollary 1.3 Assume RH. Fix $\varepsilon \in (0,\,1)$. Suppose that $x\ge C_{\varepsilon}$ and $x^{1-\varepsilon } \ge y \ge (\log x)^{2+\varepsilon }$. Then

\begin{align*} \Psi(x,y)/\Lambda(x,y) & = 1 +\frac{y^{-\beta}}{\log y} \bigg(- \sum_{|\rho| \le T} \frac{y^{\rho}}{\rho-\beta}\\ & \quad + \frac{y^{{1}/{2}}}{2\beta-1}+ O_{\varepsilon}\big(\frac{y^{{1}/{2}}}{\log y } +\frac{y\log^2(yT)}{T}+ \frac{|\psi(y)-y|+y^{{1}/{2}}}{u}\big)\bigg)\\ & = 1 + \frac{y^{-\beta}}{\log y} ( (\psi(y)-y)(1+O_{\varepsilon}(u^{{-}1})) + O_{\varepsilon}(y^{{1}/{2}}))\\ & =1 + O_{\varepsilon}((\log (u+1))(\log x)y^{-{1}/{2}}) \end{align*}

holds for $T \ge 4$, where the sum is over zeros of $\zeta$.

Corollary 1.3 implies that large positive (resp. negative) values of $\psi (y)-y$ lead to large positive (resp. negative) values of $\Psi (x,\,y) -\Lambda (x,\,y)$, and vice versa. Large and small values of $\psi (y)-y$ were exhibited by Littlewood [15, Thm. 15.11]. Note that corollary 1.3 sharpens (1.7) if $y \le x^{1-\varepsilon }$.
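The driving quantity $\psi(y)-y$ is directly computable for small $y$ (a sketch, not part of the paper): for instance $\psi(100)-100\approx -5.95$, a negative fluctuation of the kind that, by Littlewood's theorem, occurs with both signs at scale $\gg \sqrt{y}\,\log\log\log y$ infinitely often.

```python
import math

def psi_chebyshev(y):
    """psi(y) = sum_{n <= y} Lambda(n) = sum of log p over prime powers p^k <= y."""
    sieve = bytearray([1]) * (y + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(y ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, y + 1, p)))
    total = 0.0
    for p in range(2, y + 1):
        if sieve[p]:
            pk = p
            while pk <= y:
                total += math.log(p)
                pk *= p
    return total
```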

Let $\pi (x)$ be the count of primes up to $x$ and $\mathrm {Li}(x)$ be the logarithmic integral. It is known that $\pi (x)-\mathrm {Li}(x)$ is biased towards positive values in the following sense. Assuming RH and the Linear Independence hypothesis (LI) for zeros of $\zeta$, Rubinstein and Sarnak [16] showed that the set

\[ \{ x\ge 2 : \pi(x) > \mathrm{Li}(x)\} \]

has logarithmic density $\approx 0.999997$. This is an Archimedean analogue of the classical Chebyshev's bias on primes in arithmetic progressions. We use corollary 1.3 to exhibit a similar bias for smooth integers. Let us fix the value of $\beta =1-\xi (u)/\log y$ to be

\[ \beta=\beta_0 \]

where $\beta _0 \in (1/2,\,1)$. This amounts to restricting $x$ to be a function $x=x(y)$ of $y$ defined by

(1.13)\begin{equation} x= \exp\left(\frac{y^{1-\beta_0}-1}{1-\beta_0} \right). \end{equation}
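A quick consistency check of (1.13) (a numerical sketch, not from the paper): fixing $\beta=\beta_0$ means $\xi(u)=(1-\beta_0)\log y$, and substituting this into $e^{\xi}=1+u\xi$ gives exactly $\log x = u\log y = (y^{1-\beta_0}-1)/(1-\beta_0)$. Solving the saddle point equation numerically recovers $\beta=\beta_0$:

```python
import math

def xi(u):
    """Positive solution of e^xi = 1 + u*xi (Newton's method), for u > 1."""
    x = max(1.0, math.log(u * max(math.log(u), 1.0) + 1.0))
    for _ in range(100):
        step = (math.exp(x) - 1.0 - u * x) / (math.exp(x) - u)
        x -= step
        if abs(step) < 1e-14:
            break
    return x

beta0 = 0.7
for y in (10**3, 10**4, 10**5):
    logx = (y ** (1 - beta0) - 1) / (1 - beta0)  # equation (1.13)
    u = logx / math.log(y)
    beta = 1 - xi(u) / math.log(y)
    assert abs(beta - beta0) < 1e-9
```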

In particular, $y=(\log x)^{1/(1-\beta _0)+o(1)}$. Then corollary 1.3 shows

(1.14)\begin{align} \frac{\Psi(x(y),y)-\Lambda(x(y),y)}{\Lambda(x(y),y)} y^{\beta_0-\frac{1}{2}}\log y & ={-}\sum_{|\rho| \le T} \frac{y^{\rho-{1}/{2}}}{\rho-\beta_0} + \frac{1}{2\beta_0-1}\nonumber\\ & \quad + O_{\beta_0}\left( \frac{y^{{1}/{2}}\log^2 (yT)}{T}+\frac{1}{\log y}\right). \end{align}

Applying the formalism of Akbary et al. [1] to the right-hand side of (1.14), we immediately deduce

Corollary 1.4 Assume RH. Assume LI for $\zeta$. Fix $\beta _0 \in (1/2,\,1)$ and let $x$ be a function of $y$ defined as in (1.13). Then the set

\[ \{ y \ge 2: \Psi(x(y),y) > \Lambda(x(y),y) \} \]

has logarithmic density greater than $1/2$, and the left-hand side of (1.14) has a limiting distribution in logarithmic sense.

In the same way that Chebyshev's bias for primes relates to the contribution of prime squares, this is also the case for smooth integers. Writing $G$ as $G_1 G_2$ as in § 1.1, $G_2$ captures the contribution of proper powers of primes. When $\beta _0 \in (1/2,\,1)$, the only significant term in $G_2(\beta _0,\,y)$ is $k=2$, which corresponds to squares of primes. The squares lead to the term $y^{1/2}/(2\beta _0-1)$ in (1.14) which creates the bias.

Remark 1.5 Consider the arithmetic function $\alpha _y(n)$ defined implicitly via

\[ \sum_{n\ge 1} \frac{\alpha_y(n)}{n^s} = \exp\bigg(\sum_{m \le y} \frac{\Lambda(m)}{ \log m}\frac{1}{m^s}\bigg). \]

This function is supported on $y$-smooth numbers and coincides with the indicator of $y$-smooth numbers on squarefree integers. Working with the summatory function of $\alpha _y$ instead of $\Psi (x,\,y)$, the bias discussed above disappears. This is because, modifying the proof of theorem 1.1, one finds that

\[ \sum_{n \le x} \alpha_y(n) = \Lambda(x,y) G_1(\beta,y)\left( 1 + O_{\varepsilon}\left( \frac{\log (u+1)}{y \log y} ( |\psi(y)-y|+y^{{1}/{2}})\right)\right) \]

holds in $x^{1-\varepsilon } \ge y \ge (\log x)^{2+\varepsilon }$, meaning the bias-causing factor $G_2(\beta,\,y)$ does not arise. This is analogous to how the indicator function of primes is biased, while $\Lambda (n)/\log n$ is not.
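The coefficients $\alpha_y(n)$ can be generated by differentiating the defining identity: if $A(s)=\exp(B(s))$ then $A'=B'A$, which gives the recursion $\alpha_y(n)\log n=\sum_{m\mid n,\,m\le y}\Lambda(m)\,\alpha_y(n/m)$ over prime powers $m\le y$. The following sketch (not from the paper; names are ours) confirms the two stated properties for $y=10$:

```python
import math

def alpha_coeffs(y, N):
    """Coefficients alpha_y(n), n <= N, of exp(sum_{m <= y} Lambda(m)/log(m) m^{-s}),
    via the recursion a(n) log n = sum_{m | n, m <= y} Lambda(m) a(n/m)."""
    lam = [0.0] * (y + 1)  # von Mangoldt function up to y
    for p in range(2, y + 1):
        if all(p % d for d in range(2, int(p ** 0.5) + 1)):
            pk = p
            while pk <= y:
                lam[pk] = math.log(p)
                pk *= p
    a = [0.0] * (N + 1)
    a[1] = 1.0
    for n in range(2, N + 1):
        s = sum(lam[m] * a[n // m]
                for m in range(2, min(y, n) + 1) if n % m == 0)
        a[n] = s / math.log(n)
    return a

a = alpha_coeffs(10, 60)
# supported on 10-smooth n, and equal to 1 on squarefree 10-smooth n
assert all(abs(a[n] - 1.0) < 1e-9 for n in (2, 3, 5, 6, 7, 10, 15, 21, 30, 42))
assert all(abs(a[n]) < 1e-12 for n in (11, 13, 22, 26, 33, 44, 55))
```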

Remark 1.6 It is interesting to see if one can formulate and prove variants of corollaries 1.3 and 1.4 in the range $y \le (\log x)^{1-\varepsilon }$. In this range, an accurate main term for $\Psi (x,\,y)$ was established in [6].

1.4 Strategy behind theorems 1.1 and 1.2

We write $\Psi (x,\,y)$ as a Perron integral, at least for non-integer $x$:

\[ \Psi(x,y) = \frac{1}{2\pi i} \int_{(\sigma)} \zeta(s,y) \frac{x^s}{s} \,{\rm d}s \]

where $\sigma$ can be any positive real. For non-integer $x$ we also have

(1.15)\begin{equation} \Lambda(x,y)=\frac{1 }{2\pi i}\int_{(\sigma)} F(s,y) \frac{x^s}{s}\,{\rm d}s \end{equation}

whenever $\sigma >\varepsilon$ and $y \ge C_{\varepsilon}$. Indeed, the Laplace inversion formula expresses $\Lambda (x,\,y)$ as

(1.16)\begin{align} \Lambda(x,y)& =x\lambda_y(u)=\frac{x}{2\pi i}\int_{(c)}\hat{\lambda}_y(s)e^{us}\,{\rm d}s \nonumber\\ & =\frac{1 }{2\pi i}\int_{(1+{c}/{\log y})}( \hat{\lambda}_y((s-1)\log y) \log y) x^s \,{\rm d}s \end{align}

for any $c$ such that

(1.17)\begin{equation} \hat{\lambda}_y(s):=\int_{0}^{\infty} e^{{-}sv} \lambda_y(v)\,{\rm d}v, \end{equation}

converges absolutely for $\Re s \ge c$. In particular, we may take $c>-(\log y)/(1+\varepsilon )$ if we assume $y\ge C_{\varepsilon}$, as Saias showed; see corollary A.2. As shown by de Bruijn [3, Eq. (2.6)] (cf. [17, Lem. 6]),

\[ \hat{\lambda}_y(s) = \hat{\rho}(s) K(s/\log y). \]

By definition of $F$, (1.9), we can rewrite (1.16) as (1.15). As Saias does, we choose to work with $\sigma =\beta$, which is essentially a saddle point for $F(s,\,y)x^s$. If $x \ge y \ge (\log x)^{1+\varepsilon }$ and $x \ge C_{\varepsilon}$ then lemma 2.1 implies

\[ \beta \ge c_{\varepsilon}>0. \]

Saias proved (1.6) by showing that $\zeta (s,\,y)$ and $F(s,\,y)$ are close and so if we subtract

\[ \Psi(x,y) -\Lambda(x,y) = \frac{1}{2\pi i} \int_{(\beta)} (\zeta(s,y)-F(s,y)) \frac{x^s}{s}\,{\rm d}s \]

then we can bound the integral by using pointwise bounds for the integrand. Instead of subtracting $\Lambda (x,\,y)$, we subtract $\Lambda (x,\,y)$ times $G(\beta,\,y)$, which leads to

(1.18)\begin{equation} \Psi(x,y) =\Lambda(x,y) G(\beta,y) \left(1 + \frac{\Lambda(x,y)^{{-}1}}{2\pi i} \int_{(\beta)} \frac{G(s,y) - G(\beta,y)}{G(\beta,y)}F(s,y) \frac{x^s}{s} \,{\rm d}s\right). \end{equation}

We want to bound the integral in (1.18). The proof of theorem 1.1 considers separately the range

(1.19)\begin{equation} u \ge (\log y) ( \log \log y)^3 \end{equation}

and its complement. When $u$ satisfies (1.19), then in (1.18) one needs only small values of $\Im s$ to estimate the integral ($|\Im s| \le 1/\log y$) with arbitrary power saving in $y$. This is an unconditional observation established in proposition 3.1. However, for smaller $u$, one needs $|\Im s|$ going up to a power of $y$ if one desires power saving in $y$, which makes the proof more involved.

In our proofs, RH is only invoked at the very end to estimate $G_1$ and its derivatives. For instance, in the range where (1.19) and $y \ge (\log x)^{2+\varepsilon }$ hold, we prove in (4.12) the unconditional estimate

(1.20)\begin{align} \Psi(x,y) & = \Lambda(x,y) G(\beta,y) \left( 1+O_{\varepsilon}\left( \frac{\max_{|v|\le 1} |G'(\beta+iv,y)|}{G(\beta,y) \log x}\right.\right.\nonumber\\ & \quad + \left.\left. \frac{ \max_{|v|\le 1}|G''(\beta+iv,y)|}{G(\beta,y) (\log x)(\log y)} + \frac{1}{y}\right)\right). \end{align}

See (4.16) for a similar estimate for $u \le (\log y)(\log \log y)^3$. In particular, our proofs are easily modified to recover (1.6).

Conventions

The letters $C,\,c$ denote absolute positive constants that may change between different occurrences. We denote by $C_{\varepsilon },\,c_{\varepsilon }$ positive constants depending only on $\varepsilon$, which may also change between different occurrences. The notation $A \ll B$ means $|A| \le C B$ for some absolute constant $C$, and $A\ll _{\varepsilon } B$ means $|A| \le C_{\varepsilon } B$. We write $A \asymp B$ to mean $C_1 B \le A \le C_2 B$ for some absolute positive constants $C_i$, and $A \asymp _{\varepsilon } B$ means $C_i$ may depend on $\varepsilon$. The letter $\rho$ will always indicate a non-trivial zero of $\zeta$. When we differentiate a bivariate function, we always do so with respect to the first variable. We set

\[ L(y):=\exp((\log y)^{{3}/{5}}(\log \log y)^{-{1}/{5}}). \]

2. Preliminaries

2.1 Standard lemmas

Recall $\beta$ was defined in (1.10).

Lemma 2.1 [13, Lem. 1] For $u \ge 3$ we have $\xi (u) = \log u + \log \log u + O( (\log \log u) / \log u)$. In particular,

(2.1)\begin{equation} y^{1-\beta} \asymp u \log(u+1), \quad u \ge 1. \end{equation}
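Since $y^{1-\beta}=y^{\xi(u)/\log y}=e^{\xi(u)}=1+u\xi(u)$, estimate (2.1) is elementary to test numerically (a sketch, not from the paper):

```python
import math

def xi(u):
    """Positive solution of e^xi = 1 + u*xi for u > 1 (xi(1) = 0), by Newton."""
    if u <= 1.0:
        return 0.0
    x = max(1.0, math.log(u * max(math.log(u), 1.0) + 1.0))
    for _ in range(100):
        step = (math.exp(x) - 1.0 - u * x) / (math.exp(x) - u)
        x -= step
        if abs(step) < 1e-14:
            break
    return x

# e^{xi(u)} = y^{1-beta} should be comparable to u*log(u+1), uniformly in u
for u in (1.0, 2.0, 5.0, 50.0, 1000.0):
    ratio = math.exp(xi(u)) / (u * math.log(u + 1))
    assert 0.3 < ratio < 3.0
```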

Lemma 2.2 [2]

For $u \ge 1$ we have $\rho (u) \asymp e^{-u\xi +I(\xi )} u^{-1/2} = x^{\beta -1} e^{I(\xi )}u^{-1/2}$.

In the next lemmas we write $s \in \mathbb {C}$ as $s=\sigma + it$.

Lemma 2.3 [15, Cor. 10.5] For $|\sigma | \le A$ and $|t| \ge 1$, $|\zeta (s)| \asymp _A (|t|+4)^{1/2-\sigma }|\zeta (1-s)|$.

Lemma 2.4 [15, Cor. 1.17] Fix $\varepsilon >0$. For $\sigma \in [\varepsilon,\,2]$ and $|t| \ge 1$ we have

\[ \zeta(s) \ll_{\varepsilon} (1+(|t|+4)^{1-\sigma})\min\left\{\frac{1}{|\sigma-1|},\log (|t|+4)\right\}. \]

Lemma 2.5 [19, Thm. 7.2(A)]

We have, for $\sigma \in [1/2,\,2]$ and $T \ge 2$,

\[ \int_{1}^{T} |\zeta(\sigma+it)|^2 \,{\rm d}t \ll T \min\left\{ \log T, \frac{1}{\sigma-\frac{1}{2}}\right\}. \]

Lemma 2.6 [14, Lem. 2.7] The following bounds hold for $s=-\xi (u)+it$:

(2.2)\begin{equation} \hat{\rho}(s) =e^{\gamma+I({-}s)}= \begin{cases} O\left(\exp\left(I(\xi)-\frac{t^2u}{2\pi^2}\right)\right) & \mbox{if }|t| \le \pi,\\ O\left(\exp\left(I(\xi)-\frac{u}{\pi^2+\xi^2}\right)\right) & \mbox{if }|t| \ge \pi,\\ \frac{1}{s} + O\left( \frac{1+u \xi }{|s|^2} \right) & \mbox{if }1+u\xi=O(|t|).\end{cases} \end{equation}

The third case of lemma 2.6 is usually stated in the range $1+u\xi \le |t|$, but the same proof works for $1+u\xi = O(|t|)$. Since $1+u\xi =e^{\xi }$, the third case can also be written as

(2.3)\begin{equation} s\hat{\rho}(s) = 1+O( e^{-\sigma}/|t|) \end{equation}

for $s=\sigma +it$, assuming $\sigma <0$ and $e^{-\sigma } =O(|t|)$. The following lemma is a variant of [13, Lem. 8], proved in the same way.

Lemma 2.7 [13]

Fix $\varepsilon >0$. Suppose $x \ge y \ge (\log x)^{1+\varepsilon }$ and $x \ge C_{\varepsilon}$. For $|t| \le 1/\log y$,

\[ \left|\frac{\zeta(\beta+it,y)}{\zeta(\beta,y)}\right| \le \exp({-}ct^2 (\log x )(\log y) ). \]

For $1/\log y \le |t| \le \exp ((\log y)^{3/2-\varepsilon })$,

(2.4)\begin{equation} \frac{\zeta(\beta+it,y)}{\zeta(\beta,y)} \ll_{\varepsilon} \exp\left(-\frac{c ut^2}{(1-\beta)^2+t^2}\right). \end{equation}

2.2 More on $G$

Lemma 2.8 [9]

Fix $0 \le i \le 4$. Let $y \ge 4$. Let $s \in \mathbb {C}$ with $\Re s \in [0,\,1]$ and the property that

(2.5)\begin{equation} \min_{\zeta(\rho)=0,\ t \ge 0} |\rho-s-t| \gg 1. \end{equation}

Then for $T \ge 3+|\Im s|$ we have

(2.6)\begin{align} (\log G_1)^{(i)}(s,y)& ={-}\sum_{|\Im (\rho-s)| \le T}\frac{{\rm d}^{i}}{{\rm d}s^{i}} \int_{0}^{\infty} \frac{y^{\rho-s-t}}{\rho-s-t}\,{\rm d}t\nonumber\\ & \quad +O\left((\log y)^{i} y^{-\Re s}+ \frac{\log^2 (yT)(\log y)^{i-1} }{T} y^{1-\Re s}\right). \end{align}

Corollary 2.9 Fix $0 \le i \le 4$. Let $y \ge 4$. Let $s \in \mathbb {C}$ with $\Re s \in [0,\,1]$. If $|\Im s| \le 1$ we have $(\log G_1)^{(i)}(s,\,y) \ll L(y)^{-c} y^{1-\Re s}$ unconditionally. Under RH, if $T \ge 4$ and $|\Im s| \le 1$ then

(2.7)\begin{align} (\log G_1)^{(i)}(s,y) & =(-\log y)^{i-1}y^{- s}\bigg( \sum_{|\Im (\rho-s)| \le T} \frac{y^{\rho}}{\rho-s} + O \bigg( \frac{y^{{1}/{2}}}{\log y}+ \frac{y\log^2(yT)}{T}\bigg) \bigg) \nonumber\\ & =({-}1)^i(\log y)^{i-1} y^{- s} (\psi(y)-y+ O(y^{{1}/{2}})) \ll y^{{1}/{2}-\Re s}(\log y)^{i+1}. \end{align}

Under RH, if $T \ge 4$, $\Re s \in [3/4,\,1]$ and $|\Im s| \le y^{9/10}$ then

(2.8)\begin{align} (\log G_1)^{(i)}(s,y) & = ({-}1)^i(\log y)^{i-1} y^{- s} (\psi(y)-y+ O(y^{{1}/{2}}\log^2 (|\Im s|+2)))\nonumber\\ & \quad \ll y^{{1}/{2}-\Re s}(\log y)^{i+1}. \end{align}

Proof. If $|\Im s| \le 1$ then (2.5) holds. It is easily seen that, for any zero $\rho$ of $\zeta$,

(2.9)\begin{equation} \frac{{\rm d}^{i}}{{\rm d}s^{i}}\int_{0}^{\infty} \frac{y^{\rho-s-t}}{\rho-s-t}\,{\rm d}t ={-} \frac{(-\log y)^{i-1}y^{\rho-s}}{\rho-s} \left( 1 +O\left( \frac{1}{\min_{t \ge 0} |\rho-s-t|\log y}\right)\right) \end{equation}

if (2.5) holds. We apply lemma 2.8 with $T=L(y)^c$ and use the Vinogradov–Korobov zero-free region and (2.9) to simplify. Now assume RH, i.e. $|y^{\rho }|=y^{1/2}$. We demonstrate (2.7), and (2.8) is proved along similar lines. We apply lemma 2.8 with $T\ge 4$ and simplify it using (2.9). We bound the resulting error using the facts $\min _{t \ge 0}|\rho -s-t| \asymp |\rho -s|$ and $\sum _{\rho } 1/|\rho -s|^2 \ll 1$ for $|s|\le 2$, since there are $\ll \log T$ zeros of $\zeta$ between height $T$ and $T+1$ [15, Thm. 10.13]. This gives the first equality in (2.7). The second equality in (2.7) follows by taking $T=y$, recalling the classical estimate

(2.10)\begin{equation} \psi(y)-y ={-}\sum_{|\rho| \le y} \frac{y^{\rho}}{\rho} + O(\log^2 y) \end{equation}

given in [15, Thm. 12.5] (it also follows from lemma 2.8 with $(i,\,s,\,T)=(1,\,0,\,y)$), and the bound $\sum _{\rho } 1/(|\rho -s||\rho |) \ll 1$. The last inequality in (2.7) is von Koch's bound $\psi (y)-y=O(y^{1/2}\log ^2 y)$ [20].

We turn to $G_2$. By the non-negativity of the coefficients of $\log G_2$, for $i \ge 0$ and $\Re s>0$ we have

(2.11)\begin{equation} |(\log G_{2})^{(i)}(s,y)| \le ({-}1)^i(\log G_{2})^{(i)}(\Re s ,y ). \end{equation}
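Inequality (2.11) is just the triangle inequality applied to the prime-power sum defining $\log G_2$, whose $i$-th derivative is $\sum_{k\ge 2}\sum_{y^{1/k}<p\le y}(-k\log p)^i p^{-ks}/k$. A numerical sketch (not from the paper; names are ours):

```python
import cmath
import math

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [p for p in range(2, n + 1) if sieve[p]]

def log_G2_deriv(i, s, y, kmax=60):
    """i-th s-derivative of log G_2(s, y): sum over k >= 2 and primes p <= y
    with p^k > y of (-k log p)^i * p^{-ks} / k."""
    logy = math.log(y)
    primes = primes_up_to(y)
    total = 0j
    for k in range(2, kmax + 1):
        for p in primes:
            lp = math.log(p)
            if k * lp > logy:
                total += (-k * lp) ** i * cmath.exp(-k * s * lp) / k
    return total

# |(log G_2)^{(i)}(s, y)| <= (-1)^i (log G_2)^{(i)}(Re s, y), as in (2.11)
y, s = 2000, 0.8 + 5j
for i in range(3):
    lhs = abs(log_G2_deriv(i, s, y))
    rhs = ((-1) ** i * log_G2_deriv(i, 0.8, y)).real
    assert rhs > 0 and lhs <= rhs + 1e-12
```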

Lemma 2.10 [9]

Fix $\varepsilon >0$ and $0 \le i \le 4$. For $y \ge 2$ and $1 \ge s \ge \varepsilon$,

(2.12)\begin{align} (\log G_2)^{(i)}(s,y) & =( 1+O_{\varepsilon}( L(y)^{{-}c})) \frac{({-}2)^i}{2}\int_{y^{1/2}}^{y}(\log t)^{i-1} t^{{-}2s} \,{\rm d}t\nonumber\\ & \quad \asymp_{\varepsilon} \frac{(-\log y)^{i}y^{\max\{1-2s,\frac{1}{2}-s\}}}{\max\{1,|s-1/2|\log y\}}. \end{align}

Corollary 2.9 and lemma 2.10, applied with $i=0$, imply the following

Lemma 2.11 Assume RH. Fix $\varepsilon >0$. If $1 \ge s \ge 1/2+\varepsilon$ and $T \ge 4$ then

\begin{align*} G(s,y) & = 1 +\frac{y^{{-}s}}{\log y} \bigg(- \sum_{|\rho| \le T} \frac{y^{\rho}}{\rho-s} + \frac{y^{{1}/{2}}}{2s-1}+ O_{\varepsilon}\bigg(\frac{y^{{1}/{2}}}{\log y} + \frac{y \log^2( yT)}{T}\bigg)\bigg)\\ & =1 + \frac{y^{{-}s}}{\log y} ( \psi(y)-y + O_{\varepsilon}(y^{{1}/{2}}))=1 + O_{\varepsilon}( y^{{1}/{2}-s} \log y). \end{align*}

Corollary 1.3 follows from theorem 1.1 by simplifying $G(\beta,\,y)$ using lemma 2.11 and (2.1).

3. Truncation estimates for $\Psi$ and $\Lambda$

The purpose of this section is to prove the following two propositions.

Proposition 3.1 (Medium $u$)

Suppose $x \ge y \ge 2$ satisfy

\[ u \ge (\log y) ( \log \log y)^3. \]

Fix $\varepsilon >0$. Suppose $y \ge (\log x)^{1+\varepsilon }$ and $x \ge C_{\varepsilon}$. Then

(3.1)\begin{align} \Psi(x,y) & = \frac{1}{2\pi i} \int_{\beta-\frac{i}{\log y}}^{\beta+\frac{i}{\log y}} \zeta(s,y) \frac{x^s}{s}\,{\rm d}s\nonumber\\ & \quad + O_{\varepsilon}\left( \frac{\Psi(x,y)+x\rho(u) G(\beta,y)}{\exp(c_{\varepsilon}\min\{u/\log^2(u+1),(\log y)^{4/3}\})}\right), \end{align}
(3.2)\begin{align} \Lambda(x,y) & = \frac{1}{2\pi i} \int_{\beta-\frac{i}{\log y}}^{\beta+\frac{i}{\log y}} F(s,y) \frac{x^s}{s}\,{\rm d}s + O_{\varepsilon}\left(\frac{x\rho(u)}{\exp( c u/\log^2(u+1))}\right). \end{align}

Proposition 3.2 (Small $u$)

Suppose $x \ge y \ge 2$ satisfy

\[ u \le (\log y) ( \log \log y)^3. \]

Suppose $x \ge C$ and let $T \in [(\log x)^5,\,x\rho (u)]$. Then

\begin{align*} \Psi(x,y) & = \frac{1}{2\pi i} \int_{\beta-i T}^{\beta+iT} \zeta(s,y) \frac{x^s}{s}\,{\rm d}s + O\left( \frac{\Psi(x,y)+x\rho(u) G(\beta,y)}{T^{4/5}}\right),\\ \Lambda(x,y) & = \frac{1}{2\pi i} \int_{\beta-iT}^{\beta+iT} F(s,y) \frac{x^s}{s}\,{\rm d}s + O\left(\frac{x\rho(u)}{T^{4/5}}\right). \end{align*}

3.1 Preparation

Lemma 3.3 Fix $\varepsilon \in (0,\,1)$. For $\sigma \in [\varepsilon,\,1]$ and $x \ge T \ge 2$ we have

(3.3)\begin{equation} \frac{1}{2\pi i}\int_{\sigma+it: \, |t|>T} \zeta(s) \frac{x^s}{s} \,{\rm d}s \ll_{\varepsilon} \frac{x^{\sigma}}{T^{\sigma}}\log T + \log x. \end{equation}

The integral should be understood in the principal value sense. Lemma 3.3 makes more precise a computation done on p. 96 of Saias' paper [17] (cf. [18, p. 537]), which is not stated for general $T$ and $\sigma$ but contains the same ideas.

Proof. By [19, Thm. 4.11], for every $r>0$ we have

\[ \zeta(s) =\sum_{n \le r} n^{{-}s} - \frac{r^{1-s}}{1-s} + O_{\varepsilon}(r^{-\Re s}) \]

as long as $s \neq 1$, $\Re s \ge \varepsilon$ and $|\Im s|\le 2r$. Suppose $s=\sigma +it$ with $|t| \ge 1$. We apply this estimate with $r=|t|$, obtaining

(3.4)\begin{equation} \zeta(s) =\sum_{n \le |t|} n^{{-}s} - \frac{|t|^{1-s}}{1-s} + O_{\varepsilon}(|t|^{-\sigma})= \sum_{n \le |t|} n^{{-}s} + O_{\varepsilon}(|t|^{-\sigma}). \end{equation}

We now plug (3.4) in the left-hand side of (3.3). The contribution of the error term to the integral is acceptable:

\[ \int_{\sigma+it: \, |t|>T} O(|t|^{-\sigma}) \frac{x^s}{s} \,{\rm d}s \ll x^{\sigma}\int_{T}^{\infty} |t|^{-\sigma-1}\,{\rm d}t \ll_{\varepsilon} \frac{x^{\sigma}}{T^{\sigma}}. \]

The contribution of $n^{-s} \mathbf {1}_{n\le |t|}$ in (3.4) to the left-hand side of (3.3) is

(3.5)\begin{equation} \frac{1}{2\pi i} \int_{\sigma+it: |t|>\max\{n,T\}} n^{{-}s} \frac{x^s}{s}\,{\rm d}s. \end{equation}

Since

\[ \frac{1}{2\pi i} \int_{\sigma+it:\, |t| \le S} n^{{-}s} \frac{x^s}{s}\,{\rm d}s =\mathbf{1}_{x>n} + \frac{\mathbf{1}_{x=n}}{2} +O\left( \frac{(x/n)^{\sigma}}{1+S|\log(x/n)|}\right), \quad S \ge 1, \]

by the truncated Perron's formula [14, p. 435], and

\[ \frac{1}{2\pi i} \int_{(\sigma)} n^{{-}s} \frac{x^s}{s}\,{\rm d}s =\mathbf{1}_{x>n} + \frac{\mathbf{1}_{x=n}}{2} \]

by Perron's formula, it follows that the integral in (3.5) is bounded by

\[{\ll} \frac{(x/n)^{\sigma}}{1+\max\{n,T\}|\log(x/n)|} \]

and so the total contribution of the $n$-sum in (3.4) to the left-hand side of (3.3) is

(3.6)\begin{equation} \ll x^{\sigma}\sum_{n \ge 1} \frac{n^{-\sigma}}{1+\max\{n,T\}|\log(x/n)|}. \end{equation}

It remains to estimate (3.6), which we do according to the size of $n$. The contribution of $n \ge 2x$ is

\[{\ll} x^{\sigma} \sum_{n \ge 2x} n^{-\sigma-1} \ll_{\varepsilon} 1. \]

The contribution of $n \in (x/2,\,2x)$ can be bounded by considering separately the $n$ closest to $x$, and partitioning the rest of the $n$s according to the value of $k\ge 0$ for which $|\log (x/n)| \in [2^{-k},\,2^{1-k})$:

\[{\ll} x^{\sigma}\sum_{n \in (x/2,2x)} \frac{n^{-\sigma}}{1+x|\log (x/n)|}\ll 1+ \sum_{k \ge 0:\, 2^k \le 2x} \frac{x}{2^k} \frac{1}{1+x/2^k} \ll \log x. \]

The contribution of $n \le T/2$ is

\[{\ll} \frac{x^{\sigma}}{T} \sum_{n \le T/2}n^{-\sigma} \ll \frac{x^{\sigma}}{T^{\sigma}} \log T. \]

Finally, the contribution of $T/2< n\le x/2$ is

\[{\ll} x^{\sigma} \sum_{n>T/2} n^{{-}1-\sigma} \ll_{\varepsilon} \frac{x^{\sigma}}{T^{\sigma}}, \]

acceptable as well.
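The middle-range bound in the proof above (the contribution of $n \in (x/2,\,2x)$ being $\ll \log x$) can be sanity-checked numerically (a sketch, not part of the paper):

```python
import math

def middle_contribution(x, sigma):
    """x^sigma times the sum over integers n in (x/2, 2x) of
    n^{-sigma} / (1 + x |log(x/n)|)."""
    total = 0.0
    for n in range(int(x / 2) + 1, int(2 * x) + 1):
        total += n ** (-sigma) / (1.0 + x * abs(math.log(x / n)))
    return x ** sigma * total

# the ratio to log x stays bounded as x grows, as the dyadic argument predicts
for x in (10**3 + 0.5, 10**4 + 0.5, 10**5 + 0.5):
    ratio = middle_contribution(x, 0.5) / math.log(x)
    assert 0.05 < ratio < 5.0
```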

Corollary 3.4 Fix $\varepsilon \in (0,\,1)$. Suppose $x \ge y \ge C_{\varepsilon}$. For $\sigma \in [\varepsilon,\,1]$ and $x \ge T \ge \max \{2,\,y^{1-\sigma }/\log y\}$ we have

\begin{align*} \Lambda(x,y) & = \frac{1}{2\pi i}\int_{\sigma-iT}^{\sigma+iT} F(s,y) \frac{x^s}{s} \,{\rm d}s \\ & \quad + O_{\varepsilon}\left( \frac{x^{\sigma}}{T^{\sigma}}\log T+\log x+ x^{\sigma} \frac{y^{1-\sigma}}{\log y}\frac{\log^{1/2} T}{T^{\min\{1,1/2+\sigma\}}} \right). \end{align*}

Corollary 3.4 rests on lemma 3.3, and makes more precise Proposition 2 of Saias [17].

Proof. Our starting point is the identity (1.15). (If $x \in \mathbb {Z}$ it still holds with an error term of $O(1)$, since the integral converges to the average $(\Lambda (x+,\,y)+\Lambda (x-,\,y))/2 = \Lambda (x,\,y)+O(1)$.) From that identity it follows that our task is equivalent to upper bounding

\[ \left|\int_{\sigma+it:\, |t|>T} F(s,y) \frac{x^s}{s} \,{\rm d}s\right|. \]

Recall $F(s,\,y) = \hat {\rho }((s-1)\log y)\zeta (s)(s-1)\log y$. By (2.3) with $(s-1)\log y$ instead of $s$ we find

\[ F(s,y)=\zeta(s)\left(1+O\left(\frac{y^{1-\sigma}}{|t|\log y}\right)\right) \]

if $y^{1-\sigma } = O(|t| \log y)$, which holds by our assumptions on $T$. By the triangle inequality,

(3.7)\begin{align} & \left|\int_{\sigma+it:\, |t|>T} F(s,y) \frac{x^s}{s} \,{\rm d}s\right| \ll \left|\int_{\sigma+it:\, |t|>T} \frac{\zeta(s)}{s}x^s \,{\rm d}s\right|\nonumber\\ & \quad + x^{\sigma}\frac{y^{1-\sigma}}{\log y} \int_{\sigma+it:\,|t|>T} \frac{|\zeta(s)|}{|t|^2} |\,{\rm d}s| . \end{align}

The first integral in the right-hand side of (3.7) is estimated in lemma 3.3. To bound the second integral we apply the second moment estimate for $\zeta$ given in lemma 2.5. We first suppose that $\sigma \ge 1/2$. Using Cauchy–Schwarz, the second integral in the right-hand side of (3.7) is at most

(3.8)\begin{align} & \int_{\sigma+it:\,|t|>T} \frac{|\zeta(s)|}{|t|^2} |\,{\rm d}s| \ll \sum_{2^k \ge T/2}4^{{-}k} \int_{2^k}^{2^{k+1}} |\zeta(\sigma+it)|\,{\rm d}t\ll \sum_{2^k \ge T/2}2^{{-}k} k^{1/2} \nonumber\\ & \quad\ll \frac{\log^{1/2} T}{T}. \end{align}

Multiplying this by the prefactor $x^{\sigma }y^{1-\sigma }/\log y$, we see that this is acceptable. If $\varepsilon \le \sigma \le 1/2$ we use lemma 2.3. We obtain that the second integral in the right-hand side of (3.7) is at most

(3.9)\begin{align} & \int_{\sigma+it:\,|t|>T} \frac{|\zeta(s)|}{|t|^2} |\,{\rm d}s| \ll \int_{1-\sigma+it:\,|t|>T} \frac{|\zeta(s)|}{|t|^{2+\sigma-1/2}} |\,{\rm d}s|\nonumber\\ & \quad \ll \sum_{2^k \ge T/2} 2^{{-}k(\sigma+1/2)}k^{1/2} \ll \frac{\log^{1/2} T}{T^{{1}/{2}+\sigma}}, \end{align}

concluding the proof.

Let $\alpha =\alpha (x,\,y)$ be the saddle point associated with $y$-smooth numbers up to $x$ [Reference Hildebrand and Tenenbaum13], that is, the minimizer of the convex function $s\mapsto x^s \zeta (s,\,y)$ ($s>0$).
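Though not needed for the proofs, the saddle point is easy to compute numerically: setting the derivative of $s \mapsto s\log x + \log \zeta(s,y)$ to zero gives the stationarity condition $\sum_{p \le y} \log p/(p^{\alpha}-1) = \log x$, and the left-hand side is strictly decreasing in $\alpha$, so bisection applies. The following Python sketch is purely illustrative (the function names are ours):

```python
from math import log

def primes_upto(y):
    """Sieve of Eratosthenes: all primes p <= y."""
    sieve = bytearray([1]) * (y + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(y ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, y + 1, p)))
    return [p for p in range(2, y + 1) if sieve[p]]

def saddle_point(x, y):
    """Minimizer alpha of s -> x^s * zeta(s, y) over s > 0, found by
    bisecting the increasing function
    h(s) = log x - sum_{p <= y} log p / (p^s - 1)."""
    ps = primes_upto(y)
    h = lambda s: log(x) - sum(log(p) / (p ** s - 1) for p in ps)
    lo, hi = 1e-3, 10.0
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if h(mid) < 0 else (lo, mid)
    return (lo + hi) / 2
```

For instance, $x=10^6$, $y=100$ gives $\alpha \approx 0.6$, comfortably inside $(0,1)$, consistent with $\alpha = \beta + O(1/\log y)$.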

Lemma 3.5 For $\sigma \in (0,\,1]$, $x\ge y \ge C$ and $T \ge 2$ we have

(3.10)\begin{equation} \Psi(x,y)=\frac{1}{2\pi i}\int_{\sigma-iT}^{\sigma+iT} \zeta(s,y)\frac{x^s}{s}\,{\rm d}s+O\left( \frac{x^{\sigma}\zeta(\sigma,y)}{T} + \frac{\Psi(x,y) \log T}{T^{\alpha}}+1\right). \end{equation}

Our proof makes more precise a similar estimate appearing in Saias [Reference Saias17, p. 98], which does not allow general $y$ and $T$ but contains the main ideas.

Proof. The truncated Perron's formula [Reference Hildebrand and Tenenbaum14, p. 435] bounds the error in (3.10) by

\[{\ll} x^{\sigma}\sum_{\substack{n\ge 1\\ n\text{ is }y\text{-smooth}}}\frac{1}{n^{\sigma}(1+T|\log(x/n)|)}. \]

The contribution of the terms with $|\log (x/n)|\ge 1$ is

\[{\ll} \frac{x^{\sigma}}{T} \sum_{\substack{n\ge 1\\n\text{ is } y\text{-smooth}}} \frac{1}{n^{\sigma}} = \frac{x^{\sigma}\zeta(\sigma,y)}{T}. \]

We now study the terms with $|\log (x/n)|<1$. These contribute

(3.11)\begin{equation} \ll \sum_{\substack{e^{{-}1}x< n< ex\\n\text{ is }y\text{-smooth}}} \frac{1}{1+T|\log (x/n)|}. \end{equation}

The subset of terms with $|\log (x/n)| \le 1/T$ contributes to (3.11)

(3.12)\begin{equation} \ll \sum_{\substack{|n-x| \le Cx/T \\ n\, y\text{-smooth}}} 1 \ll \Psi\left(x+ \frac{Cx}{T},y\right)-\Psi\left(x-\frac{Cx}{T},y\right). \end{equation}

The contribution of the rest of the terms to (3.11), namely, those terms with $1/T<|\log (x/n)| <1$, can be dyadically dissected to terms with $|\log (x/n)| \in [2^{-k},\,2^{1-k})$ for each integer $k\ge 1$ such that $2^k<2\,T$ holds. Their total contribution is

(3.13)\begin{equation} \ll \frac{1}{T}\sum_{1 \le k \le \log_2\,T + 1} 2^k\left(\Psi\left(x+ \frac{Cx}{2^k},y\right)-\Psi\left(x-\frac{Cx}{2^k},y\right)\right), \end{equation}

where $\log _2$ is the base-2 logarithm. (We interpret $\Psi (a,\,y)$ for negative $a$ as equal to $0$.) Note that the sum in (3.13) dominates the right-hand side of (3.12). We shall make use of Hildebrand's inequality $\Psi (a+b,\,y)-\Psi (a,\,y) \le \Psi (b,\,y)$, valid for $y \ge C$ and $a,\,b \ge y$. It implies

(3.14)\begin{equation} \Psi(a+b,y)-\Psi(a,y) \le \Psi(b,y)+1 \end{equation}

for $y\ge C$ and all $a,\,b$. We apply (3.14) with $a=x-Cx/2^k$ and $b=2Cx/2^k$ to find that (3.13) is bounded by

(3.15)\begin{equation} \ll \frac{1}{T}\sum_{1 \le k \le \log_2\,T + 1} 2^k\left(\Psi\left(\frac{Cx}{2^k},y\right)+1\right) \ll \frac{1}{T}\sum_{1 \le k \le \log_2\,T + 1} 2^k\left(\Psi\left(\frac{x}{2^k},y\right)+1\right) \end{equation}

where in the second inequality we replaced $\Psi (Cx,\,y)$ with $\Psi (x,\,y)$ using [Reference Hildebrand and Tenenbaum13, Thm. 3]. To conclude, we recall that Theorem 2.4 of [Reference de la Bretèche and Tenenbaum5] says that $\Psi (x/d,\,y) \ll \Psi (x,\,y)/d^{\alpha }$ holds for $x \ge y \ge 2$ and $1 \le d \le x$. We apply this inequality with $d=2^k$ and obtain

(3.16)\begin{align} & \frac{1}{T}\sum_{1 \le k \le \log_2\,T + 1} 2^k\left(\Psi\left(\frac{x}{2^k},y\right)+1\right) \ll 1+\frac{\Psi(x,y)}{T}\sum_{1 \le k \le \log_2\,T + 1} 2^{(1-\alpha)k}\nonumber\\ & \quad\ll 1+\frac{\Psi(x,y)\log T}{T^{\alpha}} \end{align}

as needed.
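The interval estimates in this proof are easy to experiment with for small parameters. The sketch below (illustrative only; names are ours) tabulates greatest prime factors by a sieve and evaluates $\Psi(x,y)$ directly, so that inequalities such as (3.14) can be probed empirically:

```python
def greatest_prime_factors(N):
    """gpf[n] = largest prime factor of n (gpf[1] = 0), via a sieve."""
    gpf = [0] * (N + 1)
    for p in range(2, N + 1):
        if gpf[p] == 0:                  # p is prime: no smaller prime marked it
            for m in range(p, N + 1, p):
                gpf[m] = p               # later (larger) primes overwrite
    return gpf

def Psi(x, y, gpf):
    """Count of y-smooth integers in [1, x] (1 counts as smooth)."""
    return sum(1 for n in range(1, x + 1) if gpf[n] <= y)

gpf = greatest_prime_factors(2000)
# Empirical probe of (3.14): Psi(a + b, y) - Psi(a, y) - Psi(b, y)
# should stay <= 1 if the inequality holds in this range.
slack = max(
    Psi(a + b, 7, gpf) - Psi(a, 7, gpf) - Psi(b, 7, gpf)
    for a in range(7, 1000, 50)
    for b in range(7, 1000, 50)
)
```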

3.2 Proof of proposition 3.1

We first truncate the Perron integral for $\Psi (x,\,y)$. We apply lemma 3.5 with $\sigma = \beta$ and $T=\exp ((\log y)^{4/3})$. The assumption $y \ge (\log x)^{1+\varepsilon }$ implies $\beta \gg _{\varepsilon } 1$ and $\Psi (x,\,y) \ge x^{c_{\varepsilon }}$. Since $\alpha =\beta +O(1/\log y)$ [Reference Hildebrand and Tenenbaum13, Lem. 2] it follows that $\alpha \gg _{\varepsilon } 1$ and so

(3.17)\begin{equation} \Psi(x,y)=\frac{1}{2\pi i}\int_{\beta-iT}^{\beta+iT} \zeta(s,y)\frac{x^s}{s}\,{\rm d}s+O_{\varepsilon}\left( \frac{x^{\beta}\zeta(\beta,y)+\Psi(x,y)}{T^{c_{\varepsilon}}}\right). \end{equation}

We use lemma 2.7 to bound the contribution of $1/\log y \le |\Im s| \le T$:

\begin{align*} \int_{\beta+i/\log y}^{\beta+iT} \zeta(s,y)\frac{x^s}{s}\,{\rm d}s & \ll x^{\beta} \zeta(\beta,y) \int_{1/\log y}^{T} \left|\frac{\zeta(\beta+it,y)}{\zeta(\beta,y)}\right|\frac{{\rm d}t}{\beta+t}\\ & \ll x^{\beta} \zeta(\beta,y) \int_{1/\log y}^{T} \exp\left(-\frac{c ut^2}{(1-\beta)^2+t^2}\right)\frac{{\rm d}t}{\beta+t}\\ & \ll x^{\beta} \zeta(\beta,y) \left( \exp({-}c u)\log T + \int_{1/\log y}^{\xi(u)/\log y}\right.\nonumber\\ & \quad\left. \exp\left(-\frac{c (\log x)( \log y)}{\log^2 (u+1)}t^2\right)\,{\rm d}t \right)\\ & \ll x^{\beta} \zeta(\beta,y)\exp\left(-\frac{c u}{\log^2(u+1)}\right). \end{align*}

We estimate $x^{\beta } \zeta (\beta,\,y)$:

(3.18)\begin{align} x^{\beta}\zeta(\beta,y) & = \frac{x}{e^{u\xi(u)}}F(\beta,y) G(\beta,y) \nonumber\\ & =\zeta(\beta)(\beta-1) \frac{x e^{I(\xi)+\gamma}\log y}{e^{u\xi(u)}} G(\beta,y) \ll_{\varepsilon} x\rho(u)\sqrt{(\log x)( \log y)} G(\beta,y) \end{align}

using (1.9) and lemma 2.2. Finally, note that both $T$ and $\exp (u/\log ^2(u+1))$ grow faster than any power of $\log x$. We turn to $\Lambda (x,\,y)$. We apply corollary 3.4 with $\sigma =\beta$ and

\[ T = \frac{y^{1-\beta}}{\log y} = \frac{e^{\xi(u)}}{\log y} \asymp \frac{u \log (u+1)}{\log y} \gg (\log \log y)^4. \]

We obtain

\[ \Lambda(x,y) = \frac{1}{2\pi i}\int_{\beta-iT}^{\beta+iT} F(s,y) \frac{x^s}{s} \,{\rm d}s+ O_{\varepsilon}\left( \frac{ux}{\exp(u\xi)}\right). \]

We now treat the range $1/\log y\le | \Im s| \le T$. By the definition of $F$,

(3.19)\begin{equation} \int_{\beta+\frac{i}{\log y}}^{\beta+iT} F(s,y) \frac{x^s}{s} \,{\rm d}s\ll_{\varepsilon} \frac{x\log y}{\exp(u\xi)} \int_{1/\log y}^{T} |\zeta(\beta+it)| |\hat{\rho}(-\xi(u)+it \log y)| \,{\rm d}t. \end{equation}

First suppose $t \ge \pi /\log y$. By the second case of lemma 2.6, this range contributes

(3.20)\begin{align} & \ll_{\varepsilon} \frac{x \exp(I(\xi))\log y}{\exp(u\xi)} \exp\left( - \frac{u}{\pi^2 +\xi^2}\right) \int_{\pi/\log y}^{T} |\zeta(\beta+it)|\,{\rm d}t \nonumber\\ & \ll x\rho(u) \sqrt{(\log x)(\log y)}\exp\left(- \frac{u}{\pi^2+\xi^2}\right)\int_{\pi/\log y}^{T} |\zeta(\beta+it)|\,{\rm d}t \end{align}

using lemma 2.2 in the second inequality. Recall the second moment estimate for $\zeta$ given in lemma 2.5. It shows that right-hand side of (3.20) is bounded by

\[{\ll} x\rho(u) \sqrt{(\log x )(\log y)}\exp\left(- \frac{u}{\pi^2+\xi^2}\right) T^{\max\{1,3/2-\beta\}}\sqrt{\log T} \]

where we used the functional equation if $\beta < 1/2$ (lemma 2.3). The contribution of $1/\log y \le t \le \pi /\log y$ to the right-hand side of (3.19) is treated using the first part of lemma 2.6, and we find that it is at most

(3.21)\begin{equation} \ll_{\varepsilon} \frac{x \exp(I(\xi))\log y}{\exp(u\xi)} \int_{1/\log y}^{\pi/\log y} \exp\bigg(- \frac{(\log x)( \log y)}{2\pi^2}t^2\bigg)\,{\rm d}t \ll_{\varepsilon} x\rho(u) \exp({-}cu), \end{equation}

using lemma 2.2 in the second inequality. In conclusion,

\[ \Lambda(x,y) = \frac{1}{2\pi i}\int_{\beta-i/\log y}^{\beta+i/\log y} F(s,y) \frac{x^s}{s} \,{\rm d}s+ E \]

where

\begin{align*} & E \ll_{\varepsilon} \frac{ux}{\exp(u\xi)} + x\rho(u) \left( \sqrt{(\log x)( \log y)}\right.\\ & \left.\quad \exp\left(- \frac{u}{\pi^2+\xi^2}\right) T^{\max\{1,3/2-\beta\}}\sqrt{\log T} + \exp({-}cu)\right). \end{align*}

By our choice of $T$ and assumptions on $u$ and $y$, this can be absorbed in the error term of (3.2).

3.3 Proof of proposition 3.2

We first truncate the Perron integral for $\Psi (x,\,y)$. We apply lemma 3.5 with $\sigma =\beta$ and our $T$, finding

(3.22)\begin{equation} \Psi(x,y)=\frac{1}{2\pi i}\int_{\beta-iT}^{\beta+iT} \zeta(s,y)\frac{x^s}{s}\,{\rm d}s+O\left( 1+ \frac{\Psi(x,y)\log T}{T^{\alpha}}+\frac{x^{\beta}\zeta(\beta,y)}{T}\right). \end{equation}

In the considered range, $\Psi (x,\,y) \asymp x \rho (u)$. In particular, the error term $O(1)$ is acceptable since our $T$ is $\ll x\rho (u) \ll \Psi (x,\,y)$ and so $1 \ll \Psi (x,\,y) /T^{4/5}$. Additionally, $\beta \sim 1$ as $x \to \infty$ by lemma 2.1 and $\alpha =\beta +O(1/\log y)$ [Reference Hildebrand and Tenenbaum13, Lem. 2], so $\alpha \sim 1$. This implies that $(\log T)/T^{\alpha } \ll 1/T^{4/5}$ and the error term $O(\Psi (x,\,y)(\log T)/T^{\alpha })$ is also acceptable. The estimate (3.18) treats the last error term and finishes the estimation. We turn to $\Lambda (x,\,y)$. We apply corollary 3.4 with our $T$, obtaining

(3.23)\begin{equation} \Lambda(x,y) = \frac{1}{2\pi i}\int_{\beta-iT}^{\beta+iT} F(s,y) \frac{x^s}{s} \,{\rm d}s+ O\left(\log x +x \exp({-}u\xi) u \log (u+1) \frac{\log T}{T^{\beta}}\right). \end{equation}

In our range $x\rho (u) \asymp x^{1+o(1)}$, so the term $\log x$ is acceptable. We have $\exp (-u\xi ) u\log (u+1) \ll \rho (u)$ by lemma 2.2, so the second term in the error term of (3.23) is also acceptable.

4. Proofs of theorems 1.1 and 1.2

Proposition 4.1 (Medium $u$)

Suppose $x \ge y \ge 2$ satisfy

\[ u \ge (\log y) ( \log \log y)^3. \]

Fix $\varepsilon >0$ and suppose $y \ge (\log x)^{1+\varepsilon }$ and $x \ge C_{\varepsilon}$. Let

\[ t_0:= (\log x)^{{-}1/3}(\log y)^{{-}2/3}, \quad T := \exp(\min\{u/\log^2(u+1),(\log y)^{4/3}\}). \]

Then $\Psi (x,\,y) = \Lambda (x,\,y) G(\beta,\,y) ( 1 + E)$ for

(4.1)\begin{align} & E \ll_{\varepsilon} \frac{ |G'(\beta,y)|}{G(\beta,y)\log x} + \frac{ \max_{|v|\le t_0}|G''(\beta+iv,y)|}{G(\beta,y)(\log x)(\log y)}\nonumber\\ & \quad +\frac{\max_{|v|\le \frac{1}{\log y}} |G'(\beta+iv,y)|\exp({-}u^{1/3}/20)}{G(\beta,y)\log x} + \frac{1}{T^{c_{\varepsilon}}}. \end{align}

Proof. Our strategy is to establish $\Psi (x,\,y) = \Lambda (x,\,y) G(\beta,\,y) (1 + E_1 + E_2) + E_3$ for

\begin{align*} E_1 & \ll_{\varepsilon} \frac{ |G'(\beta,y)|}{G(\beta,y)\log x} + \frac{ \max_{|v|\le t_0}|G''(\beta+iv,y)|}{G(\beta,y)(\log x) (\log y)},\\ E_2 & \ll_{\varepsilon} \frac{\max_{|v|\le \frac{1}{\log y}} |G'(\beta+iv,y)|\exp({-}u^{1/3}/20)}{G(\beta,y)\log x},\\ E_3 & \ll_{\varepsilon} \frac{\Psi(x,y)+x\rho(u)G(\beta,y)}{T^{c_{\varepsilon}}}. \end{align*}

The theorem will then follow by rearranging, once we recall that $x\rho (u) \asymp _{\varepsilon } \Lambda (x,\,y)$. From proposition 3.1,

(4.2)\begin{align} & \Psi(x,y) -\Lambda(x,y)G(\beta,y) \nonumber\\ & \quad =\frac{1}{2\pi i} \int_{\beta-\frac{i}{\log y}}^{\beta+\frac{i}{\log y}} ( G(s,y)-G(\beta,y))F(s,y) \frac{x^s}{s}\,{\rm d}s + O_{\varepsilon}\left( \frac{\Psi(x,y)+x\rho(u) G(\beta,y)}{T^{c_{\varepsilon}}} \right), \end{align}

which explains $E_3$. Let $t_0$ be as in the statement of the proposition. We upper bound the contribution of $t_0 \le |\Im s| \le 1/\log y$ to the integral in the right-hand side of (4.2). We have

\[ |G(s,y)-G(\beta,y)| \le |\Im s| \max_{ |t|\le |\Im s|} |G'(\beta+it,y)|. \]

The triangle inequality shows, by definition of $F$, that

(4.3)\begin{align} & \int_{\beta+it_0}^{\beta+\frac{i}{\log y}} ( G(s,y)-G(\beta,y))F(s,y) \frac{x^s}{s}\,{\rm d}s \nonumber\\ & \quad \ll_{\varepsilon} \max_{|t| \le \frac{1}{\log y}} |G'(\beta+it,y)| x^{\beta} \log y \int_{t_0}^{{1}/{\log y}} t |e^{I(\xi-it\log y)}| \,{\rm d}t. \end{align}

Since $-e^{-v^2/2}$ is the antiderivative of $e^{-v^2/2}v$, the first part of lemma 2.6 shows

\begin{align*} \int_{t_0}^{{1}/{\log y}} t |e^{ I(\xi-it\log y)}| \,{\rm d}t & \ll \exp(I(\xi))\int_{t_0}^{{1}/{\log y}} t \exp(-(\log x) (\log y)t^2/(2\pi^2)) \,{\rm d}t\\ & \ll \exp(I(\xi)) \frac{\exp({-}u^{1/3}/(2\pi^2))}{(\log x)( \log y)}. \end{align*}

Hence, $t_0 \le |\Im s| \le 1/\log y$ contributes in total

\[ \ll_{\varepsilon} \max_{|t| \le 1/\log y}|G'(\beta+it,y)| x\rho(u)\exp({-}u^{1/3}/20)/\log x \]

where we used lemma 2.2 to simplify. Once we divide this by $\Lambda (x,\,y)G(\beta,\,y) \asymp _{\varepsilon } x\rho (u)G(\beta,\,y)$ we obtain the error term $E_2$. It remains to study the contribution of $|\Im s|\le t_0$ to the integral in the right-hand side of (4.2), which will yield $E_1$. We Taylor-expand the integrand at $s=\beta$. We write $s=\beta +it$, $|t|\le t_0$. We first simplify the integrand using the definition of $F$:

\begin{align*} \frac{F(s,y)x^{s}}{s} & = (\log y) K(s-1) e^{\gamma+I(\xi)}x^{\beta+it} \exp(I(\xi-it \log y)-I(\xi))\\ & = (\log y) K(s-1) x^{\beta}e^{\gamma+I(\xi)} \exp(I(\xi-it \log y)-I(\xi)+it \log x). \end{align*}

We Taylor-expand $K(s-1)$ and $G(s,\,y)-G(\beta,\,y)$:

\begin{align*} K(s-1) & = K(\beta-1) (1+O_{\varepsilon}(t)),\\ G(s,y)-G(\beta,y) & = itG'(\beta,y)+ O(t^2 \max_{|v|\le t} |G''(\beta+iv,y)|). \end{align*}

We expand $I(\xi -it \log y)-I(\xi )+it \log x$:

(4.4)\begin{equation} I(\xi-it \log y)-I(\xi)+it \log x ={-}\frac{t^2}{2}I''(\xi)\log^2 y + O( |t|^3 (\log x) (\log y)^2), \end{equation}

where we used $I'(\xi (u))=u$ and $I^{(3)}(\xi (u)+it) \ll e^{\xi (u)}/(1+\xi (u)) \asymp u$. This implies

(4.5)\begin{align} & \exp(I(\xi-it \log y)-I(\xi)+it\log x)\nonumber\\ & \quad = \exp\left(-\frac{t^2}{2}I''(\xi)\log^2 y\right)( 1+ O( |t|^3 (\log x) (\log y)^2)) \end{align}

for $|t| \le t_0$. By two basic properties of moments of the Gaussian,

\begin{align*} & \int_{{-}t_0}^{t_0} t\exp\left(-\frac{t^2}{2}I''(\xi)\log^2 y\right) \,{\rm d}t = 0,\\ & \int_{{-}t_0}^{t_0} |t|^{k}\exp\left(-\frac{t^2}{2}I''(\xi)\log^2 y\right) \,{\rm d}t\nonumber\\ & \quad \ll_k (I''(\xi)\log^2 y)^{-\frac{k+1}{2}} \ll_k ((\log x)(\log y))^{-\frac{k+1}{2}}, \end{align*}

we find

(4.6)\begin{align} & \int_{\beta-it_0}^{\beta+it_0} ( G(s,y)-G(\beta,y))F(s,y) \frac{x^s}{s}\,{\rm d}s\nonumber\\ & \quad\ll_{\varepsilon} x^{\beta}e^{I(\xi)} \left( \frac{|G'(\beta,y)|\sqrt{\log y}}{(\log x)^{3/2} } + \frac{\max_{|v|\le t_0}|G''(\beta+iv,y)|}{(\log x)^{3/2}(\log y)^{1/2}}\right). \end{align}

By lemma 2.2, we can replace $x^{\beta } e^{I(\xi )}$ with $x\rho (u)\sqrt {u}$, to obtain

(4.7)\begin{align} & \int_{\beta-it_0}^{\beta+it_0} ( G(s,y)-G(\beta,y))F(s,y) \frac{x^s}{s}\,{\rm d}s\nonumber\\ & \quad \ll_{\varepsilon} x\rho(u)\left( \frac{|G'(\beta,y)|}{\log x} + \frac{\max_{|v|\le t_0}|G''(\beta+iv,y)|}{(\log x)(\log y)}\right). \end{align}

Dividing by $G(\beta,\,y)\Lambda (x,\,y) \asymp _{\varepsilon } G(\beta,\,y) x\rho (u)$ gives the error term $E_1$.
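Throughout these estimates, $x\rho(u)$ serves as the reference size of $\Lambda(x,y)$. For concreteness, $\rho$ can be tabulated directly from the delay differential equation $t\rho'(t)=-\rho(t-1)$ of the introduction; the following sketch (step size and names are arbitrary choices of ours) uses the trapezoid rule on a uniform grid:

```python
from math import log

def dickman_rho_table(u_max, h=1e-3):
    """Tabulate Dickman's rho on a grid of spacing h by integrating
    t * rho'(t) = -rho(t - 1) with the trapezoid rule.
    Returns the list [rho(0), rho(h), rho(2h), ...]."""
    n = int(round(u_max / h))
    one = int(round(1.0 / h))              # grid offset corresponding to t = 1
    rho = [1.0] * (n + 1)                  # rho = 1 on [0, 1]
    f = lambda i: rho[i - one] / (i * h)   # f(t) = rho(t - 1) / t, on the grid
    for i in range(one, n):
        rho[i + 1] = rho[i] - h * (f(i) + f(i + 1)) / 2
    return rho

table = dickman_rho_table(4.0)
rho2 = table[2000]                         # rho(2) = 1 - log 2 exactly
```

Since $\rho(u) = 1 - \log u$ on $[1,2]$, the entry at $u=2$ gives a sharp accuracy check on the tabulation.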

Proposition 4.2 (Small $u$)

Suppose $x \ge y \ge C$ satisfy

(4.8)\begin{equation} u \le (\log y) ( \log \log y)^3. \end{equation}

Let

(4.9)\begin{equation} t_0:= (\log x)^{{-}1/3}(\log y)^{{-}2/3}, \quad t_1: = \frac{u\log(u+1)}{\log y},\quad t_2 \in [(\log x)^{5}, y^{4/5}]. \end{equation}

Then $\Psi (x,\,y) = \Lambda (x,\,y) G(\beta,\,y) ( 1 + E)$ for

\begin{align*} E & \ll \frac{ |G'( \beta,y)|}{\log x} + \frac{ \max_{|v|\le t_0}|G''( \beta+iv,y)|}{(\log x)(\log y)}\\ & \quad + \frac{\max_{|v|\le t_1} |G'( \beta+iv,y)|\exp({-}u^{1/3}/20)}{\log x}+t_2^{{-}4/5} \\ & \quad + \exp({-}u/2)\left( \max_{|t|\le t_2}\left| \frac{G( \beta+it,y)}{G( \beta,y)}-1\right|\right.\\ & \quad +\left. \left|\int_{t_1 \le |t| \le t_2} K( \beta+it-1)x^{it} \frac{G( \beta+it,y)-G( \beta,y)}{G( \beta,y)}\frac{{\rm d}t}{t}\right|\right). \end{align*}

Proof. Our strategy is to establish $\Psi (x,\,y) = \Lambda (x,\,y) G( \beta,\,y) (1 + E_1 + E_2 + E_3 + E_4) + E_5$ for

(4.10)\begin{align} E_1 & \ll \frac{ |G'( \beta,y)|}{G( \beta,y)\log x} + \frac{ \max_{|v|\le t_0}|G''( \beta+iv,y)|}{G( \beta,y)(\log x)( \log y)}, \nonumber\\ E_2 & \ll \frac{\max_{|v|\le t_1} |G'( \beta+iv,y)|\exp({-}u^{1/3}/20)}{G( \beta,y)\log x},\nonumber\\ E_3 & \ll \frac{\exp({-}u/2)}{\log y} \int_{t_1 \le |t| \le t_2} \left| \frac{G( \beta+it,y)-G( \beta,y)}{G( \beta,y)}\right| \frac{\log(|t|+2)}{t^2} \,{\rm d}t, \nonumber\\ E_4 & \ll \exp({-}u/2) \left|\int_{t_1 \le |t| \le t_2} K( \beta+it-1)x^{it} \frac{G( \beta+it,y)-G( \beta,y)}{G( \beta,y)}\frac{{\rm d}t}{t}\right|,\nonumber\\ E_5 & \ll t_2^{{-}4/5} (\Psi(x,y)+x\rho(u)G( \beta,y)). \end{align}

The proposition will then follow by rearranging and the fact that $G(\beta,\,y) \asymp 1$ in the considered range, unconditionally, as follows from corollary 2.9 and lemma 2.10. From proposition 3.2 with $T=t_2$,

\begin{align*} & \Psi(x,y) -\Lambda(x,y)G( \beta,y) \\ & \quad =\frac{1}{2\pi i} \int_{ \beta-it_2}^{ \beta+it_2} ( G(s,y)-G( \beta,y))F(s,y) \frac{x^s}{s}\,{\rm d}s\\ & \qquad + O( t_2^{{-}4/5}(\Psi(x,y)+x\rho(u) G( \beta,y)) ), \end{align*}

which explains $E_5$. For $|\Im s| \le t_0$, we Taylor-expand $I(\xi -it\log y)$ as in the medium $u$ range and obtain the contribution of $E_1$ (see (4.7)). We treat the contribution of $|\Im s| \in [t_0,\,t_1]$. We bound $G(s,\,y)-G( \beta,\,y)$ using

\[ |G(s,y)-G( \beta,y)| \le |\Im s| \max_{0 \le |t|\le |\Im s|} |G'( \beta+it,y)|. \]

The first two parts of lemma 2.6 show

\begin{align*} & \int_{ \beta+it, \, |t| \in [t_0,t_1]} ( G(s,y)-G( \beta,y))F(s,y) \frac{x^s}{s}\,{\rm d}s \\ & \quad \ll \max_{|t|\le t_1} |G'( \beta+it,y)| x\rho(u)(\log y)\sqrt{u}\\ & \qquad \int_{|t| \in [t_0,t_1]} |t|\left( \exp\left(-\frac{t^2 (\log x)( \log y)}{2\pi^2}\right) + \exp({-}u/(\pi^2+\xi^2))\right) \,{\rm d}t\\ & \quad \ll \max_{|t|\le t_1} |G'( \beta+it,y)| x\rho(u)\sqrt{u}\frac{\exp({-}u^{1/3}/2\pi^2)}{\log x}. \end{align*}

This explains $E_2$. It remains to consider $t_2 \ge |\Im s| \ge t_1$. We use the third part of lemma 2.6 to replace $\hat {\rho }((s-1)\log y)$, appearing in $F(s,\,y)$, with its approximation:

(4.11)\begin{align} & \int_{ \beta+it, \, |t| \in [t_1,t_2]} ( G(s,y)-G( \beta,y))F(s,y) \frac{x^s}{s}\,{\rm d}s \nonumber\\ & \quad = (\log y) x^{ \beta} \int_{s= \beta+it, \, |t| \in [t_1,t_2]} K(s-1)x^{it} ( G(s,y)\nonumber\\ & \qquad -G( \beta,y))\left( \frac{i}{t\log y} + O\left( \frac{u\log (u+1)}{t^2 \log^2 y}\right)\right) \,{\rm d}s. \end{align}

Recall $x^{ \beta } \ll x\rho (u) \sqrt {u} \exp (-I(\xi (u)))$ by lemma 2.2, and that $I(\xi (u)) \sim u$ since a change of variables shows $I(r)= \mathrm {Li}(e^r)+O(\log r)\sim e^r/r$. The contribution of the error term in the right-hand side of (4.11) is

\begin{align*} & \ll x^{ \beta} \log y \int_{s= \beta+it, \, |t| \in [t_1,t_2]} |K(s-1)x^{it} ( G(s,y)-G( \beta,y))| \frac{u\log (u+1)}{t^2 \log^2 y} |\,{\rm d}s| \\ & \ll \frac{x\rho(u)\exp({-}2u/3)}{\log y} \int_{|t|\in[t_1,t_2]}| G( \beta+it,y)-G( \beta,y)| \frac{|\zeta( \beta+it)| | \beta+it-1|} {t^2 | \beta+it|} \,{\rm d}t. \end{align*}

If $|t|\le 2$ we use $|\zeta ( \beta +it)( \beta +it-1)| \ll 1$ while if $|t| \ge 2$ we use lemma 2.4, to obtain an error term of size $E_3$. The main term of (4.11) gives $E_4$.

4.1 Proof of theorem 1.1: medium $u$

Here we prove theorem 1.1 in the range (1.19). We obtain from proposition 4.1 that unconditionally

(4.12)\begin{equation} \Psi(x,y) = \Lambda(x,y) G(\beta,y) ( 1+E) \end{equation}

for

(4.13)\begin{equation} E \ll_{\varepsilon} \frac{\max_{|v|\le 1} |G'(\beta+iv,y)|}{G(\beta,y)\log x} + \frac{ \max_{|v|\le 1}|G''(\beta+iv,y)|}{G(\beta,y)(\log x )(\log y)}+ \frac{1}{y}. \end{equation}

Because we assume $y \ge (\log x)^{2+\varepsilon }$, we have $\beta \ge 1/2+ c_{\varepsilon }$. Under RH, $\log G(\beta,\,y) = O_{\varepsilon }(1)$ by lemma 2.11. To bound the quantities appearing in $E$, we write $G(\beta +it,\,y)$ as $G_1(\beta +it,\,y)$ times $G_2(\beta +it,\,y)$. Lemma 2.10 and equation (2.11) tell us that

(4.14)\begin{equation} (\log G_2)^{(i)}(\beta+it,y) \ll_{\varepsilon} (\log y)^{i-1} y^{{1}/{2}-\beta} \end{equation}

for $i=0,\,1,\,2$ and $t \in \mathbb {R}$. Corollary 2.9 says that under RH

(4.15)\begin{align} (\log G_1)^{(i)}(\beta+it,y) & = ({-}1)^i(\log y)^{i-1}y^{-\beta-it} ( \psi(y)-y+ O_{\varepsilon}(y^{{1}/{2}}))\nonumber\\ & \quad \ll_{\varepsilon} (\log y)^{i+1} y^{{1}/{2}-\beta} \end{align}

for all $i=0,\,1,\,2$ and $|t|\le 1$. Putting these two together, one obtains (1.11).

4.2 Proof of theorem 1.1: small $u$

Here we prove theorem 1.1 for $u$ in the range (4.8). In this range, $\beta =1+o(1)$ and $\Psi (x,\,y)=x^{1+o(1)}$. Moreover, $\log G(\beta,\,y) = O(1)$ unconditionally by corollary 2.9 and lemma 2.10. The hardest range of the proof will be $u\asymp 1$. Before proceeding with the actual proof, note that from proposition 4.2 and the triangle inequality, it follows that

(4.16)\begin{align} \Psi(x,y) & = \Lambda(x,y) G(\beta,y) \left(1+O\left(t_2^{{-}4/5} + t_2 \max_{|t|\le t_2}|G'(\beta+it,y)|\right.\right.\nonumber\\ & \left.\left.\quad+\max_{|t|\le 1}|G''(\beta+it,y)|\right)\right) \end{align}

holds unconditionally for $t_2 \in [(\log x)^5,\, y^{4/5}]$ and the range $x \ge y \ge C$, $u \le (\log y)(\log \log y)^3$.

We obtain from proposition 4.2 with $t_2=y^{4/5}$ that

\[ \Psi(x,y) = \Lambda(x,y) G(\beta,y) ( 1+E_1 + E_2 + E_3 + E_4 + y^{{-}3/5}) \]

for $E_i$ bounded in (4.10). We write $G(\beta +it,\,y)$ as $G_1(\beta +it,\,y)$ times $G_2(\beta +it,\,y)$. By lemma 2.10 and (2.11),

(4.17)\begin{equation} (\log G_2)^{(i)}(\beta+it,y) \ll (\log y)^{i-1}u\log (u+1) y^{-{1}/{2}} \end{equation}

for $i=0,\,1,\,2$ and $t \in \mathbb {R}$ where we simplified $y^{-\beta }$ using (2.1). From now on we assume RH. Corollary 2.9 implies

(4.18)\begin{equation} (\log G_1)^{(i)}(\beta+it,y) \ll \frac{ (\log y)^{i-1}u \log (u+1)}{y}(|\psi(y)-y|+y^{{1}/{2}}) \end{equation}

for $i=0,\,1,\,2$ when $|t|\le 1$. As in the medium $u$ case, one can bound $E_1$ by an acceptable quantity using our estimates for $(\log G_1)^{(i)}$ and $(\log G_2)^{(i)}$. Recall

\[ E_2 \ll \frac{\max_{|v|\le t_1} |G'( \beta+iv,y)|\exp({-}u^{1/3}/20)}{G( \beta,y)\log x} \]

where $t_1 = u\log (u+1)/\log y$. If $t_1 \le 1$ we bound $E_2$ in the same way we bounded $E_1$. Otherwise we use (2.8), which implies that

(4.19)\begin{equation} (\log G_1)^{(i)}(\beta+it,y) \ll (\log y)^{i+1}u \log (u+1) y^{-{1}/{2}} \end{equation}

holds for $i=0,\,1,\,2$ and $|t|\le y^{9/10}$. This shows that, if $t_1>1$, i.e. $u\log (u+1)\ge \log y$,

\[ E_2 \ll \frac{(\log y)^2 u \log(u+1)\exp({-}u^{1/3}/20)}{y^{{1}/{2}}\log x} \ll \log (u+1) y^{-{1}/{2}}. \]

This is an acceptable contribution when $u\log (u+1)> \log y$. We now study $E_3$ and $E_4$. Due to $G(\beta +it,\,y)/G(\beta,\,y)$ being very close to $1$ in our considered range by (4.17) and (4.19), we may replace

\[ G(\beta+it,y)/G(\beta,y)-1 \]

by

\[ \log G(\beta+it,y)-\log G(\beta,y) \]

and incur a negligible error, in both $E_3$ and $E_4$. So to show $E_3$ is acceptable we need to prove

(4.20)\begin{equation} \int_{t_1 \le |t| \le y^{4/5}} | \log G( \beta+it,y)-\log G( \beta,y)| \frac{\log(|t|+2)}{t^2} \,{\rm d}t \ll \frac{e^{u/3}}{y} (|\psi(y)-y|+y^{{1}/{2}}). \end{equation}

This is shown using the bound

(4.21)\begin{equation} \log G( \beta+it,y) \ll \frac{u \log (u+1)}{y \log y} (|\psi(y)-y| + y^{{1}/{2}}\log^2(|t|+2)), \quad |t| \le y^{9/10}, \end{equation}

which is a consequence of (2.8) and (4.17). To handle $E_4$ it remains to prove

(4.22)\begin{align} & \int_{t_1 \le |t| \le y^{4/5}} K(\beta+it-1)x^{it}( \log G( \beta+it,y)-\log G( \beta,y)) \frac{{\rm d}t}{t}\nonumber\\ & \quad \ll_{\varepsilon} \frac{e^{u/2}}{y \log y} (|\psi(y)-y|+y^{{1}/{2}}). \end{align}

Here we cannot use the triangle inequality and put absolute value inside the integral. Indeed, if we use the pointwise bound (4.21), along with our bounds for $\zeta$ (lemmas 2.4 and 2.5), we get a bound which falls short by a factor of $(\log y)^3$. We shall overcome this by several integrations by parts as we now describe.

To deal with the contribution of $\log G(\beta,\,y)$ to (4.22) we use (4.21) with $t=0$ along with the bound

\[ \int_{t_1 \le |t| \le y^{4/5}} K(\beta+it-1)x^{it}\frac{{\rm d}t}{t} \ll u^2 \]

which follows by integration by parts, where we replace $x^{it}$ by its antiderivative $x^{it}/\log x$.

Note that due to integration by parts, derivatives of $\zeta$ arise. This means that in addition to lemmas 2.4 and 2.5 we need the bounds $\zeta ^{(k)}(s) \ll _k (1+(|t|+4)^{1-\sigma })\log ^{k+1}(|t|+4)$ and $\int _{1}^{T} |\zeta ^{(k)}(\sigma +it)|^2 \,{\rm d}t \ll _k T$ for $\sigma \in [2/3,\,1]$ and $T,\, |t| \ge 1$. These bounds follow from lemmas 2.4 and 2.5 through Cauchy's integral formula.

To deal with the contribution of $\log G(\beta +it,\,y)$ to (4.22) we write it as $\log G_1( \beta +it,\,y)+\log G_2( \beta +it,\,y)$ and obtain two integrals which we bound separately.

4.2.1 Treatment of $\log G_1$

Recall we assume $y \le x^{1-\varepsilon }$. We want to show

(4.23)\begin{equation} \int_{t_1 \le |t| \le y^{4/5}} K(\beta+it-1)x^{it} \log G_1( \beta+it,y) \frac{{\rm d}t}{t} \ll_{\varepsilon} \frac{e^{u/2}}{y \log y} (|\psi(y)-y|+y^{{1}/{2}}). \end{equation}

We integrate by parts, replacing $x^{it}$ by its antiderivative, reducing matters to showing

(4.24)\begin{equation} \frac{1}{\log x}\int_{t_1\le |t| \le y^{4/5}} K(\beta+it-1) x^{it} \frac{G_1'}{G_1}(\beta+it,y) \frac{{\rm d}t}{t} \ll_{\varepsilon} \frac{e^{u/2}}{y \log y} (|\psi(y)-y|+y^{{1}/{2}}). \end{equation}

We divide and multiply the integrand by $y^{it}$, so the left-hand side of (4.24) is now

(4.25)\begin{equation} \frac{1}{\log x}\int_{t_1\le |t| \le y^{4/5}} K(\beta+it-1) (x/y)^{it} H(t)\frac{{\rm d}t}{t} \end{equation}

where $H(t):= y^{it}(G'_1/G_1)(\beta +it,\,y)$. From lemma 2.8,

\[ y^{\beta} \cdot H(t) = \sum_{|\Im (\rho)-t| \le 2y^{4/5}}\frac{y^{\rho}}{\rho-\beta-it}+O(y^{{2}/{5}}) \ll |\psi(y)-y|+ y^{{1}/{2}}\log^2 (|t|+2) \]

and, for $k=1,\,2,\,3$,

\[ y^{\beta} \cdot H^{(k)}(t) = k!\, i^{k} \sum_{|\Im (\rho)-t| \le 2y^{4/5}}\frac{y^{\rho}}{(\rho-\beta-it)^{k+1}}+O(y^{{2}/{5}})\ll y^{{1}/{2}}\log (|t|+2). \]

We integrate by parts 3 times, replacing $(x/y)^{it}$ by its antiderivative. We are guaranteed to get enough saving since $\log (x/y) \gg _{\varepsilon } \log x$.
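The mechanism here is the standard one: each integration by parts against the oscillating factor $(x/y)^{it}=e^{it\log(x/y)}$ saves a factor of $\log(x/y)$. As a toy numerical illustration of this saving (nothing here is specific to the paper), the integral $\int_1^2 e^{i\theta t}\,{\rm d}t/t$ has the trivial bound $\log 2$, while one integration by parts gives $|{\cdot}| \le 2/\theta$:

```python
import cmath

def osc_integral(theta, a=1.0, b=2.0, n=200_000):
    """Midpoint-rule value of the oscillatory integral
    int_a^b exp(i * theta * t) / t dt."""
    h = (b - a) / n
    return sum(
        cmath.exp(1j * theta * (a + (k + 0.5) * h)) / (a + (k + 0.5) * h)
        for k in range(n)
    ) * h

# trivial bound: log 2 ~ 0.69; integration by parts predicts <= 2 / theta,
# so the modulus collapses as theta grows
```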

4.2.2 Treatment of $\log G_2$

The function $\log G_2(\beta +it,\,y)$ is given as a sum over proper prime powers. As the cubes and higher powers contribute at most $\ll y^{-2/3+o(1)}$ to it by the prime number theorem (see [Reference Gorodetsky9]), we can replace $\log G_2(\beta +it,\,y)$ with the prime sum $\sum _{y^{1/2}< p \le y} p^{-2(\beta +it)}/2$, so we are left to show

\[ \sum_{y^{1/2}< p \le y} p^{{-}2\beta}\int_{t_1 \le |t| \le y^{4/5}} K(\beta+it-1)(x/p^2)^{it} \frac{{\rm d}t}{t} \ll \frac{e^{u/2}}{y^{{1}/{2}} \log y}. \]

For a given $p$, the pointwise bound $(x/p^2)^{it} \ll 1$ leads to the above integral being bounded by $\ll \log y$. This is good enough for the primes $p \in [y^{1/2}\log y,\, y]$, since

\[ \sum_{y^{{1}/{2}}\log y \le p\le y}p^{{-}2\beta} \log y \asymp \frac{u\log (u+1)}{y^{{1}/{2}}\log y}. \]

For the primes $p \in (y^{1/2},\,y^{1/2}\log y)$ we integrate by parts, replacing $(x/p^2)^{it}$ by its antiderivative.

4.3 Proof of theorem 1.2

Suppose $(\log x)^{3} \ge y \ge (\log x)^{4/3+\varepsilon }$. It follows from proposition 4.1 that $\Psi (x,\,y) = \Lambda (x,\,y) G(\beta,\,y) ( 1 + E)$ holds unconditionally for

(4.26)\begin{equation} E \ll_{\varepsilon} \frac{ |G'(\beta,y)|}{G(\beta,y)\log x} + \frac{ \max_{|v|\le t_0}|G''(\beta+iv,y)|}{G(\beta,y)(\log x)(\log y)} + \frac{\max_{|v|\le \frac{1}{\log y}} |G'(\beta+iv,y)|}{G(\beta,y)\exp(u^{1/3}/20)} +\frac{1}{y} \end{equation}

where $t_0$ is given in the proposition. It remains to bound the quantities appearing in $E$. From now on we assume RH. Let $A:= (\log x) / y^{1/2}$. We will prove the stronger bound

(4.27)\begin{align} E & \ll_{\varepsilon} \frac{|\psi(y)-y|+y^{{1}/{2}}}{y} \bigg( 1+ u \frac{|\psi(y)-y|+y^{\frac{1}{2}}}{y}\bigg)\nonumber\\ & \quad + \frac{\max\{A,A^2\}}{u\max\{1,|\log A|\}} \bigg( 1+\frac{\max\{A,A^2\}}{\max\{1,|\log A|\}}\bigg), \end{align}

which implies the theorem using $\psi (y)-y \ll y^{1/2} \log ^2 y$. Recall we can always simplify $y^{-\beta }$ using (2.1) as $\asymp _{\varepsilon } (\log x)/y$. In particular, $y^{1/2-\beta } \asymp _{\varepsilon } A$. Recall $G=G_1 G_2$. Lemma 2.10 and equation (2.11) tell us that

(4.28)\begin{equation} (\log G_2)^{(i)}(\beta+it,y) \ll (\log y)^{i} \frac{\max\{A, A^2\}}{\max\{1,|\log A|\}} \end{equation}

for $i=0,\,1,\,2$ and $t \in \mathbb {R}$. Corollary 2.9 says that under RH

(4.29)\begin{equation} (\log G_1)^{(i)}(\beta+it,y) \ll (\log y)^{i-1}\frac{\log x}{y} ( |\psi(y)-y| + y^{{1}/{2}}) \end{equation}

for $i=0,\,1,\,2$ and $|t|\le 1$. Applying (4.28) and (4.29) with $i=1$ shows

\[ \frac{ |G'(\beta,y)|}{G(\beta,y)} \frac{1}{\log x} \ll \frac{|\psi(y)-y|+y^{{1}/{2}}}{y}+ \frac{\max\{A,A^2\}}{u\max\{1,|\log A|\}} \]

which treats the first quantity in (4.26). We now consider the third term in (4.26). Observe

(4.30)\begin{align} & \frac{ \max_{|v|\le 1/\log y}|G'(\beta+iv,y)|}{G(\beta,y)\exp(u^{1/3}/20)} \le \frac{ \max_{|v|\le 1/\log y}|G(\beta+iv,y)|}{G(\beta,y)\exp(u^{1/3}/20)}\nonumber\\ & \quad \cdot \max_{|v|\le 1 }|(\log G)'(\beta+iv,y)|. \end{align}

From (4.28) and (4.29) we have

(4.31)\begin{equation} \max_{|v|\le 1 }|(\log G)'(\beta+iv,y)| \ll (\log x)^4, \end{equation}

say, and, by (2.11) and (4.29),

(4.32)\begin{equation} \frac{ \max_{|v|\le 1/\log y}|G(\beta+iv,y)|}{G(\beta,y)}\le \exp(C_{\varepsilon} (\log y)^2 (\log x) /y^{1/2}), \end{equation}

so that (4.30) leads to

\[ \frac{\max_{|v|\le 1/\log y} |G'(\beta+iv,y)|}{G(\beta,y)\exp(u^{1/3}/20)} \ll_{\varepsilon} \frac{\exp(C_{\varepsilon} (\log y)^2 (\log x) /y^{1/2})}{ \exp(u^{1/3}/40)} \ll_{\varepsilon} \frac{1}{y}. \]

It remains to bound the second term in (4.26). Observe

(4.33)\begin{align} & \frac{ \max_{|v|\le t_0}|G''(\beta+iv,y)|}{G(\beta,y)(\log x) (\log y)} \le \frac{ \max_{|v|\le t_0}|G(\beta+iv,y)|}{G(\beta,y)(\log x) (\log y)} \nonumber\\ & \quad \cdot ( \max_{|v|\le 1}|(\log G)''(\beta+iv,y)| + \max_{|v|\le 1}|(\log G)'(\beta+iv,y)|^2). \end{align}

By (2.11) we can bound the fraction in the right-hand side of (4.33) by $O_{\varepsilon }(1)$:

\begin{align*} & \frac{ \max_{|v|\le t_0}|G(\beta+iv,y)|}{G(\beta,y)}\le \frac{ \max_{|v|\le t_0}|G_1(\beta+iv,y)|}{G_1(\beta,y)}\\ & \quad \le \exp\big( \int_{{-}t_0}^{t_0} |G_1'/G_1|(\beta+iv,y) \,{\rm d}v\big)\le \exp(C_{\varepsilon}t_0 (\log y)^2(\log x)/y^{1/2}) \ll_{\varepsilon} 1. \end{align*}

The derivatives of $\log G$ in the right-hand side of (4.33) are handled by (4.28) and (4.29), giving

\begin{align*} & \max_{|v|\le 1}|(\log G)''(\beta+iv,y)| + \max_{|v|\le 1}|(\log G)'(\beta+iv,y)|^2 \\ & \quad \ll \frac{(\log y )(\log x)}{y}(|\psi(y)-y|+y^{{1}/{2}}) + \frac{(\log x)^2}{y^2} (|\psi(y)-y|+y^{{1}/{2}})^2\\ & \quad + (\log y)^{2} \left( \frac{\max\{A, A^2\}}{\max\{1,|\log A|\}} + \frac{\max\{A, A^2\}^2}{\max\{1,|\log A|\}^2}\right). \end{align*}

Dividing this by $(\log x) (\log y)$ gives a bound for the second term in (4.26).
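The prime-counting error $\psi(y)-y$ drives all of the bounds above. For small $y$ one can compute $\psi$ directly and watch $|\psi(y)-y|$ stay well below the benchmark $y^{1/2}\log^2 y$ that the theorem feeds in; a short sketch (ours, for illustration only):

```python
from math import log

def chebyshev_psi(y):
    """psi(y) = sum of log p over prime powers p^k <= y."""
    sieve = bytearray([1]) * (y + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(y ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, y + 1, p)))
    total = 0.0
    for p in range(2, y + 1):
        if sieve[p]:
            pk = p
            while pk <= y:          # each prime power p^k <= y contributes log p
                total += log(p)
                pk *= p
    return total

worst = max(
    abs(chebyshev_psi(y) - y) / (y ** 0.5 * log(y) ** 2)
    for y in range(50, 2001, 50)
)
# in this range the ratio stays far below 1
```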

Acknowledgements

We are grateful to Sacha Mangerel for asking us about the integer analogue of [Reference Gorodetsky10]. We thank the referee for useful suggestions and comments that improved the manuscript. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 851318).

Appendix A. Review of $\Lambda (x,\,y)$

Appendix A.1 $\lambda _y$ and its Laplace transform

Saias [Reference Saias17, Lem. 4(iii)] proved that $\lambda _y(v) \ll \rho (v)v^3 + e^{2v}y^{-v}$ holds for $y \ge 2$, $v \ge 1$. The following weaker version of his result suffices for our purposes.

Lemma A.1 (Saias)

If $u \ge \max \{C,\,y+1\}$ we have $\lambda _y(u) \ll (C/y)^{u}$.

Proof. The condition $u \ge \max \{C,\,y+1\}$ ensures $e^{\xi (u-1)} \ge y$:

\[ e^{\xi(u-1)} \ge (u-1) \xi(u-1) \ge y\xi(u-1) \ge y. \]
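For completeness, the first step in this chain uses the standard definition of $\xi$ (we assume here, as is conventional, that $\xi (t)$ is the positive solution of $e^{\xi (t)}=1+t\xi (t)$, and that $C$ is large enough that $\xi (t)\ge 1$ for $t \ge C-1$):

\begin{align*} e^{\xi(u-1)} & = 1+(u-1)\xi(u-1) \ge (u-1)\xi(u-1),\\ (u-1)\xi(u-1) & \ge y\,\xi(u-1)\ge y \qquad (u-1 \ge y,\ \xi(u-1)\ge 1). \end{align*}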

Integrating the definition of $\lambda _y$ by parts gives

(A.1)\begin{equation} \lambda_y(u)=\rho(u)+\int_{0}^{u-1}(-\rho'(u-v))\{y^v\}y^{{-}v}\,{\rm d}v + O(y^{{-}u}). \end{equation}

By (A.1) and the definition of $\rho$ we have

(A.2)\begin{align} \frac{\lambda_y(u)}{\rho(u)} & = 1 - \int_{0}^{u-1} \frac{\rho'(u-v)}{\rho(u)} \frac{\{y^v\}}{y^v} \,{\rm d}v+ O( y^{{-}u})\nonumber\\ & = \int_{0}^{u-1} \frac{\rho(u-v-1)}{(u-v)\rho(u)} \frac{\{y^v\}}{y^v} \,{\rm d}v+ O(1). \end{align}

One has $\rho (u-v)\ll \rho (u)e^{v \xi (u)}$ uniformly for $0 \le v \le u$ [Reference Hildebrand and Tenenbaum14, Cor. 2.4]. Hence the integral on the right-hand side of (A.2) is

\[{\ll} \frac{\rho(u-1)}{\rho(u)} \int_{0}^{u-1} \left(\frac{e^{\xi(u-1)}}{y}\right)^v \,{\rm d}v \le \frac{\rho(u-1)}{\rho(u)} (u-1) \ll u e^{\xi(u)} \]

which is $\ll u^2 \log (u+1)$ by lemma 2.1. Hence

\begin{align*} \lambda_y(u) & \ll \rho(u) u^2\log(u+1) \ll u^{3/2} \log(u+1) \exp(I(\xi(u))) e^{{-}u\xi(u)}\\ & \quad \le u^{3/2} \log(u+1) \exp(I(\xi(u))) y^{{-}u} \end{align*}

using lemma 2.2. We have $I(\xi (u)) \ll u$. As $u^{3/2}\log (u+1)$ may be absorbed into $C^{u}$, we are done.
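As a purely numerical aside (not part of the proof, and with function names of our own choosing), the Dickman function and the integral in (A.1) are straightforward to evaluate directly; the $O(y^{-u})$ term in (A.1) is dropped in this sketch.

```python
import math

def dickman_rho_table(tmax=4.0, k=2000):
    """Tabulate the Dickman function on [0, tmax] with spacing 1/k,
    integrating t*rho'(t) = -rho(t-1), rho = 1 on [0, 1], by the
    trapezoid rule on a uniform grid."""
    h, n = 1.0 / k, int(round(tmax * k))
    rho = [1.0] * (n + 1)
    for i in range(k, n):
        f0 = -rho[i - k] / (i * h)             # rho'(t) = -rho(t-1)/t
        f1 = -rho[i + 1 - k] / ((i + 1) * h)
        rho[i + 1] = rho[i] + 0.5 * h * (f0 + f1)
    return rho

def lambda_y_main_terms(u_idx, y, rho, k):
    """rho(u) plus the integral in (A.1), namely
    int_0^{u-1} (-rho'(u-v)) {y^v} y^{-v} dv with -rho'(t) = rho(t-1)/t,
    by the trapezoid rule on the same grid (u = u_idx / k)."""
    h, total = 1.0 / k, 0.0
    for j in range(u_idx - k + 1):             # v = j*h runs over [0, u-1]
        t = y ** (j * h)
        w = rho[u_idx - k - j] / ((u_idx - j) * h) * (t - math.floor(t)) / t
        total += 0.5 * w if j in (0, u_idx - k) else w
    return rho[u_idx] + h * total

k = 2000
rho = dickman_rho_table(4.0, k)
lam = lambda_y_main_terms(5 * k // 2, 100.0, rho, k)   # u = 2.5, y = 100
```

The tabulation reproduces $\rho (2)=1-\log 2$ to within the discretization error, and `lam` exceeds $\rho (2.5)$, as it must, since the integrand in (A.1) is non-negative.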

By lemma A.1, the contribution of $v \ge \max \{C,\,y+1\}$ to (1.17) is

\[ \int_{\max\{C,y+1\}}^{\infty}| e^{{-}sv}\lambda_y(v)| \,{\rm d}v \ll \int_{\max\{C,y+1\}}^{\infty} ( e^{-\Re s}C/y)^v \,{\rm d}v<\infty. \]

This establishes

Corollary A.2 Fix $\varepsilon >0$. If $y \ge C_{\varepsilon}$ then $\hat {\lambda }_y$ converges absolutely for $\Re s > -(\log y)/(1+\varepsilon )$.

Appendix A.2 Asymptotics of $\Lambda$

We define $r \colon [1,\,\infty ) \to \mathbb {R}$ by $r(t):=-\rho '(t)/\rho (t)=\rho (t-1)/(t \rho (t))$.

Lemma A.3 [Reference Fouvry and Tenenbaum8, Eq. (6.3)] For $0 \le v \le u-1$ and $u \ge 1$ we have

\[ \rho'(u-v)-\rho'(u)e^{vr(u)} \ll \frac{\rho(u)ve^{vr(u)}}{u}( 1+v\log(u+1)). \]

Lemma A.4 [Reference de la Bretèche and Tenenbaum4, Lem. 3.7] For $u \ge 1$ we have $r(u) = \xi (u)+O(1/u)$.
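Lemma A.4 can be sanity-checked numerically; the sketch below is illustrative only, uses helper names of our own, and assumes the standard normalization in which $\xi (u)$ is the positive solution of $e^{\xi (u)}=1+u\xi (u)$. On $(1,\,2]$ one has $\rho (t)=1-\log t$ exactly, so $r(t)=1/(t(1-\log t))$ there.

```python
import math

def xi(u, iters=80):
    """Positive solution of exp(x) = 1 + u*x for u > 1, by bisection
    (assuming de Bruijn's standard normalization of xi)."""
    lo = 1e-12                                    # g(lo) < 0 for u > 1
    hi = math.log(u) + 2.0 * math.log(math.log(u + 3.0) + 2.0) + 5.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if math.exp(mid) - 1.0 - u * mid < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# On (1, 2] we have rho(t) = 1 - log t, hence r(t) = 1/(t(1 - log t)):
r2 = 1.0 / (2.0 * (1.0 - math.log(2.0)))   # r(2) = 1.6294...
xi2 = xi(2.0)                              # xi(2) = 1.2564...
```

The difference $r(2)-\xi (2)\approx 0.37$ is of the size $O(1/u)$ predicted by lemma A.4, with a modest implied constant.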

Proposition A.5 Fix $\varepsilon >0$. Suppose $x\ge C_{\varepsilon}$. For $x \ge y \ge (\log x)^{1+\varepsilon }$,

\[ \Lambda(x,y) =x \rho(u) K\left(- \frac{r(u)}{\log y}\right) \left(1+O_{\varepsilon}\left( \frac{1}{(\log x)(\log y)} + \frac{y}{x\log x}\right)\right). \]

Equation (1.5) follows from proposition A.5 using lemma A.4. Proposition A.5, in slightly weaker form, is implicit in [Reference de la Bretèche and Tenenbaum5, pp. 176–177], and the proof given below follows these pages.

Proof. For $u=1$ the claim is trivial since $\Lambda (x,\,x)=\lfloor x\rfloor$ [Reference de Bruijn3, Eq. (3.2)], so we assume $u>1$. Recall the integral representation $\zeta (s)=s/(s-1) -s \int _{1}^{\infty }\{t\}\,{\rm d}t/t^{1+s}$ for $\Re s >0$ [Reference Montgomery and Vaughan15, Eq. (1.24)]. We apply it with $s=1-r(u)/\log y$ and perform the change of variable $t=y^v$ to obtain

(A.3)\begin{equation} K({-}r(u)/\log y) =1+r(u) \int_{0}^{\infty} e^{r(u)v}\{y^v\}y^{{-}v} \,{\rm d}v. \end{equation}
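For the reader's convenience, here is the computation in full. Write $s=-r(u)/\log y$ and assume the normalization $K(s)=s\zeta (1+s)/(1+s)$ (the computation below reproduces (A.3) under this normalization). Then

\begin{align*} \frac{s\,\zeta(1+s)}{1+s} & = 1-s\int_{1}^{\infty}\{t\}\,\frac{{\rm d}t}{t^{2+s}} = 1-s(\log y)\int_{0}^{\infty}\{y^v\}y^{{-}v(1+s)}\,{\rm d}v\\ & = 1+r(u)\int_{0}^{\infty}e^{r(u)v}\{y^v\}y^{{-}v}\,{\rm d}v, \end{align*}

since ${\rm d}t=y^v(\log y)\,{\rm d}v$, $-s\log y=r(u)$ and $y^{{-}vs}=e^{r(u)v}$.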

From (A.3) and (A.1) we deduce

(A.4)\begin{equation} x\rho(u)K(- r(u)/\log y)-\Lambda(x,y)= x\int_{0}^{\infty}(\rho'(u-v)-\rho'(u)e^{r(u)v})\{y^v\}y^{{-}v}\,{\rm d}v+ O(1). \end{equation}

It remains to show that the right-hand side of (A.4) is

\[ \ll_{\varepsilon} x\rho(u)\left(\frac{1}{(\log x)(\log y)} +\frac{y}{x \log x}\right). \]

It is convenient to set

(A.5)\begin{equation} a :=\log\left(\frac{y}{e^{r(u)}}\right)= (\log y) - r(u) \ge \frac{\varepsilon}{2}\log y, \end{equation}

where the inequality is due to lemmas A.4 and 2.1 and our assumptions on $x$ and $y$. By lemma A.3, the contribution of $0 \le v \le u-1$ to the right-hand side of (A.4) is

\begin{align*} & \ll \frac{x \rho(u)}{u} \int_{0}^{u-1} \left( \frac{e^{r(u)}}{y}\right)^{v} v(1+v\log(u+1))\,{\rm d}v\\ & = \frac{x \rho(u)}{u}\left({-}e^{{-}av} \left( \frac{\log(u+1)}{a}v^2+\frac{2\log(u+1)+a}{a^2}v + \frac{2\log(u+1)+a}{a^3}\right)\right) \Big|^{v=u-1}_{v=0}. \end{align*}
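The antiderivative may be checked by differentiation: writing $A=\log (u+1)/a$, $B=(2\log (u+1)+a)/a^2$ and $D=(2\log (u+1)+a)/a^3$, we have

\[ \frac{{\rm d}}{{\rm d}v}\left({-}e^{{-}av}(Av^2+Bv+D)\right)=e^{{-}av}\left(aAv^2+(aB-2A)v+(aD-B)\right)=e^{{-}av}v(1+v\log(u+1)), \]

since $aA=\log (u+1)$, $aB-2A=1$ and $aD=B$.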

Using $e^{(u-1)a}\gg \max \{(u-1)a,\, (u-1)^2 a^2\}$ and (A.5) we find that the last quantity is $\ll _{\varepsilon } x\rho (u)/((\log x)(\log y))$, which is acceptable. For $v > u-1$ we have $\rho '(u-v)=0$, and that part of the integral (times $x$) is estimated as

\[{\ll} x(-\rho'(u)) \int_{u-1}^{\infty}e^{{-}av}\,{\rm d}v =x\rho(u) r(u)\frac{e^{{-}a(u-1)}}{a}\ll_{\varepsilon} x\rho(u) \log(u+1) \frac{e^{{-}a(u-1)}}{\log y}. \]

If $u \ge 2$ this is $\ll _{\varepsilon } x\rho (u) /((\log x)(\log y))$, otherwise this is $\ll x\rho (u)(y/x)/\log x$. Both cases give an acceptable contribution.

Footnotes

1 For $x \ge y \ge x^{1-\varepsilon }$, de Bruijn proved $\Psi (x,\,y) = \Lambda (x,\,y) (1 + O_{\varepsilon }( (\log x)^2 /y^{1/2}))$ under RH [Reference de Bruijn3, Eq. (1.3)].

References

Akbary, A., Ng, N. and Shahabi, M.. Limiting distributions of the classical error terms of prime number theory. Q. J. Math. 65 (2014), 743–780.
de Bruijn, N. G.. The asymptotic behaviour of a function occurring in the theory of primes. J. Indian Math. Soc. (N.S.) 15 (1951), 25–32.
de Bruijn, N. G.. On the number of positive integers $\leq x$ and free of prime factors $>y$. Nederl. Acad. Wetensch. Proc. Ser. A. 54 (1951), 50–60.
de la Bretèche, R. and Tenenbaum, G.. Entiers friables: inégalité de Turán-Kubilius et applications. Invent. Math. 159 (2005), 531–588.
de la Bretèche, R. and Tenenbaum, G.. Propriétés statistiques des entiers friables. Ramanujan J. 9 (2005), 139–202.
de la Bretèche, R. and Tenenbaum, G.. Une nouvelle approche dans la théorie des entiers friables. Compos. Math. 153 (2017), 453–473.
Dickman, K.. On the frequency of numbers containing prime factors of a certain relative magnitude. Ark. Mat. Astron. Fys. 22 A (1930), 1–14.
Fouvry, É. and Tenenbaum, G.. Répartition statistique des entiers sans grand facteur premier dans les progressions arithmétiques. Proc. London Math. Soc. (3) 72 (1996), 481–514.
Gorodetsky, O.. Smooth integers and the Dickman ${\rho }$ function. To appear in Journal d'Analyse Mathématique.
Gorodetsky, O.. Uniform estimates for smooth polynomials over finite fields. Discrete Analysis 16 (2023), 31 pp.
Hildebrand, A.. Integers free of large prime factors and the Riemann hypothesis. Mathematika 31 (1984), 258–271 (1985).
Hildebrand, A.. On the number of positive integers $\leq x$ and free of prime factors $>y$. J. Number Theory 22 (1986), 289–307.
Hildebrand, A. and Tenenbaum, G.. On integers free of large prime factors. Trans. Amer. Math. Soc. 296 (1986), 265–290.
Hildebrand, A. and Tenenbaum, G.. Integers without large prime factors. J. Théor. Nombres Bordeaux 5 (1993), 411–484.
Montgomery, H. L. and Vaughan, R. C.. Multiplicative number theory. I. Classical theory, Cambridge Studies in Advanced Mathematics, Vol. 97 (Cambridge University Press, Cambridge, 2007).
Rubinstein, M. and Sarnak, P.. Chebyshev's bias. Experiment. Math. 3 (1994), 173–197.
Saias, É.. Sur le nombre des entiers sans grand facteur premier. J. Number Theory 32 (1989), 78–99.
Tenenbaum, G.. Introduction to analytic and probabilistic number theory, Graduate Studies in Mathematics, Vol. 163 (American Mathematical Society, Providence, RI, third edition, 2015). Translated from the 2008 French edition by Patrick D. F. Ion.
Titchmarsh, E. C.. The theory of the Riemann zeta-function (The Clarendon Press, Oxford University Press, New York, 1986), second edition, edited and with a preface by D. R. Heath-Brown.
von Koch, H.. Sur la distribution des nombres premiers. Acta Math. 24 (1901), 159–182.