
A weakly nonlinear amplitude equation approach to the bypass transition in the two-dimensional Lamb–Oseen vortex

Published online by Cambridge University Press:  28 November 2023

Yves-Marie Ducimetière*
Affiliation:
Laboratory of Fluid Mechanics and Instabilities, EPFL, CH1015 Lausanne, Switzerland
François Gallaire
Affiliation:
Laboratory of Fluid Mechanics and Instabilities, EPFL, CH1015 Lausanne, Switzerland
*
Email address for correspondence: [email protected]

Abstract

We analytically derive an amplitude equation for the weakly nonlinear evolution of the linearly most amplified response of a non-normal dynamical system. The development generalizes the method proposed in Ducimetière et al. (J. Fluid Mech., vol. 947, 2022, A43), in that the base flow now depends arbitrarily on time, and the operator exponential formalism for the evolution of the perturbation is not used. Applied to the two-dimensional Lamb–Oseen vortex, the amplitude equation successfully predicts whether the nonlinearities weaken or reinforce the transient gain in the weakly nonlinear regime. In particular, the minimum amplitude of the linear optimal initial perturbation required for the amplitude equation to lose a solution, interpreted as the flow experiencing a bypass (subcritical) transition, is found to decay as a power law with the Reynolds number. Although with a different exponent, this is recovered in direct numerical simulations, showing a transition towards a tripolar state. The simplicity of the amplitude equation and the link made with the sensitivity formula permit a physical interpretation of nonlinear effects, in light of existing work on Landau damping and on shear instabilities. The amplitude equation also quantifies the respective contributions of the second harmonic and the spatial mean flow distortion in the nonlinear modification of the gain.

Type
JFM Papers
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2023. Published by Cambridge University Press.

1. Introduction

The two-dimensional axisymmetric Lamb–Oseen (Gaussian) vortex flow, whose vorticity is a strictly decreasing radial function, is linearly stable: the eigenvalues of $\boldsymbol{\mathsf{L}}$, the Navier–Stokes operator linearized around this flow, all possess a positive or null damping rate. In fact, even in the absence of viscosity, where all damping rates are null, linear perturbations experience an inviscid exponential decay; this phenomenon, called ‘Landau damping’, was observed and interpreted for instance in Schecter et al. (2000). In particular, it was derived analytically that the Landau damping rate can be related to the vorticity gradient at the specific radius where the angular velocity associated with the dominant Landau pole equals that of the base flow.

The Landau damping can be interpreted in mathematical terms. In an inviscid framework, Schecter et al. (2000) first consider a ‘top-hat’ base vortex, for which the vorticity decreases slowly and then drops rapidly to zero when the radial coordinate $r$ exceeds a specific value $r_v$. This vortex supports a continuum of modes whose critical layers are located at some $r \leq r_v$, where they are singular since the base vorticity gradient is non-zero there; it also supports a discrete (‘Kelvin’, according to the terminology of, for instance, Balmforth, Llewellyn Smith & Young 2001) mode whose critical layer is located at $r_c \geq r_v$, where the base vorticity gradient is exactly zero: for this reason, the discrete mode is smooth, regular, and can be interpreted as a classical eigenmode. Second, Schecter et al. (2000) consider a base vortex equivalent to the first, with the addition of a low-vorticity ‘skirt’ that extends radially to a new, larger $r_v \geq r_c$. This introduces a non-zero base vorticity gradient at $r_c$, which ruins the regularity of the previously discrete mode. However, Schecter et al. (2000) argue that symptoms of the original discrete mode remain: some of the continuum modes closely resemble the latter in terms of structure and frequency, and combine to form what Schecter et al. (2000) call a ‘quasi-mode’. Therefore, if the discrete mode is used as an initial condition, it excites the continuum following a Lorentzian distribution peaked around the discrete mode frequency (see figure 3 in Schecter et al. 2000). As time evolves, the continuum modes disperse, and their superposition behaves like an exponentially damped version of the original discrete mode (hence the appellation ‘quasi-mode’).
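This quasi-mode picture lends itself to a compact numerical illustration. The sketch below, in which the spectrum parameters $\omega_0$ and $\gamma$ are assumed purely for illustration, superposes a continuum of undamped waves ${\rm e}^{-\text{i}\omega t}$ with Lorentzian weights of half-width $\gamma$; their phase mixing makes the summed amplitude decay at the exponential rate $\gamma$, exactly as a damped discrete mode would.

```python
import numpy as np

# Hypothetical Lorentzian excitation of the continuum: the peak frequency
# omega0 and half-width gamma (the Landau damping rate) are assumed values.
omega0, gamma = 1.0, 0.05
omega = np.linspace(omega0 - 40*gamma, omega0 + 40*gamma, 20001)
dom = omega[1] - omega[0]
weights = (gamma/np.pi) / ((omega - omega0)**2 + gamma**2)  # unit-mass Lorentzian

def amplitude(t):
    # Superposition of the (individually undamped) continuum contributions.
    return np.sum(weights * np.exp(-1j*omega*t)) * dom

# The envelope decays like exp(-gamma*t): dispersion of undamped continuum
# modes mimics an exponentially damped 'quasi-mode'.
t = np.array([0.0, 10.0, 20.0])
env = np.array([abs(amplitude(ti)) for ti in t])
```

The exponential decay follows from the Fourier transform of a Lorentzian being an exponential; no individual mode is damped, only their superposition.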

Schecter et al. (2000) also consider the response of a Gaussian vortex, such as the one considered here, to a generic external impulse. Although the response does not behave as a single damped wave but projects significantly onto a very large number of structurally different modes, the Landau damping is still found to be relevant and to dominate the initial decay of the perturbation.

Nevertheless, Rossi, Lingevitch & Bernoff (1997) evidenced that the Gaussian vortex flow, despite its linear stability, can relax to a new non-axisymmetric state, called ‘tripolar’, when subject to an arbitrary perturbation of sufficiently large amplitude. Such phenomenology is symptomatic of a subcritical bifurcation. This tripolar structure is well described, for instance, in Nolan & Farrell (1999) as a vortex for ‘which the low vorticity of the moat pools into two satellites of an elliptically deformed central vortex, with the whole structure rotating cyclonically’. It has been observed in the laboratory experiments of Denoix, Sommeria & Thess (1994), as well as in Van Heijst & Kloosterziel (1989), Kloosterziel & van Heijst (1991) and Van Heijst, Kloosterziel & Williams (1991), and is described in this last article as ‘a very stable structure, even persisting in a highly sheared fluid environment’.

This corroborates its importance in geophysical contexts, where tropical cyclones sometimes show rotating elliptical eyes, as reported for instance by Kuo, Williams & Chen (1999) for Typhoon Herb, which struck Taiwan in 1996. A radar located on the Wu-Feng mountain measured the horizontal distribution of maximum reflectivity for the typhoon, which reveals an elliptical eye (see their figures 1 and 2). The eye was rotating cyclonically with a period of 144 min. Concerning Hurricane Olivia, which was part of the 1994 Pacific hurricane season, Reasor et al. (2000) also documented an elliptical eye (see their figure 16). The ratio of minor to major axis was approximately $0.7$, and the period of (cyclonic) rotation was found to be around 50 min.

A substantial body of theoretical work has therefore been devoted to the appearance and persistence of the tripolar state. Some of these works frame the problem in terms of the Landau damping, for instance Schecter et al. (2000), Le Dizès (2000), Balmforth et al. (2001), Turner & Gilbert (2007) and Turner, Gilbert & Bassom (2008). Common to all these works is the following idea: if the perturbation is large enough to feed back nonlinearly on the mean vorticity (averaged in the azimuthal direction), in such a way that the mean vorticity gradient vanishes near the radius where the angular velocity associated with the dominant Landau pole equals that of the base flow, then the Landau damping is deactivated. Indeed, it was shown in Schecter et al. (2000), Turner & Gilbert (2007) and Turner et al. (2008) that such a cancellation of the Landau damping goes with the appearance of an undamped Kelvin mode. The effects of flattening the mean vorticity distribution have been thoroughly studied in Schecter et al. (2000) and Turner et al. (2008), although the flattening was introduced rather artificially, in order to mimic nonlinear effects a posteriori. It has, however, been rigorously quantified using a matched asymptotic expansion in Balmforth et al. (2001), the small parameter being directly linked to the amount of vorticity near the critical radius $r_c$ of the neutral Kelvin mode of a compact vortex, whose vorticity there would otherwise be zero. The developments result in an amplitude equation for the weakly nonlinear quasi-mode, which can predict a secondary instability for a sufficiently strong disturbance amplitude.

Under certain conditions, the vorticity tripole has also been shown to be the nonlinear fate of a shear instability (also sometimes called ‘barotropic’ instability, as opposed to ‘baroclinic’ instability, the latter requiring density stratification). This was clearly illustrated, for instance, in Carton, Flierl & Polvani (1989), Carton & Legras (1994), Carnevale & Kloosterziel (1994) and Kossin, Schubert & Montgomery (2000), among many other works. If the mean vorticity profile (averaged along the azimuthal direction) presents a local extremum at some radius, a necessary condition for shear instability is satisfied according to a generalization of the Rayleigh inflection-point theorem by Billant & Gallaire (2005). This is, for instance, the case of the family of shielded monopoles, where the vortex core of positive vorticity is surrounded by a ring of negative vorticity. By increasing the intensity of the shear, the shielded vortex becomes unstable, with a maximum growth rate for perturbations of azimuthal wavenumber $m=2$; by increasing the shear further, $m=3$ becomes even more unstable (see figure 7 in Carnevale & Kloosterziel 1994). Inspired by velocity measurements of Hurricane Gilbert, which occurred in 1988, Kossin et al. (2000) considered a piecewise-constant vorticity profile constituted of four distinct regions: an inner region of very high vorticity, a moat region of relatively low vorticity, an annular ring of positive vorticity and an irrotational far field. Kossin et al. (2000) then showed that, by narrowing the moat, a shear instability appears with a maximum growth rate at wavenumber $m=2$ (see their figure A1). This shear instability can be conceptualized as resulting from a wave interaction across the moat, between two Rossby waves riding, respectively, the inner edge of the annular ring and the outer edge of the central vortex. Note that what we designate here as a ‘Rossby’ wave, according to the terminology of Kossin et al. (2000), is similar to the ‘Kelvin’ mode discussed until now. Nonlinear simulations have then shown this instability to saturate into a tripolar state.
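As a minimal numerical sketch of the necessary condition above, consider the assumed shielded-monopole profile $W(r) = (1-r^2)\,{\rm e}^{-r^2}$ (not one of the profiles of the works cited, merely a convenient example): its positive core is surrounded by a ring of negative vorticity, and the profile attains a local extremum away from the axis, so the generalized Rayleigh criterion is satisfied.

```python
import numpy as np

# Assumed shielded monopole: positive vorticity core surrounded by a
# ring of negative vorticity.
r = np.linspace(1e-6, 5.0, 5001)
W = (1.0 - r**2) * np.exp(-r**2)

# Necessary condition for shear instability (generalized Rayleigh criterion):
# the azimuthally averaged vorticity has a local extremum at some r > 0,
# i.e. dW/dr changes sign away from the axis.
dW = np.gradient(W, r)
sign_changes = np.flatnonzero(np.diff(np.sign(dW)) != 0)
r_ext = r[sign_changes]   # here, the minimum of the negative ring, at r = sqrt(2)
```

For this profile, $\mathrm{d}W/\mathrm{d}r = -2r\,{\rm e}^{-r^2}(2-r^2)$ vanishes at $r=\sqrt{2}$, where the vorticity attains the (negative) minimum of the shielding ring.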

Furthermore, an important ingredient of the subcritical transition towards the tripolar state was found to be the non-normality of the linear operator $\boldsymbol{\mathsf{L}}$. An operator is non-normal if it does not commute with its adjoint, the expression of the latter being relative to the choice of an inner product. The eigenvectors of a non-normal operator do not form an orthogonal basis under this inner product, thus a negative growth rate for all eigenvalues does not guarantee that the associated norm decays monotonically. In fact, some small-amplitude perturbations may experience a large transient amplification. In particular, the linear transient growth analyses conducted, for instance, in Antkowiak & Brancher (2004), Antkowiak (2005), Pradeep & Hussain (2006), Heaton & Peake (2007) and Mao & Sherwin (2012) have revealed the Lamb–Oseen (and Batchelor) vortex flow to support such strong transient energy growth. Therefore, perturbations to the base flow field, such as free-stream turbulence or acoustic disturbances, can be amplified strongly enough through non-normal, linear mechanisms to lead to a regime where nonlinearities come into play; the flow may then escape from its linearly stable solution. This conjunction of non-normality followed by nonlinearity corresponds to the ‘bypass’ scenario proposed in Trefethen et al. (1993) and contextualized to the Lamb–Oseen vortex flow in Antkowiak (2005). Recently, the transient growth analysis of the Lamb–Oseen vortex has been extended numerically into the fully nonlinear regime via a Lagrangian optimization in Navrose et al. (2018).

The present work aims at reconciling the simplicity of a weakly nonlinear model (such as in Sipp 2000; Balmforth et al. 2001), in the sense that it is easier to solve and interpret than the original equation, with non-normality. Specifically, the objective is to construct an amplitude equation that is not restricted to the description of close-to-neutral modes or quasi-modes, but that extends to responses associated with optimal transient growth.

The amplitude equation derived analytically in this paper will not restrict the shape of the base flow to one supporting a close-to-neutral mode or a quasi-mode, and will tolerate an arbitrary temporal dependence of this base flow; this is precisely because these complexities are already incorporated in the optimal transient growth analysis of which we study the weakly nonlinear continuation. This makes possible the weakly nonlinear analysis of optimal responses on vortices with more realistic vorticity profiles from field measurements. For instance, the method could be applied to the profile reported in figure 1 of Kossin et al. (2000), from flight-level radar measurements of Hurricane Gilbert.

The derivation of a weakly nonlinear reduced-order model will make it possible to distinguish the regimes where weak nonlinearities reinforce the transient gain from the regimes where they cause it to decrease. It will also provide a rough criterion for the minimum amplitude of the initial condition required to trigger a bypass transition away from the axisymmetric state. Finally, it will help us quantify the importance of the distortion of the flow averaged in the azimuthal direction, called the ‘mean flow’, relative to that of the second harmonic in the nonlinear effects at play.

After a brief derivation of the linear formulation in § 2, the method advanced to derive the amplitude equation is outlined in § 3; specifically, we vary the amplitude of a given initial condition and predict, at low numerical cost, the gain of the response at a selected time $t=t_o$. The (general) method is then illustrated in § 5 with the two-dimensional Lamb–Oseen vortex flow with azimuthal wavenumber $m=2$, exhibiting large gain and subcritical bifurcation.

2. Linear formulation

In the following subsection, the formulation of the linear transient growth problem is briefly recalled. In the rest of the paper, we shall consider a purely two-dimensional flow, invariant and with zero velocity in the axial direction. This two-dimensionality implicitly assumes the flow not to be subject to any three-dimensional instabilities, or to any kind of spontaneous axial variations. This assumption is often made when considering vortex flows, and will not be discussed further in the present paper. Note that the restriction to a two-dimensional flow is not intrinsic to the analytical development proposed in the following, but is simply made to ease the computations, thus concentrating our efforts on the analysis of the subcritical transition towards the tripolar state.

Let $\boldsymbol {U}_{{b}}(r,t) = [0,U_{\text {b},\theta }]^{\rm T}(r,t)$ denote a reference vortex flow, satisfying the Navier–Stokes equations exactly (without body force) and supporting a small-amplitude perturbation field of the form

(2.1)\begin{equation} \begin{bmatrix} \boldsymbol{u} \\ p \end{bmatrix} (r,\theta,t) = \begin{bmatrix} \hat{\boldsymbol{u}}\\ \hat{p} \end{bmatrix}(r,t)\,{\rm e}^{\text{i} m\theta}+\text{c.c.}, \end{equation}

and $\hat {\boldsymbol {u}}(r,t)=[\hat {u}_r,\hat {u}_{\theta }]^{\rm T}(r,t)$. The invariance of the base flow along the azimuthal ($\theta$) coordinate justifies the Fourier mode expansion of the perturbation in this direction; $m \in \mathbb {Z}$ denotes the wavenumber in the azimuthal direction. Linearizing the Navier–Stokes equations around $\boldsymbol {U}_{{b}}(r,t)$ leads to an equation for the temporal evolution of the velocity field $\hat {\boldsymbol {u}}(r,t)$

(2.2)\begin{align} \partial_t \hat{\boldsymbol{u}} &= \underbrace{\begin{bmatrix} (\varDelta_{m}-1/r^2)/Re - \text{i} m\varOmega & -2\text{i} m/(r^2 Re) + 2 \varOmega \\ 2\text{i} m/(r^2 Re) - W_z & (\varDelta_{m}-1/r^2)/Re - \text{i} m\varOmega \end{bmatrix}}_{{\doteq} \boldsymbol{\mathsf{A}}_{m}(t)} \hat{\boldsymbol{u}} - \hat{\boldsymbol{\nabla}}_{m} \hat{p}(\hat{\boldsymbol{u}}) \nonumber\\ &\doteq \boldsymbol{\mathsf{L}}_{m}(t) \hat{\boldsymbol{u}} , \end{align}

where $\hat {\boldsymbol {\nabla }}_{m} \doteq [\partial _r,\text {i} m/r]^{\rm T}$ and

(2.3ac)\begin{equation} \varOmega \doteq U_{{b},\theta}/r,\quad W_z \doteq \varOmega+\partial_r U_{{b},\theta},\quad \varDelta_{m} \doteq \partial_{rr}+(1/r)\partial_r - m^2/r^2. \end{equation}

The letter $\varOmega$ denotes the angular velocity of the base flow, and $W_z$ is its vorticity along the $z$-axis. The operator $\text{i} m\varOmega$ represents the advection by the base flow, and $\varDelta_{m}$ is the Laplacian associated with viscous diffusion: it is therefore systematically multiplied by $1/{\textit{Re}}$, with ${\textit{Re}}$ the Reynolds number. The presence of the pressure term in (2.2) ensures that the velocity field $\hat{\boldsymbol{u}}$ remains divergence-free at all times. Indeed, the pressure is linearly linked to the velocity field according to the Poisson equation

(2.4)\begin{equation} \varDelta_{m} \hat{p}(\hat{\boldsymbol{u}}) =- 2 \text{i} ( m \partial_r U_{\text{b},\theta}/r) \hat{u}_r + 2(\partial_r+1/r)(\varOmega \hat{u}_{\theta}), \end{equation}

obtained by taking the divergence of the momentum equations, then enforcing the continuity equation $(\partial_r+1/r)\hat{u}_r + \text{i} m \hat{u}_{\theta}/r=0$. The perturbation fields are subject to the following boundary conditions over $r\in[0,+\infty)$, valid at all times:

(2.5a,b)\begin{align} \begin{cases} \hat{u}_r |_{r=0} = \hat{u}_{\theta} |_{r=0} = 0\\ \partial_r \hat{p}|_{r=0} = 0 \end{cases} \text{for $m$ even},\quad \text{and}\quad \begin{cases} \partial_r \hat{u}_r |_{r=0} = \partial_r\hat{u}_{\theta} |_{r=0} = 0\\ \hat{p}|_{r=0} = 0 \end{cases} \text{for $m$ odd,} \end{align}

as well as $\lim_{r\rightarrow \infty} \hat{\boldsymbol{u}} =\boldsymbol{0}$. As shown in the appendix of Kerswell & Davey (1996), by imposing the parity conditions (2.5a,b), ‘the correct axial behaviour automatically follows without need to explicitly impose the regularity conditions’.

Only the temporal dependence of the operators will be made explicit in the rest of the paper; for instance, $\boldsymbol{\mathsf{L}}_{m}(t)$, whose temporal dependence is inherited from the base flow, is actually $\boldsymbol{\mathsf{L}}_{m}(t)=\boldsymbol{\mathsf{L}}_{m}(r,t; {\textit{Re}})$. Precisely because it depends on time, the operator exponential formalism cannot be used to solve (2.2). Instead, given the value of the perturbation field at a time $t_i$, its temporal evolution at a later time $t$ according to (2.2) formally reads $\hat{\boldsymbol{u}}(t) = \boldsymbol{\varPsi}(t,t_i) \hat{\boldsymbol{u}}(t_i)$. The operator $\boldsymbol{\varPsi}(t,t_i)$ is traditionally named the propagator, for its action directly maps $\hat{\boldsymbol{u}}(t_i)$ onto $\hat{\boldsymbol{u}}(t)$ (Farrell & Ioannou 1996). If $t_i=0$ in particular,

(2.6)\begin{equation} \hat{\boldsymbol{u}}(t) = \boldsymbol{\varPsi}(t,0)\hat{\boldsymbol{u}}(0). \end{equation}

The propagator satisfies the semigroup property

(2.7)\begin{equation} \boldsymbol{\varPsi}(t,t_i) = \boldsymbol{\varPsi}(t,s)\boldsymbol{\varPsi}(s,t_i), \end{equation}

and $\boldsymbol {\varPsi }(t_i,t_i) = \boldsymbol {I}$ (the identity operator). Injecting $\hat {\boldsymbol {u}}(t) = \boldsymbol {\varPsi }(t,t_i) \hat {\boldsymbol {u}}(t_i)$ in (2.2), and enforcing the equation to be satisfied for all $\hat {\boldsymbol {u}}(t_i)$ leads to an evolution equation for the propagator

(2.8)\begin{equation} \partial_t \boldsymbol{\varPsi}(t,t_i) = \boldsymbol{\mathsf{L}}_{m}(t) \boldsymbol{\varPsi}(t,t_i). \end{equation}

By evaluating (2.7) at $t=t_i$ and $s=t$, it follows that

(2.9)\begin{equation} \boldsymbol{I} = \boldsymbol{\varPsi}(t_i,t)\boldsymbol{\varPsi}(t,t_i) \quad \text{thereby}\ [\boldsymbol{\varPsi}(t,t_i)]^{-1} = \boldsymbol{\varPsi}(t_i,t). \end{equation}

Eventually, taking the temporal derivative of the first equation in (2.9) results in an evolution equation for the inverse propagator

(2.10)\begin{equation} \partial_t \boldsymbol{\varPsi}(t_i,t) =- \boldsymbol{\varPsi}(t_i,t)\boldsymbol{\mathsf{L}}_{m}(t). \end{equation}
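The propagator relations (2.7)–(2.10) can be checked directly on a toy problem. The sketch below (all operator entries are assumed, purely for illustration) builds the propagator of a time-dependent, non-normal $2\times 2$ system by stepping (2.8) with a Runge–Kutta scheme, then verifies the semigroup property (2.7) and the inverse relation (2.9).

```python
import numpy as np

# Toy time-dependent, non-normal 2x2 operator standing in for L_m(t);
# all numerical values are assumed for illustration only.
def L(t):
    return np.array([[-0.1, 5.0 + np.sin(t)],
                     [ 0.0, -0.3]])

def propagator(t_start, t_end, n=2000):
    # Integrate d(Psi)/dt = L(t) Psi, eq. (2.8), from Psi(t_start, t_start) = I
    # with classical RK4 steps (h may be negative for the inverse propagator).
    Psi, h = np.eye(2), (t_end - t_start)/n
    for k in range(n):
        t = t_start + k*h
        k1 = L(t)       @ Psi
        k2 = L(t + h/2) @ (Psi + h/2*k1)
        k3 = L(t + h/2) @ (Psi + h/2*k2)
        k4 = L(t + h)   @ (Psi + h*k3)
        Psi = Psi + h/6*(k1 + 2*k2 + 2*k3 + k4)
    return Psi

# Semigroup property (2.7): Psi(t, t_i) = Psi(t, s) Psi(s, t_i).
A = propagator(0.0, 2.0)
B = propagator(1.0, 2.0) @ propagator(0.0, 1.0)
# Inverse propagator (2.9): Psi(t_i, t) = [Psi(t, t_i)]^{-1}.
C = propagator(2.0, 0.0)
```

Integrating (2.8) backwards in time (negative step) yields the inverse propagator directly, consistent with (2.10).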

Note that in the presence of a forcing term $\hat{\boldsymbol{f}}(t)$ on the right-hand side of the system (2.2), its solution (2.6) generalizes to

(2.11)\begin{equation} \hat{\boldsymbol{u}}(t) = \boldsymbol{\varPsi}(t,0)\hat{\boldsymbol{u}}(0) + \boldsymbol{\varPsi}(t,0)\int_{0}^{t}\boldsymbol{\varPsi}(0,s)\hat{\boldsymbol{f}}(s)\,\mathrm{d} s. \end{equation}

This formula will prove useful shortly.

The transient growth analysis amounts to constructing an orthonormal basis for the initial flow field, with the particular feature that the elements of this basis are ranked according to how much they are amplified at a given temporal horizon $t_o>0$. The first element of this basis is the initial condition that is most amplified at $t=t_o$; the second is the most amplified under the constraint that it must be orthogonal to the first, and so on. Due to the non-normality of $\boldsymbol{\mathsf{L}}_{m}(t)$, the first few elements often lead to amplifications that are much larger than all the others. Thus, even if the initial condition projects equally onto each element of the basis, the flow response at $t=t_o$ is dominated by the contributions stemming from the first few elements of the basis only. In other words, the key idea behind a transient growth (and/or a resolvent) analysis is that the response of a system to external perturbations is intrinsic to the operator, thus the precise form of these external perturbations matters little.

The first step is to define the measure with which the amplification is sought. In the following, we choose the norm induced by the Hermitian inner product

(2.12)\begin{equation} \left \langle \hat{\boldsymbol{u}}_a | \hat{\boldsymbol{u}}_b \right \rangle \doteq \int^{\infty}_{0}\hat{\boldsymbol{u}}_a^{H}\hat{\boldsymbol{u}}_b r \,\mathrm{d} r, \end{equation}

the superscript ‘${H}$’ denoting the Hermitian transpose. Note that $\left \langle \hat {\boldsymbol {u}} | \hat {\boldsymbol {u}} \right \rangle =\|\hat {\boldsymbol {u}}\|^2$ is directly the kinetic energy of the perturbation. The largest linear amplification (or ‘gain’) between an initial flow structure and its evolution at $t=t_o>0$ (subscript $o$ for ‘optimal’) reads

(2.13)\begin{equation} G_o(t_o) \doteq \max_{\hat{\boldsymbol{u}}(0)}\frac{\|\hat{\boldsymbol{u}}(t_o)\|}{\|\hat{\boldsymbol{u}}(0)\|} \doteq \frac{1}{\epsilon_o}. \end{equation}

The optimal gain does not depend on the time itself, but only on the temporal horizon $t_o$. In the following, $G_o$ alone implies $G_o(t_o)$. The propagator formalism (2.6) is used to re-write the maximization problem (2.13) as an eigenvalue problem

(2.14) \begin{align} G_o^2 = \max_{\hat{\boldsymbol{u}}(0)}\frac{\left \langle \hat{\boldsymbol{u}}(t_o) | \hat{\boldsymbol{u}}(t_o) \right \rangle}{\left \langle \hat{\boldsymbol{u}}(0) | \hat{\boldsymbol{u}}(0) \right \rangle} &= \max_{\hat{\boldsymbol{u}}(0)} \frac{\left \langle \boldsymbol{\varPsi}(t_o,0)\hat{\boldsymbol{u}}(0) | \boldsymbol{\varPsi}(t_o,0)\hat{\boldsymbol{u}}(0) \right \rangle}{\left \langle \hat{\boldsymbol{u}}(0) | \hat{\boldsymbol{u}}(0) \right \rangle} \nonumber\\ &=\max_{\hat{\boldsymbol{u}}(0)}\frac{\left \langle \hat{\boldsymbol{u}}(0) | \boldsymbol{\varPsi}(t_o,0)^{{{\dagger}}}\boldsymbol{\varPsi}(t_o,0)\hat{\boldsymbol{u}}(0) \right \rangle}{\left \langle \hat{\boldsymbol{u}}(0) | \hat{\boldsymbol{u}}(0) \right \rangle}, \end{align}

where the operator $\boldsymbol{\varPsi}(t_o,0)^{{\dagger}}$ denotes the adjoint of $\boldsymbol{\varPsi}(t_o,0)$ under the inner product (2.12). From (2.14), it is clear that $G_o^2$ is also the largest (necessarily real) eigenvalue of the self-adjoint operator $\boldsymbol{\varPsi}(t_o,0)^{{\dagger}}\boldsymbol{\varPsi}(t_o,0)$, and the associated eigenvector, named $\hat{\boldsymbol{u}}_o$ in what follows, is the optimal initial condition. The latter is normalized such that $\left\langle \hat{\boldsymbol{u}}_o | \hat{\boldsymbol{u}}_o \right\rangle =1$. Smaller eigenvalues and corresponding eigenmodes constitute sub-optimal gains and initial conditions, and the family of eigenvectors is orthonormal. Let $\hat{\boldsymbol{l}}_o$ be the unit-norm response to the optimal initial condition $\hat{\boldsymbol{u}}_o$ at $t=t_o$, such that $\hat{\boldsymbol{l}}_o \doteq \boldsymbol{\varPsi}(t_o,0)\hat{\boldsymbol{u}}_o/G_o$ and $\left\langle \hat{\boldsymbol{l}}_o | \hat{\boldsymbol{l}}_o \right\rangle =1$. From this definition, using that $\boldsymbol{\varPsi}(t_o,0)^{{\dagger}}\boldsymbol{\varPsi}(t_o,0)\hat{\boldsymbol{u}}_o = G_o^2\hat{\boldsymbol{u}}_o$ and that the inverse of the adjoint is the adjoint of the inverse, two relations follow

(2.15a,b)\begin{equation} \boldsymbol{\varPsi}(0,t_o)\hat{\boldsymbol{l}}_o = \epsilon_o \hat{\boldsymbol{u}}_o \quad \text{and} \quad \boldsymbol{\varPsi}(0,t_o)^{{{\dagger}}}\hat{\boldsymbol{u}}_o = \epsilon_o \hat{\boldsymbol{l}}_o, \end{equation}

where we recall that $\epsilon_o$ is the inverse of the optimal gain. Note that these two last relations also indicate the optimal initial condition and its associated response to be, respectively, the right and left singular vectors of $\boldsymbol{\varPsi}(t_o,0)$, associated with its largest singular value. Since the operator $\boldsymbol{\mathsf{L}}_{m}(t)$ is assumed in the rest of the study to be strongly non-normal, the structures $\hat{\boldsymbol{u}}_o$ and $\hat{\boldsymbol{l}}_o$ generally project poorly onto each of its direct and adjoint eigenmodes (except in the limit $t_o \rightarrow \infty$).
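The relations (2.14)–(2.15) can be illustrated on a toy operator. In the sketch below, a constant (hence autonomous) non-normal $2\times 2$ matrix is assumed purely for illustration, so that the propagator reduces to a matrix exponential; the optimal gain is then its largest singular value, and (2.15a,b) hold for the corresponding singular vectors.

```python
import numpy as np
from scipy.linalg import expm, svd

# Assumed constant non-normal operator: stable eigenvalues (-0.1, -0.2), but a
# strong off-diagonal coupling responsible for transient growth.
L = np.array([[-0.1, 10.0],
              [ 0.0, -0.2]])
t_o = 5.0
Psi = expm(L * t_o)        # propagator Psi(t_o, 0) of the autonomous system

# Optimal gain G_o = largest singular value of the propagator; the right and
# left singular vectors are the optimal initial condition u_o and response l_o.
U, s, Vh = svd(Psi)
G_o, u_o, l_o = s[0], Vh[0], U[:, 0]
eps_o = 1.0 / G_o

# Relation (2.15a): Psi(0, t_o) l_o = eps_o u_o, with Psi(0, t_o) = Psi(t_o, 0)^{-1}.
check = np.linalg.inv(Psi) @ l_o
```

Despite both eigenvalues being stable, $G_o$ exceeds unity by a large margin, which is the hallmark of non-normal transient amplification.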

The full linear response seeded by $\epsilon _o \hat {\boldsymbol {u}}_o$ and such that $\hat {\boldsymbol {l}}(t_o)=\hat {\boldsymbol {l}}_o$ reads $\hat {\boldsymbol {l}}(t) = \epsilon _o \boldsymbol {\varPsi }(t,0)\hat {\boldsymbol {u}}_o$, or $\boldsymbol {\varPsi }(0,t)\hat {\boldsymbol {l}}(t) = \epsilon _o\hat {\boldsymbol {u}}_o$. The gain associated with this full linear response reads

(2.16)\begin{equation} G(t;t_o) \doteq \frac{\|\hat{\boldsymbol{l}}(t)\|}{\|\epsilon_o \hat{\boldsymbol{u}}_o\|} = \frac{\|\hat{\boldsymbol{l}}(t)\|}{\epsilon_o} =\|\boldsymbol{\varPsi}(t,0)\hat{\boldsymbol{u}}_o\|, \end{equation}

where the parameter $t_o$ after the semicolon in $G(t;t_o)$ highlights that this gain was optimized for the time $t_o$ specifically. Therefore, the gain (2.16) evaluated at $t=t_o$ equals that defined in (2.13), i.e. $G(t_o;t_o)=G_o(t_o)$. Nevertheless, we stress that the gain (2.16) depends on the time $t$ and is parameterized by the temporal horizon $t_o$, whereas the optimal gain (2.13) depends only on the temporal horizon $t_o$. In the following, the shortened notation $G(t)$ will systematically imply $G(t;t_o)$, and will sometimes be used to lighten the notation.

Of all the temporal horizons $t_o$, the one leading to the largest optimal gain will be highlighted with the subscript ‘max’ such that

(2.17)\begin{equation} \max_{t_o>0}G_o(t_o) = G_o(t_{o,{max}}) \doteq \frac{1}{\epsilon_{o,{max}}}. \end{equation}
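Continuing the same kind of toy illustration (a constant non-normal operator whose entries are assumed), the definition (2.17) amounts to sweeping the horizon $t_o$, computing the optimal gain $G_o(t_o)$ as the largest singular value of the propagator for each horizon, and retaining the maximizing horizon $t_{o,{max}}$:

```python
import numpy as np
from scipy.linalg import expm

# Assumed constant non-normal 2x2 operator standing in for L_m.
L = np.array([[-0.1, 10.0],
              [ 0.0, -0.2]])

# G_o(t_o) is the spectral (2-)norm, i.e. the largest singular value,
# of the propagator expm(L t_o), computed for each horizon.
horizons = np.linspace(0.1, 60.0, 600)
gains = np.array([np.linalg.norm(expm(L * t), 2) for t in horizons])

i_max = np.argmax(gains)
t_o_max, G_o_max = horizons[i_max], gains[i_max]   # epsilon_{o,max} = 1/G_o_max
```

The resulting curve $G_o(t_o)$ rises above unity, peaks at $t_{o,{max}}$ and then decays, since all eigenvalues of the assumed operator are stable.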

3. Weakly nonlinear formulation

In the linear paradigm, the gain is independent of $\|\hat{\boldsymbol{u}}(0)\|$, for the latter is assumed to be arbitrarily small. In reality, $\|\hat{\boldsymbol{u}}(0)\|$ may be sufficiently large for the nonlinear corrections to the response to no longer be negligible, and thus for the transient gain to depart from its linear prediction. Building on our previous work (Ducimetière, Boujo & Gallaire 2022), we propose hereafter a method for capturing weakly nonlinear effects on the transient gain over a time-varying base flow.

In the rest of the present study, we subject the two-dimensional, unforced, Navier–Stokes equations governing the flow field $\boldsymbol{U}(t)$ to a small-amplitude initial perturbation around the reference vortex flow $\boldsymbol{U}_{{b}}(t)$. The initial perturbation has an azimuthal wavenumber $m$, and its radial structure $\hat{\boldsymbol{u}}_h$, with $\|\hat{\boldsymbol{u}}_h\|=1$, is for now arbitrary. The strong non-normality assumption justifies further assuming the transient gain to be large, that is to say, $\epsilon_o \ll 1$, such that $\epsilon_o$ constitutes a natural choice of small parameter with which to scale the amplitude of the initial perturbation. Specifically,

(3.1)\begin{equation} \boldsymbol{U}(0) - \boldsymbol{U}_{{b}}(0)= \alpha \sqrt{\epsilon_o}^3 (\hat{\boldsymbol{u}}_h \,{\rm e}^{\text{i} m\theta} + \text{c.c.}) = U_0 (\hat{\boldsymbol{u}}_h \,{\rm e}^{\text{i} m\theta} + \text{c.c.}), \end{equation}

where the amplitude of the initial condition, $U_0 \doteq \alpha \sqrt{\epsilon_o}^3$, can vary through the real prefactor $\alpha = O(1)$. We aim to capture the variation of the transient gain as $U_0$ is increased.

For this purpose, the total flow field $\boldsymbol {U}$ is developed as a multiple-scale asymptotic expansion in terms of powers of $\sqrt {\epsilon _o}$ around the reference vortex flow $\boldsymbol {U}_{{b}}(t)$

(3.2)\begin{equation} \boldsymbol{U}(t,T) = \boldsymbol{U}_{{b}}(t) + \sqrt{\epsilon_o} \boldsymbol{u}_1(t,T) + \epsilon_o \boldsymbol{u}_2(t,T) + \sqrt{\epsilon_o}^3 \boldsymbol{u}_3(t,T) + O(\epsilon_o^2). \end{equation}

The slow time scale $T \doteq \epsilon _o t$ is introduced to capture the slow modulation of the linear trajectory as nonlinearities progressively set in. The reason why the expansion and the scaling of the initial condition are made in terms of powers of $\sqrt {\epsilon _o}$ will be specified later. Injecting (3.2) in the Navier–Stokes equations, then expanding each $\boldsymbol {u}_j$ as a Fourier series in space according to

(3.3)\begin{equation} \boldsymbol{u}_j(t,T) = \bar{\boldsymbol{u}}^{(0)}_j(t,T) + \sum_{n}(\bar{\boldsymbol{u}}^{(n)}_{j}(t,T)\exp({\text{i} n m\theta})+\text{c.c.}) ,\end{equation}

with $n=1,2,3,\ldots$, yields

(3.4)\begin{align} & \sqrt{\epsilon_o} \left[\left[(\partial_t-\boldsymbol{\mathsf{L}}_{m}(t))\bar{\boldsymbol{u}}_1^{(1)} \,{\rm e}^{\text{i} m\theta} + \text{c.c.} \right] + \bar{\boldsymbol{s}}_1 \right] \nonumber\\ &\quad +\epsilon_o \left[\left[(\partial_t-\boldsymbol{\mathsf{L}}_{m}(t))\bar{\boldsymbol{u}}_2^{(1)} \,{\rm e}^{\text{i} m\theta} + \text{c.c.} \right] + \bar{\boldsymbol{s}}_2 + \boldsymbol{\mathsf{C}} \left [ \boldsymbol{u}_1 , \boldsymbol{u}_1 \right ]\right] \nonumber\\ &\quad +\sqrt{\epsilon_o}^3\left[\left[(\partial_t-\boldsymbol{\mathsf{L}}_{m}(t))\bar{\boldsymbol{u}}_3^{(1)}\,{\rm e}^{\text{i} m\theta} + \text{c.c.} \right]+ \bar{\boldsymbol{s}}_3 + \partial_T\boldsymbol{u}_1 + \boldsymbol{\mathsf{C}} \left [ \boldsymbol{u}_1 , \boldsymbol{u}_2 \right ] + \boldsymbol{\mathsf{C}} \left [ \boldsymbol{u}_2 , \boldsymbol{u}_1 \right ] \right] \nonumber\\ &\quad + O(\epsilon_o^2) = \boldsymbol{0}, \end{align}

where the nonlinear advection operator

(3.5) \begin{equation} \boldsymbol{\mathsf{C}} [\boldsymbol{x} , \boldsymbol{y} ] \doteq (\boldsymbol{x} \boldsymbol{\cdot} \boldsymbol{\nabla})\boldsymbol{y} + r^{-1} [-x_{\theta}y_{\theta},(x_{\theta} y_r+x_r y_{\theta})/2]^{\rm T},\quad \boldsymbol{\nabla} = [\partial_r,r^{-1}\partial_{\theta}], \end{equation}

has been defined. The fundamental field, corresponding to $n=1$, has been isolated at each order in (3.4), and the harmonics have been incorporated in $\bar {\boldsymbol {s}}_j$ with

(3.6)\begin{equation} \bar{\boldsymbol{s}}_j = (\partial_t-\boldsymbol{\mathsf{L}}_{0}(t))\bar{\boldsymbol{u}}_{j}^{(0)} + \sum_{n}\left[(\partial_t-\boldsymbol{\mathsf{L}}_{nm}(t)) \bar{\boldsymbol{u}}_{j}^{(n)}\exp({\text{i} n m\theta})+\text{c.c.}\right] ,\end{equation}

for $n=2,3,\ldots$. From (3.1), the only field with a non-zero initial condition is the fundamental one appearing at third order, specifically, $\bar {\boldsymbol {u}}_{3}^{(1)}|_{t=0}=\alpha \hat {\boldsymbol {u}}_h$. On the other hand, the relation

(3.7)\begin{equation} (\partial_t-\boldsymbol{\mathsf{L}}_{m}(t))\bar{\boldsymbol{u}}^{(1)}_j = \boldsymbol{\varPsi}(t,0)\partial_t \left(\boldsymbol{\varPsi}(0,t)\bar{\boldsymbol{u}}^{(1)}_j \right) ,\end{equation}

can be established using (2.10). Further injecting (3.7) in (3.4) leads to

(3.8)\begin{align} & \sqrt{\epsilon_o} \left[\left[\boldsymbol{\varPsi}(t,0)\partial_t \left(\boldsymbol{\varPsi}(0,t)\bar{\boldsymbol{u}}^{(1)}_1 \right) {\rm e}^{\text{i} m\theta} + \text{c.c.} \right] + \bar{\boldsymbol{s}}_1 \right] \nonumber\\ &\quad +\epsilon_o \left[\left[\boldsymbol{\varPsi}(t,0)\partial_t \left(\boldsymbol{\varPsi}(0,t)\bar{\boldsymbol{u}}^{(1)}_2 \right) {\rm e}^{\text{i} m\theta} + \text{c.c.} \right] + \bar{\boldsymbol{s}}_2 + \boldsymbol{\mathsf{C}} \left [ \boldsymbol{u}_1 , \boldsymbol{u}_1 \right ]\right] \nonumber\\ &\quad +\sqrt{\epsilon_o}^3\left[\left[\boldsymbol{\varPsi}(t,0)\partial_t \left(\boldsymbol{\varPsi}(0,t)\bar{\boldsymbol{u}}^{(1)}_3 \right) {\rm e}^{\text{i} m\theta} + \text{c.c.} \right] \right.\nonumber\\ &\quad \left.\vphantom{\left(\boldsymbol{\varPsi}(0,t)\bar{\boldsymbol{u}}^{(1)}_3 \right)} +\bar{\boldsymbol{s}}_3 + \partial_T\boldsymbol{u}_1 + \boldsymbol{\mathsf{C}} \left [ \boldsymbol{u}_1 , \boldsymbol{u}_2 \right ] + \boldsymbol{\mathsf{C}} \left [ \boldsymbol{u}_2 , \boldsymbol{u}_1 \right ] \right] \nonumber\\ &\quad +O(\epsilon_o^2) = \boldsymbol{0}. \end{align}

We recall that the application of the operator $\boldsymbol {\varPsi }(0,t)$ is equivalent to integrating the system backward from $t$ to $0$, and that it maps the optimal response $\hat {\boldsymbol {l}}_o$ on a field of very small amplitude $\epsilon _o \ll 1$ (by assumption) in (2.15a,b). The main idea behind our method is to take advantage of this by perturbing the operator $\boldsymbol {\varPsi }(0,t)$ for all $t \geq 0$ according to

(3.9) \begin{equation} \boldsymbol{\varPhi}(0,t) \doteq \boldsymbol{\varPsi}(0,t)-\epsilon_o \boldsymbol{\mathsf{P}}(t), \quad \text{with}\ \boldsymbol{\mathsf{P}}(t) \doteq H(t)\frac{\hat{\boldsymbol{u}}_o \big\langle \hat{\boldsymbol{l}}(t) | \ast \big\rangle}{\|\hat{\boldsymbol{l}}(t)\|^2}, \end{equation}

and where the Heaviside distribution $H(t)$ is defined as $H(0)=0$ and $H(t>0)=1$. The operator $\big\langle \hat {\boldsymbol {l}}(t) | \ast \big\rangle$ applied to any field $\hat {\boldsymbol {g}}$ simply results in $\big\langle \hat {\boldsymbol {l}}(t) | \hat {\boldsymbol {g}} \big\rangle$. The operator $\boldsymbol {\varPhi }(0,t)$ satisfies $\boldsymbol {\varPhi }(0,0)=\boldsymbol {I}$ and $\boldsymbol {\varPhi }(0,t)\hat {\boldsymbol {l}}(t) = \boldsymbol {0}$ for $t>0$, and therefore is singular for all strictly positive times; the linear trajectory $\hat {\boldsymbol {l}}(t)$ constitutes its non-trivial (time-evolving) kernel. We further show in Appendix A that the non-trivial kernel $\hat {\boldsymbol {b}}(t)$ of the adjoint operator $\boldsymbol {\varPhi }(0,t)^{{\dagger} }$ for $t>0$, such that $\boldsymbol {\varPhi }(0,t)^{{\dagger} }\hat {\boldsymbol {b}}(t) = \boldsymbol {0}$, reads $\hat {\boldsymbol {b}}(t) = \boldsymbol {\varPsi }(t,0)^{{\dagger} }\hat {\boldsymbol {l}}(t)$.

Note that $\boldsymbol {\varPhi }(0,t) = \boldsymbol {\varPsi }(0,t)(\boldsymbol {I}-\hat {\boldsymbol {l}}(t)\langle \hat {\boldsymbol {l}}(t) | \ast \rangle /\|\hat {\boldsymbol {l}}(t)\|^2)$, where the term in parentheses is an orthogonal projection operator: it is self-adjoint, linear, idempotent and its application projects onto the subspace that is complementary to $\hat {\boldsymbol {l}}(t)$. Therefore (3.9), by stating that $\boldsymbol {\varPsi }(0,t) \approx \boldsymbol {\varPhi }(0,t)$, implies that applying $\boldsymbol {\varPsi }(0,t)$, or applying $\boldsymbol {\varPhi }(0,t)$ that first removes the component on $\hat {\boldsymbol {l}}(t)$ then applies $\boldsymbol {\varPsi }(0,t)$, both lead to very similar initial states. That is precisely because $\boldsymbol {\varPsi }(0,t)$ maps $\hat {\boldsymbol {l}}(t)$ onto $\epsilon _o \hat {\boldsymbol {u}}_o$ of very small amplitude $\epsilon _o \ll 1$.
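The algebraic properties of the projection operator invoked above are easily checked numerically. The following sketch (illustrative only; the vector `l` is a random complex stand-in for the trajectory $\hat {\boldsymbol {l}}(t)$ at a fixed time, not a flow field) verifies that $\boldsymbol {I}-\hat {\boldsymbol {l}}\langle \hat {\boldsymbol {l}} | \ast \rangle /\|\hat {\boldsymbol {l}}\|^2$ is self-adjoint, idempotent and annihilates $\hat {\boldsymbol {l}}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# Random complex stand-in for the trajectory l_hat(t) at a fixed time
l = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Orthogonal projector onto the complement of l: I - l <l|*> / ||l||^2
P = np.eye(n) - np.outer(l, l.conj()) / np.vdot(l, l).real

assert np.allclose(P, P.conj().T)  # self-adjoint
assert np.allclose(P @ P, P)       # idempotent
assert np.allclose(P @ l, 0.0)     # l spans the (non-trivial) kernel
```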

In principle, expansion (3.9) requires $\|\boldsymbol {\varPsi }(0,t)\| \gg \epsilon _o \|\boldsymbol{\mathsf{P}}(t)\| = \epsilon _o / \|\hat {\boldsymbol {l}}(t)\| = 1/ \|\boldsymbol {\varPsi }(t,0)\hat {\boldsymbol {u}}_o\|$; by using that the norm of the inverse of an operator is the inverse of its minimum singular value

(3.10)\begin{equation} \|\boldsymbol{\varPsi}(0,t)\| = \left(\min_{\hat{\boldsymbol{u}}(0),\|\hat{\boldsymbol{u}}(0)\|=1}\|\boldsymbol{\varPsi}(t,0)\hat{\boldsymbol{u}}(0)\|\right)^{-1}, \end{equation}

the condition above is equivalent to

(3.11)\begin{equation} \min_{\hat{\boldsymbol{u}}(0),\|\hat{\boldsymbol{u}}(0)\|=1}\|\boldsymbol{\varPsi}(t,0)\hat{\boldsymbol{u}}(0)\| \ll \|\boldsymbol{\varPsi}(t,0)\hat{\boldsymbol{u}}_o\|. \end{equation}

In other words, the operator perturbation holds a priori for the times $t$ such that the response to the least amplified initial condition at $t$ is much smaller than the response to $\hat {\boldsymbol {u}}_o$, which is the most amplified at $t_o$. This is certainly verified for times around $t=t_o$, which is appropriate, for the response $\hat {\boldsymbol {l}}(t)$ is expected to be nonlinearly modified at these times first.
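Once the propagator is discretized, condition (3.11) can be probed through a singular value decomposition. The toy matrix below is an arbitrary non-normal stand-in for $\boldsymbol {\varPsi }(t,0)$ (not the discretization used in this paper); it illustrates that the least amplified unit-norm initial condition corresponds to the minimum singular value, while the optimal initial condition corresponds to the leading right singular vector:

```python
import numpy as np

rng = np.random.default_rng(1)
# Arbitrary non-normal matrix standing in for the propagator Psi(t, 0)
Psi = np.triu(rng.standard_normal((8, 8))) + 5.0 * np.eye(8)

U, s, Vh = np.linalg.svd(Psi)
u_opt = Vh[0]                          # most amplified initial condition
amp_opt = np.linalg.norm(Psi @ u_opt)  # = s[0], the optimal gain
amp_min = s[-1]                        # least possible amplification

assert np.isclose(amp_opt, s[0])
# (3.11) requires amp_min << amp_opt for the operator perturbation to hold
assert amp_min < amp_opt
```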

By then perturbing the operator $\boldsymbol {\varPsi }(0,t)$ in (3.8) according to (3.9), we are left with

(3.12)\begin{align} & \sqrt{\epsilon_o} \left[\left[\boldsymbol{\varPsi}(t,0)\partial_t \left(\boldsymbol{\varPhi}(0,t)\bar{\boldsymbol{u}}^{(1)}_1 \right) {\rm e}^{\text{i} m\theta} + \text{c.c.} \right] + \bar{\boldsymbol{s}}_1 \right] \nonumber\\ &\quad +\epsilon_o \left[\left[\boldsymbol{\varPsi}(t,0)\partial_t \left(\boldsymbol{\varPhi}(0,t)\bar{\boldsymbol{u}}^{(1)}_2 \right) {\rm e}^{\text{i} m\theta} + \text{c.c.} \right] + \bar{\boldsymbol{s}}_2 + \boldsymbol{\mathsf{C}} \left [ \boldsymbol{u}_1 , \boldsymbol{u}_1 \right ]\right] \nonumber\\ &\quad +\sqrt{\epsilon_o}^3\left[\left[\boldsymbol{\varPsi}(t,0)\partial_t \left(\boldsymbol{\varPhi}(0,t)\bar{\boldsymbol{u}}^{(1)}_3 \right) {\rm e}^{\text{i} m\theta} + \text{c.c.} \right] \right.\nonumber\\ &\quad + \bar{\boldsymbol{s}}_3 + \partial_T\boldsymbol{u}_1 + \boldsymbol{\mathsf{C}} \left [ \boldsymbol{u}_1 , \boldsymbol{u}_2 \right ]+ \boldsymbol{\mathsf{C}} \left [ \boldsymbol{u}_2 , \boldsymbol{u}_1 \right ] \nonumber\\ &\quad + \left. \left[ \boldsymbol{\varPsi}(t,0)\partial_t \left(\boldsymbol{\mathsf{P}}(t)\bar{\boldsymbol{u}}^{(1)}_1 \right) {\rm e}^{\text{i} m\theta}+\text{c.c.} \right] \right] + O(\epsilon_o^2) = \boldsymbol{0}. \end{align}

Note that the perturbation operator $\epsilon _o \boldsymbol{\mathsf{P}}(t)$, modifying $\boldsymbol {\varPsi }(0,t)$ into $\boldsymbol {\varPhi }(0,t)$ at leading order $O(\sqrt {\epsilon _o})$, is compensated for at third order $O(\sqrt {\epsilon _o}^3)$ specifically. This was done on purpose and explains a posteriori why the asymptotic expansion was outlined in terms of integer powers of $\sqrt {\epsilon _o}$, instead of, for instance, integer powers of $\epsilon _o$. More precisely, we anticipated the forcing term $\boldsymbol{\mathsf{C}} \left [ \boldsymbol {u}_1 , \boldsymbol {u}_2 \right ] + \boldsymbol{\mathsf{C}} \left [ \boldsymbol {u}_2 , \boldsymbol {u}_1 \right ]$ appearing at third order to have a non-zero component on the fundamental wavenumber $m$, due to the bi-linearity of the operator $\boldsymbol{\mathsf{C}} \left [ \ast, \ast \right ]$; therefore, making the term in $\boldsymbol{\mathsf{P}}(t)$ appear at third order as well was a way of putting all forcing terms oscillating at $m$ at the same order.

Note also that only the propagator associated with the wavenumber $m$ was perturbed, although harmonics may equally lead to significant transient gains. This selective perturbation is justified a priori by the fact that these harmonics are not externally excited like the fundamental pair is through the initial condition, but are only generated nonlinearly at higher orders. If, however, the harmonic responses are a posteriori found to endanger the asymptotic hierarchy, they can always be included in the kernel of their associated propagators in the manner of (3.9).

Terms in (3.12) are collected at each order in $\sqrt {\epsilon _o}$, yielding a cascade of linear problems. At order $\sqrt {\epsilon _o}$ we assemble $(\partial _t-\boldsymbol{\mathsf{L}}_{nm}(t))\bar {\boldsymbol {u}}_1^{(n)}=\boldsymbol {0}$ with $\bar {\boldsymbol {u}}_1^{(n)}|_{t=0}=\boldsymbol {0}$ for $n=0,2,3,\ldots$ and

(3.13)\begin{equation} \boldsymbol{\varPsi}(t,0)\partial_t \left(\boldsymbol{\varPhi}(0,t)\bar{\boldsymbol{u}}^{(1)}_1 \right) = \boldsymbol{0} \quad \text{with}\ \bar{\boldsymbol{u}}^{(1)}_1|_{t=0} = \boldsymbol{0}. \end{equation}

This leads to $\bar {\boldsymbol {u}}_1^{(n)}=\boldsymbol {0}$ for all times and $n\neq 1$. In addition, multiplying (3.13) by $\boldsymbol {\varPsi }(0,t)$, then integrating in time and using $\boldsymbol {\varPhi }(0,0) = \boldsymbol {I}$, gives $\boldsymbol {\varPhi }(0,t)\bar {\boldsymbol {u}}^{(1)}_1 = \boldsymbol {0}$. The kernel of $\boldsymbol {\varPhi }(0,t)$ being equal to $H(t)\hat {\boldsymbol {l}}(t)$ for $t \geq 0$, the solution for $\bar {\boldsymbol {u}}_1^{(1)}$ reads

(3.14)\begin{equation} \bar{\boldsymbol{u}}^{(1)}_1(t,T) = A(T)H(t)\hat{\boldsymbol{l}}(t) \quad \text{for}\ t \geq 0, \end{equation}

where $A(T) \in \mathbb {C}$ is a slowly varying scalar amplitude verifying $\partial _t A = 0$. Eventually, the general solution at order $\sqrt {\epsilon _o}$ is written

(3.15)\begin{equation} \boldsymbol{u}_1(t,T) = A(T)H(t)\hat{\boldsymbol{l}}(t)\,{\rm e}^{\text{i} m\theta} + \text{c.c.} \quad \text{for}\ t \geq 0. \end{equation}

In the linear regime, $A$ would be constant over time; therefore, by continuity, we expect its variation to be small in the weakly nonlinear regime. This is implied by the fact that $A$ depends on the slow time $T$, since $\mathrm {d}_t A = \partial _t A + (\partial _t T) \partial _T A = \epsilon _o \partial _T A = O(\epsilon _o) \ll 1$.

At order $\epsilon _o$, the solution is

(3.16)\begin{equation} \boldsymbol{u}_2(t,T) = |A(T)|^2 \boldsymbol{u}_2^{(0)}(t) + \left( A(T)^2\hat{\boldsymbol{u}}_2^{(2)}(t)\exp({\text{i} 2m\theta})+\text{c.c.} \right), \end{equation}

for $t\geq 0$, where

(3.17a)\begin{equation} (\partial_t - \boldsymbol{\mathsf{L}}_{0}(t))\boldsymbol{u}_2^{(0)} =-\boldsymbol{\mathsf{C}}_{m} \big[\hat{\boldsymbol{l}}^* , \hat{\boldsymbol{l}}\big] + \text{c.c.}, \quad \boldsymbol{u}_2^{(0)}(0)=\boldsymbol{0}, \end{equation}

and

(3.17b)\begin{equation} (\partial_t - \boldsymbol{\mathsf{L}}_{2m}(t))\hat{\boldsymbol{u}}_2^{(2)} =-\boldsymbol{\mathsf{C}}_{m} \big[ \hat{\boldsymbol{l}} , \hat{\boldsymbol{l}}\big], \quad \hat{\boldsymbol{u}}_2^{(2)}(0)=\boldsymbol{0}. \end{equation}

The advection operator in the Fourier space $\boldsymbol{\mathsf{C}}_{m} \left [ \hat {\boldsymbol {x}} , \hat {\boldsymbol {y}} \right ]$ is as in (3.5), except that $\boldsymbol {\nabla }$ is replaced by $\hat {\boldsymbol {\nabla }}_{m}$; it computes the transport, by the field $\hat {\boldsymbol {x}}$, of the field $\hat {\boldsymbol {y}}$ oscillating at $m$. In principle, the Heaviside distribution $H(t)$ multiplies the forcing terms in (3.17). However, it can be ignored here without loss of generality, for the forcing terms appear inside an integral between $0$ and $t$ in the formal expression of the solution, where the presence of a Heaviside distribution is unimportant. In (3.16), the homogeneous solution $A_2(T)H(t)\hat {\boldsymbol {l}}(t)$, solving the system $\boldsymbol {\varPhi }(0,t)\bar {\boldsymbol {u}}_2^{(1)}=\boldsymbol {0}$ for $t\geq 0$, was ignored. In this manner, the component of the overall solution on $\hat {\boldsymbol {l}}(t)$ is fully embedded in $A$, without needing to be corrected.
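The forced problems (3.17a,b) are of the generic form $(\partial_t - \boldsymbol{\mathsf{L}})\boldsymbol{u} = \boldsymbol{f}$ with a zero initial condition, and are integrated in time with a Crank–Nicolson scheme (§ 4.2). As a minimal sketch of this step, consider a scalar stand-in with hypothetical $L=-1$ and $f(t)=\sin t$ (not the actual operators of the paper), for which the exact solution is available:

```python
import numpy as np

# Scalar stand-in for (d/dt - L) u = f, u(0) = 0, with L = -1, f(t) = sin(t);
# the exact solution is u(t) = (sin(t) - cos(t) + exp(-t)) / 2.
L, dt, T = -1.0, 1e-3, 1.0
u = 0.0
for k in range(int(round(T / dt))):
    t0, t1 = k * dt, (k + 1) * dt
    # Crank-Nicolson: (1/dt - L/2) u1 = (1/dt + L/2) u0 + (f(t0) + f(t1)) / 2
    u = ((1.0/dt + L/2.0) * u + 0.5 * (np.sin(t0) + np.sin(t1))) / (1.0/dt - L/2.0)

exact = (np.sin(T) - np.cos(T) + np.exp(-T)) / 2.0
assert abs(u - exact) < 1e-5   # second-order accuracy of the scheme
```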

At order $\sqrt {\epsilon _o}^3$ in (3.12) are gathered two equations: the first yields the Fourier component of the solution oscillating at $m$

(3.18)\begin{equation} \boldsymbol{\varPsi}(t,0)\partial_t\left(\boldsymbol{\varPhi}(0,t)\bar{\boldsymbol{u}}_3^{(1)}\right) =-A|A|^2H\tilde{\boldsymbol{f}}-(\mathrm{d}_T A)H\hat{\boldsymbol{l}}-A\boldsymbol{\varPsi}(t,0)\partial_t (\boldsymbol{\mathsf{P}}(t)H\hat{\boldsymbol{l}}), \end{equation}

subject to $\bar {\boldsymbol {u}}_3^{(1)}|_{t=0}=\alpha \hat {\boldsymbol {u}}_h$ and where we defined

(3.19)\begin{equation} \tilde{\boldsymbol{f}} \doteq \boldsymbol{\mathsf{C}}_{2m} \left [ \hat{\boldsymbol{l}}^* , \hat{\boldsymbol{u}}_{2}^{(2)} \right ] + \boldsymbol{\mathsf{C}}_{-m} \left [ \hat{\boldsymbol{u}}_{2}^{(2)} , \hat{\boldsymbol{l}}^* \right ] + \boldsymbol{\mathsf{C}}_{0} \left [ \hat{\boldsymbol{l}} , \boldsymbol{u}_{2}^{(0)} \right ] +\boldsymbol{\mathsf{C}}_{m} \left [ \boldsymbol{u}_{2}^{(0)} , \hat{\boldsymbol{l}} \right ], \end{equation}

depending only on the fast time scale $t$. The second equation governs the Fourier component oscillating at $3m$

(3.20)\begin{equation} (\partial_t - \boldsymbol{\mathsf{L}}_{3m}(t))\bar{\boldsymbol{u}}_3^{(3)} =-A^3\left(\boldsymbol{\mathsf{C}}_{2m} \left [ \hat{\boldsymbol{l}} , \hat{\boldsymbol{u}}_2^{(2)} \right ] + \boldsymbol{\mathsf{C}}_{m} \left [ \hat{\boldsymbol{u}}_2^{(2)} , \hat{\boldsymbol{l}} \right ] \right), \end{equation}

subject to $\bar {\boldsymbol {u}}_3^{(3)}|_{t=0}=\boldsymbol {0}$. Noticing that $\boldsymbol{\mathsf{P}}(t)H\hat {\boldsymbol {l}} = H^2\hat {\boldsymbol {u}}_o = H\hat {\boldsymbol {u}}_o$, that $\boldsymbol {\varPsi }(0,t)\hat {\boldsymbol {l}}=\epsilon _o \hat {\boldsymbol {u}}_o$ (by definition) and multiplying (3.18) by $\boldsymbol {\varPsi }(0,t)$ results in

(3.21)\begin{equation} \partial_t\left(\boldsymbol{\varPhi}(0,t)\bar{\boldsymbol{u}}_3^{(1)}\right) =-A|A|^2H\boldsymbol{\varPsi}(0,t) \tilde{\boldsymbol{f}}-(\mathrm{d}_T A)H\epsilon_o \hat{\boldsymbol{u}}_o-A\partial_t\left( H\hat{\boldsymbol{u}}_o \right). \end{equation}

Integration over the time $t$, detailed in Appendix B, leads to

(3.22)\begin{equation} \boldsymbol{\varPhi}(0,t)\bar{\boldsymbol{u}}_{3}^{(1)} = \alpha \hat{\boldsymbol{u}}_h + A|A|^2\boldsymbol{\varPsi}(0,t)\tilde{\boldsymbol{u}} - (\mathrm{d}_T A)\epsilon_o t \hat{\boldsymbol{u}}_o -AH\hat{\boldsymbol{u}}_o, \end{equation}

where

(3.23)\begin{equation} (\partial_t - \boldsymbol{\mathsf{L}}_{m}(t))\tilde{\boldsymbol{u}} =- \tilde{\boldsymbol{f}},\quad \text{subject to}\ \tilde{\boldsymbol{u}}(0) = \boldsymbol{0}. \end{equation}

Evaluating (3.22) at $t=0$ in particular, we check that indeed $\bar {\boldsymbol {u}}_{3}^{(1)}|_{t=0}=\alpha \hat {\boldsymbol {u}}_h$. The operator $\boldsymbol {\varPhi }(0,t)$ being singular for strictly positive times, (3.22) admits a non-diverging particular solution if and only if its right-hand side is orthogonal to $\hat {\boldsymbol {b}}(t)$ for all $t>0$ (Fredholm alternative). This condition results in

(3.24) \begin{equation} \epsilon_o t \frac{\mathrm{d} A}{\mathrm{d} T} = \alpha \frac{ \big\langle \hat{\boldsymbol{b}} | \hat{\boldsymbol{u}}_h \big\rangle}{\big\langle \hat{\boldsymbol{b}} | \hat{\boldsymbol{u}}_o \big\rangle } - A + A|A|^2\frac{\big\langle \hat{\boldsymbol{b}} | \boldsymbol{\varPsi}(0,t)\tilde{\boldsymbol{u}}\big\rangle}{\big\langle \hat{\boldsymbol{b}} | \hat{\boldsymbol{u}}_o \big\rangle} ,\end{equation}

for $t>0$. The adjoint field $\hat {\boldsymbol {b}}(t) = \boldsymbol {\varPsi }(t,0)^{{\dagger} }\hat {\boldsymbol {l}}(t)$ tends towards $\hat {\boldsymbol {b}}(0) = \hat {\boldsymbol {l}}(0) = \epsilon _o \hat {\boldsymbol {u}}_o$ when $t$ tends towards $0$. Therefore, in this limit, (3.24) reduces to

(3.25)\begin{equation} \lim_{t\rightarrow 0} A = \alpha \left \langle \hat{\boldsymbol{u}}_o | \hat{\boldsymbol{u}}_h \right \rangle, \end{equation}

where we also used that the limits of both the left-hand side and the nonlinear term in (3.24) are $0$ when $t$ goes to $0$; the former because of the presence of the factor $t$, and the latter because $\tilde {\boldsymbol {u}}(0)= \boldsymbol {0}$. Solving (3.24) is equivalent to solving its partial derivative with respect to the fast time scale $t$, reading

(3.26) \begin{equation} \epsilon_o \frac{\mathrm{d} A}{\mathrm{d} T} = \alpha \frac{\mathrm{d}}{\mathrm{d} t} \left(\frac{\big\langle \hat{\boldsymbol{b}} | \hat{\boldsymbol{u}}_h \big\rangle}{\big\langle \hat{\boldsymbol{b}} | \hat{\boldsymbol{u}}_o \big\rangle }\right) + A|A|^2\frac{\mathrm{d} }{\mathrm{d} t}\left(\frac{\big\langle \hat{\boldsymbol{b}} | \boldsymbol{\varPsi}(0,t)\tilde{\boldsymbol{u}}\big\rangle} {\big\langle \hat{\boldsymbol{b}} | \hat{\boldsymbol{u}}_o \big\rangle}\right), \end{equation}

subject to the initial condition (3.25). The system is written solely in terms of $t$ by evaluating (3.25) and (3.26) at $T=\epsilon _o t$, and remembering that $\mathrm {d}_t A = \epsilon _o \mathrm {d}_T A |_{T=\epsilon _o t}$, leading to

(3.27) \begin{equation} \frac{\mathrm{d} A}{\mathrm{d} t} = \alpha \frac{\mathrm{d}}{\mathrm{d} t}\left(\frac{\big\langle \hat{\boldsymbol{b}} | \hat{\boldsymbol{u}}_h \big\rangle}{\big\langle \hat{\boldsymbol{b}} | \hat{\boldsymbol{u}}_o \big\rangle}\right) + A|A|^2\frac{\mathrm{d} }{\mathrm{d} t}\left(\frac{\big\langle \hat{\boldsymbol{b}} | \boldsymbol{\varPsi}(0,t)\tilde{\boldsymbol{u}}\big\rangle} {\big\langle \hat{\boldsymbol{b}} | \hat{\boldsymbol{u}}_o \big\rangle}\right),\quad \text{with}\ A(0)=\alpha \big\langle \hat{\boldsymbol{u}}_o | \hat{\boldsymbol{u}}_h \big\rangle. \end{equation}

The amplitude $A$, previously undefined at $t=0$, is extended by continuity there by setting $A(0)=\lim _{t\rightarrow 0}A(t)$. Note that such a rewriting of the amplitude equation in terms of $t$ was not done directly for (3.24), since it would have given an ordinary differential equation without an initial condition, (3.25) being intrinsically satisfied.

Introducing the rescaled amplitude $a \doteq \sqrt {\epsilon _o}A$ and recalling that the amplitude of the initial condition is $U_0=\alpha \sqrt {\epsilon _o}^3$, (3.27) becomes

(3.28) \begin{equation} \frac{\mathrm{d} a}{\mathrm{d} t} = \frac{U_0}{\epsilon_o}\frac{\mathrm{d}}{\mathrm{d} t} \left(\frac{\big\langle \hat{\boldsymbol{b}} | \hat{\boldsymbol{u}}_h \big\rangle}{\big\langle \hat{\boldsymbol{b}} | \hat{\boldsymbol{u}}_o\big\rangle}\right) + a|a|^2 \frac{\mathrm{d} \mu}{\mathrm{d} t},\quad \text{with}\ a(0)= \frac{U_0}{\epsilon_o} \big\langle \hat{\boldsymbol{u}}_o | \hat{\boldsymbol{u}}_h\big\rangle, \end{equation}

and where we defined the nonlinear coefficient

(3.29) \begin{equation} \mu(t) \doteq \frac{1}{\epsilon_o} \frac{\big\langle \hat{\boldsymbol{b}}(t) | \boldsymbol{\varPsi}(0,t)\tilde{\boldsymbol{u}}(t)\big\rangle}{\big\langle \hat{\boldsymbol{b}}(t) | \hat{\boldsymbol{u}}_o \big\rangle}= \frac{1}{\epsilon_o} \frac{\big\langle \boldsymbol{\varPsi}(t,0)^{{{\dagger}}}\hat{\boldsymbol{l}}(t) | \boldsymbol{\varPsi}(0,t)\tilde{\boldsymbol{u}}(t)\big\rangle} {\big\langle \boldsymbol{\varPsi}(t,0)^{{{\dagger}}} \hat{\boldsymbol{l}}(t) | \hat{\boldsymbol{u}}_o\big\rangle} = \frac{\big\langle \hat{\boldsymbol{l}}(t) | \tilde{\boldsymbol{u}}(t)\big\rangle}{\|\hat{\boldsymbol{l}}(t)\|^2}. \end{equation}

From here, the weakly nonlinear transient gain corresponding to the wavenumber $m$ for $t>0$ is simply

(3.30)\begin{equation} G_{{w}}(t;t_o) = \frac{\|\sqrt{\epsilon_o}\bar{\boldsymbol{u}}^{(1)}_1(t)\|}{\|U_0\hat{\boldsymbol{u}}_h\|} = \frac{|a(t)| \|\hat{\boldsymbol{l}}(t)\|}{U_0}. \end{equation}

The parameter $t_o$ after the semi-colon in $G_{{w}}(t;t_o)$ underlines that it is the weakly nonlinear prolongation of a gain that was linearly optimized for $t=t_o$. Again, in what follows, the shortened notation $G_{{w}}(t)$ will systematically imply $G_{{w}}(t;t_o)$. Note that $G_{{w}}(t_o)=|a(t_o)|/U_0$.

The linear regime corresponds to the limit $U_0/\epsilon _o \rightarrow 0$, or simply $U_0 \rightarrow 0$, since if the amplitude of the initial condition is much smaller than the inverse $\epsilon _o$ of the linear gain, then the amplitude of its response is much smaller than one. In this limit, the nonlinear terms in (3.28) become negligible, and the solution tends towards

(3.31) \begin{equation} a(t) = \frac{U_0}{\epsilon_o}\frac{\big\langle \hat{\boldsymbol{b}}(t) | \hat{\boldsymbol{u}}_h \big\rangle}{\big\langle \hat{\boldsymbol{b}}(t) | \hat{\boldsymbol{u}}_o \big\rangle}= \frac{\big\langle \hat{\boldsymbol{l}}(t) | U_0\boldsymbol{\varPsi}(t,0)\hat{\boldsymbol{u}}_h\big\rangle}{\|\hat{\boldsymbol{l}}(t)\|^2}, \end{equation}

which is the properly normalized projection on $\hat {\boldsymbol {l}}(t)$ of the response to the initial condition $U_0\hat {\boldsymbol {u}}_h$, as expected at a linear level since $a(t)$ is the amplitude along $\hat {\boldsymbol {l}}(t)$.

Note that in (3.9) any other perturbation operator of the form $\boldsymbol{\mathsf{P}}=H\hat {\boldsymbol {u}}_o\left \langle \hat {\boldsymbol {x}} | \ast \right \rangle /\left \langle \hat {\boldsymbol {x}} | \hat {\boldsymbol {l}} \right \rangle$, with $\hat {\boldsymbol {x}}$ any trajectory, would also have led to a singular operator with $\hat {\boldsymbol {l}}$ as a non-trivial kernel, but with $\hat {\boldsymbol {b}} = \boldsymbol {\varPsi }(t,0)^{{\dagger} }\hat {\boldsymbol {x}}$ as a non-trivial kernel of its adjoint. Choosing $\hat {\boldsymbol {x}} = \hat {\boldsymbol {l}}$ implying $\hat {\boldsymbol {b}} =\boldsymbol {\varPsi }(t,0)^{{\dagger} }\hat {\boldsymbol {l}}$, as was done so far, is the only choice leading to a coherent result in (3.31), thereby guaranteeing the uniqueness of $\boldsymbol{\mathsf{P}}$ in (3.9); as a side comment, it should be noted that among all $\hat {\boldsymbol {x}}$, $\hat {\boldsymbol {x}} = \hat {\boldsymbol {l}}$ also leads to the $\boldsymbol{\mathsf{P}}$ of the smallest possible norm.

For the gain at $t=t_o$, (3.31) corresponds to

(3.32) \begin{equation} \lim_{U_0 \rightarrow 0 } G_{{w}}(t_o) = \frac{1}{\epsilon_o} \frac{\big|\big\langle \hat{\boldsymbol{b}}(t_o) | \hat{\boldsymbol{u}}_h \big\rangle\big|}{\big|\big\langle \hat{\boldsymbol{b}}(t_o) | \hat{\boldsymbol{u}}_o \big\rangle\big|}= \frac{1}{\epsilon_o}\left|\big\langle \hat{\boldsymbol{u}}_o | \hat{\boldsymbol{u}}_h\big\rangle\right|, \end{equation}

since $\hat {\boldsymbol {b}}(t_o) = \epsilon _o \boldsymbol {\varPsi }(t_o,0)^{{\dagger} }\boldsymbol {\varPsi }(t_o,0)\hat {\boldsymbol {u}}_o = \hat {\boldsymbol {u}}_o$. The result (3.32) is also as expected from the linear prediction. In particular, we recover $\lim _{U_0 \rightarrow 0 } G_{{w}}(t_o) = 1/\epsilon _o = G_o$ when the optimal initial condition is applied ($\hat {\boldsymbol {u}}_h = \hat {\boldsymbol {u}}_o$). It also predicts the gain to vanish when $\hat {\boldsymbol {u}}_h$ is orthogonal to $\hat {\boldsymbol {u}}_o$, indicating the linear response at $t=t_o$ to be orthogonal to $\hat {\boldsymbol {l}}_o$, without taking into account the gains associated with sub-optimal forcings. Therefore, the latter should be $O(1/\sqrt {\epsilon _o})$ to be mathematically consistent.

For the rest of the paper, we set $\hat {\boldsymbol {u}}_h = \hat {\boldsymbol {u}}_o$. In physical terms, this choice, although very specific, is found to be the most natural in the absence of information regarding the structure of the actual initial condition $\hat {\boldsymbol {u}}_h$. Note, however, that $\hat {\boldsymbol {u}}_h$ might project very poorly on $\hat {\boldsymbol {u}}_o$, in which case reducing the dynamics to the response to $\hat {\boldsymbol {u}}_o$ would give inaccurate results.

Further expressing $a(t)$ in terms of an amplitude $|a(t)|\in \mathbb {R}^+$ and a phase $\phi (t) \in \mathbb {R}$ such that $a(t)\doteq |a(t)|\exp ({{\rm i}\phi (t)})$, the amplitude equation (3.28) becomes

(3.33a)\begin{equation} \frac{\mathrm{d} |a|}{\mathrm{d} t} = |a|^3\frac{\mathrm{d} \mu_r}{\mathrm{d} t},\quad \text{with}\ |a(0)|= \frac{U_0}{\epsilon_o}, \end{equation}

and

(3.33b)\begin{equation} \frac{\mathrm{d} \phi}{\mathrm{d} t} = |a|^2\frac{\mathrm{d} \mu_i}{\mathrm{d} t},\quad \text{with}\ \phi(0)= 0, \end{equation}

the subscripts ‘$r$’ and ‘$i$’ denoting the real and imaginary parts, respectively. The equation for $|a|$, in particular, bears the following analytical solution:

(3.34)\begin{equation} |a(t)| = \frac{U_0}{\epsilon_o} \frac{1}{\sqrt{1-2\left(\dfrac{U_0}{\epsilon_o}\right)^2\mu_r(t)}},\quad \text{thus}\ G_{{w}}(t) = \frac{G(t)}{\sqrt{1-2\left(\dfrac{U_0}{\epsilon_o}\right)^2\mu_r(t)}}. \end{equation}

For any time $t$, the gain decreases strictly monotonically with $U_0$ if $\mu _r(t)<0$, thus is maximum in the linear regime. Moreover, the gain stays constant with $U_0$ if $\mu _r(t)=0$, and increases strictly monotonically if $\mu _r(t)>0$. In the latter case, by increasing $U_0$ the gain eventually becomes singular (tending towards $+\infty$ at time $t$) for

(3.35)\begin{equation} U_0 = \frac{\epsilon_o}{\sqrt{2\mu_r(t)}},\quad \text{defined if}\ \mu_r(t)>0. \end{equation}

In the following, we call (3.34) the weakly nonlinear non-normal transient (WNNt) model. In Appendix C we show that, at first order in the gain variation, (3.34) partly reduces to the sensitivity of the gain $G(t)$ to the axisymmetric base flow modification $+a_{l}^2\boldsymbol {u}_2^{(0)}(t)$, where $a_l \doteq (U_0/\epsilon _o)$ is the amplitude of the response in the linear regime. In this sense, our model states that the small gain modifications obtained by increasing $U_0$ are partly due to the fact that the perturbation now evolves over a base flow that is distorted through the action of the Reynolds stress forcing $-\boldsymbol{\mathsf{C}}_{m} \big[ a_l\hat {\boldsymbol {l}}^{*} , a_l\hat {\boldsymbol {l}}\big] + \text {c.c.}$ of this very same perturbation (nonlinear feedback). In addition, (3.34) also includes the effects of the second harmonic oscillating at $2m$.
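As a consistency check (illustrative only), the closed form (3.34) can be compared against a direct numerical integration of (3.33a) for an assumed, purely hypothetical coefficient $\mu _r(t)$ and amplitudes $U_0$, $\epsilon _o$; the singular amplitude (3.35) is also evaluated at the final time:

```python
import numpy as np

U0, eps_o = 0.05, 0.1                  # hypothetical amplitudes, U0/eps_o = 0.5
mu_r = lambda t: 0.8 * np.sin(t)**2    # assumed mu_r(t), with mu_r(0) = 0
dmu_r = lambda t: 1.6 * np.sin(t) * np.cos(t)

a_l = U0 / eps_o                       # response amplitude in the linear regime
closed = lambda t: a_l / np.sqrt(1.0 - 2.0 * a_l**2 * mu_r(t))   # (3.34)

# March d|a|/dt = |a|^3 d(mu_r)/dt, |a(0)| = U0/eps_o, with RK2 (midpoint)
dt, T, a, t = 1e-4, 1.0, a_l, 0.0
for _ in range(int(round(T / dt))):
    k1 = a**3 * dmu_r(t)
    k2 = (a + 0.5 * dt * k1)**3 * dmu_r(t + 0.5 * dt)
    a, t = a + dt * k2, t + dt

assert abs(a - closed(T)) < 1e-6           # closed form solves (3.33a)
U0_sing = eps_o / np.sqrt(2.0 * mu_r(T))   # singular threshold (3.35)
assert U0 < U0_sing                        # chosen U0 lies below the singularity
```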

In Appendix D, a second method that is more immediate, although perhaps less rigorous, is proposed to derive the amplitude equation (3.24). This method does not rely on the singularization of the propagator, and was named the ‘pseudo-forcing method’.

4. Linear and fully nonlinear transient growth in the diffusing, two-dimensional Lamb–Oseen vortex

Equation (3.34) is employed thereafter for the analysis of the transient gain experienced by a two-dimensional Gaussian vortex in a weakly nonlinear regime.

4.1. Flow geometry

Vortices are flows combining rotation and shear, and the majority of them possess a localized distribution of vorticity. The Lamb–Oseen vortex is perhaps the simplest solution of the Navier–Stokes equations containing these ingredients, and is thus adopted in the rest of this paper. It reads

(4.1a,b)\begin{equation} U_{{b},r}(r,t) = 0,\quad U_{{b},\theta}(r,t)=\frac{1-\exp({-r^2/(1+4t/{\textit {Re}})})}{r}. \end{equation}

The associated vorticity field, $W_z$, has a Gaussian radial profile

(4.2)\begin{equation} W_z(r,t) = \frac{U_{{b},\theta}}{r} + \frac{\partial U_{{b},\theta}}{\partial r} = \frac{2}{1+4t/{\textit {Re}}}\exp({-r^2/(1+4t/{\textit {Re}})}), \end{equation}

that diffuses with time for finite ${\textit {Re}}$ values. For the linear velocity perturbation, we select the azimuthal wavenumber $m=2$, since it is the wavenumber associated with the subcritical bifurcation towards the tripolar state described in the introduction (see for instance Rossi et al. Reference Rossi, Lingevitch and Bernoff1997). Only the response to an initial condition with $m=2$ will be considered in this paper.
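The base flow (4.1a,b) and its vorticity (4.2) are straightforward to evaluate; the sketch below (illustrative, with an arbitrary Reynolds number) also verifies the identity $W_z = U_{{b},\theta}/r + \partial_r U_{{b},\theta}$ by a centred finite difference:

```python
import numpy as np

Re = 1000.0   # arbitrary Reynolds number, for illustration only

def U_theta(r, t):
    """Azimuthal velocity of the diffusing Lamb-Oseen vortex, (4.1b)."""
    s = 1.0 + 4.0 * t / Re
    return (1.0 - np.exp(-r**2 / s)) / r

def W_z(r, t):
    """Axial vorticity, Gaussian radial profile (4.2)."""
    s = 1.0 + 4.0 * t / Re
    return (2.0 / s) * np.exp(-r**2 / s)

# Check W_z = U_theta/r + dU_theta/dr at an arbitrary point (r, t)
r, t, h = 1.3, 5.0, 1e-6
dU = (U_theta(r + h, t) - U_theta(r - h, t)) / (2.0 * h)
assert abs(W_z(r, t) - (U_theta(r, t) / r + dU)) < 1e-6
```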

4.2. Numerical methods

In practice, the Poisson equation (2.4) for the pressure is never solved directly, the latter being included in the state space instead:

(4.3)\begin{equation} \boldsymbol{\mathsf{M}}\partial_t\begin{bmatrix} \hat{\boldsymbol{u}}\\ \hat{p} \end{bmatrix} = \begin{bmatrix} \boldsymbol{\mathsf{A}}_{m}(t) & -\hat{\boldsymbol{\nabla}}_{m} \\ \hat{\boldsymbol{\nabla}}_{m}\boldsymbol{\cdot} & 0 \end{bmatrix}\begin{bmatrix} \hat{\boldsymbol{u}}\\ \hat{p} \end{bmatrix} \doteq \boldsymbol{\mathsf{B}}_{m}(t)\begin{bmatrix} \hat{\boldsymbol{u}}\\ \hat{p} \end{bmatrix},\quad \text{with}\ \boldsymbol{\mathsf{M}} \doteq \begin{bmatrix} \boldsymbol{I} & 0\\ 0 & 0 \end{bmatrix}. \end{equation}

Systems (4.3) and (2.2) lead to the same solution for the velocity field, and (2.2) was selected for the analytical development only because it does not contain the singular mass matrix, which lightens the writing. To produce the linear and weakly nonlinear results, (4.3) is discretized in Matlab by means of the pseudospectral Chebyshev collocation method. Specifically, the variables are collocated at the $N$ Gauss–Lobatto nodes $s=\cos (\,j{\rm \pi} /(N-1))$, with $j=0,1,\ldots,N-1$, which includes the endpoints $s=-1$ and $s=1$. This grid is mapped onto the physical domain $0\leq r \leq R_{max}$ using an algebraic mapping with domain truncation, $r=L(1+s)/(s_{max}-s)$, where $L$ is a mapping parameter to cluster the points close to the origin, and is set equal to $3$. The parameter $s_{max}$ is defined as $s_{max} \doteq (2L+R_{max})/R_{max}$ (see Canuto et al. Reference Canuto, Quarteroni, Hussaini and Zang2007; Viola, Arratia & Gallaire Reference Viola, Arratia and Gallaire2016). In the present work, $R_{max}=50$ and $N=300$ proved to be sufficient for the convergence of all results. The lines corresponding to $r=0$ and $r=R_{max}$ in the discretized version of (4.3) are replaced by those enforcing the boundary conditions there; this also avoids the problem of the singularity at $r=0$.
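The collocation grid described above can be reproduced in a few lines (a sketch in Python rather than Matlab; the formulas are those of the text):

```python
import numpy as np

N, L, R_max = 300, 3.0, 50.0
j = np.arange(N)
s = np.cos(j * np.pi / (N - 1))        # Gauss-Lobatto nodes, includes +/-1
s_max = (2.0 * L + R_max) / R_max
r = L * (1.0 + s) / (s_max - s)        # algebraic mapping onto [0, R_max]

assert np.isclose(r.max(), R_max)      # s = +1 maps to r = R_max
assert np.isclose(r.min(), 0.0)        # s = -1 maps to r = 0
assert (r < 1.0).sum() > N // 4        # points cluster close to the origin
```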

The action of the propagator $\boldsymbol {\varPsi }(t,0)$ is computed in practice by marching (4.3) in time using a Crank–Nicolson scheme. Specifically, if the pressure and the velocity field are known and equal to $[\hat {\boldsymbol {u}}^{(n)},\hat {p}^{(n)}]^{\rm T}$ at a time $t_n$, their values at $t_{n+1}=t_n+\Delta t$ are obtained upon inverting the system

(4.4)\begin{equation} \left(\frac{\boldsymbol{\mathsf{M}}}{\Delta t} - \frac{1}{2}\boldsymbol{\mathsf{B}}_{m}(t_{n+1})\right)\begin{bmatrix} \hat{\boldsymbol{u}}^{(n+1)}\\ \hat{p}^{(n+1)} \end{bmatrix} = \left(\frac{\boldsymbol{\mathsf{M}}}{\Delta t} + \frac{1}{2}\boldsymbol{\mathsf{B}}_{m}(t_{n}) \right)\begin{bmatrix} \hat{\boldsymbol{u}}^{(n)}\\ \hat{p}^{(n)} \end{bmatrix}, \end{equation}

which is done using Matlab's backslash command. The solutions to the other linear time-dependent systems at higher orders are also approximated upon discretizing in time with a Crank–Nicolson scheme. The adjoint system to (4.3) reads

(4.5)\begin{equation} -\boldsymbol{\mathsf{M}}\partial_t\begin{bmatrix} \hat{\boldsymbol{u}}^{{{\dagger}}}\\ \hat{p}^{{{\dagger}}} \end{bmatrix} = \boldsymbol{\mathsf{B}}_{m}(t)^{{{\dagger}}} \begin{bmatrix} \hat{\boldsymbol{u}}^{{{\dagger}}}\\ \hat{p}^{{{\dagger}}} \end{bmatrix}, \end{equation}

where we used $\boldsymbol{\mathsf{M}}^{{\dagger} } = \boldsymbol{\mathsf{M}}$, and where the expression of $\boldsymbol{\mathsf{B}}_{m}(t)^{{\dagger} }$ and the boundary conditions for the adjoint field are derived in Appendix E. The action of the adjoint propagator $\boldsymbol {\varPsi }(t,0)^{{\dagger} }$, necessary for the linear optimization, is determined by marching (4.5) backward in time. That is, from known adjoint velocity and pressure fields at a time $t_{n+1}$, their values at $t_{n}=t_{n+1}-\Delta t$ are obtained by solving

(4.6)\begin{equation} \left(\frac{\boldsymbol{\mathsf{M}}}{\Delta t} - \frac{1}{2}\boldsymbol{\mathsf{B}}^{{{\dagger}}}_{m}(t_{n}) \right)\begin{bmatrix} \hat{\boldsymbol{u}}^{{{\dagger}},(n)}\\ \hat{p}^{{{\dagger}},(n)} \end{bmatrix} = \left(\frac{\boldsymbol{\mathsf{M}}}{\Delta t} + \frac{1}{2}\boldsymbol{\mathsf{B}}^{{{\dagger}}}_{m}(t_{n+1}) \right)\begin{bmatrix} \hat{\boldsymbol{u}}^{{{\dagger}},(n+1)}\\ \hat{p}^{{{\dagger}},(n+1)} \end{bmatrix}. \end{equation}
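The stepping procedures (4.4) and (4.6) share the same generic structure, sketched below in Python on a toy differential–algebraic system with a singular mass matrix (the toy matrices are ours and bear no relation to $\boldsymbol{\mathsf{B}}_{m}$):

```python
import numpy as np

def crank_nicolson_step(M, B_n, B_np1, x_n, dt):
    """One Crank-Nicolson step of M dx/dt = B(t) x, cf. (4.4).
    M may be singular (differential-algebraic system): the left-hand
    matrix M/dt - B(t_{n+1})/2 nevertheless remains invertible."""
    lhs = M / dt - 0.5 * B_np1
    rhs = (M / dt + 0.5 * B_n) @ x_n
    return np.linalg.solve(lhs, rhs)  # plays the role of Matlab's backslash

# Toy system: dx0/dt = -x0 - 2*x1 with algebraic constraint x1 = x0,
# hence x0(t) = exp(-3 t) for a consistent initial condition.
M = np.diag([1.0, 0.0])                      # singular mass matrix
B = np.array([[-1.0, -2.0], [1.0, -1.0]])    # time-independent toy operator
```

The backward adjoint march (4.6) has the same structure, with $\boldsymbol{\mathsf{B}}_{m}$ replaced by its adjoint and the roles of $t_n$ and $t_{n+1}$ exchanged.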

A time step of $\Delta t = 0.02$ was selected and found sufficient to guarantee convergence of the results. The fully nonlinear direct numerical simulations (DNS) were performed using the spectral element solver Nek5000. A two-dimensional square grid $(x,y)\in [-R_{max},R_{max}]\times [-R_{max},R_{max}]$ was used, with a particularly high density of elements near the vortex core region where the flow gradients are intense. Convergence of the results under mesh refinement and extension of the computational domain has been checked.

4.3. Linear results

The results of the linear transient growth analysis for $m=2$ and varying ${\textit {Re}} \in [1.25,2.5,5,10]\times 10^{3}$ are presented below. The optimal gain $G_o(t_o)$ defined in (2.13), also called ‘envelope’, is shown as a function of $t_o$ in figure 1(a) for different ${\textit {Re}}$. For all considered ${\textit {Re}}$ values, $G_o(t_o)$ reaches a clear maximum at relatively small $t_o$, before decreasing and plateauing as $t_o$ increases further. The values $G_o(t_{o,{max}})$ of these maxima, and the corresponding temporal horizons $t_{o,{max}}$, increase monotonically with ${\textit {Re}}$.

Figure 1. Linear transient growth in the two-dimensional, time-dependent Lamb–Oseen vortex flow for varying ${\textit {Re}}$ $\in [1.25,2.5,5,10]\times 10^{3}$, larger ${\textit {Re}}$ corresponding to lighter colours. The perturbation has wavenumber $m=2$. (a) Optimal gain as defined in (2.13) as a function of the temporal horizon $t_o$. The star marker corresponds to its maximum value over $t_o$ and for a given ${\textit {Re}}$, that is, to $G_o(t_{o,{max}})=1/\epsilon _{o,{max}}$; (b) $\epsilon _{o,{max}}$ multiplied by ${\textit {Re}}^{1/3}$ and plotted as a function of ${\textit {Re}}$; (c) $t_{o,{max}}$ multiplied by ${\textit {Re}}^{-1/3}$ and shown as a function of ${\textit {Re}}$.

The type of dependence is made explicit in figure 1(b,c): panel (b) shows $\epsilon _{o,{max}}$ multiplied by ${\textit {Re}}^{1/3}$ as a function of ${\textit {Re}}$, demonstrating that $G_o(t_{o,{max}}) \propto {\textit {Re}}^{1/3}$. In panel (c), $t_{o,{max}}$ multiplied by ${\textit {Re}}^{-1/3}$ is also shown as a function of ${\textit {Re}}$: while the data align slightly less well on a constant line, the agreement remains reasonable and stating $t_{o,{max}} \propto {\textit {Re}}^{1/3}$ is a fair approximation. All these results are in excellent agreement with those presented in chapter 3 of Antkowiak (Reference Antkowiak2005). In particular, the power-law dependencies of both $G_o(t_{o,{max}})$ and $t_{o,{max}}$ have already been observed and interpreted physically, as discussed next.
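Such power-law exponents are obtained by a least-squares fit in log–log coordinates; the Python sketch below uses synthetic data obeying an exact $1/3$ law with a hypothetical prefactor, not the actual values of figure 1:

```python
import numpy as np

# Synthetic illustration: gains obeying G = C * Re**(1/3) exactly
Re = np.array([1.25e3, 2.5e3, 5e3, 1e4])
G = 2.0 * Re ** (1.0 / 3.0)   # hypothetical prefactor C = 2

# Least-squares slope of log G versus log Re gives the power-law exponent
exponent, log_C = np.polyfit(np.log(Re), np.log(G), 1)
```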

In figure 2, by fixing ${\textit {Re}}=5000$, the optimal gain $G_o(t_o)$ over $t_o$ is reproduced from figure 1, as well as the gain $G(t)$ associated with the linear trajectory optimized for $t_o=t_{o,{max}}=35$ specifically. We check that $G(0)=1$, that $G(35)=G_o(35)$ and that $G$ is below $G_o$ everywhere else. At the times corresponding to the black dots, the axial vorticity structure, expressed as

(4.7)\begin{equation} \hat{{\omega}} = (1/r+\partial_r)\hat{u}_{\theta}-(\text{i} m/r)\hat{u}_r, \end{equation}

is shown in figure 3. In panel (a), we observe that the optimal initial condition consists of an arrangement of vorticity sheets, or ‘spirals’, orientated in counter-shear with respect to the base flow. As time increases, the sheets unfold under the effect of advection by the base flow, in analogy with the Orr mechanism. Antkowiak & Brancher (Reference Antkowiak and Brancher2004) and Antkowiak (Reference Antkowiak2005) also argue that a Kelvin wave (corresponding to a core mode) is transitorily excited in the heart of the vortex, by induction of the vorticity spirals. By ‘induction’ is meant that a radial velocity perturbation is induced in the heart of the vortex as the spirals unfold, and advects the base vorticity, which is large there, thus acting as a source for the vorticity perturbation. The core mode is particularly visible in the heart of the vortex in panel (d), corresponding to $t=45$. However, it quickly disappears at later times, as the vorticity spirals roll up in the other direction, this time in line with the base advection (see panel e for $t=60$). In doing so, the vorticity perturbation decays with time due to a shear-diffusion averaging mechanism studied by Rhines & Young (Reference Rhines and Young1983) for a passive scalar and in Lundgren (Reference Lundgren1982) for vorticity. In particular, Lundgren (Reference Lundgren1982) concludes that, as long as the vorticity perturbations are rapidly radially varying, the shear-diffusion homogenization mechanism is qualitatively identical to the passive scalar case. This was confirmed numerically a few years later in Bernoff & Lingevitch (Reference Bernoff and Lingevitch1994), where it is further demonstrated that the decay of the perturbation vorticity spirals occurs on a ${\textit {Re}}^{1/3}$ time scale.
From here, Antkowiak (Reference Antkowiak2005) argues that, in the limit of vanishing viscosity, the evolution equation for the perturbation vorticity $\hat {{\omega }}$ becomes time reversible; therefore, the unfolding (Orr) amplification mechanism can be seen as the ‘mirror’ of the shear diffusion, a damping one. Indeed, the curve for $G(t)$ is fairly symmetric around $t=t_o$ in figure 2, and would be even more so at larger ${\textit {Re}}$ values. In this sense, the Orr mechanism has an ‘anti-mixing’ effect. This explains why the temporal horizon leading to the largest amplification also scales as ${\textit {Re}}^{1/3}$: it is the ‘natural’ amplification time scale of the vortex. It should be noted that this conception of the Orr and shear-diffusion mechanisms as ‘mirroring’ each other was already present in Farrell (Reference Farrell1987) in the context of plane shear flows. As a side comment, Antkowiak (Reference Antkowiak2005) also derives the ${\textit {Re}}^{1/3}$ dependence of $t_{o,{max}}$ by simply balancing the unfolding and the radial diffusion time scales.

Figure 2. The continuous line is the reproduction of the envelope shown in figure 1(a) for ${\textit {Re}}=5000$. The dash-dotted line is the gain $G(t)$ associated with the linear trajectory optimized for $t_o=t_{o,{max}}=35$, defined in (2.16). By definition, both curves collapse at $t=35$. The black dots correspond to $t=0,15,30,\ldots,105$, for which the corresponding vorticity structures are shown in figure 3.

Figure 3. Temporal evolution of the vorticity structure $\hat {{\omega }}(r,t)$ of the optimal linear response, shown at the specific times indicated by the black dots in figure 2. Each panel shows only $[x,y]\in [-4,4]\times [-4,4]$. The plus sign denotes the origin, the dotted circle is the unit circle, and the dashed circle highlights the radius $r_q$ as defined in (4.9).

We insist that, apart from a tenuous transient excitation of a core mode by vortex induction, the amplification mechanism of the perturbation is essentially the Orr mechanism. Farrell & Ioannou (Reference Farrell and Ioannou1993) showed that this mechanism is associated with the scaling law $G_o(t_o)\propto t_o$ (note that the gain defined in Farrell & Ioannou (Reference Farrell and Ioannou1993) is the square of the gain presently defined). From here, the scaling law $G_o(t_{o,{max}}) \propto {\textit {Re}}^{1/3}$ follows immediately.

The shear-diffusion (thus the Orr) mechanism is essentially inviscid. The only role played by viscosity is to select the radial length scale of the initial vorticity structure, which would otherwise tend to be infinitely thin; the optimal gain and associated temporal horizon follow from this selection. Indeed, vorticity sheets that are too thin are smoothed out by viscosity. If both the optimal gain and the decay time are $\propto {\textit {Re}}^{1/3}$, then the decay rate is clearly independent of ${\textit {Re}}$. In fact, the decay of the vorticity moments associated with the structures observable in figure 3 after $t=35$ can also be interpreted in terms of the Landau damping phenomenon, which is purely inviscid (Schecter et al. Reference Schecter, Dubin, Cass, Driscoll, Lansky and O'Neil2000), as developed in the introduction.

More precisely, the exponential decay rate of the vorticity moments can be obtained from a Landau pole of the vortex base profile, as first demonstrated in Briggs, Daugherty & Levy (Reference Briggs, Daugherty and Levy1970). For this, one first needs to define the $m$th multipole moment of the vorticity perturbation as

(4.8)\begin{equation} Q^{(m)}(t)=\int_{0}^{\infty} r^{m+1}\hat{{\omega}}(r,t)\,\mathrm{d} r = [r^{m+1}(\hat{u}_{\theta}-\text{i}\hat{u}_r)]_{r \rightarrow \infty}. \end{equation}
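On a truncated radial grid, the moment (4.8) is readily approximated by quadrature, as sketched below in Python (the truncation radius and grid are ours):

```python
import numpy as np

def multipole_moment(r, omega_hat, m):
    """Q^(m) = int_0^inf r^(m+1) * omega_hat(r) dr, cf. (4.8),
    approximated by the trapezoidal rule on a truncated radial grid.
    omega_hat may be complex."""
    f = r ** (m + 1) * omega_hat
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))
```

For a Gaussian profile $\hat {{\omega }}=\exp (-r^2)$ and $m=2$, the integral evaluates analytically to $1/2$, which provides a convenient check of the quadrature.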

Then, to quote Schecter et al. (Reference Schecter, Dubin, Cass, Driscoll, Lansky and O'Neil2000), ‘A Landau pole is a complex frequency $\omega _q-\text {i} \gamma$ at which the Laplace transform of $Q^{(m)}(t)$ is singular [$\ldots$]. A Landau pole contributes a term to $Q^{(m)}(t)$ of the form $\exp ({-\gamma t - \text {i} \omega _q t})$’. A Landau pole, like an eigenvalue, is a property of some specific linear operator acting on a perturbation and, in this sense, depends on the base profile and on the wavenumber of the perturbation, but not on the specific radial shape of the latter. A Landau pole is generally not unique, and its calculation amounts to solving an eigenvalue problem along a radial contour deformed in the complex plane (see the appendix in Schecter et al. Reference Schecter, Dubin, Cass, Driscoll, Lansky and O'Neil2000); however, one can reasonably expect one of these potentially multiple Landau poles to dominate over the others.

In considering the response of an inviscid Gaussian vortex to a generic external impulse with $m=2$, Schecter et al. (Reference Schecter, Dubin, Cass, Driscoll, Lansky and O'Neil2000) indeed find that the quadrupole moment $Q^{(2)}(t)$ of the vorticity perturbation ‘decays exponentially, and the decay rate is given by a Landau pole’. In addition, it is also mentioned that ‘the vorticity perturbation [$\ldots$] is poorly described by a single damped wave [$\ldots$] Rather, the vorticity perturbation is rapidly dominated by spiral filament’. This is also the structure observed from $t=60$ in figure 3, which closely resembles figure 9 of Schecter et al. (Reference Schecter, Dubin, Cass, Driscoll, Lansky and O'Neil2000). In other words, the response excites a very large number of structurally different eigenmodes, yet this does not invalidate the relevance of the dominant Landau pole.

By writing the quadrupole in terms of amplitude and phase, $Q^{(2)}(t)=|Q^{(2)}(t)| \exp ({{\rm i} \phi (t)})$, we display in figure 4(a) the temporal evolution of the phase $\phi (t)$ normalized by $2{\rm \pi}$ and corresponding to the response shown in figure 3. The amplitude $|Q^{(2)}(t)|$ is also shown in figure 4(b). The best fit of the form of a pure Landau damping $Q^{(2)}(t)=\exp ({-\text {i} \omega _q t-\gamma t})$ is also proposed. Clearly, the phase changes at a constant rate in figure 4(a) for $15 \leq t \leq 65$, and $|Q^{(2)}(t)|$ decays exponentially in figure 4(b) for $40 \leq t \leq 65$. Therefore, we confirm that the quadrupole moment $Q^{(2)}(t)$, associated with the linear response in figure 3, is well approximated by a Landau pole in its decaying part, despite the fact that we consider a viscous flow over a time-dependent base vortex, which is not the case in Schecter et al. (Reference Schecter, Dubin, Cass, Driscoll, Lansky and O'Neil2000). The Reynolds number ${\textit {Re}} = 5000$, however, seems sufficiently large for the conclusions drawn in Schecter et al. (Reference Schecter, Dubin, Cass, Driscoll, Lansky and O'Neil2000) to also hold here. In the following, reference will be made to the Landau pole in order to qualitatively interpret the nonlinear effects, even if the finiteness of ${\textit {Re}}$ possibly makes this less rigorous. The precise value of $\gamma$ in figure 4(b) is unimportant, for the comparison with any analytical prediction of the Landau pole is outside the scope of this paper. Note that the exponential decay is followed by an algebraic one at large times, which is also observed in Schecter et al. (Reference Schecter, Dubin, Cass, Driscoll, Lansky and O'Neil2000). The fitted value $\omega _q=0.42$ in figure 4(a) is, however, important, for it gives the radius $r_q$ at which the angular velocity $\omega _q/m$ associated with the Landau pole equals that of the base flow

(4.9)\begin{equation} m \varOmega(r_q(t),t) = \omega_q. \end{equation}

The radius solving (4.9) at each time is highlighted by a dashed circle in figure 3. It takes the value $r_q \approx 2.16$ and is only very weakly dependent on time.
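For illustration, (4.9) can be solved by bisection; the Python sketch below assumes the standard nondimensional Lamb–Oseen angular velocity $\varOmega (r,t)=(1-\exp (-r^2/a^2))/r^2$ with $a^2(t)=1+4t/{\textit {Re}}$, which is our normalization and should be checked against (4.1a,b) before reuse:

```python
import numpy as np

def corotation_radius(omega_q=0.42, m=2, t=35.0, Re=5000.0):
    """Solve m * Omega(r_q, t) = omega_q, cf. (4.9), by bisection.
    Assumes (our normalization) the nondimensional Lamb-Oseen angular
    velocity Omega(r, t) = (1 - exp(-r**2 / a2)) / r**2, a2 = 1 + 4 t / Re."""
    a2 = 1.0 + 4.0 * t / Re
    f = lambda r: m * (1.0 - np.exp(-r ** 2 / a2)) / r ** 2 - omega_q
    lo, hi = 1.0, 5.0          # f(lo) > 0 > f(hi): Omega decreases with r
    for _ in range(60):        # bisection down to ~machine precision
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

With $\omega _q = 0.42$ and $m=2$, this normalization indeed returns a radius close to the value $r_q \approx 2.16$ quoted above.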

Figure 4. Temporal evolution of the vorticity moment $Q^{(2)}(t)$ corresponding to the linear response shown in figures 2 and 3; the black dots again denote the specific times where the vorticity field is shown in figure 3. (a) Phase $\phi (t)$ divided by $2{\rm \pi}$. (b) Amplitude $|Q^{(2)}(t)|$. The black dashed lines correspond to a pure Landau damping $Q^{(2)}(t) = \exp ({-\text {i} \omega _q t-\gamma t})$, where we use fitted values for $\omega _q$ and $\gamma$.

It can be shown that the decay rate of the Landau pole is linked to the radial derivative of the base vorticity at $r_q$ specifically. In fact, as demonstrated in Schecter et al. (Reference Schecter, Dubin, Cass, Driscoll, Lansky and O'Neil2000), Turner & Gilbert (Reference Turner and Gilbert2007) and Turner et al. (Reference Turner, Gilbert and Bassom2008), the exponential damping mechanism can be removed from any vortex, in particular a Gaussian one, by cancelling this derivative at $r_q$.

The poor modal description of a perturbation evolving over a Gaussian vortex, mentioned in Schecter et al. (Reference Schecter, Dubin, Cass, Driscoll, Lansky and O'Neil2000) and confirmed in figure 3, is the reason why non-modal analytical tools are deployed in the present study, in particular for studying nonlinear effects. We now address the latter.

4.4. Fully nonlinear results

We introduce the fully nonlinear results obtained from DNS in the present section. Let us define $\boldsymbol {u}_p$ as the difference between the fully nonlinear solution $\boldsymbol {U}$ obtained from a DNS and the reference Lamb–Oseen solution $\boldsymbol {U}_{{b}}$ in (4.1a,b); in other words, $\boldsymbol {u}_p \doteq \boldsymbol {U}-\boldsymbol {U}_{{b}}$. We recall that this perturbation is initialized along the wavenumber $m=2$ with the linear optimal initial condition of amplitude $U_0$, such that

(4.10)\begin{equation} \boldsymbol{u}_p|_{t=0} = U_0 \hat{\boldsymbol{u}}_o \,{\rm e}^{\text{i} m \theta} + \text{c.c.}. \end{equation}

Owing to nonlinear effects, $\boldsymbol {u}_p$ generally does not oscillate purely along $m$ for strictly positive times. For this reason we need to further extract $\hat {\boldsymbol {u}}^{(1)}_p$, the component of $\boldsymbol {u}_p$ oscillating at the fundamental wavenumber $m$, hence the superscript ‘$(1)$’. For this, a Fourier projection is naturally used:

(4.11)\begin{align} \left.\begin{gathered} \hat{\boldsymbol{u}}^{(1)}_p(r) \doteq \frac{1}{2{\rm \pi}}\int_{0}^{2{\rm \pi}} \boldsymbol{u}_p(r,\theta) \cos(m\theta)\,\mathrm{d} \theta - \frac{\text{i}}{2{\rm \pi}}\int_{0}^{2{\rm \pi}} \boldsymbol{u}_p(r,\theta) \sin(m\theta)\,\mathrm{d} \theta,\quad \text{such that} \\ \boldsymbol{u}^{(1)}_p(r,\theta) \doteq \hat{\boldsymbol{u}}^{(1)}_p(r)\,{\rm e}^{\text{i} m\theta} + \text{c.c.}. \end{gathered}\right\} \end{align}
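On a uniform azimuthal grid, the projection (4.11) reduces to a discrete average against $\exp ({-\text {i} m \theta })$; a Python sketch (the array shapes are our convention):

```python
import numpy as np

def azimuthal_component(u_p, m):
    """Extract u_hat^(1)(r), the coefficient of exp(i*m*theta) in u_p(r, theta),
    cf. (4.11). u_p has shape (n_r, n_theta), sampled on a uniform theta grid
    over [0, 2*pi). Equivalent to (1/2pi) * int u_p * exp(-i m theta) dtheta,
    the quadrature being exact for band-limited azimuthal content."""
    n_theta = u_p.shape[1]
    theta = 2.0 * np.pi * np.arange(n_theta) / n_theta
    return (u_p * np.exp(-1j * m * theta)).mean(axis=1)
```

In practice this is applied component-wise to each velocity field at every radial node.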

From here, the fully nonlinear gain (associated with the fundamental pair) is immediately defined as

(4.12)\begin{equation} G_{{DNS}} \doteq \frac{\|\hat{\boldsymbol{u}}^{(1)}_p\|}{U_0}. \end{equation}

By fixing ${\textit {Re}}=5000$ and $t_o=35$, we report in figure 5(a) the fully nonlinear evolution of the perturbation amplitude $\|\hat {\boldsymbol {u}}^{(1)}_p\|$, for different amplitudes of the initial condition $U_0$. The associated gain, that is, the same data divided by the corresponding $U_0$, is shown in figure 5(b).

Figure 5. (a) Amplitude of the fully nonlinear fundamental perturbation $\hat {\boldsymbol {u}}_p^{(1)}$ as defined in (4.11), initialized with the linear optimal condition for $({\textit {Re}},m,t_o)=(5000,2,35)$. Larger $U_0$ values correspond to lighter colours. (b) The same data are divided by their corresponding $U_0$, yielding the nonlinear gains as defined in (4.12). For the largest considered $U_0 = 2.82 \times 10^{-2}$, black dots are located at $t=0,30,60,\ldots,210$, for which the corresponding vorticity structures are shown in figure 6.

Figure 6. Temporal evolution of the vorticity structure of the fully nonlinear perturbation for $U_0 = 2.82\times 10^{-2}$, shown at the specific times highlighted by the black dots in figure 5. Each panel shows only $[x,y]\in [-4,4]\times [-4,4]$, the plus sign denotes the origin, and the dashed line is the unit circle.

Figure 5(a) illustrates well a bypass transition, also called ‘bootstrapping effect’ in Trefethen et al. (Reference Trefethen, Trefethen, Reddy and Driscoll1993), where linear transient growth and nonlinear mechanisms interact to bring about a transition to a new state, distinct from the reference one. Indeed, as we increase the amplitude of the initial condition (linearly in log scale), the transient growth of the perturbation, initiated by the linear Orr mechanism as we have seen, becomes sufficient to trigger nonlinear terms; for the two largest considered $U_0$, this prevents the flow from re-axisymmetrizing to the Lamb–Oseen vortex. Interestingly, for the third and fourth largest considered $U_0$, the flow seems to temporarily approach a new state but nevertheless relaxes towards the reference one.

For the largest considered $U_0=2.82 \times 10^{-2}$ and the set of times highlighted by the black dots, we report in figure 6 the structure of the vorticity perturbation. From $t=60$, the flow has reached a new nonlinear quasi-equilibrium state, which diffuses very slowly due to viscous effects, and rotates counterclockwise with a period of approximately $45$ time units. The corresponding total vorticity field, obtained by adding to figure 6 the Gaussian reference vorticity (4.2), takes a tripolar shape. This is shown in figure 7 for $t\geq 120$, where the heart of the vortex takes an elliptical shape surrounded by two satellite vortices of low and negative vorticity (corresponding to the two blue spots in figure 6 from $t=60$). As developed in the introduction, this tripolar state has already been widely reported in a variety of contexts. In particular, its spatial structure compares well with the one reported in figure 2 of Antkowiak & Brancher (Reference Antkowiak and Brancher2007) for ${\textit {Re}}=1000$, or with the experimental visualization in Kloosterziel & van Heijst (Reference Kloosterziel and van Heijst1991).

Figure 7. Same as in figure 6 but the reference vorticity field (4.2) has been added, so as to visualize the total, fully nonlinear, vorticity field; some isolines are also shown to better expose the elliptical deformation of the vortex core.

In terms of the gain in figure 5(b), we notice that all the curves corresponding to different $U_0$ collapse at small times, confirming the linearity of the initial growth mechanism. However, they depart significantly from each other at larger times. While nonlinearities seem to saturate the gain at times around $t_o=35$, they clearly increase it by up to three orders of magnitude at large times; as mentioned, this is because the flow has bifurcated to another state, so that the perturbation measured with respect to the reference vortex remains large.

The weakly nonlinear model is now employed to assess the gain evolution with $U_0$ shown in figure 5(b).

5. Weakly nonlinear transient growth in the diffusing, two-dimensional Lamb–Oseen vortex

In the weakly nonlinear paradigm, the perturbation field $\boldsymbol {u}_p$ is expanded as $\boldsymbol {u}_p = \sqrt {\epsilon _o} \boldsymbol {u}_1 + \epsilon _o \boldsymbol {u}_2 + \sqrt {\epsilon _o}^3 \boldsymbol {u}_3 + O(\epsilon _o^2)$, as can be seen in (3.2). Consequently, $\hat {\boldsymbol {u}}_p^{(1)}$ is approximated by $\sqrt {\epsilon _o} \bar {\boldsymbol {u}}_1^{(1)} + \sqrt {\epsilon _o}^3 \bar {\boldsymbol {u}}_3^{(1)} + O(\sqrt {\epsilon _o}^5)$, therefore the fully nonlinear gain $G_{{DNS}}$ defined in (4.12) is expected to reduce to the weakly nonlinear one $G_{{w}}$ defined in (3.30), since $\epsilon _o \ll 1$. These two quantities are compared in this section. Note that we checked that, for all considered ${\textit {Re}}$ values, the first sub-optimal transient gain was $O(1/\sqrt {\epsilon _o})$, in contrast to $1/\epsilon _o$ for the optimal one, which mathematically justifies focusing on the response to $\hat {\boldsymbol {u}}_o$ only.

5.1. Transient change from nonlinear saturation to nonlinear amplification

The real part of the weakly nonlinear coefficient, defined in (3.29), is shown in figure 8(a) for ${\textit {Re}}=5000$ and $t_o=t_{o,{max}}=35$. It is further split into the sum of a mean flow distortion contribution, $\mu _r^{(0)}$, arising from $\boldsymbol {u}_2^{(0)}$ only, and a second harmonic contribution, $\mu _r^{(2)}$, arising from $\hat {\boldsymbol {u}}_2^{(2)}$ only. In this manner, $\mu _r = \mu _r^{(0)} + \mu _r^{(2)}$.

Figure 8. (a) Real part of the weakly nonlinear coefficient $\mu$ defined in (3.29) (black continuous line). It is further split as the sum of a contribution from a mean flow distortion (red dash-dotted line) plus a contribution from the second harmonic (blue dotted line). In the grey zone, the weakly nonlinear gain $G_{{w}}$ defined in (3.30) increases monotonically with $U_0$, since $\mu _r > 0$. In the white zone, it decreases monotonically. (b) The coefficient $\mu _r$ is compared with its reconstruction from DNS data (magenta dashed line) according to (5.2).

Remarkably, after having significantly decreased until reaching a minimum at $t=59$, the coefficient $\mu _r$ increases again and changes sign at $t=87$. It then reaches its maximum at $t=107$, and decreases again until plateauing around zero for $t \geq 150$. A change of sign in $\mu _r$ brings diversity to the behaviours reported in Ducimetière et al. (Reference Ducimetière, Boujo and Gallaire2022), concerning transient growth in the streamwise-invariant Poiseuille flow and in the flow past a backward-facing step. There, only negative coefficients, corresponding to saturating nonlinearities, were observed. In the present work, however, nonlinearities appear to reinforce the gain at some times. Where $\mu _r$ is negative, its behaviour appears largely dominated by the contribution from the mean flow distortion $\mu _r^{(0)}$. Where $\mu _r$ is positive, however, $\mu _r^{(0)}$ is reduced and the second harmonic appears to contribute as much to the sum.

Upon taking the derivative of the square of the weakly nonlinear gain in (3.34) with respect to $U_0^2$, we obtain

(5.1)\begin{equation} \frac{\partial (G_{{w}}^2)}{\partial (U_0^2)} = \frac{2\mu_r}{\epsilon_o^2-2U_0^2\mu_r}G_{{w}}^2. \end{equation}

Therefore, the coefficient $\mu _r$ relates directly to the rate of change of $G_{{w}}^2$ with respect to $U_0^2$ in the limit where $U_0$ tends towards zero as

(5.2)\begin{equation} \mu_r = \frac{\epsilon_o^2}{2} \left(\lim_{U_0 \rightarrow 0} \frac{1}{G_{{w}}^2 }\frac{\partial (G_{{w}}^2)}{\partial (U_0^2)}\right). \end{equation}

By using (5.2), the coefficient $\mu _r$ can be reconstructed directly from DNS data. For this, the derivative in (5.2) is estimated using a first-order finite difference approximation between DNS data for the gain squared, corresponding to the two lowest considered amplitudes $U_0= 2.26\times 10^{-4}$ and $U_0= 2.82 \times 10^{-4}$. The result is shown as the magenta dashed line in figure 8(b), and compared with the actual $\mu _r$. The good agreement between both curves for $t\leq 130$ validates a posteriori our weakly nonlinear expansion for these times, at least in the limit of small $U_0$. However, both curves depart significantly after $t=130$, presumably due to the violation of the condition (3.11) ensuring that the asymptotic expansion is well posed. Indeed, the response to the initial condition optimized for $t=t_o=35$ was shown in figure 2 to rapidly decay in amplitude for later times; therefore the norm of the perturbation operator, $\|\boldsymbol{\mathsf{P}}(t)\|=1/\|\hat {\boldsymbol {l}}(t)\|$, increases accordingly.
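The reconstruction procedure can be sketched as follows; for a self-contained illustration we generate gains from the closed form implied by (5.1), $G_{{w}}^2(U_0^2) = G_{{w}}^2(0)\,\epsilon _o^2/(\epsilon _o^2 - 2\mu _r U_0^2)$, rather than from actual DNS data:

```python
import numpy as np

def reconstruct_mu_r(U0_a, U0_b, G2_a, G2_b, eps_o):
    """Estimate mu_r via (5.2): first-order finite difference of the
    squared gain with respect to U0^2 at the two smallest amplitudes."""
    dG2_dU02 = (G2_b - G2_a) / (U0_b ** 2 - U0_a ** 2)
    return 0.5 * eps_o ** 2 * dG2_dU02 / G2_a  # 1/G^2 taken at the smaller U0
```

The finite-difference bias vanishes as both amplitudes tend to zero, consistent with the limit in (5.2).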

5.2. Comparison of fully and weakly nonlinear gains

In figure 9 the weakly nonlinear gain, associated with the coefficient in figure 8, is compared with the fully nonlinear one. For the moment, only short times where the gains are large are shown. The later evolution, being associated with gains orders of magnitude smaller, will be considered in figure 10 in log scale. On the time interval chosen for figure 9, the coefficient $\mu _r$ is negative, so the model predicts the gain to decrease monotonically with $U_0$. For values of $U_0$ small enough that the linear gain (black curves in figure 9) is only slightly modulated, the agreement with DNS data is excellent. As $U_0$ increases, the agreement degrades only slowly and remains very good at these relatively small times, for instance over $0 \leq t \leq 40$ in the panel corresponding to ${\textit {Re}}=10^4$. This corresponds to times when the nonlinear structure remains reminiscent of the linear one, in the sense that the flow has not yet reached the tripolar state (temporarily or not, as shown in figure 5). Indeed, the amplitude equation (3.33) is independent of space, which constrains the response to remain structurally close to the linear one. For later times and large $U_0$, as soon as the fully nonlinear gain begins to oscillate and exhibit a non-monotonic behaviour, the agreement with our weakly nonlinear model rapidly degrades. The reason is precisely that the response, whose nonlinear evolution is approximated by our model, is no longer selected by the flow: the latter has reached another state, temporarily or not, that is structurally completely different, and about which the amplitude equation has no information.

Figure 9. Weakly and fully nonlinear gains as defined in (3.30) and (4.12), respectively. Larger $U_0 \in [0.03,0.45,1.12,2.82]\times 10^{-2}$ correspond to lighter colours (the direction of increasing $U_0$ is also indicated by the arrow). The continuous line stands for DNS data whereas the dash-dotted line is for the weakly nonlinear model. Temporal horizons are $t_o = t_{o,{max}} = [25,30,35,40]$ (times at which a black star is shown) for ${\textit {Re}}=[1.25,2.5,5,10]\times 10^3$, respectively.

Figure 10. Weakly and fully nonlinear gains for ${\textit {Re}}=5000$ and increasing amplitude of the initial condition $U_0\in [4.5,7.1,11.2,17.8]\times 10^{-3}$. Each panel corresponds to a different $U_0$. The grey box denotes the time interval (5.3) over which the weakly nonlinear gain is undefined. The latter is singular at the boundaries of this interval.

Nevertheless, while the proposed amplitude equation fails to predict the gain and the structure of the flow when at the tripolar state, it does predict a bifurcation threshold. This is illustrated in figure 10, where the same data as in figure 9 for ${\textit {Re}}=5000$ are shown, although the time interval is extended until $t=150$ so as to include the time interval where the real part of the weakly nonlinear coefficient becomes positive, indicating ‘anti-saturation’. The weakly nonlinear gain is undefined on the time interval where

(5.3)\begin{equation} \mu_r(t) \geq \frac{1}{2}\left(\frac{\epsilon_o}{U_0} \right)^2 > 0, \end{equation}

which widens as $U_0$ increases. The gain is singular (tends to infinity) at the times corresponding to the boundaries of this interval, highlighted by the grey zones in figure 10.

5.3. Bifurcation thresholds

By looking at the panels corresponding to $U_0=1.12\times 10^{-2}$ and $U_0=1.78\times 10^{-2}$, we observe that, over the time interval where the equation has no solution, i.e. the grey zone, the fully nonlinear simulation seems to have reached the tripolar state (although for $U_0=1.12\times 10^{-2}$ only temporarily, as the flow eventually relaxes towards the reference state). In this sense, a loss of solution in the amplitude equation could indicate that the DNS has left the state around which the weakly nonlinear expansion was constructed. The minimum $U_0$ for which the weakly nonlinear gain becomes singular may then be considered as an approximation of the actual bifurcation threshold. Such minimal $U_0$, named $U_0^{s}$ in what follows, reads

(5.4)\begin{equation} U_0^{s} = \frac{\epsilon_o}{\sqrt{2 \mu_r(t_s)}},\quad \text{where}\ \mu_r(t_s)=\max_{t}\mu_r(t), \end{equation}

and is defined if and only if $\mu _r(t_s)>0$.
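Given a time series of $\mu _r$, the threshold (5.4) is immediate to evaluate (a Python sketch with a synthetic $\mu _r$):

```python
import numpy as np

def threshold_amplitude(mu_r, eps_o):
    """U_0^s = eps_o / sqrt(2 * max_t mu_r(t)), cf. (5.4).
    Returns None when max mu_r <= 0 (no singularity: the
    nonlinearities only ever saturate the gain)."""
    mu_max = np.max(mu_r)
    if mu_max <= 0.0:
        return None
    return eps_o / np.sqrt(2.0 * mu_max)
```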

A bifurcation threshold in the DNS solutions must also be defined and compared directly with (5.4). To the knowledge of the authors, there is no clear and universal subcritical bifurcation criterion, and any choice is arguably arbitrary. Nevertheless, we opt for the criterion proposed in Antkowiak (Reference Antkowiak2005), based on the observation that the tripolar state is characterized by a deformed, non-axisymmetric heart. A characteristic aspect ratio $\lambda$ of the heart is therefore defined as

(5.5)\begin{equation} \lambda^2 \doteq \frac{J+R}{J-R},\quad \text{with}\ J=(J_{20}+J_{02}),\quad R=\sqrt{(J_{20}-J_{02})^2+4J_{11}^2}, \end{equation}

and where $J_{mn}\doteq \int _{\varOmega }x^m y^n \hat {{\omega }}(x,y)\,\mathrm {d}\kern 0.06em x\,\mathrm {d} y$ denotes a vorticity moment. A characteristic eccentricity is further defined as $e\doteq \sqrt {1-1/\lambda ^2}$. From here, Antkowiak (Reference Antkowiak2005) computes the typical half-life time (this is where the criterion is somewhat arbitrary) of the heart deformation as

(5.6)\begin{equation} \tau,\quad \text{such that}\ \int_{0}^{\tau}e(t)\,\mathrm{d} t = \frac{1}{2}\int_{0}^{t_f} e(t)\,\mathrm{d} t, \end{equation}

where $t_f=500$ is the final time of the fully nonlinear simulations.
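The bifurcation criterion above is straightforward to evaluate from vorticity snapshots on a Cartesian grid; the sketch below is our own illustration of (5.5) and (5.6) (function names are ours, and a uniform grid is assumed):

```python
import numpy as np

def aspect_ratio(omega, x, y):
    """Aspect ratio lambda and eccentricity e of the vortex heart from the
    second-order vorticity moments J_mn = int x^m y^n omega dx dy, eq. (5.5)."""
    dx, dy = x[1] - x[0], y[1] - y[0]
    X, Y = np.meshgrid(x, y, indexing="ij")
    J20 = np.sum(X**2 * omega) * dx * dy
    J02 = np.sum(Y**2 * omega) * dx * dy
    J11 = np.sum(X * Y * omega) * dx * dy
    J = J20 + J02
    R = np.sqrt((J20 - J02) ** 2 + 4.0 * J11**2)
    lam = np.sqrt((J + R) / (J - R))
    return lam, np.sqrt(1.0 - 1.0 / lam**2)

def half_life(t, e):
    """Half-life time tau of the heart deformation, eq. (5.6): the first time
    at which the running integral of e reaches half its final value."""
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (e[1:] + e[:-1]) * np.diff(t))))
    return np.interp(0.5 * cum[-1], cum, t)
```

As a sanity check, an elliptical Gaussian vorticity patch $\omega=\exp(-x^2/a^2-y^2/b^2)$ yields $\lambda=a/b$, as expected from the moment definitions.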

We report the half-life time as a function of $U_0$ and for different ${\textit {Re}}$ values in figure 11(a). For each ${\textit {Re}}$, we also highlight an inflection point in $\tau$ at a certain $U_0$, which we declare to be the subcritical bifurcation threshold. It is reported directly as a function of ${\textit {Re}}$ in figure 11(b), and compared with the weakly nonlinear prediction $U_0^s$, in both linear and logarithmic scales. Both approaches clearly highlight a decreasing power-law dependence of the subcritical bifurcation threshold on ${\textit {Re}}$. For the fully nonlinear data, the fitted exponent $-0.88$ found in the present study agrees relatively well with the value $-0.8$ reported in Antkowiak & Brancher (Reference Antkowiak and Brancher2007). However, since little information is given there regarding how the threshold amplitude was computed, the agreement with our results is perhaps fortuitous. In any case, the negative power-law dependence on ${\textit {Re}}$ implies that the threshold amplitude above which the flow enters the basin of attraction of the tripolar state, in the direction given by the linear optimal, vanishes as ${\textit {Re}}$ increases. This may explain the formation and persistence of elliptical vortex eyewalls in some tropical cyclones (Kuo et al. Reference Kuo, Williams and Chen1999; Reasor et al. Reference Reasor, Montgomery, Marks and Gamache2000). We stress that we only consider perturbations in the direction of the linear optimal, whereas nonlinear optimal perturbations can also be found by relying on the techniques presented in Pringle & Kerswell (Reference Pringle and Kerswell2010). The latter would be associated with an even smaller threshold amplitude $U_0$ for a subcritical bifurcation than the one shown in figure 11.
The fitted exponent for the weakly nonlinear model is $-0.66$, so that the threshold amplitudes predicted by the two models differ increasingly as ${\textit {Re}}$ decreases in figure 11(b).
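The exponents quoted here follow from least-squares power-law fits in log–log coordinates, which can be sketched as follows (our own minimal illustration, with synthetic data in place of the actual thresholds):

```python
import numpy as np

def power_law_fit(Re, U0):
    """Least-squares fit of U0 ~ C * Re**p in log-log coordinates;
    returns the prefactor C and the exponent p."""
    p, logC = np.polyfit(np.log(Re), np.log(U0), 1)
    return np.exp(logC), p
```

Applied to the four threshold amplitudes of figure 11(b), such a fit returns the exponent plotted as the thin lines there.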

Figure 11. (a) Typical half-life time of the heart deformation defined in (5.6), as a function of the amplitude of the initial condition $U_0$. Larger ${\textit {Re}}=[1.25,2.5,5,10]\times 10^3$ correspond to lighter colours. A star highlights an inflection point, for which the corresponding $U_0$ is declared as being the threshold amplitude for the subcritical bifurcation. Such thresholds $U_0$ are reported in (b) as a function of ${\textit {Re}}$ (also with a star symbol). The prediction from the weakly nonlinear model, $U_0^s$ defined in (5.4) is also shown. The thin continuous line is a power law fitted on the DNS data for the first three considered ${\textit {Re}}$, with ${\propto }Re^{-0.88}$, whereas the thin dashed line is fitted on the weakly nonlinear data with ${\propto }Re^{-0.66}$. The inset shows the same in log–log scale.

This discrepancy between exponents may be explained as follows. For the DNS, our bifurcation criterion aims at computing the threshold above which the flow has definitively bifurcated, whereas the loss of solution in the amplitude equation refers to a specific time interval. The importance of this conceptual difference is well illustrated in figure 10(c) for $U_0=1.12\times 10^{-2}$. Here, the amplitude equation predicts that no solution exists over some time interval; thus, according to criterion (5.4), the flow has bifurcated. However, although the DNS data do seem to reach the tripolar state over this time interval, they relax to the reference state at longer times; thus, the criterion based on $\tau$ concludes that the flow has not bifurcated.

For this reason, we propose another, admittedly artificial, criterion for which both ‘bifurcations’ refer to the same specific time $t_s$, the first time at which the amplitude equation predicts a loss of solution (see (5.4)). The threshold $U_0$ in the DNS is then defined as the one corresponding to an inflection point in $G_{{DNS}}(t=t_s)$, as shown in figure 12(a), despite a quite coarse resolution. The agreement with $U_0^s$ improves significantly in figure 12(b) (as compared with figure 11b). Note that, for both models, the power-law dependence of the threshold amplitude cannot be explained solely by that of the linear optimal transient gain, since the latter alone would yield an exponent of $-1/3$.

Figure 12. Same as in figure 11, although the inflection point is sought in $G_{{DNS}}(t=t_s)$, where $t_s$, defined in (5.4), is the first time for which the amplitude equation predicts a loss of solution. In (a), the inset shows the same data in linear scale. In (b), the thin continuous line is a power law fitted on the DNS data with ${\propto }Re^{-0.69}$, whereas the thin dashed line ${\propto }Re^{-0.66}$ is similar to figure 11.

5.4. Physical interpretation of the nonlinear ‘anti-saturation’: mean flow distortion and inversion of the vorticity gradient

We now propose a physical interpretation of the behaviour of the coefficient associated with the (azimuthal) mean flow distortion, $\mu _r^{(0)}$, shown in figure 8, and discuss why it changes sign and leads to the subcritical behaviour described above. In the following discussion, we focus on the case ${\textit {Re}}=5000$. First, the energy of the mean flow distortion $\boldsymbol {u}_2^{(0)}$ is shown as the red curve in figure 13(a).

Figure 13. (a) For ${\textit {Re}}=5000$, the evolution of the energy $\|\ast \|^2$ of the linear response $\hat {\boldsymbol {l}}$ (black dashed line), of the second harmonic $\hat {\boldsymbol {u}}_2^{(2)}$ (blue dashed-dotted line) and, mainly, of the mean flow distortion $\boldsymbol {u}_2^{(0)}$ (red continuous line). The red dots correspond to the specific times $t=30,40,\ldots,100$, for which the vorticity structure of $\boldsymbol {u}_2^{(0)}$ is reported in figure 14. Two vertical dotted lines are drawn at $t=35$ and $t=85$. (b) Slope of the vorticity at the radius $r_q$, i.e. $\partial _r \omega _2^{(0)}|_{r=r_q}$, as a function of time.

Figure 14. Temporal evolution of $\omega _2^{(0)}$, the vorticity structure of $\boldsymbol {u}_{2}^{(0)}$, the mean flow distortion induced by the Reynolds stress of the linear response shown at the specific times corresponding to the red dots in figure 13(a). Each panel shows only $[x,y]\in [-4,4]\times [-4,4]$. The plus sign denotes the origin, the dotted circle is the unit circle, and the dashed circle highlights the radius $r_q$ solving (4.9) with $\omega _q=0.42$.

For comparison, the energies of the linear response and of the second harmonic are also shown. As written in (3.17), the mean flow distortion is forced by

(5.7)\begin{equation} \boldsymbol{f}_2^{(0)}=-\boldsymbol{\mathsf{C}}_{m} \big[\hat{\boldsymbol{l}}^* , \hat{\boldsymbol{l}}\big] + \text{c.c.}, \end{equation}

which is the Reynolds stress of the linear response. Under its action, the energy of $\boldsymbol {u}_2^{(0)}$ increases until reaching a maximum around $t=t_o=35$, before decaying until $t=65$. From there, the energy rebounds very slightly, then decays extremely slowly for $t\geq 85$. It is striking to notice in figure 13(a) that $\boldsymbol {u}_2^{(0)}$ can persist for extremely long times, even when the linear response, whose nonlinear interactions force $\boldsymbol {u}_2^{(0)}$, has vanished: $\boldsymbol {u}_2^{(0)}$ persists even in the absence of sustained forcing. This is also illustrated in figure 14, where we show $\omega _2^{(0)}$, the vorticity structure associated with $\boldsymbol {u}_2^{(0)}$, at the different times highlighted by the red dots in figure 13(a). While a transient regime is observed up to the panel for $t=70$, the vorticity field does not seem to evolve over time afterward, except through slow diffusion.

It will simplify the rest of the analysis to notice that $\boldsymbol {u}_2^{(0)}$ only possesses an azimuthal component, i.e. $\boldsymbol {u}_2^{(0)}= u_{2,\theta }^{(0)} \boldsymbol {e}_{\theta }$. This is because $\boldsymbol {u}_2^{(0)}$ is associated with the wavenumber $m=0$, for which the continuity equation reduces to $\partial _r(r u_{2,r}^{(0)}) = 0$. This leads to $u_{2,r}^{(0)}=0$ and to a forced diffusion equation for $u_{2,\theta }^{(0)}$, reading

(5.8)\begin{equation} \partial_t u_{2,\theta}^{(0)} = {\textit {Re}}^{-1}(\varDelta_{0}-1/r^2)u_{2,\theta}^{(0)} + f_{2,\theta}^{(0)}. \end{equation}
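Equation (5.8) is a one-dimensional diffusion equation in $r$ with a time-dependent source, which can be integrated at low cost, e.g. by implicit Euler on a uniform radial grid. The sketch below is ours, not the solver used in this work: it assumes homogeneous Dirichlet conditions at $r=0$ and at an outer truncation radius.

```python
import numpy as np

def diffusion_matrix(r, Re):
    """Finite-difference matrix of Re**-1 (Delta_0 - 1/r**2), with
    Delta_0 = d_rr + (1/r) d_r, on the interior nodes of a uniform grid;
    u = 0 is assumed at r = 0 and beyond the last node."""
    n, h = r.size, r[1] - r[0]
    L = np.zeros((n, n))
    for i in range(n):
        L[i, i] = -2.0 / h**2 - 1.0 / r[i] ** 2
        if i > 0:
            L[i, i - 1] = 1.0 / h**2 - 1.0 / (2.0 * h * r[i])
        if i < n - 1:
            L[i, i + 1] = 1.0 / h**2 + 1.0 / (2.0 * h * r[i])
    return L / Re

def implicit_euler_step(u, f, L, dt):
    """Advance du/dt = L u + f by one unconditionally stable step."""
    return np.linalg.solve(np.eye(u.size) - dt * L, u + dt * f)
```

Starting from $u=0$ and supplying the Reynolds stress forcing $f_{2,\theta}^{(0)}(r,t)$ at each step would allow tracking the growth and decay of the azimuthal mean flow distortion.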

The panels in the first row of figure 15 show the evolution of $f_{2,\theta }^{(0)}$ (left) and $u_{2,\theta }^{(0)}$ (right) over $0 \leq t \leq 35$. The Reynolds stress forcing $f_{2,\theta }^{(0)}$ grows in amplitude (since $\hat {\boldsymbol {l}}$ does) while roughly conserving its shape, which $u_{2,\theta }^{(0)}$ closely mimics, although naturally with some delay. This direct structural link between the forcing and the response is certainly due to the absence of an advection term in (5.8).

Figure 15. Temporal evolution of the forcing $f_{2,\theta }^{(0)}$ (panels on the left) and of the velocity response $u_{2,\theta }^{(0)}$ (panels on the right). Panels on the top and the bottom correspond to two different and successive time intervals. The top row is for $t=0,5,\ldots,35$ (on the left of the first vertical line in figure 13a) and the bottom row for $t=35,40,\ldots,85$ (between the two vertical lines in figure 13a). Larger times correspond to lighter colours.

The panels in the second row of figure 15 show the evolution over the times $35 \leq t \leq 85$. Notice that the shapes of the forcing $f_{2,\theta }^{(0)}$ at $t=30$ and $t=40$ are similar but of opposite signs. As developed in § 4.3, this is because the spiral structures in $\hat {\boldsymbol {l}}$, which unroll in one direction until $t=35$, roll up symmetrically in the other direction afterward. Under the action of this now-adverse forcing, the velocity $u_{2,\theta }^{(0)}$ tends to reverse in figure 15(c), which implies its momentary flattening; therefore $u_{2,\theta }^{(0)}$ rapidly loses energy. This energy would have grown again (as it briefly does shortly after $t=65$ in figure 13a) had the forcing maintained its intensity, but the latter decays rapidly together with $\hat {\boldsymbol {l}}$.

After $t = 85$ the velocity $u_{2,\theta }^{(0)}$ evolves freely, for $f_{2,\theta }^{(0)}$ is negligible. The persistence of $u_{2,\theta }^{(0)}$ is then easily understood once it is realized that the diffusion operator in (5.8) possesses a continuum of orthogonal Bessel eigenfunctions (the domain is infinite), associated with eigenvalues spanning the strictly negative part of the imaginary axis. Accordingly, after discretization, we found an extremely dense packing of eigenvalues all along the negative part of the imaginary axis, increasingly dense with the number of discretization points. The velocity structure left by the forcing, which has vanished at $t=85$, projects onto a significant number of these Bessel eigenfunctions, including some with very low damping rates of the order of $10^{-4}$. It is therefore not surprising to see this structure subject to very slow diffusion in figures 13(a) and 14.

After having proposed some elements to understand the evolution of $u_{2,\theta }^{(0)}$, let us now study how it feeds back on the response. We show in figure 13(b) the slope of $\omega _2^{(0)}$ at the specific radius $r_q$ where the angular velocity $\omega _q/m$ associated with the Landau pole is equal to that of the base flow. We recall that this specific radius is found by solving (4.9) with the fitted value $\omega _q = 0.42$, resulting in $r_q\approx 2.16$, only very weakly time dependent.

The slope is negative until $t = 60$, where it changes sign. As a consequence, from $t = 60$ onward, the addition of $\omega _2^{(0)}$ to the reference vorticity contributes positively to the total vorticity slope at $r_q$. In parallel, we recall that $\mu _r^{(0)}$ can be directly interpreted as the sensitivity at a time $t$ of the transient gain to the addition of $u_{2,\theta }^{(0)}$ to the reference flow. For this, $\mu _r^{(0)}$ computes the integrated effect of this addition between $0$ and $t$ (see formula (C10)). Therefore, the decreasing tendency of $\mu _r^{(0)}$ until $t=60$ in figure 8 results from the negative slope of $\omega _2^{(0)}$ at $r_q$, which enhances the Landau damping rate of the response, whose integrated effect is to reduce the gain. Such an effect of the mean vorticity slope at $r_q$ on the Landau damping was reported in Schecter et al. (Reference Schecter, Dubin, Cass, Driscoll, Lansky and O'Neil2000), Turner & Gilbert (Reference Turner and Gilbert2007) and Turner et al. (Reference Turner, Gilbert and Bassom2008). On the contrary, from $t=60$ onward the coefficient $\mu _r^{(0)}$ increases, for the presence of $\omega _2^{(0)}$ now reduces the Landau damping rate; the integrated effect of such mitigation of the Landau damping over time leads to a coefficient that becomes positive, reflecting a weakly nonlinear gain larger than the linear one. In other words, we interpret the subcritical behaviour of our amplitude equation as being partially due to the fact that the remnant of the mean flow distortion, shown in figure 14 at large times, tends to erase the vorticity slope at the specific radius $r_q$, thereby letting the perturbation persist. This conclusion is comparable to the one drawn in Balmforth et al. (Reference Balmforth, Llewellyn Smith and Young2001). 
In addition, the present amplitude equation leads to the conclusion that, when $\mu _r$ is at its maximum, the effect of the second harmonic is just as important as that of the mean flow, although it is harder to interpret.

We stress that the conclusions drawn above relate only to predictions of the amplitude equation, and thereby possibly explain the DNS behaviour only when both approaches agree. Precisely because they do not agree at large $U_0$, and after increasingly short times in figures 9 and 10, the departure between the weakly and fully nonlinear responses there remains to be interpreted. This can still be done by using the amplitude equation, although in an indirect manner.

In figure 16, we compare the linear stability to $m=2$ perturbations of the (azimuthal) mean flow extracted from the DNS with that predicted by the weakly nonlinear approach. This is done for $U_0=2.82 \times 10^{-2}$, the largest initial condition amplitude considered in figure 9. The weakly nonlinear mean flow is simply obtained by evaluating the weakly nonlinear expansion as $U_{{b},\theta } + a^2 u_{2,\theta }^{(0)}$, where $a$ solves the amplitude equation. The linear stability is assessed by replacing the reference flow by the mean one in (2.2), then assuming a temporal dependence as $\hat {\boldsymbol {u}}(r,t) \doteq \breve {\boldsymbol {u}}(r)\ \textrm {e}^{\sigma t}$, and solving the subsequent eigenvalue problem. The azimuthal mean is linearly unstable if and only if there exists an eigenvalue $\sigma$ such that ${Re}(\sigma ) = \sigma _r > 0$, in which case the corresponding eigenmode grows exponentially, to the extent that the rate of change of the mean flow is much smaller than the growth rate. The mean vorticity profiles are also shown in figure 17.
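To illustrate the nature of such a frozen-in-time eigenvalue computation, the sketch below solves a simplified, purely inviscid analogue rather than the full formulation of this work: two-dimensional perturbations $\psi(r)\,\mathrm{e}^{\mathrm{i}m\theta + \sigma t}$ about an axisymmetric vorticity profile $Z(r)$ with angular velocity $\varOmega(r)$ satisfy the generalized eigenvalue problem $\sigma \varDelta_m \psi = -\mathrm{i}m\varOmega \varDelta_m\psi + \mathrm{i}m (Z'/r)\psi$, with $\varDelta_m = \mathrm{d}_{rr} + (1/r)\mathrm{d}_r - m^2/r^2$. This reduced model is ours (viscosity is omitted, central differences and Dirichlet endpoints are assumed), shown on a Gaussian profile for concreteness:

```python
import numpy as np

m, n, r_max = 2, 300, 10.0
h = r_max / (n + 1)
r = h * np.arange(1, n + 1)

# Gaussian (Lamb-Oseen-like) base vorticity and induced angular velocity.
Z = np.exp(-r**2)
dZ = -2.0 * r * Z
Omega = (1.0 - np.exp(-r**2)) / (2.0 * r**2)

# Delta_m = d_rr + (1/r) d_r - m^2/r^2 with psi = 0 at both endpoints.
Lm = np.zeros((n, n))
for i in range(n):
    Lm[i, i] = -2.0 / h**2 - m**2 / r[i] ** 2
    if i > 0:
        Lm[i, i - 1] = 1.0 / h**2 - 1.0 / (2.0 * h * r[i])
    if i < n - 1:
        Lm[i, i + 1] = 1.0 / h**2 + 1.0 / (2.0 * h * r[i])

# Generalized problem sigma * Lm psi = A psi, solved as eigvals(Lm^-1 A).
A = -1j * m * np.diag(Omega) @ Lm + 1j * m * np.diag(dZ / r)
sigma = np.linalg.eigvals(np.linalg.solve(Lm.astype(complex), A))
# The frozen profile is declared unstable whenever max Re(sigma) > 0.
growth_max = sigma.real.max()
```

Replacing the Gaussian profile by the ringed mean vorticity profiles of figure 17 is the kind of operation performed here, with the important difference that the actual computation retains the viscous terms of (2.2).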

Figure 16. (a) Maximum growth rate of the eigenvalues of the Navier–Stokes operator at ${\textit {Re}}=5000$, linearized around the (azimuthal) mean flow in order to describe $m=2$ perturbations. Mean flows corresponding to $U_0=2.82\times 10^{-2}$ are obtained either from DNS data (continuous line linking red dots) or by evaluating the weakly nonlinear expansion (dash-dotted line linking blue diamonds). Positive growth rate values imply linear instability. (b) The eigenmode corresponding to the most unstable eigenvalue of the operator linearized around the DNS mean flow, at $t=20$. The radius of the dotted circle is equal to one, the radius of the dashed circle is equal to $r_q$, and the continuous-line circles highlight the extrema of the mean vorticity profile. The colour scale is arbitrary.

Figure 17. Mean (axisymmetric) vorticity profiles corresponding to $U_0=2.82\times 10^{-2}$, extracted from DNS data (red continuous line) and reconstituted from the weakly nonlinear expansion as $W_z + a^2\omega _2^{(0)}$, where $a$ solves the amplitude equation (blue dashed-dotted line). The largest dot (respectively diamond) is located at the radius for which $-m\varOmega =\sigma _i$, where $\sigma _i$ is the imaginary part of the most unstable eigenvalue of the DNS (respectively weakly nonlinear) profile. This also corresponds to the critical radius of the most unstable mode. The respectively smaller dot or diamond (if it exists) stands for the second most unstable mode.

For $10 \leq t \leq 20$ in figure 16, both DNS and weakly nonlinear mean flows appear unstable, with an excellent agreement between the growth rates, and between the corresponding vorticity profiles in figure 17. For $t=15$ and $t=20$ in figure 17, these profiles consist of a central region of large vorticity, surrounded by a ring of locally enhanced but relatively smaller vorticity. A local minimum is noticeable at $r=1.6$ between these two regions, as well as a second one slightly further out at $r=2.5$. Note the qualitative resemblance with the profiles considered in Kossin et al. (Reference Kossin, Schubert and Montgomery2000) (see their figures 1 and 12). The structure of the most unstable eigenmode supported by the DNS mean vorticity profile at $t=20$ is shown in figure 16(b). It bears several characteristics of a shear instability. Specifically, its phase velocity is very close to the angular velocity of the mean flow at the radius of the vorticity extremum (see the largest dot in figure 17). This radius, highlighted by the first black continuous circle in figure 16, is also the zero-amplitude isoline of the eigenmode structure. Tightly around it, the eigenmode reaches its largest amplitude in the form of four external lobes, rotated by ${\approx }{\rm \pi} /2$ in the anticlockwise direction with respect to four internal ones. This structure is similar to the one shown in figure 2 of Carton & Legras (Reference Carton and Legras1994), who interpret it under the Rossby-wave interaction paradigm. In words, the mean vorticity profile can be thought of as supporting Rossby (vorticity) waves at the ‘edges’ (sharp slopes) on either side of the local minima. These two vorticity waves are precisely the internal and external series of lobes of the eigenmode structure. The velocity perturbation induced by a vorticity wave has a phase lag of ${\rm \pi} /2$ with respect to the latter. 
Therefore, since the lobe series are also rotated by ${\approx }{\rm \pi} /2$ with respect to each other, the velocity perturbation induced by one is in phase with the vorticity perturbation of the other, thus reinforcing its growth. In addition, Carton & Legras (Reference Carton and Legras1994) mention that both waves interact in such a way as to travel with the same phase speed, so that the mutual reinforcement is preserved in time, eventually leading to instability.

For $t \geq 20$ in figure 16, the growth rates determined with the two approaches depart significantly from each other, and so do the mean vorticity profiles in figure 17. This is because the unstable mode grows and evolves in the DNS and feeds back on the mean flow, whereas in our weakly nonlinear approach, the structure of the mean flow distortion is fully determined by the Reynolds stress of the linear response.

Overall, it is plausible to attribute the departure of the fully nonlinear solution from the weakly nonlinear one, observable in figure 9 for large $U_0$, to a shear instability. This instability reaches its maximum growth rate around $t=20$, and some time is required for the unstable mode to gain amplitude and for its nonlinear evolution, taking a tripolar shape, to dominate the energy of the response, from $t=40$ onward for $U_0 = 2.82 \times 10^{-2}$ in figure 9. This is consistent with Carton & Legras (Reference Carton and Legras1994) and Kossin et al. (Reference Kossin, Schubert and Montgomery2000), who have also shown shear instability modes to saturate into a tripolar state. The weakly nonlinear expansion does not predict an amplitude for the unstable mode, since the latter only appears as a ‘secondary’ mode on top of the optimal response. Therefore the predictions of the amplitude equation for the optimal response depart from DNS results after the time needed for the shear-driven unstable mode to become dominant.

6. Summary and perspectives

In conclusion, we believe our work brings a twofold generalization to the existing literature. The first lies on a purely methodological level. We have derived an amplitude equation for non-normal systems, describing the transient response to an initial condition in a weakly nonlinear regime. Unlike in Ducimetière et al. (Reference Ducimetière, Boujo and Gallaire2022), the reference state of these systems can now depend arbitrarily on time, owing to the propagator formalism, without the need for the latter to take its particular operator exponential form. This offers numerous possibilities of application: weak nonlinearities could be modelled, for instance, in pulsating pipe flows, which play a key role in the hemodynamic system of many species (Pier & Schmid Reference Pier and Schmid2017, Reference Pier and Schmid2021); the method could also be applied to time-dependent stratified shear flows, which have been shown to support strong transient growth, for instance in Arratia, Caulfield & Chomaz (Reference Arratia, Caulfield and Chomaz2013) and Parker et al. (Reference Parker, Howland, Caulfield and Kerswell2021). Not only could the weakly nonlinear evolution of the gain associated with the linear optimal perturbation be captured, but the amplitude equation could also be included in a Lagrangian optimization problem whose stationarity conditions would constitute a weakly nonlinear optimal, parameterized by the amplitude of the initial condition.

The method does not rely on any modal (‘eigenmodal’) quantities; therefore the existence of a continuous spectrum and/or the absence of discrete eigenmodes is not problematic. As a corollary, the shape of the reference flow is not constrained, apart from the fact that it should lead to strong energy growth at some finite time. The equation is derived for the amplitude of the time-dependent (linear) optimal response, whose computation already encompasses the entirety of the spectrum, regardless of its precise nature. This was particularly convenient when applied to the two-dimensional Lamb–Oseen (Gaussian) vortex flow, whose linear optimal response is characterized by vorticity filaments under constant shear by the reference flow. The work of Schecter et al. (Reference Schecter, Dubin, Cass, Driscoll, Lansky and O'Neil2000) has shown this response to project onto a very large number of eigenmodes constituting the continuum, all with different shapes and frequencies. It is therefore extremely poorly described by a single eigenmode, which invalidates the use of classical weakly nonlinear methods.

Instead, by using the amplitude equation (3.33), we could correctly predict for different ${\textit {Re}}$ values the nonlinear evolution of the response for $0\leq t \leq 130$, and for small amplitudes of the initial condition. In this specific regime, at ${\textit {Re}}=5000$ as an example, nonlinearities have been found to reduce the transient gain for $0\leq t\leq 87$, but to enhance it for $87\leq t\leq 130$, as confirmed by DNS. Owing to the simplicity of the amplitude equation and its link to the sensitivity formula, this could be further related to the creation of an azimuthal mean flow distortion from the Reynolds stress of the linear response, which affects the Landau damping controlling the non-viscous dissipation of the response. The second harmonic effect has also been found to be important over the time interval where nonlinearities reinforce the gain.

However, for relatively large amplitudes of the initial condition, the predictions of the amplitude equation remain accurate only at small times. After a well-captured episode of gain reduction, the DNS with the largest considered initial condition amplitudes depart from the weakly nonlinear prediction and evidence a bifurcation towards a tripolar state. By performing a mean flow stability analysis, this departure can be related to the emergence of a shear instability. Specifically, the mean flow distortion induced by the Reynolds stress of the linear response, at the same time as it enhances the Landau damping through its effect at the radius $r_q$, generates a vorticity ring closely around the central part of the vortex. A shear instability results from the interaction between the inner ‘edge’ (sharp slope) of this ring and the outer edge of the central region. The weakly nonlinear approach does not provide an equation for the unstable mode that emerges on the mean flow. Therefore, as soon as the unstable mode dominates the response, around $t=40$ for the largest considered $U_0 = 2.82\times 10^{-2}$ and ${\textit {Re}}=5000$, the weakly nonlinear description rapidly degrades in terms of both energy and structure.

Nevertheless, at larger times, the amplitude equation can still give indirect information about the fully nonlinear response. Specifically, for ${\textit {Re}}=5000$ and above a certain value of the initial condition amplitude, the amplitude equation has no solution over a finite time interval contained in $87 \leq t\leq 130$, where the coefficient $\mu _r$ is positive. The non-existence of a solution over a time interval may imply that, at least over this interval, the fully nonlinear solution has reached another nonlinear state. This seems confirmed by the DNS. In this sense, the threshold initial condition amplitude predicted by the amplitude equation could be a reasonable approximation of the fully nonlinear one, even if a direct comparison between the two is found to be delicate.

For future research, the nonlinear self-sustaining mechanism(s) of the tripolar state remain to be clarified. In this perspective, a semi-linear approach such as the one deployed in Yim, Billant & Gallaire (Reference Yim, Billant and Gallaire2020) could be appropriate. This approach relies on the assumption that the dominant nonlinear mechanism is the Reynolds stress feedback onto the mean flow, thus neglecting the nonlinearities arising from the cross-coupling between different frequencies. In this sense, it is less rigorous than the weakly nonlinear expansion proposed here. Nevertheless, the semi-linear model does not assume the fluctuation about the mean flow to be small, and typically retains its spatial degrees of freedom. Therefore the nonlinear structure it predicts may differ considerably from that of the linear regime and could evolve towards a tripole.

Finally, we believe that the fact that the proposed method makes no assumption on the shape of the base profile, nor on the values of external parameters, requiring only that the reference flow lead to some energy growth, could be exploited further. For instance, the proposed amplitude equation could be employed to assess and interpret weakly nonlinear effects on the optimal response of vorticity profiles from actual field measurements, such as those reported in Kuo et al. (Reference Kuo, Williams and Chen1999), Reasor et al. (Reference Reasor, Montgomery, Marks and Gamache2000), Kossin et al. (Reference Kossin, Schubert and Montgomery2000) and Kossin & Schubert (Reference Kossin and Schubert2001). Indeed, the mean vorticity profile predicted in figure 17 of this work is qualitatively very similar to that of Hurricane Gilbert shown in figure 1 of Kossin et al. (Reference Kossin, Schubert and Montgomery2000). This raises the question of the relevance of non-normal mechanisms, combined with nonlinear effects, in tropical cyclones.

Acknowledgements

The authors wish to thank Dr E. Boujo for taking part in some discussions. The authors would also like to thank three anonymous referees for constructive comments which improved the manuscript.

Funding

This work was supported by the Swiss National Science Foundation (grant number 200341).

Declaration of interests

The authors report no conflict of interest.

Appendix A. Kernel of the adjoint of the perturbed inverse propagator

We demonstrate in the following that the non-trivial kernel $\hat {\boldsymbol {b}}(t)$ of the adjoint operator $\boldsymbol {\varPhi }(0,t)^{{\dagger} }$ for $t>0$ reads $\hat {\boldsymbol {b}}(t)=\boldsymbol {\varPsi }(t,0)^{{\dagger} }\hat {\boldsymbol {l}}(t)$:

(A1) \begin{align} \boldsymbol{\varPhi}(0,t)^{{{\dagger}}}\hat{\boldsymbol{b}}(t) &= \left[\boldsymbol{\varPsi}(0,t)\left(\boldsymbol{I} - \hat{\boldsymbol{l}}(t) \frac{\big\langle \hat{\boldsymbol{l}}(t) | \ast \big\rangle}{\|\hat{\boldsymbol{l}}(t)\|^2}\right) \right]^{{{\dagger}}}\hat{\boldsymbol{b}}(t) \nonumber\\ &= \left(\boldsymbol{I} - \hat{\boldsymbol{l}}(t)\frac{\big\langle \hat{\boldsymbol{l}}(t) | \ast \big\rangle}{\|\hat{\boldsymbol{l}}(t)\|^2}\right)\boldsymbol{\varPsi}(0,t)^{{{\dagger}}}\hat{\boldsymbol{b}}(t) \nonumber\\ &= \left(\boldsymbol{I} - \hat{\boldsymbol{l}}(t)\frac{\big\langle \hat{\boldsymbol{l}}(t) | \ast \big\rangle}{\|\hat{\boldsymbol{l}}(t)\|^2}\right) \left[ \boldsymbol{\varPsi}(t,0)\boldsymbol{\varPsi}(0,t) \right]^{{{\dagger}}}\hat{\boldsymbol{l}}(t) \nonumber\\ &= \left(\boldsymbol{I} - \hat{\boldsymbol{l}}(t) \frac{\big\langle \hat{\boldsymbol{l}}(t) | \ast\big\rangle}{\|\hat{\boldsymbol{l}}(t)\|^2}\right)\hat{\boldsymbol{l}}(t) \nonumber\\ &= \boldsymbol{0}. \end{align}

Appendix B. Temporal integration of the third-order equation

From (3.21), the particular solution for $\bar {\boldsymbol {u}}_3^{(1)}$ reads

(B1)\begin{equation} \bar{\boldsymbol{u}}_3^{(1)}(t,T) = \hat{\boldsymbol{u}}_{3a}^{(1)}(t) + A(T)|A(T)|^2\hat{\boldsymbol{u}}_{3b}^{(1)}(t) + (\mathrm{d}_T A(T))\hat{\boldsymbol{u}}_{3c}^{(1)}(t) + A(T)\hat{\boldsymbol{u}}_{3d}^{(1)}(t) ,\end{equation}

where

(B2) \begin{equation} \left.\begin{gathered} \partial_t \left(\boldsymbol{\varPhi}(0,t)\hat{\boldsymbol{u}}_{3a}^{(1)} \right) = \boldsymbol{0} \quad \text{with}\ \hat{\boldsymbol{u}}_{3a}^{(1)}(0) = \alpha \hat{\boldsymbol{u}}_h, \\ \partial_t \left(\boldsymbol{\varPhi}(0,t)\hat{\boldsymbol{u}}_{3b}^{(1)} \right) =-H\boldsymbol{\varPsi}(0,t)\tilde{\boldsymbol{f}} \quad \text{with}\ \hat{\boldsymbol{u}}_{3b}^{(1)}(0) = \boldsymbol{0}, \\ \partial_t \left(\boldsymbol{\varPhi}(0,t)\hat{\boldsymbol{u}}_{3c}^{(1)} \right) =-H\epsilon_o \hat{\boldsymbol{u}}_o \quad \text{with}\ \hat{\boldsymbol{u}}_{3c}^{(1)}(0) = \boldsymbol{0}, \\ \partial_t \left(\boldsymbol{\varPhi}(0,t)\hat{\boldsymbol{u}}_{3d}^{(1)} \right) =-\partial_t(H\hat{\boldsymbol{u}}_o) \quad \text{with}\ \hat{\boldsymbol{u}}_{3d}^{(1)}(0) = \boldsymbol{0}. \end{gathered}\right\} \end{equation}

Integrating in time the first equation in (B2) and using $\boldsymbol {\varPhi }(0,0) = \boldsymbol {I}$ yields

(B3)\begin{equation} \boldsymbol{\varPhi}(0,t)\hat{\boldsymbol{u}}_{3a}^{(1)}(t) = \boldsymbol{\varPhi}(0,0)\hat{\boldsymbol{u}}_{3a}^{(1)}(0) = \alpha \hat{\boldsymbol{u}}_h. \end{equation}

Integrating the second gives

(B4)\begin{equation} \boldsymbol{\varPhi}(0,t)\hat{\boldsymbol{u}}_{3b}^{(1)}(t) =-\int_{0}^{t} H(s)\boldsymbol{\varPsi}(0,s)\tilde{\boldsymbol{f}}(s)\,\mathrm{d} s =-\int_{0}^{t} \boldsymbol{\varPsi}(0,s)\tilde{\boldsymbol{f}}(s)\,\mathrm{d} s = \boldsymbol{\varPsi}(0,t)\tilde{\boldsymbol{u}}(t), \end{equation}

where we have defined

(B5)\begin{equation} \tilde{\boldsymbol{u}}(t) \doteq-\boldsymbol{\varPsi}(t,0)\int_{0}^{t}\boldsymbol{\varPsi}(0,s)\tilde{\boldsymbol{f}}(s)\,\mathrm{d} s, \end{equation}

or, equivalently using (2.11), $\tilde {\boldsymbol {u}}$ solves

(B6)\begin{equation} (\partial_t - \boldsymbol{\mathsf{L}}_{m}(t))\tilde{\boldsymbol{u}} =- \tilde{\boldsymbol{f}},\quad \text{subject to}\ \tilde{\boldsymbol{u}}(0) = \boldsymbol{0}. \end{equation}
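
Relations (B5) and (B6) are two equivalent characterizations of $\tilde {\boldsymbol {u}}$. As a minimal numerical sketch (with an arbitrary constant matrix standing in for $\boldsymbol{\mathsf{L}}_{m}$ and an illustrative forcing for $\tilde {\boldsymbol {f}}$, so that $\boldsymbol {\varPsi }(t,0)=\mathrm{e}^{\boldsymbol{\mathsf{L}}_{m}t}$), one can check that the Duhamel form (B5) matches the direct time integration of (B6):

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

rng = np.random.default_rng(2)
n, t_end = 3, 2.0
L = 0.3 * rng.standard_normal((n, n))  # illustrative constant stand-in for L_m
f = lambda t: np.array([np.sin(t), np.cos(2*t), 1.0])  # illustrative forcing f~

# Direct integration of (B6): (d/dt - L) u~ = -f~, with u~(0) = 0
sol = solve_ivp(lambda t, u: L @ u - f(t), (0.0, t_end),
                np.zeros(n), rtol=1e-10, atol=1e-12)
u_direct = sol.y[:, -1]

# Duhamel form (B5): u~(t) = -Psi(t,0) int_0^t Psi(0,s) f~(s) ds
s = np.linspace(0.0, t_end, 4001)
terms = np.array([expm(-L*si) @ f(si) for si in s])
h = s[1] - s[0]
integral = h * (terms[0]/2 + terms[1:-1].sum(axis=0) + terms[-1]/2)
u_duhamel = -expm(L*t_end) @ integral

assert np.allclose(u_direct, u_duhamel, rtol=1e-5, atol=1e-8)
```

In the paper the propagators are not matrix exponentials since $\boldsymbol{\mathsf{L}}_{m}$ depends on time; the check above only illustrates the equivalence in the autonomous special case.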

Integrating the third yields

(B7)\begin{align} \boldsymbol{\varPhi}(0,t)\hat{\boldsymbol{u}}_{3c}^{(1)}(t) &=-\int_{0}^{t}H(s)\epsilon_o \hat{\boldsymbol{u}}_o\,\mathrm{d} s =-[H(s)\epsilon_o s \hat{\boldsymbol{u}}_o ]^{s=t}_{s=0}+\int_{0}^{t}\delta(s)\epsilon_o s \hat{\boldsymbol{u}}_o\,\mathrm{d} s \nonumber\\ &=-\epsilon_o t \hat{\boldsymbol{u}}_o, \end{align}

and finally the fourth

(B8)\begin{equation} \boldsymbol{\varPhi}(0,t)\hat{\boldsymbol{u}}_{3d}^{(1)}(t) =-H(t)\hat{\boldsymbol{u}}_o. \end{equation}

Injecting these four solutions in (B1) and combining it with (3.21) leads to

(B9)\begin{equation} \boldsymbol{\varPhi}(0,t)\bar{\boldsymbol{u}}_{3}^{(1)} = \alpha \hat{\boldsymbol{u}}_h + A|A|^2\boldsymbol{\varPsi}(0,t)\tilde{\boldsymbol{u}} - (\mathrm{d}_T A)\epsilon_o t \hat{\boldsymbol{u}}_o -AH\hat{\boldsymbol{u}}_o. \end{equation}

Appendix C. Transient gain sensitivity in time-varying base flow and comparison with the amplitude equation

We recall that the gain squared associated with the linear trajectory optimized for $t=t_o$, through the choice of the initial condition $\hat {\boldsymbol {u}}_o$, reads

(C1)\begin{equation} G(t;t_o)^2 = \|\boldsymbol{\varPsi}(t,0)\hat{\boldsymbol{u}}_o\|^2 = \big\langle \hat{\boldsymbol{u}}_o | \boldsymbol{\varPsi}(t,0)^{{{\dagger}}}\boldsymbol{\varPsi}(t,0)\hat{\boldsymbol{u}}_o \big\rangle. \end{equation}

Note that, by definition, $G(t_o;t_o)=G_o(t_o) = 1/\epsilon _o$ as defined in (2.13). We recall as well that $G(t)$ is used as a shorthand for $G(t;t_o)$. We now derive the variation $\delta (G(t)^2)$ of this linear gain induced by a variation $\delta \boldsymbol{\mathsf{L}}_{m}(t)$ of the operator $\boldsymbol{\mathsf{L}}_{m}(t)$. We first compute

(C2) \begin{align} \delta(G(t)^2) &= \big\langle \hat{\boldsymbol{u}}_o | \delta(\boldsymbol{\varPsi}(t,0)^{{{\dagger}}}\boldsymbol{\varPsi}(t,0))\hat{\boldsymbol{u}}_o \big\rangle \nonumber\\ &= \big\langle \hat{\boldsymbol{u}}_o | \delta(\boldsymbol{\varPsi}(t,0)^{{{\dagger}}})\boldsymbol{\varPsi}(t,0)\hat{\boldsymbol{u}}_o +\boldsymbol{\varPsi}(t,0)^{{{\dagger}}}\delta\boldsymbol{\varPsi}(t,0)\hat{\boldsymbol{u}}_o \big\rangle \nonumber\\ &= \big\langle \delta\boldsymbol{\varPsi}(t,0)\hat{\boldsymbol{u}}_o | \boldsymbol{\varPsi}(t,0)\hat{\boldsymbol{u}}_o \big\rangle +\big\langle \boldsymbol{\varPsi}(t,0)\hat{\boldsymbol{u}}_o | \delta \boldsymbol{\varPsi}(t,0)\hat{\boldsymbol{u}}_o \big\rangle \nonumber\\ &= 2\mbox{Re}\left(\big\langle \boldsymbol{\varPsi}(t,0)\hat{\boldsymbol{u}}_o | \delta\boldsymbol{\varPsi}(t,0)\hat{\boldsymbol{u}}_o \big\rangle \right) \nonumber\\ &= 2\epsilon_o^{-1}\mbox{Re}\big(\big\langle \hat{\boldsymbol{l}}(t) | \delta\boldsymbol{\varPsi}(t,0)\hat{\boldsymbol{u}}_o \big\rangle \big), \end{align}

where we used that $\delta (\boldsymbol {\varPsi }(t,0)^{{\dagger} }) = (\delta \boldsymbol {\varPsi }(t,0))^{{\dagger} }$: the variation of the adjoint is the adjoint of the variation, as easily shown by using the definition of the adjoint operator. Next, we link $\delta \boldsymbol {\varPsi }(t,0)$ with $\delta \boldsymbol{\mathsf{L}}_{m}(t)$ by taking the variation of (2.8), which leads to

(C3)\begin{equation} \partial_t (\delta\boldsymbol{\varPsi}(t,0)) = (\delta\boldsymbol{\mathsf{L}}_{m}(t))\boldsymbol{\varPsi}(t,0)+ \boldsymbol{\mathsf{L}}_{m}(t)\delta\boldsymbol{\varPsi}(t,0). \end{equation}

Multiplying (C3) by $\boldsymbol {\varPsi }(0,t)$ and using (2.10), stating that $\boldsymbol{\mathsf{L}}_{m}(t) = - \boldsymbol {\varPsi }(t,0)\partial _t\boldsymbol {\varPsi }(0,t)$, leads to

(C4)\begin{equation} \left.\begin{gathered} \boldsymbol{\varPsi}(0,t)\partial_t (\delta\boldsymbol{\varPsi}(t,0)) = \boldsymbol{\varPsi}(0,t)(\delta\boldsymbol{\mathsf{L}}_{m}(t))\boldsymbol{\varPsi}(t,0) - (\partial_t\boldsymbol{\varPsi}(0,t))\delta\boldsymbol{\varPsi}(t,0),\quad \text{thus} \\ \partial_t (\boldsymbol{\varPsi}(0,t) \delta\boldsymbol{\varPsi}(t,0)) = \boldsymbol{\varPsi}(0,t)(\delta\boldsymbol{\mathsf{L}}_{m}(t))\boldsymbol{\varPsi}(t,0). \end{gathered}\right\} \end{equation}

Integrating (C4) in time and imposing $\delta \boldsymbol {\varPsi }(0,0)=\boldsymbol {0}$ results in

(C5)\begin{align} \delta\boldsymbol{\varPsi}(t,0) &= \boldsymbol{\varPsi}(t,0) \int_{0}^{t} \boldsymbol{\varPsi}(0,s)(\delta\boldsymbol{\mathsf{L}}_{m}(s))\boldsymbol{\varPsi}(s,0)\,\mathrm{d} s \nonumber\\ &= \int_{0}^{t} \boldsymbol{\varPsi}(t,s)(\delta\boldsymbol{\mathsf{L}}_{m}(s))\boldsymbol{\varPsi}(s,0)\,\mathrm{d} s. \end{align}
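
Formula (C5) can be checked numerically against a finite-difference variation of the propagator. The sketch below uses an illustrative constant operator, so that $\boldsymbol {\varPsi }(t,0)=\mathrm{e}^{\boldsymbol{\mathsf{L}}_{m}t}$ and the integrand involves only matrix exponentials (the matrices are arbitrary data, not the Navier–Stokes operator):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n, t = 4, 1.0
L = 0.5 * rng.standard_normal((n, n))    # illustrative constant stand-in for L_m
dL = 1e-6 * rng.standard_normal((n, n))  # small perturbation delta L_m

# First-order variation (C5): dPsi = int_0^t e^{L(t-s)} dL e^{L s} ds
s = np.linspace(0.0, t, 2001)
terms = np.array([expm(L*(t - si)) @ dL @ expm(L*si) for si in s])
h = s[1] - s[0]
dPsi = h * (terms[0]/2 + terms[1:-1].sum(axis=0) + terms[-1]/2)

# Finite-difference reference for the perturbed propagator
dPsi_fd = expm((L + dL)*t) - expm(L*t)
assert np.allclose(dPsi, dPsi_fd, rtol=1e-4, atol=1e-9)
```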

In the particular case where $\boldsymbol{\mathsf{L}}_{m}$ does not depend on time, the propagator reads $\boldsymbol {\varPsi }(t,0)=\textrm {e}^{\boldsymbol{\mathsf{L}}_{m} t}$, and relation (C5) reduces to formula (6) at p. 175 of Bellman (1997). By injecting (C5) in (C2) we obtain

(C6)\begin{align} \delta(G(t)^2) &= 2\epsilon_o^{-1}\mbox{Re}\left( \left \langle \hat{\boldsymbol{l}}(t) | \int_{0}^{t} \boldsymbol{\varPsi}(t,s)(\delta\boldsymbol{\mathsf{L}}_{m}(s))\boldsymbol{\varPsi}(s,0)\hat{\boldsymbol{u}}_o \,\mathrm{d} s \right \rangle \right) \nonumber\\ &= 2\epsilon_o^{-2}\mbox{Re}\left(\int_{0}^{t} \big\langle \boldsymbol{\varPsi}(t,s)^{{{\dagger}}}\hat{\boldsymbol{l}}(t) | \delta\boldsymbol{\mathsf{L}}_{m}(s)\hat{\boldsymbol{l}}(s) \big\rangle\mathrm{d} s \right). \end{align}

On the other hand, by re-formulating (3.34), the weakly nonlinear gain $G_{{w}}$ is found to satisfy

(C7)\begin{equation} 1-\frac{G(t)^2}{G_{{w}}(t)^2}= 2\left(\frac{U_0}{\epsilon_o}\right)^2\mu_r(t). \end{equation}

Considering small variations around the linear gain $G_{{w}}(t)^2 = G(t)^2 + \delta (G(t)^2)$ with $\delta (G(t)^2)/G(t)^2 \ll 1$, (C7) reduces to

(C8)\begin{equation} \delta(G(t)^2)= 2G(t)^2\left(\frac{U_0}{\epsilon_o}\right)^2\mu_r(t), \end{equation}

where, from its definition,

(C9) \begin{align} \mu(t) = \frac{ \big\langle \hat{\boldsymbol{l}}(t) | \tilde{\boldsymbol{u}}(t) \big\rangle}{\|\hat{\boldsymbol{l}}(t)\|^2} &=-\frac{\left \langle \hat{\boldsymbol{l}}(t) | \boldsymbol{\varPsi}(t,0) \displaystyle\int_{0}^{t}\boldsymbol{\varPsi}(0,s)\tilde{\boldsymbol{f}}(s)\,\mathrm{d} s \right \rangle}{\|\hat{\boldsymbol{l}}(t)\|^2} \nonumber\\ &=-\frac{\displaystyle\int_{0}^{t}\big\langle \boldsymbol{\varPsi}(t,s)^{{{\dagger}}}\hat{\boldsymbol{l}}(t) |\, \tilde{\boldsymbol{f}}(s) \big\rangle\mathrm{d} s}{\|\hat{\boldsymbol{l}}(t)\|^2} \nonumber\\ &=-\frac{1}{(\epsilon_o G(t))^{2}}\int_{0}^{t}\big\langle \boldsymbol{\varPsi}(t,s)^{{{\dagger}}}\hat{\boldsymbol{l}}(t) |\, \tilde{\boldsymbol{f}}(s) \big\rangle\mathrm{d} s. \end{align}

Combining (C8) with (C9) results in

(C10)\begin{equation} \delta(G(t)^2) =-2\epsilon_o^{-2}\left(\frac{U_0}{\epsilon_o}\right)^2\mbox{Re} \left(\int_{0}^{t}\big\langle \boldsymbol{\varPsi}(t,s)^{{{\dagger}}}\hat{\boldsymbol{l}}(t) |\, \tilde{\boldsymbol{f}}(s) \big\rangle\mathrm{d} s\right). \end{equation}

Identifying (C6) with (C10) leads us to conclude that the small variation of the weakly nonlinear gain around the linear one, as predicted by our model, reduces to the sensitivity of this linear gain to a perturbation $\delta \boldsymbol{\mathsf{L}}_{m}(t)$ satisfying

(C11)\begin{equation} \delta \boldsymbol{\mathsf{L}}_{m}(t)\hat{\boldsymbol{l}}(t) =-\left(\frac{U_0}{\epsilon_o}\right)^2\tilde{\boldsymbol{f}}(t). \end{equation}

From the definition of $\tilde {\boldsymbol {f}}(t)$ in (3.19), such a $\delta \boldsymbol{\mathsf{L}}_{m}$ corresponds to an axisymmetric perturbation of the base flow from $\boldsymbol {U}_{{b}}$ to $\boldsymbol {U}_{{b}} + (U_0/\epsilon _o)^2\boldsymbol {u}^{(0)}_2$, and also embeds the effect of the second harmonic $\hat {\boldsymbol {u}}^{(2)}_2$. Note that $(U_0/\epsilon _o)$ is the amplitude of the response in the linear regime.

Appendix D. The pseudo-forcing method

We add to the asymptotic expansion (3.4) the trivial equality

(D1)\begin{equation} \boldsymbol{0} = \left[\sqrt{\epsilon_o}(\epsilon_o A(T)\delta(t)\hat{\boldsymbol{u}}_o)-\sqrt{\epsilon_o}^3(A(T)\delta(t)\hat{\boldsymbol{u}}_o)\right]{\rm e}^{\text{i} m\theta} + \text{c.c.}, \end{equation}

where $\delta (t) = \mathrm {d} H(t) /\mathrm {d} t$ such that $\int _{0}^{t>0}\delta (t)\,\mathrm {d} t = H(t>0)-H(0)=1$. The terms are then directly collected at each order without ever perturbing the propagator. According to (D1), the term $\epsilon _o A(T)\delta (t)\hat {\boldsymbol {u}}_o$ will act as a forcing at order $\sqrt {\epsilon _o}$, whereas the term $-A(T)\delta (t)\hat {\boldsymbol {u}}_o$ will act as a forcing at order $\sqrt {\epsilon _o}^3$.
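
The trick relies on the equivalence between an impulsive forcing $\delta (t)\hat {\boldsymbol {u}}_o$ applied to a system at rest and the initial condition $\hat {\boldsymbol {u}}_o$. This equivalence is illustrated below for a generic constant operator (arbitrary illustrative matrices, with a narrow Gaussian pulse of unit integral approximating $\delta (t)$); the impulsively forced trajectory started from rest coincides with the unforced one started from $\hat {\boldsymbol {u}}_o$:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)
n = 3
L = 0.5 * rng.standard_normal((n, n))  # illustrative constant operator
u0 = rng.standard_normal(n)            # stand-in for the optimal initial condition

# Narrow Gaussian pulse of unit integral approximating delta(t)
eps = 1e-3
t0 = 5 * eps
pulse = lambda t: np.exp(-0.5*((t - t0)/eps)**2) / (eps*np.sqrt(2*np.pi))

# Trajectory started from rest with the impulsive forcing pulse(t)*u0
sol = solve_ivp(lambda t, u: L @ u + pulse(t)*u0, (0.0, 1.0), np.zeros(n),
                rtol=1e-10, atol=1e-12, max_step=eps/4)
u_forced = sol.y[:, -1]

# Unforced trajectory started from u0 at the pulse centre t0
u_ivp = expm(L * (1.0 - t0)) @ u0
assert np.allclose(u_forced, u_ivp, rtol=1e-3, atol=1e-5)
```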

The equation assembled at order $\sqrt {\epsilon _o}$ is therefore

(D2)\begin{equation} (\partial_t-\boldsymbol{\mathsf{L}}_{m}(t))\bar{\boldsymbol{u}}_1^{(1)} = A\delta\epsilon_o\hat{\boldsymbol{u}}_o,\quad \text{with}\ \bar{\boldsymbol{u}}_1^{(1)}|_{t=0} = \boldsymbol{0}. \end{equation}

By postulating $\bar {\boldsymbol {u}}_1^{(1)}(t,T) = A(T)\hat {\boldsymbol {u}}^{(1)}_1(t)$, the field $\hat {\boldsymbol {u}}^{(1)}_1$ solves

(D3)\begin{equation} \hat{\boldsymbol{u}}^{(1)}_1(t) = \boldsymbol{\varPsi}(t,0)\int_{0}^{t}\boldsymbol{\varPsi}(0,s)\delta(s)\epsilon_o\hat{\boldsymbol{u}}_o\,\mathrm{d} s = \begin{cases} \boldsymbol{0} & \text{if } t=0 \\ \boldsymbol{\varPsi}(t,0)\epsilon_o\hat{\boldsymbol{u}}_o = \hat{\boldsymbol{l}}(t) & \text{if } t>0, \end{cases} \end{equation}

thus $\hat {\boldsymbol {u}}^{(1)}_1(t)=H(t)\hat {\boldsymbol {l}}(t)$ and $\bar {\boldsymbol {u}}_1^{(1)}(t,T) = A(T)H(t)\hat {\boldsymbol {l}}(t)$ for $t\geq 0$, exactly as in (3.14). The equations and their solutions at order $\epsilon _o$ are the same as in the main text. At order $\sqrt {\epsilon _o}^3$, the equation for the Fourier component at $(m)$ reads

(D4)\begin{equation} (\partial_t-\boldsymbol{\mathsf{L}}_{m}(t))\bar{\boldsymbol{u}}_3^{(1)}=-A|A|^2H\tilde{\boldsymbol{f}}-(\mathrm{d}_T A)H\hat{\boldsymbol{l}}-A\delta \hat{\boldsymbol{u}}_o,\quad \text{with}\ \bar{\boldsymbol{u}}_3^{(1)}|_{t=0}=\alpha \hat{\boldsymbol{u}}_h, \end{equation}

where $A$ is still undetermined. The Fredholm alternative cannot be invoked since (D4) is directly solvable, but it is replaced by an asymptotic-preserving argument of the same nature. As $\bar {\boldsymbol {u}}_3^{(1)}$ oscillates at $m$ and is subject to a non-zero initial condition, it is liable to be amplified by a factor $1/\epsilon _o$ at $t=t_o$, thus reaching the order of the response at $\sqrt {\epsilon _o}$; to avoid this scenario ruining the asymptotic hierarchy, we impose the orthogonality of $\bar {\boldsymbol {u}}_3^{(1)}$ with the linear response that is the most amplified at $t=t_o$, namely $\hat {\boldsymbol {l}}(t)$, which closes the system. It is easily shown that the solution to (D4) reads

(D5)\begin{equation} \bar{\boldsymbol{u}}_3^{(1)} = \boldsymbol{\varPsi}(t,0)\alpha\hat{\boldsymbol{u}}_h + A|A|^2\tilde{\boldsymbol{u}} - t (\mathrm{d}_T A) \hat{\boldsymbol{l}} - \epsilon_o^{-1} A H \hat{\boldsymbol{l}}, \end{equation}

whose condition of orthogonality with $\hat {\boldsymbol {l}}(t)$ for all $t>0$ results in

(D6) \begin{equation} \big\langle \hat{\boldsymbol{l}} | \bar{\boldsymbol{u}}_3^{(1)} \big\rangle = \alpha \big\langle \hat{\boldsymbol{l}} | \boldsymbol{\varPsi}(t,0)\hat{\boldsymbol{u}}_h \big\rangle + A|A|^2\big\langle \hat{\boldsymbol{l}} | \tilde{\boldsymbol{u}} \big\rangle - t (\mathrm{d}_T A) \|\hat{\boldsymbol{l}}\|^2 - \epsilon_o^{-1} A H \|\hat{\boldsymbol{l}}\|^2 = 0. \end{equation}

Multiplying (D6) by $\epsilon _o/\|\hat {\boldsymbol {l}}\|^2 = 1/\big\langle \hat {\boldsymbol {b}} | \hat {\boldsymbol {u}}_o \big\rangle$ finally gives

(D7) \begin{equation} \epsilon_o t \frac{\mathrm{d} A}{\mathrm{d} T} = \alpha \frac{\big\langle \hat{\boldsymbol{b}} | \hat{\boldsymbol{u}}_h \big\rangle}{\big\langle \hat{\boldsymbol{b}} | \hat{\boldsymbol{u}}_o \big\rangle} + A|A|^2 \frac{\big\langle \hat{\boldsymbol{b}} | \boldsymbol{\varPsi}(0,t)\tilde{\boldsymbol{u}}\big\rangle}{\big\langle \hat{\boldsymbol{b}} | \hat{\boldsymbol{u}}_o \big\rangle} - A, \end{equation}

which is exactly the amplitude equation (3.24) derived in the main text by perturbing the propagator.
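
For illustration, (D7) can be marched in time once the projections are tabulated from the linear solves. The sketch below uses hypothetical smooth coefficient functions in place of the true projections (which require $\hat {\boldsymbol {b}}$, $\hat {\boldsymbol {u}}_h$ and $\tilde {\boldsymbol {u}}$ from the main text); only the structure of the equation is meant to be illustrative:

```python
import numpy as np

eps_o = 0.05                                  # inverse optimal gain (illustrative)
alpha_term = lambda t: 0.1 * np.exp(-0.05*t)  # hypothetical alpha <b|u_h>/<b|u_o>
mu = lambda t: -0.3 * np.tanh(0.1*t)          # hypothetical <b|Psi(0,t)u~>/<b|u_o>

def march_amplitude(A0, t_grid):
    """Explicit Euler march of eps_o * t * dA/dT = alpha_term + mu*A|A|^2 - A,
    with the slow time T = eps_o * t (so dT = eps_o * dt)."""
    A, out = complex(A0), [complex(A0)]
    for k in range(len(t_grid) - 1):
        t, dt = t_grid[k], t_grid[k+1] - t_grid[k]
        dA_dT = (alpha_term(t) + mu(t)*A*abs(A)**2 - A) / (eps_o * t)
        A += dA_dT * eps_o * dt
        out.append(A)
    return np.array(out)

t_grid = np.linspace(1.0, 50.0, 5001)  # start at t > 0: (D7) is singular at t = 0
A = march_amplitude(1.0, t_grid)
assert np.isfinite(A).all() and abs(A[-1]) < abs(A[0])
```

With these damping-like illustrative coefficients the amplitude relaxes; with a destabilizing cubic term the march instead diverges in finite time, which is the loss-of-solution scenario discussed in the main text.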

As a side comment, note that $\bar {\boldsymbol {u}}_3^{(1)}$ in (D5) is also the particular solution of (3.22). Indeed, the orthogonality condition guarantees $\big\langle \hat {\boldsymbol {l}} | \bar {\boldsymbol {u}}_3^{(1)} \big\rangle =0$, such that $\boldsymbol {\varPhi }(0,t)\bar {\boldsymbol {u}}_3^{(1)} = \boldsymbol {\varPsi }(0,t)\bar {\boldsymbol {u}}_3^{(1)}$ and (3.22) is automatically satisfied. Therefore, not only are the amplitude equations the same for both methods, but so are the higher-order terms of the expansion, which require knowledge of the field $\bar {\boldsymbol {u}}_3^{(1)}$. This holds as long as the homogeneous solution for $\bar {\boldsymbol {u}}_3^{(1)}$, not necessarily null for the method based on singularizing the propagator, is ignored.

Appendix E. Expression of the adjoint operator

The adjoint of the operator $\boldsymbol{\mathsf{B}}_{m}(t)$ under the scalar product defined in (2.12) is such that $\left \langle \boldsymbol{\mathsf{B}}_{m}\hat {\boldsymbol {q}}_a | \hat {\boldsymbol {q}}_b \right \rangle =\left \langle \hat {\boldsymbol {q}}_a | \boldsymbol{\mathsf{B}}_{m}^{{\dagger} }\hat {\boldsymbol {q}}_b \right \rangle$ for any $\hat {\boldsymbol {q}}_a\doteq [\hat {\boldsymbol {u}}_a,\hat {p}_a]$ and $\hat {\boldsymbol {q}}_b\doteq [\hat {\boldsymbol {u}}_b,\hat {p}_b]$. By performing integrations by parts, the following relations are easily demonstrated:

(E1)\begin{equation} \left.\begin{gathered} \left \langle \partial_r \hat{\boldsymbol{u}}_a | \hat{\boldsymbol{u}}_b \right \rangle = (r \hat{\boldsymbol{u}}_a^{H}\hat{\boldsymbol{u}}_b)|_{r \rightarrow \infty} - \left \langle \hat{\boldsymbol{u}}_a | (\partial_r +1/r)\hat{\boldsymbol{u}}_b \right \rangle, \\ \left \langle \varDelta_{m} \hat{\boldsymbol{u}}_a | \hat{\boldsymbol{u}}_b \right \rangle = [(\partial_r\hat{\boldsymbol{u}}_a^{H})\hat{\boldsymbol{u}}_br-\hat{\boldsymbol{u}}_a^{H} (r\partial_r\hat{\boldsymbol{u}}_b)]_{r\rightarrow \infty}+\left \langle \hat{\boldsymbol{u}}_a | \varDelta_{m} \hat{\boldsymbol{u}}_b \right \rangle. \end{gathered}\right\} \end{equation}

From here, the explicit expression of the adjoint $\boldsymbol{\mathsf{B}}_{m}(t)^{{{\dagger}}}$ follows immediately as

(E2)\begin{equation} \boldsymbol{\mathsf{B}}_{m}(t)^{{{\dagger}}} = \begin{bmatrix} (\varDelta_{m}-1/r^2)/{\textit {Re}} + \text{i} m\varOmega & -2\text{i} m/(r^2 {\textit {Re}}) - W_z & -\partial_r\\ 2\text{i} m/(r^2 {\textit {Re}}) + 2\varOmega & (\varDelta_{m}-1/r^2)/{\textit {Re}} + \text{i} m\varOmega & -\text{i} m/r\\ \partial_r + 1/r & \text{i} m/r & 0 \end{bmatrix}, \end{equation}

and the cancellation of the boundary terms resulting from the integrations by parts requires

(E3)\begin{equation} -(r\hat{p}^{*}\hat{u}^{{{\dagger}}}) + (r\hat{u}^{*}\hat{p}^{{{\dagger}}}) + r(\hat{u}^{{{\dagger}}}\partial_r\hat{u}^*-\hat{u}^*\partial_r\hat{u}^{{{\dagger}}}) + r(\hat{v}^{{{\dagger}}}\partial_r\hat{v}^*-\hat{v}^*\partial_r\hat{v}^{{{\dagger}}}) =0 ,\end{equation}

to hold at $r\rightarrow \infty$. Using the far-field condition on the direct field, stating $\hat {\boldsymbol {u}}|_{r\rightarrow \infty }= \boldsymbol {0}$, condition (E3) implies that the adjoint velocity field also vanishes at infinity, i.e. $\hat {\boldsymbol {u}}^{{\dagger} }|_{r\rightarrow \infty }= \boldsymbol {0}$.
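
As a sanity check, the first identity in (E1) can be verified numerically for scalar test functions that vanish in the far field, under the radially weighted inner product $\langle a | b \rangle = \int_0^{\infty} a^{*} b \, r \, \mathrm{d}r$ (a minimal sketch with arbitrary smooth illustrative profiles):

```python
import numpy as np

# Radial grid, truncated far enough that the test functions have decayed
r = np.linspace(1e-6, 40.0, 200001)
ua = r * np.exp(-r**2)       # illustrative profile, regular at r = 0
ub = np.exp(-r) * np.sin(r)  # decays at infinity

def inner(a, b):
    """Radially weighted inner product int a* b r dr (trapezoidal rule)."""
    integrand = np.conj(a) * b * r
    return np.sum((integrand[1:] + integrand[:-1]) / 2 * np.diff(r))

d_dr = lambda f: np.gradient(f, r)  # second-order finite differences

# First identity of (E1); the boundary term (r ua* ub) vanishes here
lhs = inner(d_dr(ua), ub)
rhs = -inner(ua, d_dr(ub) + ub / r)
assert abs(lhs - rhs) < 1e-6
```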

References

Antkowiak, A. 2005 Dynamique aux temps courts d'un tourbillon isolé. PhD thesis, Université Paul Sabatier, Toulouse, France.
Antkowiak, A. & Brancher, P. 2004 Transient energy growth for the Lamb–Oseen vortex. Phys. Fluids 16 (1), L1–L4.
Antkowiak, A. & Brancher, P. 2007 Transition sous-critique tripolaire dans les tourbillons. In CFM 2007 – 18ème Congrès Français de Mécanique (ed. Association Française de Mécanique). AFM, Courbevoie.
Arratia, C., Caulfield, C.P. & Chomaz, J.-M. 2013 Transient perturbation growth in time-dependent mixing layers. J. Fluid Mech. 717, 90–133.
Balmforth, N.J., Llewellyn Smith, S.G. & Young, W.R. 2001 Disturbing vortices. J. Fluid Mech. 426, 95–133.
Bellman, R. 1997 Introduction to Matrix Analysis, 2nd edn. Society for Industrial and Applied Mathematics.
Bernoff, A.J. & Lingevitch, J.F. 1994 Rapid relaxation of an axisymmetric vortex. Phys. Fluids 6 (11), 3717–3723.
Billant, P. & Gallaire, F. 2005 Generalized Rayleigh criterion for non-axisymmetric centrifugal instabilities. J. Fluid Mech. 542, 365–379.
Briggs, R.J., Daugherty, J.D. & Levy, R.H. 1970 Role of Landau damping in crossed-field electron beams and inviscid shear flow. Phys. Fluids 13 (2), 421–432.
Canuto, C., Quarteroni, A., Hussaini, M.Y. & Zang, T.A. 2007 Spectral Methods. Evolution to Complex Geometries and Applications to Fluid Dynamics. Springer.
Carnevale, G.F. & Kloosterziel, R.C. 1994 Emergence and evolution of triangular vortices. J. Fluid Mech. 259, 305–331.
Carton, X. & Legras, B. 1994 The life-cycle of tripoles in two-dimensional incompressible flows. J. Fluid Mech. 267, 53–82.
Carton, X.J., Flierl, G.R. & Polvani, L.M. 1989 The generation of tripoles from unstable axisymmetric isolated vortex structures. Europhys. Lett. 9 (4), 339–344.
Denoix, M.-A., Sommeria, J. & Thess, A. 1994 Two-dimensional turbulence: the prediction of coherent structures by statistical mechanics. In Progress in Turbulence Research (ed. H. Branover & Y. Unger), pp. 88–107. American Institute of Aeronautics and Astronautics.
Ducimetière, Y.-M., Boujo, E. & Gallaire, F. 2022 Weak nonlinearity for strong non-normality. J. Fluid Mech. 947, A43.
Farrell, B.F. 1987 Developing disturbances in shear. J. Atmos. Sci. 44 (16), 2191–2199.
Farrell, B.F. & Ioannou, P.J. 1993 Optimal excitation of three-dimensional perturbations in viscous constant shear flow. Phys. Fluids A 5 (6), 1390–1400.
Farrell, B.F. & Ioannou, P.J. 1996 Generalized stability theory. Part 2. Nonautonomous operators. J. Atmos. Sci. 53 (14), 2041–2053.
Heaton, C.J. & Peake, N. 2007 Transient growth in vortices with axial flow. J. Fluid Mech. 587, 271–301.
Kerswell, R.R. & Davey, A. 1996 On the linear instability of elliptic pipe flow. J. Fluid Mech. 316, 307–324.
Kloosterziel, R.C. & van Heijst, G.J.F. 1991 An experimental study of unstable barotropic vortices in a rotating fluid. J. Fluid Mech. 223, 1–24.
Kossin, J.P. & Schubert, W.H. 2001 Mesovortices, polygonal flow patterns, and rapid pressure falls in hurricane-like vortices. J. Atmos. Sci. 58 (15), 2196–2209.
Kossin, J.P., Schubert, W.H. & Montgomery, M.T. 2000 Unstable interactions between a hurricane's primary eyewall and a secondary ring of enhanced vorticity. J. Atmos. Sci. 57 (24), 3893–3917.
Kuo, H.-C., Williams, R.T. & Chen, J.-H. 1999 A possible mechanism for the eye rotation of Typhoon Herb. J. Atmos. Sci. 56 (11), 1659–1673.
Le Dizès, S. 2000 Non-axisymmetric vortices in two-dimensional flows. J. Fluid Mech. 406, 175–198.
Lundgren, T.S. 1982 Strained spiral vortex model for turbulent fine structure. Phys. Fluids 25 (12), 2193–2203.
Mao, X. & Sherwin, S.J. 2012 Transient growth associated with continuous spectra of the Batchelor vortex. J. Fluid Mech. 697, 35–59.
Navrose, Johnson, H.G., Brion, V., Jacquin, L. & Robinet, J.C. 2018 Optimal perturbation for two-dimensional vortex systems: route to non-axisymmetric state. J. Fluid Mech. 855, 922–952.
Nolan, D.S. & Farrell, B.F. 1999 The intensification of two-dimensional swirling flows by stochastic asymmetric forcing. J. Atmos. Sci. 56 (23), 3937–3962.
Parker, J.P., Howland, C.J., Caulfield, C.P. & Kerswell, R.R. 2021 Optimal perturbation growth on a breaking internal gravity wave. J. Fluid Mech. 925, A16.
Pier, B. & Schmid, P.J. 2017 Linear and nonlinear dynamics of pulsatile channel flow. J. Fluid Mech. 815, 435–480.
Pier, B. & Schmid, P.J. 2021 Optimal energy growth in pulsatile channel and pipe flows. J. Fluid Mech. 926, A11.
Pradeep, D.S. & Hussain, F. 2006 Transient growth of perturbations in a vortex column. J. Fluid Mech. 550, 251–288.
Pringle, C. & Kerswell, R. 2010 Using nonlinear transient growth to construct the minimal seed for shear flow turbulence. Phys. Rev. Lett. 105, 154502.
Reasor, P.D., Montgomery, M.T., Marks, F.D. & Gamache, J.F. 2000 Low-wavenumber structure and evolution of the hurricane inner core observed by airborne dual-Doppler radar. Mon. Weath. Rev. 128 (6), 1653–1680.
Rhines, P.B. & Young, W.R. 1983 How rapidly is a passive scalar mixed within closed streamlines? J. Fluid Mech. 133, 133–145.
Rossi, L.F., Lingevitch, J.F. & Bernoff, A.J. 1997 Quasi-steady monopole and tripole attractors for relaxing vortices. Phys. Fluids 9 (8), 2329–2338.
Schecter, D.A., Dubin, D.H.E., Cass, A.C., Driscoll, C.F., Lansky, I.M. & O'Neil, T.M. 2000 Inviscid damping of asymmetries on a two-dimensional vortex. Phys. Fluids 12 (10), 2397–2412.
Sipp, D. 2000 Weakly nonlinear saturation of short-wave instabilities in a strained Lamb–Oseen vortex. Phys. Fluids 12 (7), 1715–1729.
Trefethen, L.N., Trefethen, A.E., Reddy, S.C. & Driscoll, T.A. 1993 Hydrodynamic stability without eigenvalues. Science 261 (5121), 578–584.
Turner, M.R. & Gilbert, A.D. 2007 Linear and nonlinear decay of cat's eyes in two-dimensional vortices, and the link to Landau poles. J. Fluid Mech. 593, 255–279.
Turner, M.R., Gilbert, A.D. & Bassom, A.P. 2008 Neutral modes of a two-dimensional vortex and their link to persistent cat's eyes. Phys. Fluids 20 (2), 027101.
Van Heijst, G.J.F. & Kloosterziel, R.C. 1989 Tripolar vortices in a rotating fluid. Nature 338, 569–571.
Van Heijst, G.J.F., Kloosterziel, R.C. & Williams, C.W.M. 1991 Formation of a tripolar vortex in a rotating fluid. Phys. Fluids A 3 (9), 2033.
Viola, F., Arratia, C. & Gallaire, F. 2016 Mode selection in trailing vortices: harmonic response of the non-parallel Batchelor vortex. J. Fluid Mech. 790, 523–552.
Yim, E., Billant, P. & Gallaire, F. 2020 Nonlinear evolution of the centrifugal instability using a semilinear model. J. Fluid Mech. 897, A34.

Figure 1. Linear transient growth in the two-dimensional, time-dependent Lamb–Oseen vortex flow for varying ${\textit {Re}}$ $\in [1.25,2.5,5,10]\times 10^{3}$, larger ${\textit {Re}}$ corresponding to lighter colours. The perturbation has wavenumber $m=2$. (a) Optimal gain as defined in (2.13) as a function of the temporal horizon $t_o$. The star marker corresponds to its maximum value over $t_o$ and for a given ${\textit {Re}}$, that is, to $G_o(t_{o,{max}})=1/\epsilon _{o,{max}}$; (b) $\epsilon _{o,{max}}$ multiplied by ${\textit {Re}}^{1/3}$ and plotted as a function of ${\textit {Re}}$; (c) $t_{o,{max}}$ multiplied by ${\textit {Re}}^{-1/3}$ and shown as a function of ${\textit {Re}}$.


Figure 2. The continuous line is the reproduction of the envelope shown in figure 1(a) for ${\textit {Re}}=5000$. The dash-dotted line is the gain $G(t)$ associated with the linear trajectory optimized for $t_o=t_{o,{max}}=35$, defined in (2.16). By definition, both curves collapse at $t=35$. The black dots correspond to $t=0,15,30,\ldots,105$, for which the corresponding vorticity structures are shown in figure 3.


Figure 3. Temporal evolution of the vorticity structure $\hat {{\omega }}(r,t)$ of the optimal linear response, shown at the specific times highlighted by the black dots in figure 2. Each panel shows only $[x,y]\in [-4,4]\times [-4,4]$. The plus sign denotes the origin, the dotted circle is the unit circle, and the dashed circle highlights the radius $r_q$ as defined in (4.9).


Figure 4. Temporal evolution of the vorticity moment $Q^{(2)}(t)$ corresponding to the linear response shown in figures 2 and 3; the black dots again denote the specific times where the vorticity field is shown in figure 3. (a) Phase $\phi (t)$ divided by $2{\rm \pi}$. (b) Amplitude $|Q^{(2)}(t)|$. The black dashed lines correspond to a pure Landau damping $Q^{(2)}(t) = \exp ({-\text {i} \omega _q t-\gamma t})$, where we use fitted values for $\omega _q$ and $\gamma$.


Figure 5. (a) Amplitude of the fully nonlinear fundamental perturbation $\hat {\boldsymbol {u}}_p^{(1)}$ as defined in (4.11), initialized with the linear optimal condition for $({\textit {Re}},m,t_o)=(5000,2,35)$. Larger $U_0$ values correspond to lighter colours. (b) The same data are divided by their corresponding $U_0$, yielding the nonlinear gains as defined in (4.12). For the largest considered $U_0 = 2.82 \times 10^{-2}$, black dots are located at $t=0,30,60,\ldots,210$, for which the corresponding vorticity structures are shown in figure 6.


Figure 6. Temporal evolution of the vorticity structure of the fully nonlinear perturbation for $U_0 = 2.82\times 10^{-2}$, shown at the specific times highlighted by the black dots in figure 5. Each panel shows only $[x,y]\in [-4,4]\times [-4,4]$, the plus sign denotes the origin, and the dashed line is the unit circle.


Figure 7. Same as in figure 6 but the reference vorticity field (4.2) has been added, so as to visualize the total, fully nonlinear, vorticity field; some isolines are also shown to better expose the elliptical deformation of the vortex core.


Figure 8. (a) Real part of the weakly nonlinear coefficient $\mu$ defined in (3.29) (black continuous line). It is further split as the sum of a contribution from the mean flow distortion (red dash-dotted line), plus a contribution from the second harmonic (blue dotted line). In the grey zone, the weakly nonlinear gain $G_{{w}}$ defined in (3.30) increases monotonically with $U_0$, since $\mu _r > 0$. In the white zone, it decreases monotonically. (b) The coefficient $\mu _r$ is compared with its reconstruction from DNS data (magenta dashed line) according to (5.2).


Figure 9. Weakly and fully nonlinear gains as defined in (3.30) and (4.12), respectively. Larger $U_0 \in [0.03,0.45,1.12,2.82]\times 10^{-2}$ correspond to lighter colours (direction of increasing $U_0$ is also indicated by the arrow). The continuous line stands for DNS data whereas the dash-dotted line is for the weakly nonlinear model. Temporal horizons are $t_o = t_{o,{max}} = [25,30,35,40]$ (times at which a black star is shown) for ${\textit {Re}}=[1.25,2.5,5,10]\times 10^3$, respectively.


Figure 10. Weakly and fully nonlinear gains for ${\textit {Re}}=5000$ and increasing amplitude of the initial condition $U_0\in [4.5,7.1,11.2,17.8]\times 10^{-3}$. Each panel corresponds to a different $U_0$. The grey box denotes the time interval (5.3) over which the weakly nonlinear gain is undefined. The latter is singular at the boundaries of this interval.


Figure 11. (a) Typical half-life time of the heart deformation defined in (5.6), as a function of the amplitude of the initial condition $U_0$. Larger ${\textit {Re}}=[1.25,2.5,5,10]\times 10^3$ correspond to lighter colours. A star highlights an inflection point, for which the corresponding $U_0$ is taken as the threshold amplitude for the subcritical bifurcation. Such thresholds $U_0$ are reported in (b) as a function of ${\textit {Re}}$ (also with a star symbol). The prediction from the weakly nonlinear model, $U_0^s$ defined in (5.4), is also shown. The thin continuous line is a power law fitted on the DNS data for the first three considered ${\textit {Re}}$, with ${\propto }Re^{-0.88}$, whereas the thin dashed line is fitted on the weakly nonlinear data with ${\propto }Re^{-0.66}$. The inset shows the same in log–log scale.


Figure 12. Same as in figure 11, although the inflection point is sought in $G_{{DNS}}(t=t_s)$, where $t_s$, defined in (5.4), is the first time for which the amplitude equation predicts a loss of solution. In (a), the inset shows the same data in linear scale. In (b), the thin continuous line is a power law fitted on the DNS data with ${\propto }Re^{-0.69}$, whereas the thin dashed line ${\propto }Re^{-0.66}$ is similar to figure 11.


Figure 13. (a) For ${\textit {Re}}=5000$, the evolution of the energy $\|\ast \|^2$ of the linear response $\hat {\boldsymbol {l}}$ (black dashed line), of the second harmonic $\hat {\boldsymbol {u}}_2^{(2)}$ (blue dashed-dotted line) and, mainly, of the mean flow distortion $\boldsymbol {u}_2^{(0)}$ (red continuous line). The red dots correspond to the specific times $t=30,40,\ldots,100$, for which the vorticity structure of $\boldsymbol {u}_2^{(0)}$ is reported in figure 14. Two vertical dotted lines are drawn at $t=35$ and $t=85$. (b) Slope of the vorticity at the radius $r_q$, i.e. $\partial _r \omega _2^{(0)}|_{r=r_q}$, as a function of time.


Figure 14. Temporal evolution of $\omega _2^{(0)}$, the vorticity structure of $\boldsymbol {u}_{2}^{(0)}$, the mean flow distortion induced by the Reynolds stress of the linear response shown at the specific times corresponding to the red dots in figure 13(a). Each panel shows only $[x,y]\in [-4,4]\times [-4,4]$. The plus sign denotes the origin, the dotted circle is the unit circle, and the dashed circle highlights the radius $r_q$ solving (4.9) with $\omega _q=0.42$.


Figure 15. Temporal evolution of the forcing $f_{2,\theta }^{(0)}$ (panels on the left) and of the velocity response $u_{2,\theta }^{(0)}$ (panels on the right). The top and bottom rows correspond to two successive time intervals: the top row is for $t=0,5,\ldots,35$ (to the left of the first vertical line in figure 13a) and the bottom row for $t=35,40,\ldots,85$ (between the two vertical lines in figure 13a). Larger times correspond to lighter colours.


Figure 16. (a) Maximum growth rate of the eigenvalues of the Navier–Stokes operator at ${\textit {Re}}=5000$ and linearized around the (azimuthal) mean flow, in order to describe $m=2$ perturbations. Mean flows corresponding to $U_0=2.82\times 10^{-2}$ are obtained either from DNS data (continuous line linking red dots) or by evaluating the weakly nonlinear expansion (dash-dotted line linking blue diamonds). Positive growth rate values imply linear instability. (b) Eigenmode corresponding to the most unstable eigenvalue of the operator linearized around the DNS mean flow, at $t=20$. The radius of the dotted circle is equal to one, the radius of the dashed circle is equal to $r_q$, and the radii of the continuous-line circles highlight the extrema of the mean vorticity profile. The colour scale is arbitrary.


Figure 17. Mean (axisymmetric) vorticity profiles corresponding to $U_0=2.82\times 10^{-2}$, extracted from DNS data (red continuous) and reconstituted from the weakly nonlinear expansion as $W_z + a^2\omega _2^{(0)}$, where $a$ solves the amplitude equation (blue dashed-dotted line). The largest dot (respectively diamond) is located at the radius for which $-m\varOmega =\sigma _i$, where $\sigma _i$ is the imaginary part of the most unstable eigenvalue of the DNS (respectively weakly nonlinear) profile. This also corresponds to the critical radius of the most unstable mode. The respectively smaller dot or diamond (if it exists) stands for the second most unstable mode.