
Stability properties of multidimensional symmetric hyperbolic systems with damping, differential constraints and delay

Published online by Cambridge University Press:  07 September 2023

Gilbert Peralta*
Affiliation:
Department of Mathematics and Computer Science, University of the Philippines Baguio, Governor Pack Road, Baguio, 2600 Philippines, ([email protected])

Abstract

Multidimensional linear hyperbolic systems with constraints and delay are considered. The existence and uniqueness of solutions for rough data are established using the Friedrichs method. With additional regularity and compatibility of the initial data and initial history, the stability of such systems is discussed. Under suitable assumptions on the coefficient matrices, we establish decay estimates of standard or regularity-loss type. For data that are integrable, better decay rates are provided. The results are applied to the wave, Timoshenko, and linearized Euler–Maxwell systems with delay.

Type
Research Article
Copyright
Copyright © The Author(s), 2023. Published by Cambridge University Press on behalf of The Royal Society of Edinburgh

1. Introduction

The aim of this paper is to study linear symmetric hyperbolic systems with damping, differential constraints and delay. Differential constraints for the states occur naturally in certain models in fluid dynamics and electromagnetism. They appear in the system itself, for example in the Euler–Maxwell system, or they are introduced to factor out spurious solutions as in the case of the wave equation. In this work, we consider the multidimensional hyperbolic system

(1.1)\begin{equation} \begin{cases} \displaystyle A^0 \partial_t u(t,x) + \sum_{j = 1}^d A^j \partial_{x_j}u(t,x) + Lu(t,x) + Mu(t-\tau,x) = 0,\\ \displaystyle \sum_{j = 1}^d Q^j \partial_{x_j}u(t,x) + Ru(t,x) = 0, \ u(0,x) = u_0(x), \ u(\theta,x) = z_0(\theta,x), \end{cases} \end{equation}

for $t > 0$, $x \in \mathbb {R}^d$ and $\theta \in (-\tau,\,0)$ with unknown state $u : (0,\,\infty ) \times \mathbb {R}^d \to \mathbb {R}^n$. The positive integers $d$ and $n$ represent the dimension and the number of equations in the system. For this system, $u_0$ and $z_0$ correspond to the initial data and initial history, respectively. Our main concern is to develop a well-posedness theory, provide sufficient conditions that lead to the asymptotic stability of the solutions, and determine the decay structure. The positive constant $\tau$ represents a delay.

In (1.1), we assume that all of the coefficient matrices have real entries. The matrices $L$, $M$ and $A^j$ for $0 \leq j \leq d$ have size $n\times n$, while the matrices $R$ and $Q^j$ for $1 \leq j \leq d$ have size $n_1 \times n$, where $n_1$ represents the number of constraints. Here, $L$ and $M$ will be referred to as the damping (or relaxation) and delay matrices, respectively. The matrices $Q^j$ and $R$ are allowed to vanish, in which case we simply have a hyperbolic system with delay. All throughout, we suppose that (1.1) is symmetric, that is, $A^j$ is symmetric for all $0\leq j \leq d$. Moreover, we assume that $A^0$ is positive definite.

The physical models we often encounter deal with the case where the state is independent of the past. In some situations, however, this is only an approximation, and a more realistic setting includes the dependence of the dynamics on past states. For this reason, one can incorporate delay in the system and study its effect. The study of delay in partial differential equations attracted attention in control theory, specifically in the boundary feedback stabilization of the one-dimensional wave equation. It has been shown in [Reference Datko5, Reference Datko, Lagnese and Polis6] that the presence of delay in the boundary feedback for the string equation can lead to instability. These works have been extended to the multidimensional setting in [Reference Nicaise and Pignotti15]. Roughly, if the damping dominates the delay factor, then the energy of the solutions of the wave equation tends to zero exponentially. The delay in the damping occurs either in the interior or on the boundary. In the event that the damping and delay factors are equal, there are solutions whose energy is conserved. The proofs rely on semigroup and energy methods, observability estimates and a compactness-uniqueness argument.

The main goal of this paper is to determine sufficient conditions on the damping and delay matrices in (1.1) in order for its solution to be stable for every delay $\tau > 0$. Our structural condition, see condition (M) below, is similar to the one stated in [Reference Hale8] for systems of differential equations with delay.

By introducing a state variable that keeps track of the history, system (1.1) will be expressed as a hyperbolic system coupled to a transport system with parameter. For partial differential equations with delay on a bounded domain, for example, the wave, heat and Schrödinger equations, the existence and uniqueness of solutions can be obtained through semigroup methods, Kato's theorem for evolution equations and Faedo–Galerkin approximations, see [Reference Fridman, Nicaise and Valein7, Reference Kirane and Said-Houari12, Reference Nicaise and Pignotti15Reference Nicaise, Valein and Fridman19] to name a few. The approach we shall pursue here is based on the Friedrichs method. The basic idea is to derive a priori estimates for suitably smooth functions and apply a duality argument. Weak solutions for rough data are formulated through a variational equation. The corresponding results rely on the well-posedness theory for hyperbolic systems as well as for a decoupled system of transport equations with parameter. For completeness and clarity, we present the results of the latter.

For data that are smooth and compatible, we expect better regularity for the solutions. This will be proved by a standard approximation argument and the a priori estimates for hyperbolic operators in Sobolev spaces. We would like to note that an advantage of the Friedrichs method is its applicability even in the case of variable coefficients, see for instance [Reference Benzoni-Gavage and Serre3]. Another reason for using this method is the following: for hyperbolic partial differential equations, there is a trade-off in regularity between time and space. The higher the regularity with respect to time, the fewer spatial derivatives are available. For the delay variable, which satisfies a hyperbolic partial differential equation with parameter, the trade-off now involves three quantities, namely time, space and the history variable.

The plan of the paper is as follows. In § 2, we present the suitable conditions for the matrices involved in (1.1) that guarantee stability. The well-posedness of transport equations with parameter and hyperbolic systems with delay will be developed in § 3 and 4, respectively. In § 5, 6 and 8, we establish the asymptotic stability, standard decay estimates and regularity-loss type estimates, respectively. Specific examples that illustrate our results are provided in § 7 and 8. These are the wave, Timoshenko, and Euler–Maxwell systems with delay.

The Sobolev space $W^{k,p}(\mathbb {R}^d)$ will be simply denoted by $W^{k,p}$ and $H^k := W^{k,2}$. We let $H^\infty := \bigcap _{k = 0}^\infty H^k$. If $X$ is a Banach space and $m$ is a nonnegative integer, then $C^m(0,\,T;X)$ is the space of functions from $[0,\,T]$ into $X$ whose derivatives up to order $m$ are continuous. We shall also use the shorthand $W_\theta ^{k,p}(W^{j,q}) := W^{k,p}(-\tau,\,0;W^{j,q}(\mathbb {R}^d))$. For example, $L^2_\theta (H^k) := L^2(-\tau,\,0;H^k(\mathbb {R}^d))$. Depending on the context, $\langle \cdot,\, \cdot \rangle$ denotes the inner product in $\mathbb {C}^n$ or $\mathbb {R}^n$. The gradient of a function $u : \mathbb {R}^d \to \mathbb {R}^n$ is denoted by $\partial _x u := (\partial _{x_1}u,\, \ldots,\, \partial _{x_d}u)^T$ where the superscript $^T$ represents transposition.

2. Structural conditions on the coefficient matrices

In this section, we list the structural assumptions on the coefficient matrices that will guarantee the stability of the solutions of system (1.1). We follow the presentation in [Reference Ueda, Duan and Kawashima29]. The principal symbol of (1.1) is given by $iA(\xi )$, where $A(\xi ) := A^1\xi _1 + \cdots + A^d \xi _d$ for $\xi := (\xi _1,\, \ldots,\, \xi _d)^T \in \mathbb {R}^d$. Similarly, we define by $iQ(\xi ) := i(Q^1\xi _1 + \cdots + Q^d\xi _d)$ the principal symbol of the constraint. The unit sphere in $\mathbb {R}^d$ is denoted by $\mathbb {S}^{d-1}$. Given a square real matrix $A$, the symmetric and skew-symmetric parts of $A$ are given by $A_1 := (A+A^T)/2$ and $A_2 := (A-A^T)/2$, respectively, so that $A = A_1 + A_2$. The orthogonal projection of $\mathbb {C}^n$ onto the orthogonal complement of the kernel of $A$ will be denoted by $P_A$. Equivalently, $P_A$ is the orthogonal projection onto the range of $A^T$, and as a consequence, $I-P_A$ is the orthogonal projection onto the kernel of $A$. Recall that $P_A$ and $I-P_A$ are symmetric matrices.
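As an illustration (not part of the paper), the decomposition $A = A_1 + A_2$ and the projection $P_A$ can be checked numerically. The matrix `A` below is an arbitrary example and `proj_ker_perp` is a hypothetical helper built from the SVD.

```python
import numpy as np

def sym_skew(A):
    """Return the symmetric part A_1 = (A + A^T)/2 and skew part A_2 = (A - A^T)/2."""
    return (A + A.T) / 2, (A - A.T) / 2

def proj_ker_perp(A, tol=1e-12):
    """Orthogonal projection P_A onto Ker(A)^perp = range(A^T), via the SVD."""
    _, s, Vt = np.linalg.svd(A)
    r = int(np.sum(s > tol))          # numerical rank
    V = Vt[:r].T                      # orthonormal basis of the row space of A
    return V @ V.T

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 0.0, 0.0],
              [3.0, 1.0, 0.0]])       # illustrative: rank 2, Ker(A) = span(e_3)
A1, A2 = sym_skew(A)
P = proj_ker_perp(A)
I = np.eye(3)

assert np.allclose(A, A1 + A2)                        # A = A_1 + A_2
assert np.allclose(P, P.T) and np.allclose(P @ P, P)  # P_A is symmetric idempotent
assert np.allclose(A @ (I - P), 0)                    # I - P_A projects onto Ker(A)
```

The last assertion reflects the statement in the text that $I - P_A$ is the orthogonal projection onto the kernel of $A$.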

For the damping or relaxation matrix $L$, we impose the following condition.

(L) The matrix $L$ is nonnegative and has a nontrivial kernel.

It is not assumed that the relaxation matrix $L$ is symmetric. Thus, condition (L) provides dissipation only in the orthogonal complement of the kernel of $L_1$. To obtain dissipation terms in the space $\text {Ker}(L)^\perp$, we introduce the compensating matrix $S$ as in [Reference Ueda, Duan and Kawashima29].

(S) There exists a real $n \times n$ matrix $S$ such that $SA^0$ is symmetric, $(SL+L)_1 \geq 0$, $\text {Ker}((SL+L)_1) = \text {Ker}(L)$, and

    (2.1)\begin{equation} \langle SMz,u\rangle = 0 \quad \text{for all }(z,u) \in \mathbb{C}^n \times \text{Ker}(L). \end{equation}

Equation (2.1) means that the range of $SM$ and the kernel of $L$ are orthogonal. With respect to the delay matrix $M$, we assume the following condition.

(M) There exist real $n\times n$ symmetric matrices $G$ and $N$ such that $GA^0$ is symmetric positive definite, $N$ is positive definite on $\text {Ker}(M)^\perp$, $\text {Ker}((GL)_1) = \text {Ker}(L_1)$,

    \[ \langle GMz,u\rangle = 0 \quad \text{for all }(z,u) \in \mathbb{C}^n \times \text{Ker}(L_1), \]
    and the symmetric block matrix
    \[ \Psi_{G,N,M} := \left( \begin{array}{@{}cc@{}} 2(GL)_1 - P_MNP_M & GM \\ M^TG & N \end{array} \right) \]
    is positive definite on $\text {Ker}(L_1)^\perp \times \text {Ker}(M)^\perp$.

This condition is similar to the one presented in [Reference Hale8, p. 107]. Due to the possible degeneracy of the matrices $L$ and $M$, positivity is only assumed on the orthogonal complements of their kernels. If $M$ vanishes, that is, when there is no delay, one can see that condition (M) follows from condition (L) by taking $G = I$ and $N = L_1$. Also, condition (M) implies that $\text {Ker}(L_1) \subset \text {Ker}(M)$. Indeed, suppose that $u \in \text {Ker}(L_1)$. Then, for some constant $c_N > 0$ it holds that $c_N|P_Mu|^2 \leq \langle NP_Mu,\,P_Mu\rangle \leq 2\langle (GL)_1u,\,u\rangle = 0$ because the kernels of $L_1$ and $(GL)_1$ coincide. Thus, $P_M u = 0$, which implies that $u\in \text {Ker}(M)$.
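To make condition (M) concrete, here is a minimal numerical sketch with a two-dimensional toy system; all matrices below are illustrative choices, not taken from the paper. With $L = \mathrm{diag}(1,0)$, $M = \mathrm{diag}(m,0)$, $G = I$ and $N = \mathrm{diag}(1,0)$, the orthogonality requirement on $GM$ holds trivially and the block matrix restricted to $\text{Ker}(L_1)^\perp \times \text{Ker}(M)^\perp$ is $\begin{pmatrix}1 & m\\ m & 1\end{pmatrix}$, which is positive definite precisely when $|m| < 1$.

```python
import numpy as np

n = 2
m = 0.5                       # delay strength; here condition (M) holds iff |m| < 1
L  = np.diag([1.0, 0.0])      # damping matrix, L_1 = L, Ker(L_1) = span(e_2)
M  = np.diag([m, 0.0])        # delay matrix,   Ker(M)  = span(e_2)
G  = np.eye(n)
N  = np.diag([1.0, 0.0])      # positive definite on Ker(M)^perp = span(e_1)
PM = np.diag([1.0, 0.0])      # orthogonal projection onto Ker(M)^perp

GL1 = (G @ L + (G @ L).T) / 2
Psi = np.block([[2 * GL1 - PM @ N @ PM, G @ M],
                [M.T @ G,               N    ]])

# restrict Psi to Ker(L_1)^perp x Ker(M)^perp = span(e_1) (+) span(e_1)
B = np.zeros((2 * n, 2))
B[0, 0] = 1.0                 # e_1 in the u-block
B[n, 1] = 1.0                 # e_1 in the z-block
eigs = np.linalg.eigvalsh(B.T @ Psi @ B)

assert eigs.min() > 0                   # condition (M) holds for this toy example
assert abs(eigs.min() - (1 - m)) < 1e-9 # restricted matrix [[1, m], [m, 1]]
```

The same script with $m = 1.2$ produces a negative eigenvalue, consistent with the positivity requirement failing when the delay dominates the damping.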

The constraint in (1.1) will be satisfied for all $t > 0$ as soon as the initial data satisfies it and if the matrices appearing in the constraint as well as those in the PDE satisfy certain conditions. For this, we consider the following assumption.

(Q) The matrices $Q(\omega )$ and $R$ satisfy

    \begin{align*} & Q(\omega)(A^0)^{{-}1}A(\omega) = R(A^0)^{{-}1}L = R(A^0)^{{-}1}M = 0,\\ & Q(\omega)(A^0)^{{-}1}L + R(A^0)^{{-}1}A(\omega) = Q(\omega)(A^0)^{{-}1}M = 0, \end{align*}
    for every $\omega \in \mathbb {S}^{d-1}$.

We denote by $\varPi _1$ the orthogonal projection of $\mathbb {C}^n$ onto the image of $R$, and hence $\varPi _2 := I - \varPi _1$ is the orthogonal projection onto the kernel of $R^T$. To derive energy estimates for the derivatives of the state components, we need the following condition, referred to as the Shizuta–Kawashima condition [Reference Shizuta and Kawashima27].

(K) There exist $n\times n$ real matrices $K^l$ for $1 \leq l \leq d$ such that $K^lA^0$ is skew-symmetric for all $l$ and

    \[ \sum_{j,l = 1}^d (K^l A^j)_1\omega_j\omega_l > 0 \quad\text{on } \text{Ker}(\varPi_2 Q(\omega))\cap \text{Ker}(L) \]
    for every $\omega := (\omega _1,\,\ldots,\,\omega _d) \in \mathbb {S}^{d-1}$.

Conditions (S) and (K) imply the existence of a constant $\vartheta > 0$ such that

(2.2)\begin{equation} \sum_{j,l = 1}^d (K^l A^j)_1\omega_j\omega_l + \vartheta(SL+L)_1 > 0\quad \text{on } \text{Ker}(\varPi_2 Q(\omega)) \end{equation}

for every $\omega \in \mathbb {S}^{d-1}$.

Our final set of assumptions deals with conditions that determine the decay structure of (1.1). For a standard decay, the following assumption is sufficient.

(S)$_s$ There exists a real $n_1\times n_1$ matrix $W$ with $W_1 \geq 0$ on the image of $R$ and

    \[ i(SA(\omega) - Q(\omega)^T\varPi_1WR)_2 \geq 0 \quad \text{on } \mathbb{C}^n \]
    for every $\omega \in \mathbb {S}^{d-1}$, where $S$ is the matrix in condition (S).

A weaker version of the previous condition is the following, whose corresponding decay will be of regularity-loss type. This means that we need additional regularity for the initial data to obtain stability of solutions.

(S)$_r$ There is a real $n_1\times n_1$ matrix $W$ such that $W_1 \geq 0$ on the image of $R$ and

    \[ i(SA(\omega) - Q(\omega)^T\varPi_1 WR)_2 \geq 0 \qquad \text{on } \text{Ker}(L_1) \]
    for every $\omega \in \mathbb {S}^{d-1}$, where $S$ is the matrix in condition (S).

Both conditions (S)$_s$ and (S)$_r$ were introduced in [Reference Ueda, Duan and Kawashima29]. The rest of the section is devoted to studying condition (M), and specifically the block matrix $\Psi _{G,N,M}$. The first observation is that the positivity of $\Psi _{G,N,M}$ is equivalent to positivity with respect to $\text {Ker}(L_1)^\perp$ alone, possibly with a different matrix $N$.

Theorem 2.1 Let $G$ be a real symmetric matrix as in condition (M). Then, there is an $n\times n$ symmetric matrix $N$ that is positive definite on $\textit {Ker}(M)^\perp$ such that

(2.3)\begin{equation} \langle \Psi_{G,N,M}(u,z), (u,z)\rangle \geq \alpha(|P_{L_1}u|^2 + |P_Mz|^2) \end{equation}

for some $\alpha > 0$ and for every $(u,\,z) \in \mathbb {C}^n \times \mathbb {C}^n$ if and only if there is an $n\times n$ symmetric matrix $\widetilde {N}$ which is positive definite on $\textit {Ker}(M)^\perp$ such that

(2.4)\begin{equation} \langle \Psi_{G,\widetilde{N},M}(u,z), (u,z)\rangle \geq \widetilde{\alpha}|P_{L_1}u|^2 \end{equation}

for some $\widetilde {\alpha } > 0$ and for every $(u,\,z) \in \mathbb {C}^n\times \mathbb {C}^n$.

Proof. One can see that (2.3) implies (2.4) by taking $\widetilde {N} = N$. For the other direction, let $N = \widetilde {N} + \varepsilon P_M$ where $\varepsilon > 0$. The block matrices associated with $N$ and $\widetilde {N}$ are related by

\[ \Psi_{G,N,M} = \Psi_{G,\widetilde{N},M} + \left( \begin{array}{@{}cc@{}} -\varepsilon P_M & 0 \\ 0 & \varepsilon P_M \end{array}\right). \]

As before, (2.4) implies that the kernel of $L_1$ lies in the kernel of $M$, and as a consequence $\langle P_Mu,\,u\rangle = \langle P_MP_{L_1}u,\,P_{L_1}u\rangle$. Choose $\varepsilon < \widetilde {\alpha } / \|P_M\|$, where $\|\cdot \|$ is the operator norm. Then $\langle \Psi _{G,N,M}(u,z),\,(u,z)\rangle \geq \widetilde {\alpha }|P_{L_1}u|^2 - \varepsilon \langle P_MP_{L_1}u,\,P_{L_1}u \rangle + \varepsilon |P_Mz|^2 \geq (\widetilde {\alpha } - \varepsilon \|P_M\|)|P_{L_1} u|^2 + \varepsilon |P_Mz|^2$. Hence, (2.4) implies (2.3) with $\alpha = \min (\widetilde {\alpha } - \varepsilon \|P_M\|,\, \varepsilon )$.
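The $\varepsilon$-shift in this proof can be sanity-checked numerically; the matrices below are illustrative toy choices (not from the paper), with $\widetilde{N}$ satisfying the weaker bound (2.4).

```python
import numpy as np

m, eps = 0.5, 0.3
L  = np.diag([1.0, 0.0]); G = np.eye(2)
M  = np.diag([m, 0.0])
Nt = np.diag([1.0, 0.0])               # N-tilde: gives the weaker bound (2.4)
PM = np.diag([1.0, 0.0])               # projection onto Ker(M)^perp

def psi(N):
    GL1 = (G @ L + (G @ L).T) / 2
    return np.block([[2 * GL1 - PM @ N @ PM, G @ M],
                     [M.T @ G,               N    ]])

# block identity used in the proof: Psi_N = Psi_Ntilde + diag(-eps*P_M, eps*P_M)
shift = np.block([[-eps * PM, np.zeros((2, 2))],
                  [np.zeros((2, 2)), eps * PM]])
assert np.allclose(psi(Nt + eps * PM), psi(Nt) + shift)

# the shifted matrix satisfies the stronger bound (2.3): restricted form is pd
B = np.zeros((4, 2)); B[0, 0] = 1.0; B[2, 1] = 1.0
assert np.linalg.eigvalsh(B.T @ psi(Nt + eps * PM) @ B).min() > 0
```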

If the delay matrix is symmetric and nonnegative, then a sufficient condition for the positivity of the block matrix in condition (M) is given by the following theorem.

Theorem 2.2 Suppose that $M \geq 0$ is symmetric and $L_1 - M > 0$ on $\textit {Ker}(L_1)^\perp$. Then, $\Psi _{I,M,M} > 0$ on $\textit {Ker}(L_1)^\perp \times \textit {Ker}(M)^\perp$.

Proof. Given $(u,\,z)\in \mathbb {C}^n \times \mathbb {C}^n$ we have

(2.5)\begin{align} \langle \Psi_{I,M,M}(u,z),(u,z)\rangle = 2\langle L_1u,u\rangle - \langle MP_Mu,P_Mu\rangle + 2\text{Re}\langle Mu,z\rangle + \langle Mz,z\rangle. \end{align}

By the symmetry of the delay matrix $M$, we obtain $\langle Mz,\,z\rangle = \langle MP_Mz,\,P_Mz\rangle$ and $\langle Mu,\,z\rangle = \langle MP_Mu,\,P_Mz\rangle$, and therefore, by the Cauchy–Schwarz inequality we have

(2.6)\begin{equation} |\langle Mu,z\rangle| \leq \frac{1}{2}\langle MP_Mu,P_Mu\rangle + \frac{1}{2}\langle MP_Mz,P_Mz\rangle. \end{equation}

Using (2.6) in (2.5) yields the estimate

\[ \langle \Psi_{I,M,M}(u,z),(u,z)\rangle \geq 2\langle (L_1- P_MMP_M)u,u\rangle = 2\langle (L_1-M)u,u\rangle \geq \widetilde{\alpha}|P_{L_1}u|^2, \]

for some $\widetilde {\alpha } > 0$. The conclusion now follows from theorem 2.1.
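Theorem 2.2 can be tested numerically on randomly generated examples. The construction below is an illustrative special case (not from the paper): $M$ and $L_1$ are simultaneously diagonalizable with $\text{Ker}(M) = \text{Ker}(L_1)$, and $L_1 - M$ is positive definite on $\text{Ker}(L_1)^\perp$ by design.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_example(n=4, k=1):
    """Random symmetric M >= 0 and L_1 >= 0 sharing a k-dimensional kernel,
    with L_1 - M > 0 on Ker(L_1)^perp, in a randomly rotated basis."""
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    d_m = rng.uniform(1.0, 2.0, n - k)             # M-eigenvalues on Ker(L_1)^perp
    gap = rng.uniform(0.5, 1.0, n - k)             # eigenvalue gap of L_1 - M
    M  = Q @ np.diag(np.concatenate([d_m, np.zeros(k)])) @ Q.T
    L1 = Q @ np.diag(np.concatenate([d_m + gap, np.zeros(k)])) @ Q.T
    return L1, M, Q[:, :n - k]                     # last: basis of Ker(L_1)^perp

for _ in range(20):
    L1, M, V = random_example()
    PM = V @ V.T                                   # here Ker(M) = Ker(L_1)
    Psi = np.block([[2 * L1 - PM @ M @ PM, M],
                    [M,                    M]])
    B = np.block([[V, np.zeros_like(V)], [np.zeros_like(V), V]])
    # Psi_{I,M,M} restricted to Ker(L_1)^perp x Ker(M)^perp is positive definite
    assert np.linalg.eigvalsh(B.T @ Psi @ B).min() > 1e-10
```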

We close this section by proving the invariance of the condition (M) with respect to a class of orthogonal matrices.

Theorem 2.3 Let $J$ be a real orthogonal $n\times n$ matrix, that is, $J^TJ = I$, such that

  (i) $J(\textit {Ker}(M)) = \textit {Ker}(M)$ and $J(\textit {Ker}(M)^\perp ) = \textit {Ker}(M)^\perp$;

  (ii) $\langle NP_Mu,\,P_Mu\rangle = \langle NP_MJu,\,P_MJu\rangle$ for all $u\in \mathbb {C}^n$.

If $M$ satisfies condition (M), then so does $MJ$. In particular, $-M$ satisfies condition (M) if and only if $M$ satisfies the condition.

Proof. Property (i) implies that the kernels of $MJ$ and $M$ coincide, and in particular, $\text {Ker}(MJ)^\perp = \text {Ker}(M)^\perp$ and $P_{MJ} = P_M$. Given $u \in \mathbb {C}^n$ there holds

\[ P_MJu = P_MJP_Mu + P_MJ(I-P_M)u = JP_Mu \]

since $JP_Mu \in \text {Ker}(M)^\perp$ and $J(I-P_M)u \in \text {Ker}(M)$. Hence, $P_M$ and $J$ commute, and consequently, $P_M$ and $J^T$ also commute by symmetry of $P_M$. If $(u,\,z)\in \mathbb {C}^n \times \mathbb {C}^n$, then we derive from (ii) and the preceding statement that

\begin{align*} & \langle \Psi_{G,N,M}(u,Jz),(u,Jz)\rangle\\ & \quad= 2\langle L_1u,u\rangle - \langle NP_Mu,P_Mu\rangle + 2\text{Re}\langle GMJz,u\rangle + \langle NJz,Jz\rangle\\ & \quad= 2\langle L_1u,u\rangle - \langle J^TNJP_{MJ}u,P_{MJ}u\rangle + 2\text{Re}\langle GMJz,u\rangle + \langle J^TNJz,z\rangle \\ & \quad= \langle \Psi_{G,J^TNJ,MJ}(u,z),(u,z)\rangle. \end{align*}

Using condition (M) and the fact that $P_M$ and $J$ commute, we can see from these equations that $\Psi _{G,J^TNJ,MJ} > 0$ on $\text {Ker}(L_1)^\perp \times \text {Ker}(M)^\perp$. Finally, since $J$ is bijective, it follows that $\langle GMJz,\, u \rangle = 0$ for every $z \in \mathbb {C}^n$ and $u \in \text {Ker}(L_1)$. This proves that $MJ$ satisfies condition (M).

Notice that $M$ satisfies (2.1) if and only if $MJ$ satisfies $\langle SMJz,\,u\rangle = 0$ for every $z \in \mathbb {C}^n$ and $u \in \text {Ker}(L)$. Also, the conditions involving the matrix $M$ in hypothesis (Q) hold if and only if those conditions are satisfied by $MJ$. The previous theorem together with the above remark will imply the stability of (1.1), with $M$ replaced by $MJ$, where $J$ is an orthogonal matrix satisfying (i) and (ii), provided that the original system with delay matrix $M$ is also stable.
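Theorem 2.3 can be illustrated with $J = -I$, which satisfies (i) and (ii) trivially; the remaining matrices below are illustrative toy choices, not from the paper.

```python
import numpy as np

m = 0.5
L  = np.diag([1.0, 0.0]); G = np.eye(2)
M  = np.diag([m, 0.0]);   N = np.diag([1.0, 0.0])
PM = np.diag([1.0, 0.0])
J  = -np.eye(2)                        # orthogonal, satisfies (i) and (ii)

def restricted_psi(Mdel):
    """Psi_{G,N,Mdel} restricted to Ker(L_1)^perp x Ker(M)^perp."""
    GL1 = (G @ L + (G @ L).T) / 2
    Psi = np.block([[2 * GL1 - PM @ N @ PM, G @ Mdel],
                    [Mdel.T @ G,            N       ]])
    B = np.zeros((4, 2)); B[0, 0] = 1.0; B[2, 1] = 1.0
    return B.T @ Psi @ B

# condition (M) holds for M and remains valid for MJ = -M
assert np.linalg.eigvalsh(restricted_psi(M)).min() > 0
assert np.linalg.eigvalsh(restricted_psi(M @ J)).min() > 0
```

Here $MJ = -M$ flips the sign of the off-diagonal blocks, which leaves the eigenvalues of the restricted $2\times 2$ matrix unchanged, in line with the invariance statement.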

3. Transport equations with parameter

The goal of the present section is to discuss the well-posedness of the following transport equation with parameter

(3.1)\begin{align} \begin{cases} \partial_tz(t,\theta,x) - \partial_\theta z(t,\theta,x) + a z(t,\theta,x) = 0 & \text{ for }(t,\theta,x) \in (0,T)\times (-\tau,0)\times \mathbb{R}^d,\\ z(t,0,x) = v(t,x) & \text{ for } (t,x) \in (0,T)\times \mathbb{R}^d,\\ z(0,\theta,x) = z_0(\theta,x) & \text{ for }(\theta,x) \in (-\tau,0)\times \mathbb{R}^d, \end{cases} \end{align}

that will be useful in the study of system (1.1). Here, $a$ is a fixed real number and $z : (0,\,T) \times (-\tau,\,0) \times \mathbb {R}^d \to \mathbb {R}^n$ is the unknown state. Such an equation arises once we introduce a state component that keeps track of the history. We would like to point out that the results in this section are analogous to those for the usual transport equation. However, for clarity in the development of the well-posedness theory for (1.1) and for future reference, we include them here. Define the differential operator

(3.2)\begin{equation} \mathscr{L}_1z := \partial_t z - \partial_\theta z + a z, \end{equation}

whose formal adjoint is given by $\mathscr {L}_1^*z := -\partial _t z + \partial _\theta z + a z$.

First, we start with the definition of a weak solution for given square integrable data $v \in L^2(0,\,T;L^2)$ and $z_0 \in L^2_\theta (L^2)$. A function $z \in L^2((0,\,T)\times (-\tau,\,0)\times \mathbb {R}^d)$ is called a weak solution of (3.1) if the variational equation

(3.3)\begin{equation} \int_0^T \!\! \int_{-\tau}^0 (z, \mathscr{L}_1^* \psi)_{L^2} {\rm d} \theta {\rm d} t = \int_{-\tau}^0(z_0,\psi_{|t=0})_{L^2}{\rm d} \theta + \int_0^T ( v,\psi_{|\theta = 0})_{L^2} {\rm d} t \end{equation}

holds for every $\psi \in L^2(\mathbb {R}^d;H^1((0,\,T)\times (-\tau,\,0)))$ such that $\psi _{|t= T} = 0$ and $\psi _{|\theta = -\tau } = 0$.

It is clear that every classical solution is also a weak solution. The existence of weak solutions will be obtained using the following result in [Reference Peralta and Propst22] inspired by the Friedrichs work [Reference Fridman, Nicaise and Valein7].

Theorem 3.1 Let $X$ and $Z$ be Hilbert spaces, $Y$ be a subspace of $X$, and $\Lambda : Y \to X$, $\Psi : Y \to Z$, $\Phi : Y \to Z$ be linear operators. Suppose that $W = \text {Ker}(\Phi )$ and $\Lambda (W)$ are nontrivial. If there exist $\gamma > 0$ and $C > 0$ such that

(3.4)\begin{equation} \gamma \|w\|_X^2 + \|\Psi w\|_Z^2 \leq C(\gamma^{{-}1}\|\Lambda w\|_X^2 + \|\Phi w\|_Z^2), \quad \text{ for all } w \in Y, \end{equation}

then the variational equation

(3.5)\begin{equation} (u,\Lambda w)_X = (F,w)_X + (G,\Psi w)_Z, \quad \text{ for all } w \in W, \end{equation}

for a given $(F,\,G) \in X \times Z$ has a solution $u \in X$. In addition, the solution is unique if and only if $\Lambda (W)$ is dense in $X$.

Applying the above result requires some a priori estimates. First, let us derive the estimate associated with $\mathscr {L}_1$. For a smooth function $\psi$, we take the inner product of $\mathscr {L}_1\psi$ with $e^{-2\gamma t}\psi$, where $\gamma \geq 1$ is a constant to be chosen below, to obtain

\[ \frac{1}{2} \partial_t (e^{{-}2\gamma t}|\psi|^2) - \frac{1}{2}\partial_\theta (e^{{-}2\gamma t}|\psi|^2) + (\gamma + a)e^{{-}2\gamma t}|\psi|^2 = e^{{-}2\gamma t}\langle \mathscr{L}_1\psi, \psi \rangle. \]

Integrating this equation over $(0,\,\sigma )\times (-\tau,\,0) \times \mathbb {R}^d$, applying Young's inequality to the right-hand side, and then choosing $\gamma _0 \geq 1$ sufficiently large, we have

(3.6)\begin{align} & e^{{-}2\gamma \sigma}\|\psi_{|t=\sigma}\|_{L^2_\theta(L^2)}^2 + \gamma \int_0^\sigma e^{{-}2\gamma t}\|\psi\|_{L^2_\theta(L^2)}^2 {\rm d} t + \int_0^\sigma e^{{-}2\gamma t}\|\psi_{|\theta ={-}\tau}\|^2_{L^2}{\rm d} t\nonumber\\ & \quad\leq C \left( \|\psi_{|t=0}\|_{L^2_\theta(L^2)}^2 + \frac{1}{\gamma} \int_0^\sigma e^{{-}2\gamma t}\|\mathscr{L}_1\psi\|_{L^2_\theta(L^2)}^2 {\rm d} t + \int_0^\sigma e^{{-}2\gamma t}\|\psi_{|\theta = 0}\|^2_{L^2}{\rm d} t \right) \end{align}

for every $\sigma \in [0,\,T]$, for every $\gamma \geq \gamma _0$, and for some $C > 0$. By a density argument, (3.6) is satisfied for every $\psi \in L^2(\mathbb {R}^d; H^1((0,\,T) \times (-\tau,\,0)))$. The dual version of this estimate is the following: for every $\gamma \geq \gamma _0^*$ and $\sigma \in [0,\,T]$ it holds that

(3.7)\begin{align} & \|\psi_{|t=0}\|_{L^2_\theta(L^2)}^2 + \gamma \int_0^\sigma e^{2\gamma t}\|\psi\|_{L^2_\theta(L^2)}^2 {\rm d} t + \int_0^\sigma e^{2\gamma t}\|\psi_{|\theta = 0}\|^2_{L^2}{\rm d} t \nonumber\\ & \quad \leq C \left( e^{2\gamma \sigma}\|\psi_{|t=\sigma}\|_{L^2_\theta(L^2)}^2 + \frac{1}{\gamma} \int_0^\sigma e^{2\gamma t}\|\mathscr{L}_1^*\psi\|_{L^2_\theta(L^2)}^2 {\rm d} t + \int_0^\sigma e^{2\gamma t}\|\psi_{|\theta ={-}\tau}\|^2_{L^2}{\rm d} t \right) \end{align}

for some constants $C > 0$ and $\gamma _0^* \geq 1$, and for every $\psi \in L^2(\mathbb {R}^d; H^1((0,\,T) \times (-\tau,\,0)))$.

For data that will be regular and compatible, we have additional regularity of the weak solution. For this we need a priori estimates in terms of the Sobolev norms. Given $0 \leq j \leq m$, if we replace $\psi$ by $\partial _t^j\partial _\theta ^k\partial _x^\ell \psi$ in (3.6), take the sum over all $0\leq k\leq m-j$, $0\leq j\leq m$ and $0 \leq \ell \leq s$, and then finally take the supremum over all $\sigma \in [0,\,T]$, we obtain the weighted a priori estimate

(3.8)\begin{align} & \sum_{j=0}^m \sup_{0\leq t \leq T} e^{{-}2\gamma t}\|\partial_t^j\psi(t)\|_{H^{m-j}_\theta(H^s)}^2 + \gamma \sum_{j=0}^m\int_0^T e^{{-}2\gamma t}\|\partial_t^j\psi\|_{H^{m-j}_\theta(H^s)}^2 {\rm d} t \nonumber\\ & \quad+ \sum_{j=0}^m \int_0^T e^{{-}2\gamma t}\|\partial_t^j\psi_{|\theta ={-}\tau}\|^2_{H^s}{\rm d} t \leq \frac{C}{\gamma} \sum_{j=0}^m \int_0^T e^{{-}2\gamma t}\|\partial_t^j\mathscr{L}_1\psi\|_{H^{m-j}_\theta(H^s)}^2 {\rm d} t\nonumber\\ & \quad+ C \sum_{j=0}^m \int_0^T e^{{-}2\gamma t}\|\partial_t^j\psi_{|\theta = 0}\|^2_{H^s}{\rm d} t + C\sum_{j=0}^m \|\partial_t^j\psi_{|t=0}\|_{H^{m-j}_\theta(H^s)}^2 \end{align}

for every $\psi \in H^{m+1}((0,\,T)\times (-\tau,\,0);H^s).$

Theorem 3.2 Given $z_0 \in L^2_\theta (L^2)$ and $v \in L^2(0,\,T;L^2)$, equation (3.1) admits a unique weak solution.

Proof. Let $X = L^2((0,\,T)\times (-\tau,\,0)\times \mathbb {R}^d)$, $Y = L^2(\mathbb {R}^d; H^1((0,\,T)\times (-\tau,\,0)))$, and $Z = L^2(0,\,T;L^2) \times L_\theta ^2(L^2)$. Define the operators $\Lambda : Y \to X$, $\Psi : Y \to Z$, and $\Phi : Y \to Z$ as follows:

\[ \Lambda \psi := \mathscr{L}_1^* \psi, \qquad \Psi \psi := (\psi_{|\theta=0}, \psi_{|t=0}), \qquad \Phi\psi := (\psi_{|\theta={-}\tau}, \psi_{|t=T}) \]

for $\psi \in Y$. The variational equation (3.3) can now be written in the form (3.5). From the a priori estimate (3.7), one can see that (3.4) is satisfied, and therefore by theorem 3.1, (3.1) has a weak solution.

To establish uniqueness, we proceed by a duality argument. Suppose that $z_1$ and $z_2$ are two weak solutions and let $z := z_1 - z_2$. Then, it follows that

(3.9)\begin{equation} \int_0^T\!\! \int_{-\tau}^0 (z,\mathscr{L}_1^* \psi)_{L^2} {\rm d} \theta {\rm d} t = 0 \end{equation}

for every $\psi \in \text {Ker}(\Phi )$. Let $(\phi _n)_{n=1}^\infty$ be a sequence of infinitely differentiable functions with compact support in $(0,\,T)\times (-\tau,\,0)\times \mathbb {R}^d$ such that $\phi _n \to z$ in $L^2((0,\,T)\times (-\tau,\,0)\times \mathbb {R}^d)$. The backward-in-time transport equation

\[ - \partial_t\psi_{n} + \partial_\theta \psi_n + a \psi_n = \phi_n, \quad \psi_{n|t= T} = 0, \quad \psi_{n|\theta={-}\tau} = 0 \]

has a classical solution, so that $\psi _n \in Y$ for each $n$. Using this test function in (3.9) and then passing to the limit, we see that $z = 0$ almost everywhere. Therefore, the weak solution of (3.1) is unique.

To prove regularity of the solutions, the following observation will be useful. Given $z_0$, we define recursively $z_j := \partial _\theta z_{j-1} - a z_{j-1}$. We say that the data $(z_0,\,v)$ is compatible up to order $k-1$ if $\partial _t^j v_{|t=0} = z_{j|\theta = 0}$ for every $0 \leq j \leq k-1$.
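The recursion $z_j := \partial_\theta z_{j-1} - a z_{j-1}$ and the compatibility conditions can be sanity-checked on a smooth exact solution; the profile $\varphi = \sin$ and the value of $a$ below are illustrative choices, not from the paper.

```python
import numpy as np

a = 0.7
phi = np.sin                                 # any smooth profile (illustrative)

# A smooth exact solution of  z_t - z_theta + a z = 0  is
#   z(t, theta) = exp(-a t) * phi(t + theta),
# with trace v(t) = z(t, 0) and history z_0(theta) = z(0, theta) = phi(theta).
v  = lambda t: np.exp(-a * t) * phi(t)
z0 = phi

def dtheta(f, x, h=1e-5):
    """Central finite difference in theta."""
    return (f(x + h) - f(x - h)) / (2 * h)

# recursion z_j = z_{j-1}' - a z_{j-1}, evaluated by nested differences
z1 = lambda th: dtheta(z0, th) - a * z0(th)
z2 = lambda th: dtheta(z1, th) - a * z1(th)

def dt_j(f, t, j, h=1e-3):
    """j-th time derivative of the trace v by central differences."""
    if j == 0:
        return f(t)
    return (dt_j(f, t + h, j - 1, h) - dt_j(f, t - h, j - 1, h)) / (2 * h)

# compatibility relations  d_t^j v(0) = z_j(0)  for j = 0, 1, 2
assert abs(v(0.0) - z0(0.0)) < 1e-8
assert abs(dt_j(v, 0.0, 1) - z1(0.0)) < 1e-4
assert abs(dt_j(v, 0.0, 2) - z2(0.0)) < 1e-3
```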

Theorem 3.3 Let $k$ and $s$ be nonnegative integers and suppose that the pair $(z_0,\,v) \in H^k_\theta (H^s) \times H^k(0,\,T;H^s)$ is compatible up to order $k-1$ whenever $k \geq 1$. Then, there is a sequence $(z_{0n},\,v_n) \in H^{k+1}_\theta (H^{s+1}) \times H^{k+1}(0,\,T;H^{s+1})$ compatible up to order $k$ for every $n$ such that $(z_{0n},\,v_n) \to (z_0,\,v) \text { in } H^k_\theta (H^s) \times H^k(0,\,T;H^s).$

Proof. Let $\rho _\varepsilon$ be a standard mollifier with respect to $x$, that is, $\rho _\varepsilon (x) := \varepsilon ^{-d}\rho ({x}/{\varepsilon })$ where $\rho \in \mathscr {D}(\mathbb {R}^d)$ satisfies $\int _{\mathbb {R}^d} \rho (x) {\rm d} x = 1$, and let $R_\varepsilon v := \rho _\varepsilon \ast v$. Then, the regularized data $(R_\varepsilon z_0,\, R_\varepsilon v) \in H^k_\theta (H^\infty ) \times H^k(0,\,T;H^\infty )$ is still compatible up to order $k-1$ and

\[ (R_\varepsilon z_0, R_\varepsilon v) \to (z_0,v) \text{ in }H^k_\theta(H^s) \times H^k(0,T;H^s) \]

as $\varepsilon \to 0$. For a fixed $\varepsilon > 0$, take a sequence $(z_{0n}^\varepsilon,\, v_{n1}^\varepsilon ) \in H^{k+1}_\theta (H^\infty ) \times H^{k+1}(0,\,T;H^\infty )$ such that as $n\to \infty$ there holds

\[ (z_{0n}^\varepsilon,v_{n1}^\varepsilon) \to (R_\varepsilon z_0, R_\varepsilon v) \text{ in }H^{k}_\theta(H^s) \times H^{k}(0,T;H^s). \]

For example, we first extend $R_\varepsilon z_0$ to a function in $H^k(\mathbb {R};H^\infty )$ by a standard reflection argument, see [Reference Adams1] for instance, and if $\widetilde {R}_\delta$ is the corresponding convolution operator with respect to $\theta$, then we may take $z_{0n}^\varepsilon$ to be the restriction of $\widetilde {R}_{1/n} (R_\varepsilon z_0)$ to $(-\tau,\,0)$.

Define $v_n^\varepsilon := v_{n1}^\varepsilon - v_{n2}^\varepsilon$, where $v_{n2}^\varepsilon \in H^{k+1}(0,\,T;H^{s+1})$ is a function, constructed below, that satisfies $v_{n2}^\varepsilon \to 0$ in $H^{k+1}(0,\,T;H^{s+1})$. For each $0 \leq j\leq k$, define

\[ \sigma_{nj} := \partial_t^j v^\varepsilon_{n1|t=0} - z^\varepsilon_{0nj|\theta = 0} \in H^{k - j + s + \frac{3}{2}}. \]

From the compatibility conditions for the data $(R_\varepsilon z_0,\, R_\varepsilon v)$, we have $\sigma _{nj} \to 0$ in $H^{k - j + s + {3}/{2}}$ as $n\to \infty$ for every $0\leq j < k$. According to trace theory, for each $n$ there exists $h_n \in H^{k+s+2}((0,\,T)\times \mathbb {R}^d) \subset H^{k+1}(0,\,T;H^{s+1})$ such that $\partial _t^jh_{n|t=0} = \sigma _{nj}$ for all $0 \leq j < k$, $\partial _t^k h_{n|t=0} = 0$, and $h_n \to 0$ in $H^{k+1}(0,\,T;H^{s+1})$.

Let $v_{n2}^\varepsilon := h_n + g_n$, where $g_n := \widetilde {g}_n \otimes \sigma _{nk}$ and $\widetilde {g}_n \in H^{k+1}(0,\,T)$ satisfies

\[ \widetilde{g}_n^{(j)}(0) = 0, \text{ for } j=0,1,\ldots,k-1, \quad \widetilde{g}_n^{(k)}(0) = 1, \quad \|\widetilde{g}_n\|_{H^{k+1}(0,T)} \to 0. \]

For the construction of $\widetilde {g}_n$, we refer to [Reference Peralta and Propst22, Reference Rauch and Massey25]. If $j < k$, then

\[ \partial_t^j v^\varepsilon_{n|t=0} = \partial_t^j v^\varepsilon_{n1|t=0} - \partial_t^j h_{n|t=0} = z^\varepsilon_{0nj|\theta = 0}. \]

Also, $\partial _t^k v^\varepsilon _{n|t=0} = \partial _t^k v^\varepsilon _{n1|t=0} - \sigma _{nk} = z^\varepsilon _{0nk|\theta = 0}.$

We now construct the sequence $(z_{0n},\,v_n)$ as follows. Given a positive integer $n$, let $(z_{0n},\,v_n) := (z_{0N}^{1/n},\,v_N^{1/n})$, where $N = N(n)$ is sufficiently large such that

\[ \|(z_{0n},v_n) - (R_{1/n}z_0,R_{1/n}v)\|_{H^{k}_\theta(H^{s}) \times H^{k}(0,T;H^{s})} < \frac{1}{n}. \]

From the above construction, we can see that the pair $(z_{0n},\,v_n)$ satisfies the desired properties.

If the function $z_0$ in the previous theorem satisfies $z_0 \in L^2_\theta (L^1)$, then we have $R_\varepsilon z_0 \to z_0$ in $L_\theta ^2(L^1)$. Now, for a fixed $\varepsilon > 0$, it holds that $\widetilde {R}_{1/n}(R_\varepsilon z_0) \to R_\varepsilon z_0$ in $L^2(\mathbb {R};L^1)$, see for example [Reference Arendt, Batty, Hieber and Neubrander2, Section 1.3]. In particular, we have $z_{0n}^\varepsilon \to R_\varepsilon z_0$ in $L_\theta ^2(L^1)$ and by the same argument as above, we can choose $z_{0n}$ such that $z_{0n} \in L^2_\theta (L^1)$ for every $n$ and $z_{0n} \to z_0$ in $L_\theta ^2(L^1)$. With a diagonalization argument we obtain the following.

Corollary 3.4 Given $(z_0,\,v) \in L^2_\theta (L^2) \times L^2(0,\,T;L^2)$ and positive integers $k$ and $s$, there exists a sequence of data $(z_{0n},\,v_n) \in H^k_\theta (H^s) \times H^k(0,\,T;H^s)$ compatible up to order $k-1$ for each $n$ and

\[ (z_{0n},v_n) \to (z_0,v) \text{ in } L^2_\theta(L^2) \times L^2(0,T;L^2). \]

Moreover, if $z_0 \in L_\theta ^2(L^1)$, then $z_{0n}$ can be chosen to be an element of $L_\theta ^2(L^1)$ and $z_{0n} \to z_0$ in $L^2_\theta (L^1)$.

Let $\{t> -\theta \} := \{(t,\,\theta ) : t > -\theta \}$ and $\{t< -\theta \} := \{(t,\,\theta ) : t < -\theta \}$. Notice that the weak solution $z$ of (3.1), as well as $\mathscr {L}_1z$, lies in $L^2(\mathbb {R}^d;L^2((0,\,T)\times (-\tau,\,0)))$, and hence, a priori we have the trace regularity $z_{|\theta =-\tau },\, z_{|\theta = 0} \in L^2(\mathbb {R}^d;H^{-1/2}(0,\,T))$. We now show that both traces in fact belong to $L^2(0,\,T;L^2)$ and that the weak solution coincides with the one given by the method of characteristics. The former property is sometimes called hidden regularity in the control theory literature.

Theorem 3.5 The weak solution of system (3.1) satisfies $z \in C(0,\,T;L^2_\theta (L^2))$, $z_{|\theta = -\tau }$, $z_{|\theta = 0} \in L^2(0,\,T;L^2)$ and the following energy estimate

\begin{align*} & \sup_{0\leq t\leq T} e^{{-}2\gamma t}\|z(t)\|_{L^2_\theta(L^2)}^2 + \gamma \int_0^T e^{{-}2\gamma t}\|z\|_{L^2_\theta(L^2)}^2 {\rm d} t\\ & \quad + \int_0^T e^{{-}2\gamma t}(\|z_{|\theta={-}\tau}\|^2_{L^2} + \|z_{|\theta = 0}\|^2_{L^2}){\rm d} t \leq C \left( \|z_0\|_{L^2_\theta(L^2)}^2 + \int_0^T e^{{-}2\gamma t}\|v\|^2_{L^2}{\rm d} t \right) \end{align*}

holds for every $\gamma \geq \gamma _0$ and for some constants $C> 0$ and $\gamma _0 \geq 1$. The weak solution is given explicitly by

(3.10)\begin{equation} z(t,\theta,x) = \begin{cases} e^{a\theta}v(t+\theta,x) & \text{in } ( \{t>{-}\theta\} \cap (0,T)\times(-\tau,0)) \times \mathbb{R}^d,\\ e^{a\theta}z_0(t+\theta,x) & \text{in }(\{t<{-}\theta\} \cap (0,T)\times(-\tau,0))\times \mathbb{R}^d. \end{cases} \end{equation}

Proof. By choosing $k$ and $s$ sufficiently large in corollary 3.4, one can construct a sequence $(z_{0n},\,v_n)$ of continuously differentiable data that are compatible up to order 1 and tend to $(z_0,\,v)$ in $L^2_\theta (L^2) \times L^2(0,\,T;L^2)$; for example, one may take $k = 2$ and $s > {d}/{2} + 1$. It is easily verified that the transport equation (3.1) with boundary data $v_n$ and initial data $z_{0n}$ has the classical solution

(3.11)\begin{equation} z_n(t,\theta,x) = \begin{cases} e^{a\theta}v_n(t+\theta,x) & \text{in } ( \{t>{-}\theta\} \cap (0,T)\times(-\tau,0)) \times \mathbb{R}^d,\\ e^{a\theta}z_{0n}(t+\theta,x) & \text{in }(\{t<{-}\theta\} \cap (0,T)\times(-\tau,0))\times \mathbb{R}^d. \end{cases} \end{equation}

Applying the a priori estimate (3.6) to $z_n - z_m$, one can see that $(z_n)_n$ is a Cauchy sequence in $C(0,\,T;L_\theta ^2(L^2))$, while $(z_{n|\theta = -\tau })_n$ and $(z_{n|\theta = 0})_n$ are Cauchy sequences in $L^2(0,\,T;L^2)$. Passing to the weak formulation of the equation for $z_n$, we see that the limit in $C(0,\,T;L_\theta ^2(L^2))$ is a weak solution, and thus, by uniqueness, it must be the weak solution of (3.1). Note that the traces $z_{n|\theta = -\tau }$ and $z_{n|\theta = 0}$ tend to $z_{|\theta = -\tau }$ and $z_{|\theta = 0}$ in $L^2(\mathbb {R}^d;H^{-1/2}(0,\,T))$, respectively, and consequently in $L^2(0,\,T;L^2)$ by uniqueness of limits in the sense of distributions. Passing to the limit in (3.11), up to a subsequence, we see that the weak solution of (3.1) is given by (3.10). The energy estimate is obtained by passing to the limit in the a priori estimate (3.6) for $z_n$.
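As an independent sanity check (not part of the proof), formula (3.10) can be verified symbolically: writing the transport operator of § 3 as $\mathscr {L}_1 z = \partial _t z - \partial _\theta z + az$, the function $e^{a\theta }v(t+\theta,\,x)$ annihilates it for any smooth profile $v$. A minimal sketch using the sympy library:

```python
import sympy as sp

t, theta, a = sp.symbols('t theta a', real=True)
v = sp.Function('v')  # arbitrary smooth boundary datum

# Candidate solution on {t > -theta} from the method of characteristics
z = sp.exp(a * theta) * v(t + theta)

# Transport operator L1 z = dz/dt - dz/dtheta + a*z
residual = sp.diff(z, t) - sp.diff(z, theta) + a * z
print(sp.simplify(residual))  # 0
```

The same computation on $\{t < -\theta \}$ with $z_0$ in place of $v$ covers the second branch of (3.10).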

If $z$ is the weak solution of (3.1), then the differential equations are satisfied in the sense of distributions, the boundary and initial conditions are satisfied in $L^2$, and the variational equation

(3.12)\begin{align} \int_0^T \!\! \int_{-\tau}^0 (z, \mathscr{L}_1^* \psi)_{L^2} {\rm d} \theta {\rm d} t & = \int_{-\tau}^0 (z_0,\psi_{|t=0})_{L^2} {\rm d} \theta + \int_0^T (v,\psi_{|\theta = 0})_{L^2} {\rm d} t\nonumber\\ & \quad- \int_0^T ( z_{|\theta={-}\tau},\psi_{|\theta ={-}\tau})_{L^2} {\rm d} t \end{align}

holds for every $\psi \in L^2(\mathbb {R}^d;H^1((0,\,T)\times (-\tau,\,0)))$. Letting $n\to \infty$ in (3.11), it follows that $z_{|\theta = -\tau }$ is given by

\[ z_{|\theta ={-}\tau}(t,x) = \begin{cases} e^{{-}a\tau}v(t-\tau,x) & \text{ if } t > \tau,\\ e^{{-}a\tau}z_0(t-\tau,x) & \text{ if } 0 < t < \tau. \end{cases} \]

With additional regularity on the initial and boundary data, one obtains better regularity of the solutions. If $m$ is a nonnegative integer and the data $(z_0,\,v) \in H^m_\theta (H^s ) \times H^m(0,\,T;H^s)$ are compatible up to order $m-1$ (when $m\geq 1$), then the weak solution of (3.1) satisfies

(3.13)\begin{equation} z \in \bigcap_{j = 0}^m C^j(0,T;H^{m-j}_\theta(H^s)), \qquad z_{|\theta={-}\tau} \in H^m(0,T;H^s), \end{equation}

and we have also a corresponding energy estimate

(3.14)\begin{align} & \sum_{j=0}^m \sup_{0\leq t \leq T} e^{{-}2\gamma t}\|\partial_t^jz(t)\|_{H^{m-j}_\theta(H^s)}^2 + \gamma \sum_{j=0}^m\int_0^T e^{{-}2\gamma t}\|\partial_t^jz\|_{H^{m-j}_\theta(H^s)}^2 {\rm d} t \nonumber\\ & \quad+ \sum_{j=0}^m \int_0^T e^{{-}2\gamma t}\|\partial_t^jz_{|\theta={-}\tau}\|^2_{H^s}{\rm d} t \leq C \biggl(\|z_0\|^2_{H^m_\theta(H^s)} + \sum_{j=0}^m \int_0^T e^{{-}2\gamma t}\|\partial_t^jv\|^2_{H^s}{\rm d} t \biggr). \end{align}

The proofs rely on additional a priori estimates in Sobolev spaces, see (3.8). Conversely, if (3.1) with data $(z_0,\,v) \in H^m_\theta (H^s ) \times H^m(0,\,T;H^s)$ has a solution satisfying (3.13), then the data $(z_0,\,v)$ are compatible up to order $m-1$.

4. Well-posedness for hyperbolic systems with delay

We will recast system (1.1) as a hyperbolic system coupled with a transport system with parameter. In this section, no assumptions are imposed on the matrices $L$ and $M$ other than that they have real entries. Introducing the variable $z(t,\,\theta,\,x) = e^{\varepsilon \theta }P_Mu (t+\theta,\,x)$ for $(t,\,\theta,\,x) \in (0,\,\infty ) \times (-\tau,\,0)\times \mathbb {R}^d$, system (1.1) can be written as

(4.1)\begin{equation} \begin{cases} \displaystyle A^0 \partial_t u(t,x) + \sum_{j = 1}^d A^j \partial_{x_j}u(t,x) + Lu(t,x) + e^{\varepsilon\tau}Mz_\tau(t,x) = 0,\\ \partial_tz(t,\theta,x) - \partial_\theta z(t,\theta,x) + \varepsilon z(t,\theta,x) = 0,\quad z(t,0,x) = P_Mu(t,x),\\ \displaystyle \sum_{j = 1}^d Q^j \partial_{x_j}u(t,x) + Ru(t,x) = 0,\\ u(0,x) = u_0(x), \qquad z(0,\theta,x) = e^{\varepsilon\theta}P_M z_0(\theta,x), \end{cases} \end{equation}

for $t > 0$, $x \in \mathbb {R}^d$ and $\theta \in (-\tau,\,0)$. Here and in the succeeding sections, $z_\tau$ will denote the trace $z_{|\theta = -\tau }$.
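The delay term of (1.1) is recovered from the trace $z_\tau$ by a routine computation (using $MP_M = M$, which holds since $M$ vanishes on $\text {Ker}(M)$):

```latex
\partial_\theta z = \varepsilon z + e^{\varepsilon\theta} P_M (\partial_t u)(t+\theta,x)
                  = \varepsilon z + \partial_t z
\quad\Longrightarrow\quad
\partial_t z - \partial_\theta z + \varepsilon z = 0,
\qquad
e^{\varepsilon\tau} M z_\tau(t,x)
= e^{\varepsilon\tau} M e^{-\varepsilon\tau} P_M u(t-\tau,x)
= M u(t-\tau,x).
```

This explains the weight $e^{\varepsilon \tau }$ in front of $Mz_\tau$ in (4.1).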

Define the following differential operators

\[ \mathscr{L}_2 u := A^0 \partial_t u + \sum_{j = 1}^d A^j \partial_{x_j}u + Lu, \quad \mathscr{L}_3 u := \sum_{j = 1}^d Q^j \partial_{x_j}u + Ru \]

whose distributional adjoints are given respectively by

\[ \mathscr{L}_2^* u :={-} A^0 \partial_t u - \sum_{j = 1}^d A^j \partial_{x_j}u + L^T u, \quad \mathscr{L}_3^* u :={-} \sum_{j = 1}^d (Q^j)^T \partial_{x_j}u + R^T u. \]

We can then rewrite (4.1) as follows:

(4.2)\begin{align} \begin{cases} \mathscr{L}_2 u ={-}e^{\varepsilon\tau}Mz_{\tau}\\ \mathscr{L}_1 z = 0, \quad z_{|\theta=0} = P_Mu\\ \mathscr{L}_3 u = 0, \quad u_{|t=0} = u_{0}, \quad z_{|t=0} = e^{\varepsilon \theta}P_Mz_{0}. \end{cases} \end{align}

Before we deal with (4.1), we briefly recall the results for hyperbolic systems without constraints. With respect to the hyperbolic operator $\mathscr {L}_2$ we have the weighted a priori estimate, see [Reference Benzoni-Gavage and Serre3] for example,

(4.3)\begin{align} & \sup_{0\leq t \leq T} e^{{-}2\gamma t}\|u(t)\|_{H^s}^2 + \gamma \int_0^T e^{{-}2\gamma t} \|u\|_{H^s}^2 {\rm d} t \nonumber\\ & \quad \leq C \left( \|u(0)\|_{H^s}^2 + \frac{1}{\gamma} \int_0^T e^{{-}2\gamma t} \|\mathscr{L}_2u\|^2_{H^s} {\rm d} t \right) \end{align}

for every $u \in H^1(0,\,T;H^s)$ and $\gamma \geq \gamma _1$, for some positive constants $C$ and $\gamma _1 \geq 1$. There is an analogous a priori estimate for the dual operator $\mathscr {L}_2^*$. Given initial data $u_0 \in L^2$ and a source term $f \in L^2(0,\,T;L^2)$, a function $u \in L^2 ((0,\,T)\times \mathbb {R}^d)$ is called a weak solution of the system

(4.4)\begin{equation} \mathscr{L}_2 u(t,x) = f(t,x), \qquad u(0,x) = u_0(x) \end{equation}

if for every test function $\phi \in H^1((0,\,T)\times \mathbb {R}^d)$ such that $\phi _{|t=T} = 0$ we have

\[ \int_0^T (u,\mathscr{L}_2^*\phi)_{L^2} {\rm d} t = (u_0,A^0\phi_{|t=0})_{L^2} + \int_0^T (f,\phi)_{L^2} {\rm d} t. \]

If $u_0 \in H^s$ and $f \in L^2(0,\,T;H^s)$, then it is known that the Cauchy problem (4.4) has a unique weak solution, and moreover, we have $u \in C(0,\,T;H^s)$ and the estimate (4.3) holds where $\mathscr {L}_2u$ is replaced by $f$.

For source terms with more regularity, the corresponding solution also has more regularity; again this follows from the a priori estimates in Sobolev spaces. It can be shown that if the source term satisfies $f \in \bigcap _{j = 0}^s H^j(0,\,T;H^{s-j})$ and $u_0 \in H^s$, then the weak solution of (4.4) satisfies $u \in C^j(0,\,T;H^{s-j})$ for every $0 \leq j \leq s$, see [Reference Rauch24] for instance. Moreover, we have the energy estimate

(4.5)\begin{align} & \sum_{j=0}^s\sup_{0\leq t \leq T} e^{{-}2\gamma t}\|\partial_t^ju(t)\|_{H^{s-j}}^2 + \gamma \sum_{j = 0}^s \int_0^T e^{{-}2\gamma t} \|\partial_t^j u(t)\|_{H^{s-j}}^2 {\rm d} t \nonumber\\ & \quad \leq C \left( \|u_0\|_{H^s}^2 + \frac{1}{\gamma} \sum_{j=0}^s \int_0^T e^{{-}2\gamma t} \|\partial_t^jf(t)\|^2_{H^{s-j}} {\rm d} t \right). \end{align}

Now, we define weak solutions of the hyperbolic system (4.1). Given $u_0 \in L^2$ and $z_0 \in L_\theta ^2(L^2)$, the pair $(u,\,z) \in L^2(0,\,T;L^2) \times L^2((0,\,T)\times (-\tau,\,0) \times \mathbb {R}^d)$ is called a weak solution of (4.1) if the variational equation

(4.6)\begin{align} & \int_0^T (u,\mathscr{L}_2^* \phi -P_M\psi_{|\theta = 0})_{L^2}{\rm d} t + \int_0^T\!\! \int_{-\tau}^0 (z,\mathscr{L}_1^*\psi)_{L^2}{\rm d} \theta {\rm d} t \nonumber\\ & \quad= (u_0,A^0\phi_{|t=0})_{L^2} + \int_{-\tau}^0 (z_0,e^{\varepsilon\theta}P_M\psi_{|t=0})_{L^2} {\rm d} \theta \end{align}

is satisfied for every test function $(\phi,\,\psi ) \in H^1((0,\,T)\times \mathbb {R}^d) \times L^2(\mathbb {R}^d; H^1((0,\,T)\times (-\tau,\,0)))$ such that $\phi _{|t=T} = 0$, $\psi _{|t=T} = 0$ and $e^{\varepsilon \tau }M^T \phi = \psi _{|\theta = -\tau }$, and the equation

(4.7)\begin{equation} \int_0^T (u,\mathscr{L}_3^*\varphi)_{L^2} {\rm d} t = 0 \end{equation}

holds for every $\varphi \in L^2(0,\,T;H^1)$.

For systems without constraints, the last equation trivially holds. Weak solutions are necessarily unique according to the following lemma.

Lemma 4.1 If $(u,\,z)$ is a weak solution of system (4.1), then $u$ is the weak solution of the Cauchy problem

(4.8)\begin{equation} \mathscr{L}_2u ={-} e^{\varepsilon \tau}M z_\tau, \qquad \mathscr{L}_3 u = 0, \qquad u_{|t=0} = u_0, \end{equation}

and $z$ is the weak solution of the transport system

(4.9)\begin{equation} \mathscr{L}_1z = 0, \qquad z_{|\theta = 0} = P_M u, \qquad z_{|t=0} = e^{\varepsilon\theta}P_Mz_0. \end{equation}

In particular, we have $u \in C(0,\,T;L^2)$, $z \in C(0,\,T;L^2_\theta (L^2))$, $z_\tau \in L^2(0,\,T;L^2)$, and the weak solution satisfies the estimate

\[ \|u\|_{C(0,T; L^2)} + \|z\|_{C(0,T;L_\theta^2(L^2))} + \|z_\tau\|_{L^2(0,T;L^2)} \leq Ce^{\gamma T}(\|u_0\|_{L^2} + \|z_0\|_{L^2_\theta(L^2)}) \]

for some positive constants $C$ and $\gamma$.

Proof. Taking $\phi = 0$ in (4.6) shows that $z$ is the weak solution of (4.9), and therefore, we have $z_\tau \in L^2(0,\,T;L^2)$. Given $\phi \in H^1((0,\,T)\times \mathbb {R}^d)$ such that $\phi _{|t= T} = 0$, the homogeneous backward-in-time Cauchy problem

\[ -\partial_t \psi + \partial_\theta \psi + \varepsilon \psi = 0, \quad \psi_{|\theta ={-}\tau} = e^{\varepsilon\tau}M^T \phi, \quad \psi_{|t=T} = 0 \]

has a compatible data, and thus, according to the previous section it has a solution satisfying

\[ \psi \in C(0,T; H^1_\theta(L^2) \cap L^2_\theta(H^1)) \cap C^1(0,T;L^2_\theta(L^2)), \]

and in particular, $\psi \in L^2(\mathbb {R}^d; H^1((0,\,T) \times (-\tau,\,0)))$. Choosing the pair $(\phi,\,\psi )$ in the variational formulation (4.6) and using (3.12), it follows that $u$ is the weak solution of (4.8). The energy estimate of the lemma follows from the energy estimates for solutions of (4.8) and (4.9), and by taking $\gamma$ sufficiently large.

The above lemma together with theorem 3.5 imply that $z(t,\,\theta,\,x) \in \text {Ker}(M)^\perp$ for almost every $(t,\,\theta,\,x) \in (0,\,T) \times (-\tau,\,0)\times \mathbb {R}^d$. Let

\[ X_c := \left\{ u \in L^2 : \sum_{j = 1}^d Q^j \partial_{x_j}u + Ru = 0\right\} \]

with the differential equation taken in the sense of distributions.

Theorem 4.2 If $(u_0,\,z_0) \in X_c \times L_\theta ^2(L^2)$ and assumption (Q) holds, then (4.1) has a unique weak solution.

Proof. Uniqueness follows immediately from the previous lemma. For existence, we apply theorem 3.1, and for this we introduce the function spaces $X := L^2(0,\,T;L^2) \times L^2(0,\,T;L^2_\theta (L^2))$, $Y := H^1((0,\,T)\times \mathbb {R}^d) \times L^2(\mathbb {R}^d;\, H^1((0,\,T)\times (-\tau,\,0)))$ and $Z := L^2(0,\,T;L^2) \times L^2 \times L^2_\theta (L^2)$. Define the operators $\Lambda : Y \to X$, $\Psi : Y \to Z$ and $\Phi : Y \to Z$ as follows:

\begin{align*} & \Lambda(\phi,\psi) := (\mathscr{L}_2^*\phi - P_M\psi_{|\theta =0}, \mathscr{L}_1^*\psi)\\ & \Psi(\phi,\psi) := (0, A^0\phi_{|t=0},e^{\varepsilon\theta}P_M\psi_{|t=0})\\ & \Phi(\phi,\psi) := (e^{\varepsilon\tau}M^T \phi - \psi_{|\theta ={-}\tau},\phi_{|t=T}, \psi_{|t=T}). \end{align*}

The variational equation (4.6) can now be expressed as

\[ ((u,z),\Lambda(\phi,\psi))_X = ((0,u_0,z_0), \Psi(\phi,\psi))_Z \]

for every $(\phi,\,\psi )\in \text {Ker}(\Phi )$. From the a priori estimate (3.7) for the transport equation with parameter and the dual version of (4.5) for hyperbolic systems, we obtain the a priori estimate (3.4) with the help of an absorption argument. More precisely, the terms $\|\phi \|_{L^2(0,T;L^2)}$ and $\|\psi _{|\theta = 0}\|_{L^2(0,T;L^2)}$ arising on the right-hand side can be absorbed by the left-hand side by taking $\gamma$ sufficiently large. Therefore, (4.6) is satisfied for some $(u,\,z) \in X$.
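The absorption step can be summarized schematically (a sketch, not the precise constants of the proof): if the combined dual estimates take the form

```latex
\mathscr{E}_\gamma(\phi,\psi) \;\leq\; C\Big(\|\Lambda(\phi,\psi)\|_X^2 + \tfrac{1}{\gamma}\,\mathscr{E}_\gamma(\phi,\psi)\Big),
```

where $\mathscr {E}_\gamma$ collects the weighted norms of $\phi$, $\psi$, and $\psi _{|\theta = 0}$ appearing on the left-hand side, then choosing $\gamma \geq 2C$ gives $\mathscr {E}_\gamma (\phi,\,\psi ) \leq 2C\|\Lambda (\phi,\,\psi )\|_X^2$, which is precisely an estimate of the form (3.4).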

It remains to verify the constraint. For this purpose, let $u_\delta := R_\delta u \in L^2(0,\,T;H^\infty )$ and $z_{0\delta } := R_\delta z_0 \in L_\theta ^2(H^\infty )$. Let $z_\delta$ be the solution of the transport system with initial data $e^{\varepsilon \theta }P_Mz_{0\delta }$ and boundary data $P_M u_\delta$. Let $u^\delta$ be the solution of the hyperbolic system with source term $-e^{\varepsilon \tau }Mz_{\delta \tau }$ and initial data $u_{0\delta } := R_\delta u_0$. Then, we have $z_{\delta \tau } \in L^2(0,\,T;H^\infty )$, and consequently, $u^\delta \in H^1(0,\,T;H^{\infty })$. Moreover, $u^\delta \to u$ in $L^2$ by uniqueness of weak solutions. Therefore, for every $\varphi \in \mathscr {D}((0,\,T)\times \mathbb {R}^d)$ we obtain from Parseval's identity that

\[ \int_0^T (\partial_t \mathscr{L}_3u^{\delta}, \varphi)_{L^2} {\rm d} t = \int_0^T \text{Re}((i|\xi|Q(\omega) + R)\widehat{u^{\delta}_t}, \widehat{\varphi})_{L^2}{\rm d} t \]

where $\, \widehat {\cdot }\,$ is the Fourier transform.

According to condition (Q) and the differential equation for $u^\delta$, we have

\begin{align*} & (i|\xi|Q(\omega) + R)\widehat{u^{\delta}_t} \\ & \quad =|\xi|^2Q(\omega)(A^0)^{{-}1}A(\omega)\widehat{u^\delta} - i|\xi| ( Q(\omega)(A^0)^{{-}1}L + R(A^0)^{{-}1}A(\omega)) \widehat{u^\delta}\\ & \qquad- R(A^0)^{{-}1}L \widehat{u^\delta} + ie^{\varepsilon\tau}|\xi|Q(\omega)(A^0)^{{-}1}M \widehat{z_{\delta\tau}} + e^{\varepsilon\tau}R(A^0)^{{-}1}M \widehat{z_{\delta\tau}} = 0. \end{align*}

Thus, $\mathscr {L}_3u^\delta$ is constant in time, and in particular, we have $\mathscr {L}_3u^\delta (t) = \mathscr {L}_3u_{0\delta } = R_\delta (\mathscr {L}_3u_0) = 0$ for every $t \geq 0$. Passing to the limit in $\int _0^T (u^\delta,\,\mathscr {L}_3^*\varphi )_{L^2} {\rm d} t = 0$ and using the density of $\mathscr {D}((0,\,T)\times \mathbb {R}^d)$ in $L^2(0,\,T;H^1)$, we infer that the weak solution satisfies the variational form (4.7) of the differential constraint.

Theorem 4.3 If $u_0 \in X_c \cap H^s$ and $z_0 \in L^2_\theta (H^s)$, then the weak solution of (4.1) satisfies $u \in C(0,\,T;H^s)$, $z \in C(0,\,T;L^2_\theta (H^s))$, and $z_\tau \in L^2(0,\,T;H^s)$.

Proof. Let $u^0 := u_0$ and $z^0 := e^{\varepsilon \theta }P_Mz_0$. Given $u^{n-1}$, let $z^n$ be the solution of the transport system (4.9) with boundary data $P_Mu^{n-1}$ and initial data $e^{\varepsilon \theta }P_Mz_0$. Then, we have $z^n_{\tau } \in L^2(0,\,T;H^s)$. Let $u^n$ be the solution of the hyperbolic system (4.8) with initial data $u_0$ and source term $-e^{\varepsilon \tau }Mz^{n}_\tau$. Hence, it follows that $u^n \in C(0,\,T;H^s)$. Using the energy estimates for the transport equation with parameter and hyperbolic systems, one can derive

\begin{align*} & \|u^n - u^{n-1}\|_{C(0,T;H^s)}^2 + \|z^n - z^{n-1}\|_{C(0,T;L^2_\theta(H^s))}^2 + \|z_\tau^n - z_\tau^{n-1}\|_{L^2(0,T;H^s)}^2 \\ & \quad\leq \frac{(Ce^{2\gamma T}T)^{n-1}}{(n-1)!}(\|u^{1} - u^{0}\|_{H^s}^2 + \|z^{1} - z^{0}\|_{L^2_\theta(H^s)}^2 ) \end{align*}

for some $C > 0$ and for every $n$. This implies that $(u^n)_n$, $(z^n)_n$, and $(z^{n}_\tau )_n$ are Cauchy sequences in $C(0,\,T;H^s)$, $C(0,\,T; L^2_\theta (H^s))$, and $L^2(0,\,T;H^s)$, respectively.

One can see that the limit of $(u^n,\,z^n)$ is the weak solution of system (4.1). In fact, this follows from

\begin{align*} & \int_0^T (u^n,\mathscr{L}_2^* \phi - P_M\psi_{|\theta = 0})_{L^2}{\rm d} t + \int_0^T (u^n - u^{n-1},P_M\psi_{|\theta = 0})_{L^2}{\rm d} t \\ & \quad+ \int_0^T\!\! \int_{-\tau}^0 (z^n,\mathscr{L}_1^*\psi)_{L^2}{\rm d} \theta {\rm d} t = (u_0,A^0\phi_{|t=0})_{L^2} + \int_{-\tau}^0 (z_0,e^{\varepsilon\theta}P_M\psi_{|t=0})_{L^2} {\rm d} \theta \end{align*}

which holds for every test function $(\phi,\,\psi ) \in H^1((0,\,T)\times \mathbb {R}^d) \times L^2(\mathbb {R}^d;H^1((0,\,T) \times (-\tau,\,0)))$ such that $\phi _{|t=T} = 0$, $\psi _{|t=T} = 0$, and $e^{\varepsilon \tau }M^T \phi = \psi _{|\theta = -\tau }$. Therefore, $u \in C(0,\,T;H^s)$, $z \in C(0,\,T;L^2_\theta (H^s))$ and $z_\tau \in L^2(0,\,T;H^s)$.
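The iteration in the proof is a PDE analogue of the classical method of steps for delay equations: on each lag interval of length $\tau$, the delayed term is already determined by the previous interval. As a toy illustration only (a scalar delay ODE with hypothetical coefficients, not the system of the paper):

```python
import numpy as np

def solve_delay_ode(history, tau=1.0, T=3.0, dt=1e-3):
    """Explicit Euler for the toy delay ODE u'(t) = -u(t) - u(t - tau).

    history: callable prescribing u on [-tau, 0]. The delayed value is read
    from the history on the first lag interval and from already computed
    values afterwards, exactly as in the method of steps.
    """
    n_lag = int(round(tau / dt))
    n = int(round(T / dt))
    u = np.empty(n + 1)
    u[0] = history(0.0)
    for i in range(n):
        delayed = history(i * dt - tau) if i < n_lag else u[i - n_lag]
        u[i + 1] = u[i] + dt * (-u[i] - delayed)
    return u

u = solve_delay_ode(lambda s: 1.0)  # constant initial history
```

For these coefficients the zero solution is asymptotically stable, so the computed trajectory decays in magnitude from its initial value.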

The solution space for problem (4.1) with compatible data is based on the following function spaces

\begin{align*} Z_{m,k} & := \bigcap_{j=0}^m H^j_\theta(H^{m+k-j}),\ \quad \qquad X_{m,k} := \bigcap_{j=0}^{m} C^j(0,T;H^{m+k-j}), \\ Y_{m,k} & := \bigcap_{j=0}^m C^j(0,T;Z_{m-j,k}), \quad W_{m,k} := \bigcap_{j = 0}^m H^j(0,T;H^{m+k-j}) \end{align*}

for nonnegative integers $m$ and $k$. The norm of each of these function spaces is the sum (or the maximum) of the norms appearing in the intersection.

Given $u_0$ and $z_0$ we define recursively the following functions

\begin{align*} & \widetilde{z}_0 := e^{\varepsilon\theta}P_Mz_0, \qquad \widetilde{z}_i := \partial_\theta \widetilde{z}_{i-1} - \varepsilon \widetilde{z}_{i-1},\\ & u_i :={-} (A^0)^{-1}\biggl( \sum_{j=1}^d A^j\partial_{x_j}u_{i-1} + Lu_{i-1} + e^{\varepsilon\tau}MP_M\widetilde{z}_{i-1|\theta ={-}\tau}\biggr). \end{align*}

The data $(u_0,\,z_0)$ is said to be compatible up to order $k$ if $\widetilde {z}_{i|\theta =0} = u_i$ for all $0\leq i \leq k$.

Theorem 4.4 Let $m$ and $k$ be nonnegative integers. Assume that the initial data $(u_0,\,z_0) \in (H^{m+k}\cap X_c) \times Z_{m,k}$ is compatible up to order $m-1$ if $m\geq 1$. Then, (4.1) has a unique solution $(u,\,z) \in X_{m,k} \times Y_{m,k}$ satisfying the differential constraint for every $T > 0$. Furthermore, $z_\tau \in W_{m,k}$. There exist positive constants $C$ and $\gamma$ such that

(4.10)\begin{equation} \|u\|_{X_{m,k}} + \|z\|_{Y_{m,k}} + \|z_{\tau}\|_{W_{m,k}} \leq Ce^{\gamma T}(\|u_0\|_{H^{m+k}} + \|z_0\|_{Z_{m,k}}). \end{equation}

Proof. The case $m = 0$ was already established in the previous theorem, so we only consider $m\geq 1$. Initially we have the regularity $u\in C(0,\,T;H^{m+k})$, $z \in C(0,\,T;L^2_\theta (H^{m+k}))$, and $z_\tau \in L^2(0,\,T;H^{m+k})$ according to theorem 4.3, and from the differential equation for $u$, we see that $u\in H^1(0,\,T;H^{m+k-1})$. The compatibility of the data $(P_Mu,\,e^{\varepsilon \theta }P_Mz_0)$ implies that $z \in C(0,\,T;H^1_\theta (H^{m+k-1})) \cap C^1(0,\,T;L^2_\theta (H^{m+k-1}))$ and $z_\tau \in H^1(0,\,T;H^{m+k-1}) \subset C(0,\,T;H^{m+k-1})$. Consequently, $u\in C^1(0,\,T;H^{m+k-1}) \cap H^2(0,\,T;H^{m+k-2})$, using the PDE for $u$ once more. Continuing this process, we obtain $z \in Y_{m,k}$ and $z_\tau \in W_{m,k}$, and consequently, $u \in X_{m,k}$. The energy estimate (4.10) is a direct consequence of (3.14) and (4.5), after taking $\gamma$ sufficiently large.

In deriving energy estimates for (4.1), we will take more derivatives than the regularity of the solution allows. However, the final estimates only involve the norms of the space where the solution lies. Therefore, one can first approximate the solution by smoother ones, derive the energy estimates for the approximations, and then pass to the limit to obtain the energy estimates for the solution itself.

As an illustration, suppose we have data $(u_0,\,z_0) \in (H^{m+k}\cap X_c) \times Z_{m,k}$ which is compatible up to order $m-1$ if $m \geq 1$. The previous theorem implies that the weak solution of (4.1) satisfies $(u,\,z,\,z_\tau ) \in X_{m,k} \times Y_{m,k} \times W_{m,k}$. Given $m_0 > m$ and $k_0 > k$, by theorem 3.3 there exists $(z_{0n},\, v_n) \in H_\theta ^{m_0}(H^{k_0}) \times H^{m_0}(0,\,T;H^{k_0})$ that is compatible up to order $m_0 - 1$ for every $n$ and

\[ (z_{0n},v_n) \to (e^{\varepsilon\theta}P_M z_0, P_Mu) \qquad \text{in } Z_{m,k} \times W_{m,k}. \]

The solution of the transport system with parameter

\[ \mathscr{L}_1 z_n = 0, \quad z_{n|\theta = 0} = v_n, \quad z_{n|t=0} = z_{0n}, \]

satisfies $z_n \in Y_{m_0,k_0}$ and $z_{n\tau } \in W_{m_0,k_0}$ and we have $z_n \to z$ in $Y_{m,k}$ and $z_{n\tau } \to z_\tau$ in $W_{m,k}$.

For each $n$, let $u_{0n} := R_{1/n}u_0 \in H^{m_0 + k_0}$ so that $u_{0n} \to u_0$ in $H^{m+k}\cap X_c$. Now we use $z_n$ to approximate the solutions of the hyperbolic system. Let $u_n$ be the solution of the hyperbolic system

\[ \mathscr{L}_2 u_n ={-}e^{\varepsilon\tau}Mz_{n\tau}, \qquad \mathscr{L}_3 u_n = 0, \qquad u_{n|t=0} = u_{0n}. \]

Then we have $u_n \in X_{m_0,k_0}$ and $u_n \to u$ in $X_{m,k}$. Combining the above systems, we have

\[ \begin{cases} \mathscr{L}_2 u_n ={-}e^{\varepsilon\tau}Mz_{n\tau},\\ \mathscr{L}_1 z_n = 0, \qquad z_{n|\theta=0} = P_Mu_{n} + \varrho_n,\\ \mathscr{L}_3 u_n = 0, \qquad u_{n|t=0} = u_{0n}, \qquad z_{n|t=0} = z_{0n}. \end{cases} \]

where the residual $\varrho _n$ is given by $\varrho _n := v_n - P_M u_n$.

Notice that the above system is the same as (4.2) except for the boundary condition for $z_n$, which contains the residual $\varrho _n$. From the continuous embedding $X_{m,k} \subset W_{m,k}$, we have $u_n \to u$ in $W_{m,k}$ and therefore $\varrho _n \to 0$ in $W_{m,k}$. It follows that, by taking $m_0$ and $k_0$ large enough, we may take as many derivatives as needed, as long as the final estimate involves only the norms of the states in $X_{m,k}$, $Y_{m,k}$, $Z_{m,k}$, and $W_{m,k}$, where applicable. The energy estimates for the approximations $(u_n,\,z_n)$ then imply those for the solution $(u,\,z)$ of (4.2). If, in addition, the initial data are integrable in the sense that $u_0 \in L^1$ and $z_0 \in L^2_\theta (L^1)$, then we have $u_{0n} \to u_0$ in $L^1$ and $z_{0n} \to z_0$ in $L_\theta ^2(L^1)$, see the paragraph after the proof of theorem 3.3. This information will be used in § 8 in deriving decay estimates under the additional integrability assumption on the data.

5. Asymptotic stability and standard decay estimates

The goal of the present section is to derive energy estimates for the solutions of (4.1) under the conditions for the coefficient matrices presented in § 2. We begin with condition (S)$_s$, which is known to provide standard decay estimates for symmetric hyperbolic systems. For simplicity we denote by

\[ w := P_Lu, \quad v := P_{L_1} u \]

the projection of $u$ onto $\text {Ker}(L)^\perp$ and $\text {Ker}(L_1)^\perp$, respectively. Also, we simply write $\Psi$ for the block matrix $\Psi _{G,N,M}$ in condition (M). Generic constants will be denoted by $C$ or with a subscript and their values may possibly vary from line to line.

Theorem 5.1 Suppose that conditions (L), (S), (M), (Q), (K), and (S) $_s$ are satisfied. Assume that $(u_0,\,z_0) \in (H^{s}\cap X_c) \times L_\theta ^2(H^s)$ for some $s \geq 1$ and define $I_0^2 := \|u_0\|_{H^{s}}^2 + \|z_0\|^2_{L_\theta ^2(H^s)}$. Then, the solution of (4.1) with data $(u_0,\,e^{\varepsilon \theta }P_Mz_0)$ satisfies

(5.1)\begin{align} & \|u(t)\|_{H^{s}}^2 + \|z(t)\|_{L^2_\theta(H^{s})}^2 + \int_0^t \|\partial_x u(\sigma)\|_{H^{s-1}}^2{\rm d} \sigma \nonumber\\ & \quad + \int_0^t \left( \|(w,v,z_\tau)(\sigma)\|_{H^{s}}^2 + \|z(\sigma)\|_{L^2_\theta(H^{s})}^2 \right){\rm d} \sigma \leq CI_0^2, \qquad t \geq0. \end{align}

Proof. As mentioned in the preceding section, we may formally take derivatives of the partial differential equations. We divide the derivation of the energy estimate into several steps.

Step 1. Applying $\partial _x^\ell$ to the PDE for $u$ and then taking the inner product with $G \partial _x^\ell u$ yields

(5.2)\begin{align} & \frac{1}{2}\frac{{\rm d}}{{\rm d}t}\langle GA^0\partial_x^\ell u,\partial_x^\ell u\rangle + \frac{1}{2}\sum_{j=1}^d \partial_{x_j}\langle GA^j\partial_x^\ell u,\partial_x^\ell u\rangle + \langle (GL)_1 \partial_x^\ell u,\partial_x^\ell u\rangle \nonumber\\ & \quad + e^{\varepsilon\tau}\langle GM\partial_x^\ell z_\tau, \partial_x^\ell u \rangle = 0 \end{align}

for $0 \leq \ell \leq s$. Applying $\partial _x^\ell$ to the transport equation for $z$, taking the inner product with $N\partial _x^\ell z$, and then integrating over $(-\tau,\,0)$, we obtain

(5.3)\begin{align} & \frac{1}{2}\frac{{\rm d}}{{\rm d}t}\int_{-\tau}^0 \langle N \partial_x^\ell z(\theta),\partial_x^\ell z(\theta)\rangle {\rm d} \theta + \varepsilon\int_{-\tau}^0 \langle N\partial_x^\ell z(\theta),\partial_x^\ell z(\theta)\rangle {\rm d} \theta\nonumber\\ & \quad- \frac{1}{2}\langle NP_M\partial_x^\ell u,P_M\partial_x^\ell u \rangle + \frac{1}{2}\langle N\partial_x^\ell z_\tau,\partial_x^\ell z_\tau\rangle = 0. \end{align}
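The two trace terms in (5.3) come from an integration by parts in $\theta$, using the symmetry of $N$ and the boundary condition $z_{|\theta = 0} = P_M u$ from (4.1):

```latex
-\int_{-\tau}^0 \big\langle N\,\partial_\theta \partial_x^\ell z(\theta), \partial_x^\ell z(\theta)\big\rangle \,{\rm d}\theta
= -\tfrac{1}{2}\big\langle N P_M\partial_x^\ell u, P_M\partial_x^\ell u\big\rangle
+ \tfrac{1}{2}\big\langle N \partial_x^\ell z_\tau, \partial_x^\ell z_\tau\big\rangle.
```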

Adding (5.2) and (5.3), integrating with respect to $x$, and using the fact that the range of $GM$ is orthogonal to the kernel of $L_1$, we obtain

(5.4)\begin{align} & \frac{1}{2} \frac{{\rm d}}{{\rm d}t} E_{1,\ell} + \varepsilon(N \partial_x^\ell z,\partial_x^\ell z)_{L^2_\theta(L^2)} \nonumber\\ & \quad+ \frac{1}{2} \int_{\mathbb{R}^d} \langle \Psi(\partial_x^\ell u, \partial_x^\ell z_\tau),(\partial_x^\ell u, \partial_x^\ell z_\tau)\rangle {\rm d} x + (e^{\varepsilon\tau} - 1)(GM\partial_x^\ell z_\tau, \partial_x^\ell v)_{L^2} = 0 \end{align}

where

\[ E_{1,\ell} := (GA^0\partial_x^\ell u,\partial_x^\ell u)_{L^2} + (N \partial_x^\ell z,\partial_x^\ell z)_{L^2_\theta(L^2)}. \]

Using condition (M) on (5.4) and then choosing $\varepsilon$ sufficiently small, we have

(5.5)\begin{equation} \frac{{\rm d}}{{\rm d}t}E_{1,\ell} + C(\|\partial_x^\ell v\|^2_{L^2} + \|\partial_x^\ell z\|_{L^2_\theta(L^2)}^2 + \|\partial_x^\ell z_\tau\|_{L^2}^2) \leq 0. \end{equation}

Step 2. The next step is to derive dissipation terms involving $w$. For this purpose, we apply $\partial _x^\ell$ to the equation for $u$ and take the inner product with $S^T\partial _x^\ell u$ to obtain

(5.6)\begin{align} & \frac{1}{2}\frac{{\rm d}}{{\rm d}t}\langle SA^0\partial_x^\ell u,\partial_x^\ell u\rangle + \sum_{j=1}^d \biggl(\frac{1}{2}\partial_{x_j}\langle SA^j\partial_x^\ell u,\partial_x^\ell u\rangle + \langle (SA^j)_2 \partial_{x_j}\partial_x^\ell u, \partial_x^\ell u\rangle\biggl) \nonumber\\ & \quad+ \langle (SL)_1 \partial_x^\ell u,\partial_x^\ell u\rangle + e^{\varepsilon\tau}\langle SM\partial_x^\ell z_\tau, \partial_x^\ell w \rangle = 0. \end{align}

Here, we used the fact that the range of $SM$ is orthogonal to the kernel of $L$. The second term in the sum can be rewritten as

(5.7)\begin{equation} \sum_{j = 1}^d \langle (SA^j)_2 \partial_{x_j}\partial_x^\ell u, \partial_x^\ell u\rangle = \sum_{j=1}^d ( \langle Y_j\partial_{x_j}\partial_x^\ell u, \partial_x^\ell u\rangle + \langle (Q^{jT}\varPi_1WR)_2 \partial_{x_j}\partial_x^\ell u, \partial_x^\ell u\rangle)\end{equation}

where

\[ Y_j := (SA^j - Q^{jT}\varPi_1WR)_2, \qquad 1 \leq j \leq d. \]

Integrating both sides with respect to $x$ and then applying Parseval's identity to the first sum on the right-hand side of (5.7), we get

(5.8)\begin{equation} \sum_{j=1}^d ( Y_j \partial_{x_j}\partial_x^\ell u, \partial_x^\ell u)_{L^2} = (|\xi|^{2\ell+1}Y(\omega) \widehat{u},\widehat{u})_{L^2}, \end{equation}

where $\omega := \xi /|\xi |$ and

\[ Y(\omega) := i(SA(\omega)-Q(\omega)^T\varPi_1 WR)_2. \]

According to condition (S)$_s$, (5.8) is nonnegative. On the other hand, using the constraint $\mathscr {L}_3 u = 0$, we obtain

\[ \sum_{j=1}^d\langle (Q^{jT}\varPi_1WR)_2 \partial_{x_j}\partial_x^\ell u, \partial_x^\ell u\rangle = \langle W_1R\partial_x^\ell u, R\partial_x^\ell u\rangle \geq 0 \]

because $W_1$ is nonnegative on the range of $R$. Therefore, integrating (5.6) over $\mathbb {R}^d$ and using the above observations together with Young's inequality, we obtain the estimate

(5.9)\begin{equation} \frac{1}{2} \frac{{\rm d}}{{\rm d}t}E_{2,\ell} + ((SL)_1 \partial_x^\ell u,\partial_x^\ell u)_{L^2} - (\eta \|\partial_x^\ell w\|_{L^2}^2 + C_\eta \|\partial_x^\ell z_\tau\|_{L^2}^2)\leq 0 \end{equation}

where $\eta > 0$ and

\[ E_{2,\ell} := (SA^0\partial_x^\ell u,\partial_x^\ell u)_{L^2}. \]

From condition (S), there exist positive constants $C_1$ and $C_2$ such that

\begin{align*} ((SL)_1 \partial_x^\ell u,\partial_x^\ell u)_{L^2} & = ((SL + L)_1 \partial_x^\ell u,\partial_x^\ell u)_{L^2} - (L_1 \partial_x^\ell u,\partial_x^\ell u)_{L^2}\\ & \geq C_1\|\partial_x^\ell w\|_{L^2}^2 - C_2\|\partial_x^\ell v\|^2_{L^2}. \end{align*}

Plugging this estimate into (5.9) and choosing $\eta < C_1$, we obtain

(5.10)\begin{equation} \frac{1}{2} \frac{{\rm d}}{{\rm d}t}E_{2,\ell} + C(\|\partial_x^\ell w\|_{L^2}^2 - \|\partial_x^\ell z_\tau\|_{L^2}^2 - \|\partial_x^\ell v\|^2_{L^2})\leq 0. \end{equation}

Multiplying (5.10) by a small enough $\alpha > 0$ and then adding the result to (5.5), we have

(5.11)\begin{equation} \frac{1}{2}\frac{{\rm d}}{{\rm d}t}(E_{1,\ell} + \alpha E_{2,\ell}) + C_\alpha (\|\partial_x^\ell v\|_{L^2} ^2 + \|\partial_x^\ell w\|_{L^2}^2 + \|\partial_x^\ell z\|_{L^2_\theta(L^2)}^2 + \|\partial_x^\ell z_\tau\|_{L^2}^2) \leq 0 \end{equation}

for some $C_\alpha > 0$ and for all $0 \leq \ell \leq s$.

Step 3. The final step is to derive dissipation terms for the derivatives. Applying $\partial _x^\ell$ to the equation for $u$, taking the $L^2$-inner product with $\sum _{k = 1}^d K^{kT} \partial _{x_k}\partial _{x}^\ell u$, and applying the anti-symmetry of $K^kA^0$, we obtain

(5.12)\begin{align} & \frac{1}{2}\frac{{\rm d}}{{\rm d}t} \sum_{k=1}^d (K^kA^0 \partial_{x}^\ell u, \partial_{x_k}\partial_{x}^\ell u )_{L^2} + \sum_{j,k=1}^d ( K^kA^j \partial_{x_j}\partial_x^\ell u, \partial_{x_k} \partial_x^\ell u )_{L^2}\nonumber\\ & \quad+ \sum_{k=1}^d ( ( K^k L\partial_x^\ell w, \partial_{x_k}\partial_x^\ell u)_{L^2} + e^{\varepsilon\tau}( K^k M\partial_x^\ell z_\tau, \partial_{x_k}\partial_x^\ell u)_{L^2}) = 0 \end{align}

for every $0\leq \ell \leq s-1$. Let $I_1$ and $I_2$ denote the last two sums in this equation. The term $I_2$ can be estimated as

\[ |I_2| \leq \eta\|\partial_x^{\ell + 1} u\|_{L^2}^2 + C_\eta (\|\partial_x^\ell w\|^2_{L^2} + \|\partial_x^\ell z_\tau\|^2_{L^2}) \]

for every $\eta > 0$.
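The bound on $I_2$ is an instance of the Cauchy–Schwarz and weighted Young inequalities $ab \leq \eta a^2 + C_\eta b^2$; schematically, for each fixed $k$ (with the matrix norms absorbed into $C_\eta$),

\[ |( K^k M\partial_x^\ell z_\tau, \partial_{x_k}\partial_x^\ell u)_{L^2}| \leq \|K^k M\|\,\|\partial_x^\ell z_\tau\|_{L^2} \|\partial_x^{\ell+1} u\|_{L^2} \leq \frac{\eta}{d} \|\partial_x^{\ell+1} u\|_{L^2}^2 + C_\eta \|\partial_x^\ell z_\tau\|_{L^2}^2, \]

and similarly for the terms involving $K^kL\partial_x^\ell w$; summing over $1 \leq k \leq d$ yields the displayed bound.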

Now, applying (2.2), the fact that $\widehat {u}(t,\,\xi ) \in \text {Ker}(\varPi _2Q(\omega ))$ for every $t \geq 0$ and for every $\xi \in \mathbb {R}^d \setminus \{0\}$, and using Parseval's identity, we infer that

\begin{align*} I_1 & = \text{Re}(|\xi|^{2\ell+2} K(\omega)A(\omega) \widehat{u},\widehat{u})_{L^2}\\ & = (|\xi|^{2\ell+2}(K(\omega)A(\omega) + \vartheta (SL+L))_1\widehat{u},\widehat{u})_{L^2} - \vartheta (|\xi|^{2\ell+2}(SL+L)_1\widehat{w},\widehat{w})_{L^2}\\ & \geq C_1 \||\xi|^{\ell+1}\widehat{u}\|_{L^2}^2 - C_2\||\xi|^{\ell+1} \widehat{w}\|^2_{L^2} \end{align*}

for some constants $C_1,\,C_2 > 0$, where $\vartheta$ is the constant in (2.2). Here, $K(\omega ) := \sum _{k =1}^d K^k \omega _k$. Applying Plancherel's identity to the latter terms and then combining the above estimates, we obtain from (5.12)

(5.13)\begin{equation} \frac{1}{2}\frac{{\rm d}}{{\rm d}t} E_{3,\ell} + C_\eta( \|\partial_x^{\ell +1}u\|_{L^2}^2 - \|\partial_x^\ell w\|^2_{H^1} - \|\partial_x^\ell z_\tau\|_{L^2}^2) \leq 0 \end{equation}

by choosing $\eta > 0$ small enough, where

\[ E_{3,\ell} := \sum_{k=1}^d (K^kA^0 \partial_{x}^\ell u, \partial_{x_k}\partial_{x}^\ell u )_{L^2}. \]

Multiplying (5.13) by $\beta > 0$ small enough and then adding the result to (5.11) yields, for $0 \leq \ell \leq s-1$, the estimate

(5.14)\begin{align} & \frac{1}{2}\frac{{\rm d}}{{\rm d}t} (E_{1,\ell} + E_{1,\ell + 1} + \alpha (E_{2,\ell} + E_{2,\ell + 1}) + \beta E_{3,\ell}) \nonumber\\ & \quad+ C(\|\partial_x^{\ell +1}u\|_{L^2}^2 + \|\partial_x^\ell v\|_{H^1} ^2+ \|\partial_x^\ell w\|_{H^1}^2 + \|\partial_x^\ell z\|_{L^2_\theta(H^1)}^2 + \|\partial_x^\ell z_\tau\|_{H^1}^2) \leq 0 \end{align}

for some constant $C = C_{\alpha,\beta,\eta } > 0$. By reducing $\alpha > 0$ and then $\beta > 0$ if necessary, we can see that there are constants $C_1,\,C_2 > 0$ such that

(5.15)\begin{align} C_1(\|\partial_x^\ell u\|_{H^1}^2 + \|\partial_x^\ell z\|_{L^2_\theta(H^1)}^2) & \leq E_{1,\ell} + E_{1,\ell + 1} + \alpha (E_{2,\ell} + E_{2,\ell + 1}) + \beta E_{3,\ell} \nonumber\\ & \leq C_2(\|\partial_x^\ell u\|_{H^1}^2 + \|\partial_x^\ell z\|_{L^2_\theta(H^1)}^2). \end{align}

Taking the sum of (5.14) for $0\leq \ell \leq s-1$, integrating with respect to time, and then using the equivalence (5.15), we obtain the estimate stated in the theorem.

Using the energy estimate of the previous theorem, one can also derive the corresponding estimates for the derivatives with respect to time and history under an additional compatibility condition.

Theorem 5.2 Suppose that conditions (L), (S), (M), (Q), (K), and (S) $_s$ are satisfied. Assume that $(u_0,\,z_0) \in (H^{k+m} \cap X_c) \times Z_{m,k}$ is compatible up to order $m-1$ for some $k\geq 0$ and $m\geq 1$. Let $s := k+m$ and $I_0^2 := \|u_0\|_{H^{s}}^2 + \|z_0\|^2_{Z_{m,k}}$. The solution of (4.1) satisfies

(5.16)\begin{align} & \sum_{\ell=1}^m \, \biggl[ \|\partial_t^\ell u(t)\|_{H^{s-\ell}}^2 + \int_0^t \|\partial_t^\ell u(\sigma)\|_{H^{s-\ell}}^2{\rm d} \sigma \biggl] \nonumber\\ & \quad+ \sum_{\ell = 0}^m \, \biggl[\|\partial_t^\ell z(t)\|_{Z_{m-\ell,k}}^2 + \int_0^t \left( \|\partial_t^\ell z_\tau(\sigma)\|_{H^{s-\ell}}^2 + \|\partial_t^\ell z(\sigma)\|_{Z_{m-\ell,k}}^2 \right){\rm d} \sigma\biggl] \ \leq\, CI_0^2 \end{align}

for every $t \geq 0$.

Proof. First, since $Z_{m,k} \subset Z_{0,s} = L^2_\theta (H^s)$ it follows that (5.1) holds. The next step is to obtain an estimate for the time derivatives of $u$. Taking the $(\ell -1)$st derivative with respect to $t$ of the equation for $u$, for $1\leq \ell \leq m$, we have

(5.17)\begin{equation} \partial_t^\ell u ={-} (A^0)^{{-}1}\sum_{j = 1}^d A^j \partial_{x_j} \partial_t^{\ell-1} u - (A^0)^{{-}1} L\partial_t^{\ell-1}w - e^{\varepsilon\tau} (A^0)^{{-}1} M\partial_t^{\ell-1}z_\tau. \end{equation}

Using an induction argument, it can be easily seen that the estimate

(5.18)\begin{equation} \|\partial_t^\ell u(t)\|^2_{H^{s-\ell}} \leq C \biggl( \|\partial_x u(t)\|^2_{H^{s-1}} + \|w(t)\|^2_{H^{s-1}} +\sum_{j = 0}^{\ell - 1} \|\partial_t^j z_\tau(t)\|_{H^{s-j-1}}^2 \biggl) \end{equation}

holds for every $1 \leq \ell \leq m$ and $t\geq 0$.
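For instance, the base case $\ell = 1$ of (5.18) follows by taking the $H^{s-1}$-norm in (5.17):

\[ \|\partial_t u(t)\|^2_{H^{s-1}} \leq C \left( \|\partial_x u(t)\|^2_{H^{s-1}} + \|w(t)\|^2_{H^{s-1}} + \|z_\tau(t)\|^2_{H^{s-1}} \right), \]

and the inductive step substitutes (5.17) into itself, trading one time derivative for a space derivative, a damped component, or a history term, at the cost of one Sobolev index.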

On the other hand, by applying $\partial _t^\ell \partial _x^\nu$ to the transport equation for $z$ and multiplying by $\partial _t^\ell \partial _x^\nu z$, we have

(5.19)\begin{equation} \frac{1}{2}\partial_t (|\partial_t^\ell \partial_x^\nu z|^2) - \frac{1}{2}\partial_\theta(|\partial_t^\ell \partial_x^\nu z|^2) + \varepsilon|\partial_t^\ell \partial_x^\nu z|^2 = 0 \end{equation}

for every $0 \leq \ell \leq m$ and $0\leq \nu \leq s - \ell$. From the boundary condition at $\theta = 0$ and the fact that $\text {Ker}(L_1) \subset \text {Ker}(M)$, we have $z_{|\theta = 0} =P_Mu = P_M( v + (I - P_{L_1})u) = P_Mv$. Integrating (5.19) over $(0,\,t) \times (-\tau,\,0) \times \mathbb {R}^d$ and then taking the sum over all $0 \leq \nu \leq s - \ell$ produces the estimate

(5.20)\begin{align} & \|\partial_t^\ell z(t)\|^2_{L_\theta^2(H^{s-\ell})} + \varepsilon\int_0^t \|\partial_t^\ell z(\sigma)\|^2_{L_\theta^2(H^{s-\ell})} {\rm d} \sigma + \int_0^t \|\partial_t^\ell z_\tau(\sigma)\|_{H^{s-\ell}}^2 {\rm d} \sigma \nonumber\\ & \quad\leq C \biggl(\|\partial_t^\ell z(0)\|_{L_\theta^2(H^{s-\ell})}^2 + \int_0^t \|\partial_t^\ell v(\sigma)\|_{H^{s-\ell}}^2 {\rm d} \sigma \biggl) \end{align}

for every $0 \leq \ell \leq m$. We note that for every $0 \leq \ell \leq m$ it holds that $\|\partial _t^\ell z(0)\|_{L_\theta ^2(H^{s-\ell })} \leq \|z_0\|_{Z_{m,k}}$.

We establish by strong induction the following estimate for $0 \leq \ell \leq m$ and $t \geq 0$

(5.21)\begin{equation} \|\partial_t^\ell z(t)\|^2_{L_\theta^2(H^{s-\ell})} + \int_0^t \|\partial_t^\ell z(\sigma)\|^2_{L_\theta^2(H^{s-\ell})} {\rm d} \sigma + \int_0^t \|\partial_t^\ell z_\tau(\sigma)\|_{H^{s-\ell}}^2 {\rm d} \sigma \leq CI_0^2. \end{equation}

The case $\ell = 0$ has already been established in theorem 5.1. Suppose that (5.21) holds for $0,\, 1,\,\ldots,\,\ell -1$. Applying (5.1), (5.18), and the induction hypothesis, one has

\begin{align*} & \int_0^t \|\partial_t^\ell v(\sigma)\|_{H^{s-\ell}}^2 {\rm d} \sigma \\ & \quad\leq C \int_0^t \biggl( \|\partial_x u(\sigma)\|^2_{H^{s-1}} +\|w(\sigma)\|^2_{H^{s-1}} + \sum_{j = 0}^{\ell - 1} \|\partial_t^j z_\tau(\sigma)\|_{H^{s-j-1}}^2 \biggl) {\rm d} \sigma \leq CI_0^2.\qquad \end{align*}

Plugging this into inequality (5.20) proves (5.21).

Similarly, using a strong induction argument and the equation $\partial _\theta z = \partial _t z + \varepsilon z$, we can obtain estimates involving derivatives with respect to $\theta$. More precisely, we have

(5.22)\begin{equation} \|\partial_t^\ell \partial_\theta^\mu z(t)\|^2_{L_\theta^2(H^{s-\ell- \mu})} + \int_0^t \|\partial_t^\ell\partial_\theta^\mu z(\sigma)\|^2_{L_\theta^2(H^{s-\ell-\mu})} {\rm d} \sigma \leq CI_0^2 \end{equation}

for every $0 \leq \ell + \mu \leq m$. Given $0 \leq \ell \leq m$, taking the sum over all $0 \leq \mu \leq m-\ell$ in (5.22) results in

(5.23)\begin{equation} \sum_{\mu = 0}^{m-\ell} \biggl( \|\partial_t^\ell z(t)\|^2_{H_\theta^\mu(H^{s-\ell - \mu})} + \int_0^t \|\partial_t^\ell z(\sigma)\|^2_{H_\theta^\mu(H^{s-\ell - \mu})} {\rm d} \sigma\biggl) \, \leq CI_0^2. \end{equation}

Combining (5.21) and (5.23), and using the definition of $Z_{m,k}$, we obtain

\[ \sum_{\ell = 0}^m \, \biggl[\|\partial_t^\ell z(t)\|_{Z_{m-\ell,k}}^2 + \int_0^t \left( \|\partial_t^\ell z_\tau(\sigma)\|_{H^{s-\ell}}^2 + \|\partial_t^\ell z(\sigma)\|_{Z_{m-\ell,k}}^2 \right){\rm d} \sigma\biggl] \ \leq\, CI_0^2. \]

By the Poincaré inequality we have, for every $0 \leq j \leq m -1$,

\[ \|\partial_t^j z_\tau(t)\|_{H^{s-j-1}} \leq C\|\partial_t^j z(t)\|_{H_\theta^1(H^{s-j-1})}. \]

Utilizing this estimate in (5.18) together with (5.1) and (5.23), we have the other part of the desired estimate

\[ \sum_{\ell=1}^m \, \biggl[ \|\partial_t^\ell u(t)\|_{H^{s-\ell}}^2 + \int_0^t \|\partial_t^\ell u(\sigma)\|_{H^{s-\ell}}^2{\rm d} \sigma \biggl] \leq CI_0^2. \]

This completes the proof of the theorem.

The above energy estimates imply the uniform decay of $u$ and $z$ on $\mathbb {R}^d$ and $(-\tau,\,0) \times \mathbb {R}^d$, respectively. We denote by $[r]$ the largest integer less than or equal to $r \in \mathbb {R}$.

Corollary 5.3 Suppose that the conditions of theorem 5.2 hold. Let $s_0 := [{d}/{2}] + 1$ and $k \geq 1$. Then, for every $0 \leq \ell \leq m-1$ and $0 \leq j \leq m -\ell$, we have

(5.24)\begin{align} & \|\partial_t^\ell u(t)\|_{W^{s-s_0-\ell-1,\infty}} \to 0 \text{ as } t \to \infty, \text{ if } s \geq s_0 + \ell + 1, \end{align}
(5.25)\begin{align} & \|\partial_t^\ell (v,w)(t)\|_{W^{s-s_0-\ell,\infty}} \to 0 \text{ as } t \to \infty, \text{ if } s \geq s_0 + \ell, \end{align}
(5.26)\begin{align} & \|\partial_t^\ell z(t)\|_{H_\theta^j(W^{s-s_0-\ell-j,\infty})} \to 0 \text{ as } t \to \infty, \text{ if } s \geq s_0 + \ell + j. \end{align}

In particular, for every $0 \leq \ell \leq m-1$ and $0 \leq j \leq m - \ell -1$, we have

(5.27)\begin{equation} \|\partial_t^\ell z(t)\|_{W_\theta^{j,\infty}(W^{s-s_0-\ell-j-1,\infty})} \to 0 \text{ as } t \to \infty, \qquad \text{if }s \geq s_0 + j + \ell +1 . \end{equation}
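The proof relies on the Gagliardo–Nirenberg interpolation inequality; the specific form used below may be stated as follows (a standard inequality, paraphrased here): for $f \in H^{s_0}(\mathbb{R}^d)$,

\[ \|f\|_{L^\infty} \leq C \|\partial_x f\|_{H^{s_0-1}}^{r}\, \|f\|_{L^2}^{1-r}, \qquad r = \frac{d}{2s_0}, \]

which is applied componentwise to the derivatives $\partial_x^\alpha \partial_t^\ell u$ with $|\alpha| \leq s - s_0 - \ell - 1$.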

Proof. First, let us prove the uniform decay (5.24). To do this, we introduce the functional $\Phi _1(t) := \|\partial _x\partial _t^\ell u(t)\|^2_{H^{s-\ell -2}}$. According to (5.1) and (5.16), we have $\Phi _1 \in W^{1,1}(0,\,\infty )$ and therefore $\Phi _1(t) \to 0$ as $t \to \infty$. Let $r = d/(2s_0)$. Then, $r = d/(d+2)$ if $d$ is even and $r = d/(d+1)$ if $d$ is odd, and in either case, we have $r \in [{1}/{2},\,1)$. By virtue of the Gagliardo–Nirenberg inequality [Reference Nirenberg20], we get

\begin{align*} \|\partial_t^\ell u(t)\|_{W^{s-s_0-\ell-1,\infty}} & \leq C\|\partial_x \partial_t^\ell u(t)\|_{H^{s-\ell-2}}^r\|\partial_t^\ell u(t)\|_{H^{s-s_0 - \ell-1}}^{1-r} \\ & \leq CI_0^{1-r}\|\partial_x \partial_t^\ell u(t)\|_{H^{s-\ell-2}}^r \to 0 \end{align*}

as $t \to \infty$. To prove (5.25) and (5.26), we consider the functional

\[ \Phi_2(t) := \|\partial_t^\ell (v,w)(t)\|_{H^{s-\ell-1}}^2 + \sum_{j = 0}^{m-\ell} \|\partial_t^\ell z(t)\|^2_{H_\theta^j(H^{s-\ell-j-1})}. \]

From the energy estimates (5.1) and (5.16) we can see that $\Phi _2 \in W^{1,1}(0,\,\infty )$, and hence, $\Phi _2(t) \to 0$ as $t \to \infty$. The Gagliardo–Nirenberg inequality once more implies (5.25), and similarly

(5.28)\begin{align} & \|\partial_t^\ell \partial_\theta^\mu z(t)\|_{L_\theta^2(W^{s-s_0-\ell - j,\infty})}^2 \nonumber\\ & \quad\leq C\int_{-\tau}^0 \|\partial_t^\ell \partial_\theta^\mu \partial_x z(t,\theta)\|_{H^{s-\ell-j-1}}^{2r}\|\partial_t^\ell \partial_\theta^\mu z(t,\theta)\|_{H^{s-s_0-\ell - j}}^{2(1-r)}{\rm d} \theta \end{align}

for every $0\leq \mu \leq j$ and $0\leq j \leq m-\ell$. Applying Hölder's inequality and using the fact that $r < 1$, we get

\begin{align*} \|\partial_t^\ell \partial_\theta^\mu z(t)\|_{L_\theta^2(W^{s-s_0-\ell - j,\infty})} & \leq C\|\partial_t^\ell \partial_\theta^\mu z(t)\|_{L_\theta^2(H^{s-\ell-j})}^{r} \|\partial_t^\ell \partial_\theta^\mu z(t)\|_{L_\theta^2(H^{s-s_0-\ell - j})}^{1-r}\\ & \leq CI_0^{r}\|\partial_t^\ell \partial_\theta^\mu z(t)\|_{L_\theta^{2}(H^{s-\ell-j-1})}^{1-r}\\ & \leq CI_0^{r} \|\partial_t^\ell z(t)\|_{H_\theta^j(H^{s-\ell-j-1})}^{1-r} \end{align*}

for every $0\leq \mu \leq j$. Letting $t \to \infty$, we see that this estimate implies (5.26). Finally, (5.27) is a consequence of (5.26) and the Sobolev embedding.

The next goal is to derive time-weighted decay estimates for (4.1) under the assumption (S)$_s$. For this, we define the energy functionals

(5.29)\begin{align} N_s(t)^2 & := \sum_{j = 0}^s \sup_{0 \leq \sigma\leq t} (1+ \sigma)^j(\|\partial_x^j u(\sigma)\|_{H^{s-j}}^2 + \|\partial_x^j z(\sigma)\|^2_{L_\theta^2(H^{s-j})})\nonumber\\ D_s(t)^2 & := \sum_{j=0}^s \int_0^t (1+\sigma)^j(\|\partial_x^j(v,w,z_\tau)(\sigma)\|_{H^{s-j}}^2 + \|\partial_x^j z(\sigma)\|^2_{L_\theta^2(H^{s-j})}) {\rm d} \sigma \end{align}
(5.30)\begin{align} & \quad + \sum_{j=0}^{s-1}\int_0^t (1+\sigma)^j \|\partial_x^{j+1}u(\sigma)\|_{H^{s-j-1}}^2 {\rm d} \sigma. \end{align}

Theorem 5.4 Under the assumptions of theorem 5.1, there exists a constant $C > 0$ independent of $t$ and the initial data such that $N_s(t)^2 + D_s(t)^2 \leq CI_0^2$ for every $t \geq 0$. In particular, we have

(5.31)\begin{equation} \|\partial_x^j u(t)\|_{H^{s-j}} + \|\partial_x^j z(t)\|_{L^2_\theta(H^{s-j})} \leq C(1+t)^{-{j}/{2}} \end{equation}

for every $0 \leq j \leq s$ and $t \geq 0$. Moreover, for every $0 \leq j < s$, we have

(5.32)\begin{equation} \|\partial_x^j R u(t)\|_{H^{s-j-1}} \leq C(1+t)^{-{j}/{2} - ({1}/{2})}. \end{equation}

The proof of this theorem follows immediately from the following energy estimates together with an induction argument. Estimate (5.32) follows from (5.31) and the differential constraint.

Lemma 5.5 In the framework of theorem 5.1, there exists $C > 0$ such that we have the following time-weighted energy estimates

(5.33)\begin{align} & (1+t)^j (\|\partial_x^ju(t)\|_{H^{s-j}}^2 + \|\partial_x^jz(t)\|^2_{L_\theta^2(H^{s-j})}) \nonumber\\ & \quad+ \int_0^t (1+ \sigma)^j(\|\partial_x^j (v,w,z_\tau)(\sigma)\|_{H^{s-j}}^2 + \|\partial_x^j z(\sigma)\|^2_{L_\theta^2(H^{s-j})}) {\rm d} \sigma \leq CI_0^2 \end{align}

for every $0 \leq j \leq s$ and $t \geq 0$, and

(5.34)\begin{equation} \int_0^t (1+\sigma)^j \|\partial_x^{j+1}u(\sigma)\|_{H^{s-j-1}}^2 {\rm d} \sigma \leq CI_0^2 \end{equation}

for every $0 \leq j < s$ and $t \geq 0$.

Proof. Multiplying (5.11) by $(1+t)^j$, integrating with respect to $t$, and then taking the sum of the corresponding inequalities for $j \leq \ell \leq s$, we obtain

\begin{align*} & (1+t)^j (\|\partial_x^j u(t)\|_{H^{s-j}}^2 + \|\partial_x^j z(t)\|^2_{L_\theta^2(H^{s-j})}) \\ & \quad+ \int_0^t (1+ \sigma)^j (\|\partial_x^j (v,w,z_\tau)(\sigma)\|_{H^{s-j}}^2 + \|\partial_x^j z(\sigma)\|^2_{L_\theta^2(H^{s-j})}) {\rm d} \sigma \\ & \leq CI_0^2 + Cj \int_0^t (1+\sigma)^{j-1}(\|\partial_x^j u(\sigma)\|_{H^{s-j}}^2 + \|\partial_x^j z(\sigma)\|^2_{L_\theta^2(H^{s-j})}) {\rm d} \sigma \end{align*}

for every $0 \leq j \leq s$. On the other hand, if we multiply (5.14) by $(1+ t)^{j}$, integrate from $0$ to $t$, and then take the sum for every $j \leq \ell \leq s-1$, we get

\begin{align*} & \int_0^t (1+ \sigma)^j(\|\partial_x^{j+1}u(\sigma) \|_{H^{s-j-1}}^2 + \|\partial_x^{j+1} z(\sigma)\|^2_{L_\theta^2(H^{s-j-1})}){\rm d} \sigma \nonumber\\ & \quad\leq CI_0^2 + Cj \int_0^t (1+\sigma)^{j-1} (\|\partial_x^{j+1} u(\sigma)\|_{H^{s-j-1}}^2 + \|\partial_x^{j+1} z(\sigma)\|^2_{L_\theta^2(H^{s-j-1})}){\rm d} \sigma. \end{align*}

An induction argument on $j$ then shows that these estimates imply (5.33) and (5.34).
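To see how the induction on $j$ closes, observe that the extra term generated by differentiating the time weight at level $j \geq 1$ is exactly the dissipation already controlled at level $j-1$; for instance,

\[ j\int_0^t (1+\sigma)^{j-1} \|\partial_x^{j} u(\sigma)\|_{H^{s-j}}^2 \,{\rm d} \sigma \leq CI_0^2 \]

by (5.34) at level $j-1$, while the $z$-term is absorbed using (5.33) at level $j-1$ together with the elementary bound $\|\partial_x^{j} z\|_{L_\theta^2(H^{s-j})} \leq \|\partial_x^{j-1} z\|_{L_\theta^2(H^{s-j+1})}$.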

Due to the dissipative structure of the damping matrix $L$, we have the following better decay for the components $w$ and $z$, assuming that the kernel of $L$ lies in the kernel of its symmetric part $L_1$. More precisely:

  1. (L)* The matrix $P_L$ commutes with $GA^0$ and $SA^0$ and we have $\text {Ker}(L) \subset \text {Ker}(L_1)$.

Theorem 5.6 Suppose that the conditions of theorem 5.1 hold and in addition that (L) $_*$ is satisfied. Then, we have

(5.35)\begin{equation} \|\partial_x^j w(t)\|_{H^{s-j-1}} + \|\partial_x^j z(t)\|_{L^2_\theta(H^{s-j-1})} \leq C (1+t)^{-{j}/{2}- ({1}/{2})} \end{equation}

for every $0 \leq j < s$ and $t \geq 0$.

Proof. Taking the inner product of the differential equation for $u$ with $G\partial _x^\ell w = GP_L\partial _x^\ell u$ and then using the fact that $P_L$ and $GA^0$ commute, we have

(5.36)\begin{equation} \frac{1}{2} \frac{{\rm d}}{{\rm d}t} \langle GA^0\partial_x^\ell w,\partial_x^\ell w \rangle + \langle (GL)_1\partial_x^\ell w, \partial_x^\ell w\rangle + e^{\varepsilon\tau}\langle GM\partial_x^\ell z_\tau, \partial_x^\ell w \rangle = R_{1\ell}, \end{equation}

where $R_{1\ell } := - \sum _{k= 1}^d \langle GA^k\partial _{x_k}\partial _x^\ell u,\, \partial _x^\ell w \rangle$. We claim that $P_{L_1}w = v$. Indeed, since $(I-P_L)u$ is in the kernel of $L$, and hence in the kernel of $L_1$, we have $P_{L_1} (u - P_Lu) = 0$. Using the definition of $w$ and $v$, this implies our claim. Since $P_{(GL)_1} = P_{L_1}$ and the range of $GM$ is orthogonal to the kernel of $L_1$, equation (5.36) can be written as

(5.37)\begin{equation} \frac{1}{2} \frac{{\rm d}}{{\rm d}t} \langle GA^0\partial_x^\ell w,\partial_x^\ell w \rangle + \langle (GL)_1\partial_x^\ell v, \partial_x^\ell v\rangle + e^{\varepsilon\tau}\langle GM \partial_x^\ell z_\tau, \partial_x^\ell v \rangle = R_{1\ell}. \end{equation}

Similarly, from the fact that $P_L$ and $SA^0$ commute, we obtain by multiplying the equation for $u$ by $S^T \partial _x^\ell w$

(5.38)\begin{equation} \frac{1}{2} \frac{{\rm d}}{{\rm d}t} \langle SA^0\partial_x^\ell w,\partial_x^\ell w \rangle + \langle (SL)_1\partial_x^\ell w,\partial_x^\ell w\rangle = R_{2\ell}, \end{equation}

where $R_{2\ell } := - e^{\varepsilon \tau }\langle SM \partial _x^\ell z_\tau,\, \partial _x^\ell w \rangle - \sum _{k = 1}^d \langle SA^k\partial _{x_k}\partial _x^\ell u,\, \partial _x^\ell w \rangle$. Here, we use the fact that $SLu = SLw$. Multiplying (5.38) by $\alpha$, adding it to (5.37) and (5.3), and then summing over all $j \leq \ell \leq s-j-1$, we obtain

\begin{align*} \frac{1}{2}\frac{{\rm d}}{{\rm d}t} \widetilde{E}_j & + \varepsilon (N\partial_x^{j}z,\partial_x^j z)_{L_\theta^2(H^{s-j-1})} + \frac{1}{2}\sum_{\ell = j}^{s-j-1} \int_{\mathbb{R}^d} \langle \Psi(\partial_x^\ell v,\partial_x^\ell z_\tau), (\partial_x^\ell v,\partial_x^\ell z_\tau) \rangle {\rm d} x\\ & \quad+ (e^{\varepsilon\tau}-1)(GM\partial_x^j z_\tau, \partial_x^j v)_{H^{s-j-1}} + \alpha ((SL)_1 \partial_x^{j}w,\partial_x^j w)_{H^{s-j-1}} = R_j, \end{align*}

where $\widetilde {E}_j := ((GA^0 + \alpha SA^0) \partial _x^j w,\,\partial _x^j w)_{H^{s-j-1}} + (N \partial _x^j z,\, \partial _x^j z)_{L_\theta ^2(H^{s-j-1})}$ and the right-hand side is given by $R_j := \sum _{\ell = j}^{s-j-1} (R_{1\ell } + \alpha R_{2\ell })$.

Applying the Cauchy–Schwarz inequality, using condition (M), and then making $\alpha$ and $\varepsilon$ small enough, we can see that there exist positive constants $c$ and $C$ such that

\[ \frac{{\rm d}}{{\rm d}t}\widetilde{E}_j + c\widetilde{E}_j \leq C \|\partial_x^{j+1} u(t)\|_{H^{s-j-1}}^2. \]

Multiplying both sides by $e^{ct}$ and then integrating from $0$ to $t$ yields

\begin{align*} \widetilde{E}_j(t) & \leq \widetilde{E}_j(0)e^{{-}ct} + C\int_0^t e^{{-}c(t-\sigma)} \|\partial_x^{j+1} u(\sigma)\|_{H^{s-j-1}}^2 {\rm d} \sigma\\ & \leq \widetilde{E}_j(0)e^{{-}ct} + C\int_0^t e^{{-}c(t-\sigma)}(1+ \sigma)^{{-}j-1}{\rm d} \sigma\\ & \leq C(1+ t)^{{-}j-1}. \end{align*}

This estimate implies (5.35), which completes the proof of the theorem.
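The last chain of inequalities uses the standard convolution estimate $\int_0^t e^{-c(t-\sigma)}(1+\sigma)^{-j-1}\,{\rm d}\sigma \leq C(1+t)^{-j-1}$. As a quick numerical sanity check (not part of the proof; the values $c = 1$ and $j = 1$ are illustrative), one can verify that the ratio of the left-hand side to $(1+t)^{-j-1}$ stays bounded:

```python
import math

def conv_integral(t, c=1.0, j=1, n=20000):
    """Trapezoidal approximation of int_0^t exp(-c*(t-s)) * (1+s)^(-j-1) ds."""
    h = t / n
    total = 0.0
    for i in range(n + 1):
        s = i * h
        val = math.exp(-c * (t - s)) * (1.0 + s) ** (-(j + 1))
        total += 0.5 * val if i in (0, n) else val
    return total * h

# The ratio integral / (1+t)^(-j-1) should remain bounded in t, which is the
# content of the convolution estimate used above.
ratios = [conv_integral(t) / (1.0 + t) ** (-2) for t in (1.0, 5.0, 20.0, 80.0)]
print([round(r, 3) for r in ratios])
```

The ratio approaches $1$ as $t$ grows, since the exponential kernel concentrates its mass near $\sigma = t$, where the algebraic weight is comparable to $(1+t)^{-j-1}$.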

Next, we have the following estimates on the spatio-temporal derivatives. With additional structure on the matrices associated with the constraints we get better decay.

  1. (Q)* For each $1 \leq l \leq d$ there exists an $n_1\times n$ matrix $\widetilde {Q}^l$ such that $\varPi _1Q^l = \widetilde {Q}^l P_{L}$.

Corollary 5.7 In the framework of theorem 5.2, for $0 \leq \ell \leq m$, we have

(5.39)\begin{align} & \|\partial_t^\ell\partial_x^j u(t)\|_{H^{s-\ell - j}} + \|\partial_t^\ell \partial_x^j z(t)\|_{H_\theta^\nu(H^{s-\ell-j-\nu})}\leq C(1 + t)^{-({j}/{2}) } \end{align}
(5.40)\begin{align} & \|\partial_t^\ell \partial_x^\mu Ru(t) \|_{H^{s-\ell -\mu- 1}} \leq C(1+t)^{-({\mu}/{2}) -({1}/{2})} \end{align}

for every $0 \leq j \leq s-\ell$, $0 \leq \nu \leq s -\ell -j$ and $0\leq \mu < s-\ell$. In addition, if (L) $_*$ and (Q) $_*$ are satisfied, then we have

(5.41)\begin{align} & \|\partial_t^\ell \partial_x^j w(t) \|_{H^{s-\ell - j - 1}} + \|\partial_t^\ell \partial_x^j z(t)\|_{H_\theta^\nu(H^{s-\ell-j-\nu-1})}\leq C(1 + t)^{-({j}/{2}) - ({1}/{2})} \end{align}
(5.42)\begin{align} & \|\partial_t^\ell \partial_x^\mu Ru(t) \|_{H^{s-\ell - \mu-2}} \leq C(1+t)^{-({\mu}/{2}) - 1} \end{align}

for every $0 \leq j \leq s - \ell - 1$ and $0 \leq \nu \leq s-\ell -j-1$, and for every $0\leq \mu < s-\ell -1$.

Proof. Applying $\partial _x^\nu$ to (5.17) and then taking the sum over $j \leq \nu \leq s - \ell$ yields

(5.43)\begin{align} \|\partial_t^\ell \partial_x^j u(t)\|^2_{H^{s-\ell - j}} & \leq C\|\partial_t^{\ell-1}\partial_x^{j+1} u(t)\|^2_{H^{s-\ell -j-1}} + C\|\partial_t^{\ell-1}\partial_x^j w(t)\|^2_{H^{s-\ell-j}} \nonumber\\ & \quad + C\|\partial_t^{\ell-1} \partial_x^j z_\tau(t)\|^2_{H^{s-\ell - j}} . \end{align}

Multiplying equation (5.19) by $(1 + t)^j$, integrating with respect to $(t,\,\theta,\,x)$, and then taking the sum for $j \leq \nu \leq s-\ell$, we have

(5.44)\begin{align} & (1+t)^j \|\partial_t^\ell \partial_x^jz(t)\|_{L_\theta^2(H^{s-\ell - j})}^2 + \int_0^t (1+\sigma)^j \|\partial_t^\ell \partial_x^jz(\sigma)\|_{L_\theta^2(H^{s-\ell - j})}^2 {\rm d} \sigma \nonumber\\ & \quad+ \int_0^t (1+\sigma)^j \|\partial_t^\ell \partial_x^jz_\tau(\sigma)\|_{H^{s-\ell - j}}^2 {\rm d} \sigma \leq C I_0^2 + C\int_0^t (1+\sigma)^j \|\partial_t^\ell \partial_x^jv(\sigma)\|_{H^{s-\ell - j}}^2 {\rm d} \sigma \nonumber\\ & \quad+ Cj \int_0^t (1+\sigma)^{j-1} \|\partial_t^\ell \partial_x^jz(\sigma)\|_{L_\theta^2(H^{s-\ell - j})}^2 {\rm d} \sigma . \end{align}

The estimate involving $z$ in (5.39) when $\nu = 0$ can be obtained from (5.43) and (5.44), and when $\nu > 0$ we use the equation $\partial _\theta z = \partial _t z + \varepsilon z$ together with strong induction. The constraint and (5.39) immediately imply (5.40). Finally, (5.41) and (5.42) can be shown using the same argument as in the preceding theorem.

6. Asymptotic stability and regularity-loss decay estimates

If we replace condition (S)$_s$ by (S)$_r$, then the inequality (5.11) does not hold anymore. For this reason, we need to revisit the second and third steps of the proof of theorem 5.1. As a result, the corresponding estimates will be weaker than those that were derived from (S)$_s$.

Theorem 6.1 Assume that the conditions (L), (S), (M), (Q), (K), and (S) $_r$ hold and let $s \geq 2$. Suppose that $(u_0,\,z_0) \in (H^{s}\cap X_c) \times L_\theta ^2(H^s)$ and define $I_0^2 := \|u_0\|_{H^{s}}^2 + \|z_0\|^2_{L_\theta ^2(H^s)}$. Then, the solution of (4.1) with data $(u_0,\,e^{\varepsilon \theta }P_Mz_0)$ satisfies

(6.1)\begin{align} & \|u(t)\|_{H^{s}}^2 + \|z(t)\|_{L^2_\theta(H^{s})}^2 + \int_0^t \|\partial_x u(\sigma)\|_{H^{s-2}}^2{\rm d} \sigma \nonumber\\ & \quad+ \int_0^t \left( \|(v,z_\tau)(\sigma)\|_{H^{s}}^2 + \|w(\sigma)\|_{H^{s-1}}^2 + \|z(\sigma)\|_{L^2_\theta(H^{s})}^2 \right){\rm d} \sigma \leq CI_0^2,\quad t\geq 0. \end{align}

Proof. Note that inequality (5.5) is still satisfied. The next step is to revise the estimation of Step 2 in the proof of theorem 5.1. Since $Y(\omega )$ is nonnegative only on the kernel of $L_1$, we need to proceed in a different way to treat the first term on the right-hand side of (5.7). Recall that $Y_j := (SA^j - Q^{jT}\varPi _1WR)_2$. We rewrite the said term as follows:

(6.2)\begin{align} (Y_j\partial_{x_j}\partial_x^\ell u, \partial_x^\ell u)_{L^2} & = (Y_j\partial_{x_j}\partial_x^\ell (u-v), \partial_x^\ell (u-v))_{L^2} - (Y_j\partial_x^\ell v,\partial_{x_j} \partial_x^\ell u)_{L^2} \nonumber\\ & \quad+ (Y_j\partial_{x_j}\partial_x^\ell (u-v), \partial_x^\ell v)_{L^2}. \end{align}

Applying Parseval's identity and the fact that $\widehat {u}(\xi ) - \widehat {v}(\xi ) \in \text {Ker}(L_1)$ for every $\xi$, we obtain from condition (S)$_r$ that

\[ \sum_{j=1}^d ( Y_j\partial_{x_j}\partial_x^\ell (u-v), \partial_x^\ell (u-v))_{L^2} = (|\xi|^{2\ell + 1}Y(\omega)(\widehat{u} - \widehat{v}), \widehat{u} - \widehat{v})_{L^2} \geq 0. \]

We apply Young's inequality to estimate the last two terms on the right-hand side of (6.2) as follows:

\[ \sum_{j = 1}^d |( Y_j\partial_x^\ell v, \partial_{x_j}\partial_x^\ell u)_{L^2}| + |( Y_j\partial_{x_j}\partial_x^\ell (u-v), \partial_x^\ell v)_{L^2}| \leq \frac{\varrho}{2}\|\partial_x^{\ell+1}u\|_{L^2}^2 + C_{\varrho}\|\partial_x^{\ell} v\|_{H^1}^2 \]

for every $0 \leq \ell \leq s-1$ and $\varrho > 0$. Plugging these estimates into (5.6) and (5.7), we obtain

(6.3)\begin{equation} \frac{1}{2}\frac{{\rm d}}{{\rm d}t}E_{2,\ell} + C(\|\partial_x^\ell w\|_{L^2}^2 - \|\partial_x^\ell z_\tau\|_{L^2}^2) - C_{\varrho} \|\partial_x^{\ell} v\|_{H^1}^2 - \frac{\varrho}{2}\|\partial_x^{\ell +1}u\|_{L^2}^2 \leq 0 \end{equation}

for every $0 \leq \ell \leq s-1$. Performing a similar procedure and integrating by parts to pass the derivatives onto $v$, we get

(6.4)\begin{align} \frac{1}{2}\frac{{\rm d}}{{\rm d}t}E_{2,\ell+1} + C(\|\partial_x^{\ell+1} w\|_{L^2}^2 - \|\partial_x^{\ell+1} z_\tau\|_{L^2}^2) - C_{\varrho} \|\partial_x^{\ell + 1} v\|_{H^1}^2 - \frac{\varrho}{2}\|\partial_x^{\ell +1}u\|_{L^2}^2 \leq 0 \end{align}

for every $0 \leq \ell \leq s-2$.

Combining estimates (5.5), (5.14), (6.3), and (6.4), we obtain the following:

(6.5)\begin{align} & \frac{1}{2}\frac{{\rm d}}{{\rm d}t}\widetilde{E}_\ell + (C- \alpha C_\varrho)\|\partial_x^{\ell}v\|^2_{H^2} + (C- \alpha C - \alpha \beta C_\eta)\|\partial_x^\ell z_\tau\|^2_{H^2}\nonumber\\ & \quad+ \alpha( C - \beta C_\eta)\|\partial_x^\ell w\|_{H^1}^2 + \alpha(\beta C_\eta - \varrho)\|\partial_x^{\ell+1} u\|_{L^2}^2 + C\|\partial_x^\ell z\|^2_{L_\theta^2(H^2)}\leq 0 \end{align}

for every $0 \leq \ell \leq s-2$, where we used the abbreviation

\[ \widetilde{E}_\ell := E_{1,\ell} + E_{1,\ell+1} + E_{1,\ell+2} + \alpha(E_{2,\ell} + E_{2,\ell+1}) + \alpha\beta E_{3,\ell}. \]

Choosing the positive constants $\varrho,$ $\beta$, and $\alpha$ in such a way that $\beta < C/C_\eta$, $\varrho < \beta C_{\eta }$, and $\alpha < \min (C/C_{\varrho },\, C/(C + \beta C_\eta ))$, we can see that all the constants appearing on the left-hand side of inequality (6.5) are positive. Integrating this inequality with respect to $t$ and taking the sum over all $0\leq \ell \leq s-2$, we get the desired inequality stated in the theorem after reducing $\alpha$ and then $\beta$ if necessary.

Analogously to theorem 5.2, one can also derive estimates for the time derivatives. The details are left to the reader.

Theorem 6.2 Suppose that the conditions (L), (S), (M), (Q), (K), and (S) $_r$ hold. Assume that the initial data $(u_0,\,z_0) \in (H^{k+m} \cap X_c) \times Z_{m,k}$ is compatible up to order $m-1$ for some $k\geq 1$ and $m\geq 1$. Let $s := k+m$ and $I_0 := \|u_0\|_{H^{s}} + \|z_0\|_{Z_{m,k}}$. The solution of (4.1) satisfies

(6.6)\begin{align} & \sum_{\ell=1}^{m} \, \biggl[ \|\partial_t^\ell u(t)\|_{H^{s-\ell}}^2 + \int_0^t \|\partial_t^\ell u(\sigma)\|_{H^{s-\ell-1}}^2{\rm d} \sigma \biggl] + \sum_{\ell = 0}^{m} \|\partial_t^\ell z(t)\|_{Z_{m-\ell,k - 1}}^2 \nonumber\\ & \quad+ \sum_{\ell = 0}^{m} \int_0^t \left( \|\partial_t^\ell z_\tau(\sigma)\|_{H^{s-\ell-1}}^2 + \|\partial_t^\ell z(\sigma)\|_{Z_{m-\ell,k-1}}^2 \right){\rm d} \sigma\ \leq\, CI_0^2, \quad t \geq 0. \end{align}

The energy estimates in theorems 6.1 and 6.2 imply the following uniform decay. The proofs are similar to those of corollary 5.3 and for this reason they are omitted.

Corollary 6.3 Assume that the framework of theorem 6.2 holds. Let $s_0 := [{d}/{2}] + 1$, $\widetilde {s}_0 := s_0$ if $d > 1$, and $\widetilde {s}_0 := s_0 + 1$ if $d =1$. For every $0\leq \ell \leq m-1$ and $0 \leq j \leq m-\ell$, we have

(6.7)\begin{equation} \|\partial_t^\ell u(t)\|_{W^{s-s_0-\ell-2,\infty}} \to 0 \text{ as } t \to \infty,\qquad \text{if } s \geq s_0 + \ell + 2, \end{equation}

and (5.25), (5.26), and (5.27) are also satisfied, with $s_0$ replaced by $\widetilde {s}_0$.

One can also derive time-weighted decay estimates with condition (S)$_r$. To this end, we define the following energy functionals:

\begin{align*} N_r(t)^2 & := \sum_{j = 0}^{[s/2]} \sup_{0 \leq \sigma\leq t} (1+ \sigma)^j(\|\partial_x^j u(\sigma)\|_{H^{s-2j}}^2 + \|\partial_x^j z(\sigma)\|^2_{L_\theta^2(H^{s-2j})})\\ D_r(t)^2 & := \sum_{j=0}^{[s/2]} \int_0^t (1+\sigma)^j(\|\partial_x^j(v,z_\tau)(\sigma)\|_{H^{s-2j}}^2 + \|\partial_x^j z(\sigma)\|^2_{L_\theta^2(H^{s-2j})}) {\rm d} \sigma \\ & \quad + \sum_{j=0}^{[s/2]-1}\int_0^t (1+\sigma)^j (\|\partial_x^{j}w(\sigma)\|_{H^{s-2j-1}}^2 + \|\partial_x^{j+1}u(\sigma)\|_{H^{s-2j-2}}^2) {\rm d} \sigma. \end{align*}

Theorem 6.4 Under the assumptions of theorem 6.1, there is a constant $C > 0$, independent of $t$ and the initial data, such that $N_r(t)^2 + D_r(t)^2 \leq CI_0^2$ for every $t \geq 0$. In particular, we have

\[ \|\partial_x^j u(t)\|_{H^{s-2j}} + \|\partial_x^j z(t)\|_{L_\theta^2(H^{s-2j})} \leq C(1+t)^{-{j}/{2}} \]

for every $0 \leq j \leq [{s}/{2}]$. Furthermore, for every $0 \leq j < [{s}/{2}]$, we have

\[ \|\partial_x^j R u(t)\|_{H^{s-2j-1}} \leq C(1+t)^{-({j}/{2}) - ({1}/{2})}. \]

Proof. Given $0\leq j \leq [{s}/{2}]$, we multiply (5.5) by $(1+t)^j$, take the sum over all $j \leq \ell \leq s-j$, and integrate with respect to time to obtain the time-weighted inequality

\begin{align*} & (1+t)^j(\|\partial_x^j u(t)\|^2_{H^{s-2j}} + \|\partial_x^j z(t)\|^2_{L_\theta^2(H^{s-2j})}) \\ & \qquad+ \int_0^t (1+\sigma)^j(\|\partial_x^j(v,z_\tau)(\sigma)\|_{H^{s-2j}}^2 + \|\partial_x^j z(\sigma)\|^2_{L_\theta^2(H^{s-2j})}){\rm d} \sigma \\ & \quad\leq CI_0^2 + Cj\int_0^t (1+\sigma)^{j-1}(\|\partial_x^j u(\sigma)\|^2_{H^{s-2j}} + \|\partial_x^j z(\sigma)\|^2_{L_\theta^2(H^{s-2j})}) {\rm d} \sigma. \end{align*}

Now, given $0 \leq j < [{s}/{2}]$ we multiply (6.5) by $(1+t)^j$ and then add the corresponding terms for $j \leq \ell \leq s-j-2$ to have the estimate

\begin{align*} & \int_0^t (1+\sigma)^j (\|\partial_x^{j}w(\sigma)\|_{H^{s-2j-1}}^2 + \|\partial_x^{j+1}u(\sigma)\|_{H^{s-2j-2}}^2) {\rm d} \sigma \\ & \quad\leq CI_0^2 + Cj\int_0^t (1+\sigma)^{j-1}(\|\partial_x^j u(\sigma)\|^2_{H^{s-2j}} + \|\partial_x^j z(\sigma)\|^2_{L_\theta^2(H^{s-2j})}) {\rm d} \sigma. \end{align*}

A straightforward induction argument shows that these two estimates imply $N_r(t)^2 + D_r(t)^2 \leq CI_0^2$ for every $t \geq 0$ and for some constant $C > 0$. The rest of the theorem follows from the latter estimate and the constraint.

We also have better decay under the assumption (L)$_*$. The proof is similar to that of theorem 5.6 and we therefore omit the details.

Theorem 6.5 Assume that the conditions of theorem 6.1 hold. Suppose also that (L) $_*$ is satisfied. Then, we have

(6.8)\begin{equation} \|\partial_x^j w(t)\|_{H^{s-2j-2}} + \|\partial_x^j z(t)\|_{L^2_\theta(H^{s-2j-2})} \leq C(1+t)^{-({j}/{2})- ({1}/{2})}\end{equation}

for every $0 \leq j < [{s}/{2}]$ and $t \geq 0$.

Finally, we close this section with estimates on the time derivatives and the derivatives with respect to $\theta$ of the delay variable $z$. The proofs are the same as before and are therefore omitted.

Corollary 6.6 In the framework of theorem 6.2, for $0 \leq \ell \leq m$, we have

(6.9)\begin{align} & \|\partial_t^\ell\partial_x^j u(t)\|_{H^{s-\ell - 2j}} + \|\partial_t^\ell \partial_x^j z(t)\|_{H_\theta^\nu(H^{s-\ell-2j-\nu})}\leq C(1 + t)^{-({j}/{2}) } \end{align}
(6.10)\begin{align} & \|\partial_t^\ell \partial_x^\mu Ru(t) \|_{H^{s-\ell -2\mu- 1}} \leq C(1+t)^{-({\mu}/{2}) - ({1}/{2})} \end{align}

for every $0 \leq j \leq [(s-\ell )/2]$, $0 \leq \nu \leq s-\ell - 2j$, and $0\leq \mu < [(s-\ell )/2]$. If (L)$_*$ is satisfied, then

(6.11)\begin{equation} \|\partial_t^\ell \partial_x^j w(t) \|_{H^{s-\ell - 2j - 2}} + \|\partial_t^\ell \partial_x^j z(t)\|_{H_\theta^\nu(H^{s-\ell-2j-\nu-2})} \leq C(1 + t)^{-({j}/{2}) - ({1}/{2})} \end{equation}

for every $0 \leq j \leq [(s - \ell )/2] - 1$ and $0 \leq \nu \leq s - \ell - 2j - 2$, and if (Q) $_*$ holds, then

(6.12)\begin{equation} \|\partial_t^\ell \partial_x^\mu Ru(t) \|_{H^{s-\ell - 2\mu-4}} \leq C(1+t)^{-({\mu}/{2}) - 1} \end{equation}

for every $0\leq \mu < [(s - \ell )/2] -1$.

7. Applications to the wave, Timoshenko and Euler–Maxwell systems

In this section, we shall apply the results of § 5 and 6 to certain physical systems. These include the Timoshenko system, a system of wave equations with delay in the interaction, and the linearized Euler–Maxwell system.

7.1 Timoshenko system

Our first example is the following dissipative Timoshenko system with delay

(7.1)\begin{equation} \begin{cases} w_{tt} - w_{xx} + \psi_x = 0,\\ \psi_{tt} - a^2 \psi_{xx} - w_x + \psi + \alpha \psi_t + \beta \psi_{t\tau} = 0, \end{cases} \end{equation}

for $t > 0$ and $x \in \mathbb {R}$. The unknown scalar functions $w$ and $\psi$ represent the transversal displacement and rotation angle of a beam, respectively. The constants $\alpha$ and $a$ are positive while the sign of $\beta$ is arbitrary. As in [Reference Ide, Haramoto and Kawashima10, Reference Ide and Kawashima11, Reference Ueda, Duan and Kawashima29], by introducing the state variable $u = (w_x -\psi,\, w_t,\, a\psi _x,\, \psi _t)$, we can rewrite this system in form (1.1) with the matrices $A^0 = I$,

\[ A^1 ={-} \left( \begin{array}{@{}cccc@{}} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & a \\ 0 & 0 & a & 0 \end{array}\right), \quad L = \left( \begin{array}{@{}cccc@{}} 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ -1 & 0 & 0 & \alpha \end{array}\right), \quad M = \left( \begin{array}{@{}cccc@{}} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & \beta \end{array}\right). \]

The system has no constraints so that $Q = R = 0$, and so condition (Q) is trivially satisfied.

Note that the damping matrix is nonnegative and the delay matrix is symmetric. The kernels of $L$ and its symmetric part are given by $\text {Ker}(L) = \text {span}\{e_2,\,e_3\}$ and $\text {Ker}(L_1) = \text {span}\{e_1,\,e_2,\,e_3\}$, where $e_j$ for $1 \leq j \leq 4$ are the canonical unit vectors in $\mathbb {R}^4$, that is, $e_j$ has entry 1 in the $j$th component and zero elsewhere. Choosing the compensating matrices $S$ and $K^1$ by

\[ S ={-} \eta \left( \begin{array}{@{}cccc@{}} 0 & 0 & 0 & 1 \\ 0 & 0 & a & 0 \\ 0 & a & 0 & 0 \\ 1 & 0 & 0 & 0 \end{array}\right), \quad K^1 = \left( \begin{array}{@{}cccc@{}} 0 & 1 & 0 & 0 \\ -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 \\ 0 & 0 & 1 & 0 \end{array}\right) \]

and by choosing $\eta > 0$ small enough, conditions (S) and (K) are satisfied. Moreover, if $a \neq 1$, then (S)$_s$ is satisfied, while (S)$_r$ holds when $a = 1$. We refer to [Reference Ueda, Duan and Kawashima29] for the computations. If $\alpha > |\beta |$, then it follows from theorem 2.2 and theorem 2.3 that condition (M) holds. One can easily verify that condition (L)$_*$ holds. Therefore, the asymptotic stability and decay estimates presented in § 5 and § 6 are applicable to the state $u$ corresponding to (7.1) for $a \neq 1$ and $a = 1$, respectively. In this example, we have $P_Lu = (u_1,\,0,\,0,\,u_4)$.
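The role of the condition $\alpha > |\beta|$ can be seen already in a scalar caricature of condition (M), with the Lyapunov weights $G$ and $N$ suppressed (this sketch is not part of the original argument): writing $y := \psi_t(t)$ and $y_\tau := \psi_t(t-\tau)$, the damped and delayed terms contribute the dissipation $\alpha y^2 + \beta y y_\tau$, while augmenting the energy by $\mu \int_{t-\tau}^t \psi_t(\sigma)^2\,{\rm d}\sigma$ adds $\mu(y^2 - y_\tau^2)$ to its derivative. Young's inequality then gives

```latex
% Total dissipation of the augmented energy (scalar caricature of condition (M)):
\[
\alpha y^2 + \beta y y_\tau + \mu\,(y_\tau^2 - y^2)
  \;\geq\; \Bigl(\alpha - \mu - \tfrac{|\beta|}{2}\Bigr) y^2
        + \Bigl(\mu - \tfrac{|\beta|}{2}\Bigr) y_\tau^2 ,
\]
% which is positive definite in (y, y_\tau) for some weight \mu precisely when
% |\beta|/2 < \mu < \alpha - |\beta|/2, i.e. when \alpha > |\beta|.
```

Condition (M) is the matrix version of this calculation, with $G$ and $N$ playing the role of the scalar weights.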

7.2 System of wave equations I

Consider the following coupled system of three-dimensional wave equations with delay in one component

(7.2)\begin{equation} \begin{cases} \phi_{tt} - \Delta \phi + a \phi _t + \alpha \psi_t + \beta \phi _{t\tau} = 0,\\ \psi_{tt} - \Delta \psi - \alpha \phi _t = 0, \end{cases} \end{equation}

for $(t,\,\theta,\,x) \in (0,\,\infty )\times (-\tau,\,0) \times \mathbb {R}^3$. Here, $\phi$ and $\psi$ are scalar-valued. A similar system in the bounded case has been studied in [Reference Ait Benhassi, Ammari, Boulite and Maniar9].

When $\alpha = 0$ in (7.2), the wave equations are uncoupled, where one has a damping term with delay, while the other one is undamped. For $\alpha \neq 0$, we can think of the terms $\alpha \psi _t$ and $-\alpha \phi _t$ as feedback interconnection that links the two vibrating media through their velocities. If $\alpha > 0$, then a negative velocity $\psi _t$ accelerates the damped wave, while a negative velocity $\phi _t$ decelerates the undamped wave. This approach is related to the concept of indirect damping mechanisms introduced by Russell [Reference Russel26], wherein dissipation in one component in elastic systems can be transferred to that of the whole system.

We shall recast system (7.2) in the form of (1.1). To do this, we define the state variable $u = (\nabla \phi,\, \phi _t,\, \nabla \psi,\,\psi _t)$. Let $e_j$ and $\widetilde {e}_k$, for $1 \leq j \leq 8$ and $1 \leq k\leq 3$, denote the $j$th and $k$th canonical vectors in $\mathbb {R}^8$ and $\mathbb {R}^3$, respectively. The above system can be written in the form of (1.1) with $A^0 = I$,

\[ A^j ={-} \left( \begin{array}{@{}cccc@{}} 0 & \widetilde{e}_j & 0 & 0\\ \widetilde{e}_j^T & 0 & 0 & 0\\ 0 & 0 & 0 & \widetilde{e}_j\\ 0 & 0 & \widetilde{e}_j^T & 0 \end{array}\right), \, L = \left( \begin{array}{@{}cccc@{}} 0 & 0 & 0 & 0\\ 0 & a & 0 & \alpha\\ 0 & 0 & 0 & 0\\ 0 & -\alpha & 0 & 0 \end{array}\right), \, M = \left( \begin{array}{@{}cccc@{}} 0 & 0 & 0 & 0 \\ 0 & \beta & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right), \]

for $j=1,\,2,\,3$.

The eigenvalues of the principal symbol $A(\xi )$ are given by $\pm i|\xi |$ and the multiple eigenvalue 0. Thus, some solutions of the system do not correspond to a solution of the wave system (7.2), and to factor them out we need to add the constraints $\nabla \times (\nabla \phi ) = \nabla \times (\nabla \psi ) = 0$. The corresponding matrices are then given by $R = 0$ and

\[ Q^j = \left( \begin{array}{@{}cccc@{}} \Omega_{\widetilde{e}_j} & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & \Omega_{\widetilde{e}_j} & 0\\ 0 & 0 & 0 & 0 \end{array}\right), \qquad j=1,2,3. \]

Given $\xi = (\xi _1,\,\xi _2,\,\xi _3)^T \in \mathbb {R}^3$, the skew-symmetric matrix $\Omega _\xi$ is defined by

\[ \Omega_\xi = \left( \begin{array}{@{}rrr@{}} 0\ & -\xi_3 & \xi_2 \\ \xi_3 & 0\ & - \xi_1\\ -\xi_2 & \xi_1 & 0\ \end{array}\right) \]

so that $\Omega _\xi \psi = \xi \times \psi$, where $\xi \times \psi$ is treated as a column vector. The damping matrix is nonnegative and the delay matrix is symmetric. The kernels of $L$ and its symmetric part are given by $\text {Ker}(L) = \text {span}\{e_1,\,e_2,\,e_3,\,e_5,\,e_6,\,e_7\}$ and $\text {Ker}(L_1) = \text {span}\{e_1,\,e_2,\,e_3,\,e_5,\,e_6,\,e_7,\,e_8\}$, respectively.

Taking $S = I$, one can immediately see that (S) and (L)$_*$ are satisfied. Because $\varPi _1 = 0$ and $\varPi _2 = I$, we can take $W = 0$ and (S)$_s$ holds. Let us verify condition (K). For this purpose, define the compensating matrices

\[ K^j = \left( \begin{array}{@{}cccc@{}} 0 & -\widetilde{e}_j & 0 & 0\\ \widetilde{e}_j^T & 0 & 0 & 0\\ 0 & 0 & 0 & -\widetilde{e}_j\\ 0 & 0 & \widetilde{e}_j^T & 0 \end{array}\right), \qquad j = 1,2,3. \]

Note that $\text {Ker}(Q(\omega )) = \{ (\psi _1,\,\phi _1,\,\psi _2,\,\phi _2) : \omega \times \psi _1 = \omega \times \psi _2 = 0 \}$. If $(\psi _1,\,\phi _1,\,\psi _2,\,\phi _2) \in \text {Ker}(Q(\omega ))$, then $\omega \omega ^T \psi _k = \omega \times (\omega \times \psi _k) + \psi _k(\omega ^T \omega ) = \psi _k$ for $k = 1,\,2$ and $\omega \in \mathbb {S}^{d-1}$. From this, one can immediately see that condition (K) holds. Also, it is clear that $Q^jA^k = Q^jL = Q^j M = 0$ for every $j,\,k=1,\,2,\,3$ and as a consequence, condition (Q) is satisfied. Under the assumption $a > |\beta |$, we see that condition (M) holds. The results in § 5 can be applied to the state $u$ and the corresponding orthogonal projection onto $\text {Ker}(L)^\perp$ is given by $w = (0,\,0,\,0,\,u_4,\,0,\,0,\,0,\,u_8)$.
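For completeness, the algebraic identity invoked above can be checked directly from the definition of $\Omega_\xi$ via the vector triple product:

```latex
% For any \xi \in \mathbb{R}^3 one has \Omega_\xi^2 = \xi\xi^T - |\xi|^2 I, since
\[
\Omega_\xi^2 \psi = \xi \times (\xi \times \psi) = \xi(\xi^T\psi) - |\xi|^2 \psi .
\]
% In particular, for \omega \in \mathbb{S}^2 and \omega \times \psi_k = 0,
\[
0 = \Omega_\omega^2 \psi_k = \omega\omega^T\psi_k - \psi_k ,
\qquad\text{that is,}\qquad \psi_k = \omega\omega^T\psi_k .
\]
```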

7.3 System of wave equations II

Now let us consider the following coupled system of wave equations with delay in the interaction

(7.3)\begin{equation} \begin{cases} \phi _{tt} - \Delta \phi + a \phi _t + \alpha \phi _{t\tau} + \beta \psi_{t\tau} = 0,\\ \psi_{tt} - \Delta \psi + d\psi_t + \gamma \phi _{t\tau} + \delta \psi_{t\tau} = 0, \end{cases} \end{equation}

for $t > 0$ and $x \in \mathbb {R}^3$. This has been studied in [Reference Peralta and Ueda23] in the case of one-space dimension. The system (7.3) is a generalization of (7.2), in which the two waves have damping with delay, and the effects of the feedback interconnection also include delays. In other words, the transmission of the dissipation terms does not occur instantaneously.

The coefficient matrices for this problem are the same as those that are given in the previous subsection except for the delay and damping matrices. In the current situation, they are given by

\[ L = \left( \begin{array}{@{}cccc@{}} 0 & 0 & 0 & 0\\ 0 & a & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & d \end{array}\right), \quad M = \left( \begin{array}{@{}cccc@{}} 0 & 0 & 0 & 0\\ 0 & \alpha & 0 & \beta\\ 0 & 0 & 0 & 0\\ 0 & \gamma & 0 & \delta \end{array}\right). \]

In this case, the delay matrix is not necessarily symmetric anymore. Condition (M) holds if we have $(a-|\alpha |)(d-|\delta |) > |\beta \gamma |$, see the Appendix in [Reference Peralta and Kunisch21] for instance. More precisely, by taking $G$ and $N$ of the form

\[ G = \left( \begin{array}{@{}cccc@{}} 0 & 0 & 0 & 0\\ 0 & a_1 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & a_2 \end{array}\right), \qquad N = \left( \begin{array}{@{}cccc@{}} 0 & 0 & 0 & 0\\ 0 & a_3 & 0 & 0\\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & a_4 \end{array}\right), \]

there are positive constants $a_1$, $a_2$, $a_3$, and $a_4$ that fulfil condition (M). On the other hand, conditions (L), (S), (Q), (S)$_s$, and (L)$_*$ can be easily verified. Therefore, the results of § 5 are valid for system (7.3) with the state variable $u = (\nabla \phi,\, \phi _t,\, \nabla \psi,\, \psi _t)$.

The wave equations (7.2) and (7.3) were written as first-order hyperbolic systems with delay in terms of the velocities and the gradients of the displacements, that is, using the ansatz $u = (\nabla \phi,\, \phi _t,\, \nabla \psi,\, \psi _t)$. On one hand, this is a typical formulation for the wave equation as the $L^2$-norms of these quantities represent the potential and kinetic energies of the system. On the other hand, it is also interesting to derive estimates and study the time-asymptotic behaviour with respect to $\phi$ and $\psi$. However, the results and methods provided here cannot be applied directly to such problems. For example, estimates for $\nabla \phi$ cannot be transferred to $\phi$ since, in general, the Poincaré inequality is not valid in the whole space $\mathbb {R}^d$. Nonetheless, in the case of bounded Lipschitz domains with homogeneous Dirichlet conditions, this approach is meaningful.

For the damped wave equation without delay, Matsumura [Reference Matsumura14] obtained estimates for the displacement, including the spatio-temporal derivatives, in $L^2$ and $L^\infty$. The results were based on the analysis of the equivalent second-order differential equation with the Fourier variable as a parameter. It is not clear at this point how such methods can be applied and extended to the case of wave systems with delays. As this is outside the scope of the paper, we leave these tasks for future investigations.

7.4 Linearized Euler–Maxwell system

Our final example is the following Euler–Maxwell system arising in the study of plasma physics. We consider the system linearized at the constant equilibrium state $u_* := (\rho _*,\,0,\,0,\,B_*)$, where $\rho _* > 0$ and $B_* \in \mathbb {R}^3$,

(7.4)\begin{equation} \begin{cases} \rho_t + \rho_* \text{div} v = 0,\\ \rho_* v_t + p_* \nabla \rho ={-} \rho_* (E + v \times B_*) - \rho_* (\alpha v + \beta v_\tau),\\ E_t - \nabla \times B = \rho_* v,\\ B_t + \nabla \times E = 0,\\ \text{div} E = \rho_* - \rho, \quad \text{div} B = 0, \end{cases} \end{equation}

for $t > 0$ and $x \in \mathbb {R}^3$. The unknown state variables are the density $\rho : (0,\,\infty ) \times \mathbb {R}^3 \to \mathbb {R}$, velocity $v : (0,\,\infty ) \times \mathbb {R}^3 \to \mathbb {R}^3$, electric field $E : (0,\,\infty ) \times \mathbb {R}^3 \to \mathbb {R}^3$, and magnetic field $B : (0,\,\infty ) \times \mathbb {R}^3 \to \mathbb {R}^3$. Here, $p_*$ is constant.
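As a consistency check (not spelled out above), the two divergence constraints in (7.4) are propagated by the evolution equations: taking the divergence of the third and fourth equations and using the first gives

```latex
% Propagation of the constraints in (7.4): since div(curl F) = 0,
\[
\partial_t \operatorname{div} E
  = \operatorname{div}(\nabla \times B) + \rho_* \operatorname{div} v
  = \rho_* \operatorname{div} v = -\partial_t \rho ,
\qquad
\partial_t \operatorname{div} B = -\operatorname{div}(\nabla \times E) = 0 .
\]
% Hence \partial_t(\operatorname{div} E + \rho) = 0 and \partial_t \operatorname{div} B = 0,
% so div E = \rho_* - \rho and div B = 0 hold for all times once they hold at t = 0.
```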

By letting $u := (\rho -\rho _*,\, v,\, E,\, B- B_*)$, system (7.4) can be written in the form of (1.1) with the coefficient matrices

\begin{align*} A^0 & = \left( \begin{array}{@{}cccc@{}} p_*/\rho_* & 0 & 0 & 0\\ 0 & \rho_*I & 0 & 0\\ 0 & 0 & I & 0 \\ 0 & 0 & 0 & I \end{array}\right), \quad A^j = \left( \begin{array}{@{}cccc@{}} 0 & p_*\widetilde{e}_j & 0 & 0\\ p_*\widetilde{e}_j^T & 0 & 0 & 0\\ 0 & 0 & 0 & -\Omega_{\widetilde{e}_j}\\ 0 & 0 & \Omega_{\widetilde{e}_j} & 0\end{array}\right),\\ & M =\left( \begin{array}{@{}cccc@{}} 0 & 0 & 0 & 0\\ 0 & \beta \rho_*I & 0 & 0\\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right),\quad L = \left( \begin{array}{@{}cccc@{}} 0 & 0 & 0 & 0\\ 0 & \rho_*( \alpha I - \Omega_{B_*}) & \rho_*I & 0\\ 0 & -\rho_*I & 0 & 0\\ 0 & 0 & 0 & 0 \end{array}\right). \end{align*}

The damping matrix $L$ is nonnegative and the delay matrix $M$ is symmetric. Observe that $\text {Ker}(L) = \text {span}\{e_1,\,e_8,\,e_9,\,e_{10}\}$ and $\text {Ker}(L_1) =\text {span}\{e_1,\,e_5,\,e_6,\,e_7,\,e_8,\,e_9,\,e_{10}\}$. The coefficient matrices corresponding to the differential constraints are given by

\[ Q^j = \left( \begin{array}{@{}cccc@{}} 0 & 0 & \widetilde{e}_j & 0 \\ 0 & 0 & 0 & \widetilde{e}_j \end{array}\right), \qquad\qquad R = \left( \begin{array}{@{}cccc@{}} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right). \]

It has been verified in [Reference Ueda, Duan and Kawashima29] that (S), (K), (Q), and (S)$_r$, without the conditions pertaining to the delay matrix $M$, hold with the matrices $W = (\eta p_*/\rho _*)I$,

\[ S = \eta \left( \begin{array}{@{}cccc@{}} 0 & 0 & 0 & 0\\ 0 & 0 & I & 0\\ 0 & (1/\rho_*)I & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right), \quad K^j = \left( \begin{array}{@{}cccc@{}} 0 & (1/\rho_*)\widetilde{e}_j & 0 & 0\\ - (\rho_*/p_*)\widetilde{e}_j^T & 0 & 0 & 0\\ 0 & 0 & 0 & \Omega_{\widetilde{e}_j}\\ 0 & 0 & \Omega_{\widetilde{e}_j} & 0 \end{array}\right) \]

for $\eta > 0$ sufficiently small and for $j = 1,\,2,\,3$. The properties involving the delay matrix $M$ in conditions (S) and (Q), which are not covered by that reference, can also be easily verified. Moreover, as in the previous examples, condition (M) holds when $\alpha > |\beta |$. Finally, conditions (L)$_*$ and (Q)$_*$ are satisfied with $G = I$ and the fact that

\[ P_L = \left(\begin{array}{@{}cccc@{}} 0 & 0 & 0 & 0\\ 0 & I & 0 & 0\\ 0 & 0 & I & 0\\ 0 & 0 & 0 & 0 \end{array}\right), \qquad \varPi_1 Q^j = \left( \begin{array}{@{}cccc@{}} 0 & 0 & \widetilde{e}_j & 0 \\ 0 & 0 & 0 & 0 \end{array}\right) P_L. \]

Hence, the Euler–Maxwell system (7.4) with delay has the regularity-loss type decay and the results of § 6 can be applied to this system. In this example, note that $P_Lu = (0,\,v,\,E,\,0)$, $P_{L_1}u = (0,\,v,\,0,\,0)$, and $Ru = (\rho - \rho _*,\,0,\,0,\,0)$. See also [Reference Ueda and Kawashima31] and [Reference Ueda, Wang and Kawshima32] for related results.

8. Decay estimates for integrable data

If the initial data and the initial history are both integrable, then we can improve the decay rates given in the previous sections. In this section, we focus only on decay estimates for the derivatives with respect to space; the derivatives with respect to time and the delay variable can be handled in a similar way as in the previous sections. The basic idea is to carry out the calculations of the preceding sections in the Fourier space. More precisely, we take the Fourier transform with respect to $x$ of the differential equations and perform the calculations as in the primal space. As before, using an approximation argument, we can use the differential equations directly.

8.1 Standard decay estimates

Taking the Fourier transform of system (4.1) with respect to the spatial variable yields the following equations:

(8.1)\begin{equation} \begin{cases} A^0 \widehat{u}_t(t,\xi) + i |\xi|A(\omega) \widehat{u}(t,\xi) + L \widehat{u}(t,\xi) + e^{\varepsilon \tau}M\widehat{z}_\tau(t,\xi) = 0,\\ \widehat{z}_t(t,\theta,\xi) - \widehat{z}_\theta(t,\theta,\xi) + \varepsilon\widehat{z}(t,\theta,\xi) = 0, \\ \widehat{z}(t,0,\xi) = P_M \widehat{u}(t,\xi),\\ \widehat{z}(0,\theta, \xi) = e^{\varepsilon\theta}P_M\widehat{z}_0(\theta,\xi),\\ (i|\xi|Q(\omega) + R) \widehat{u}(t,\xi) = 0, \\ \widehat{u}(0,\xi) = \widehat{u}_0(\xi),\\ \end{cases} \end{equation}

where $\omega := \xi /|\xi |$ if $\xi \neq 0$ and $\omega := 0$ if $\xi = 0$.

Taking the inner product of the first equation in (8.1) with $G\widehat {u}$ and then the real part, we have

\[ \frac{1}{2}\frac{{\rm d}}{{\rm d}t}\langle GA^0\widehat{u}, \widehat{u}\rangle + \langle (GL)_1 \widehat{u}, \widehat{u}\rangle + e^{\varepsilon\tau}\text{Re}\langle GM \widehat{z}_\tau, \widehat{u}\rangle = 0. \]

Taking the inner product of the second equation in (8.1) with $N\widehat {z}$ and integrating with respect to $\theta$ yields

\begin{align*} & \frac{1}{2}\frac{{\rm d}}{{\rm d}t}\!\int_{-\tau}^0 \langle N\widehat{z}(\theta), \widehat{z}(\theta) \rangle {\rm d} \theta - \frac{1}{2} \langle NP_M \widehat{u}, P_M \widehat{u} \rangle \\ & \quad+ \frac{1}{2} \langle N \widehat{z}_\tau, \widehat{z}_\tau \rangle + \varepsilon \int_{-\tau}^0 \langle N\widehat{z}(\theta), \widehat{z}(\theta) \rangle {\rm d} \theta = 0. \end{align*}

Taking the sum of the above equations, we obtain the energy identity

\begin{align*} & \frac{1}{2} \frac{{\rm d}}{{\rm d}t} \mathcal{E}_1 + \frac{1}{2} \langle \Psi(\widehat{u},\widehat{z}_\tau), (\widehat{u},\widehat{z}_\tau)\rangle \\ & \quad+ \varepsilon \int_{-\tau}^0 \langle N\widehat{z}(\theta),\widehat{z}(\theta) \rangle {\rm d} \theta + (e^{\varepsilon\tau} - 1)\text{Re}\langle GM\widehat{z}_\tau,\widehat{u} \rangle= 0, \end{align*}

where $\mathcal {E}_1 := \langle GA^0\widehat {u},\, \widehat {u}\rangle + \int _{-\tau }^0 \langle N\widehat {z}(\theta ),\, \widehat {z}(\theta ) \rangle {\rm d} \theta$.

If we take the inner product of the first equation in (8.1) with $S^T\widehat {u}$ and then take the real part, we have

\[ \frac{1}{2}\frac{{\rm d}}{{\rm d}t} \mathcal{E}_2 + i|\xi| \langle (SA(\omega))_2 \widehat{u},\widehat{u}\rangle + \langle (SL)_1\widehat{u},\widehat{u} \rangle + e^{\varepsilon\tau}\text{Re}\langle SM\widehat{z}_\tau, \widehat{w} \rangle = 0, \]

where $\mathcal {E}_2 := \langle SA^0 \widehat {u},\,\widehat {u}\rangle$. Finally, we take the inner product of the first equation in (8.1) with $-i|\xi |K(\omega )^T \widehat {u}$ and then take the real part to get

\begin{align*} & \frac{1}{2}\frac{{\rm d}}{{\rm d}t} \mathcal{E}_3 + |\xi|^2 \langle (K(\omega)A(\omega))_1 \widehat{u},\widehat{u}\rangle \\ & \quad- |\xi| \text{Re}\langle iK(\omega)L\widehat{w},\widehat{u} \rangle - |\xi|e^{\varepsilon\tau}\text{Re}\langle iK(\omega)M\widehat{z}_\tau, \widehat{u} \rangle = 0, \end{align*}

where $\mathcal {E}_3 := -{1}/{2}|\xi | \langle i K(\omega )A^0 \widehat {u},\,\widehat {u}\rangle$. From the above energy identities, we obtain

(8.2)\begin{equation} (1+|\xi|^2)\frac{{\rm d}}{{\rm d}t}\mathcal{E} + \mathcal{D}_1 + \mathcal{D}_2 + \mathcal{D}_3 = 0\end{equation}

where $\mathcal {E} := \frac {1}{2}(\mathcal {E}_1 + \alpha \mathcal {E}_2 + \frac {\alpha \beta }{1+|\xi |^2}\mathcal {E}_3)$ and

\begin{align*} \mathcal{D}_1 & := 2^{-1}(1+ |\xi|^2)\{ \langle \Psi(\widehat{u},\widehat{z}_\tau), (\widehat{u},\widehat{z}_\tau)\rangle + \varepsilon (N\widehat{z},\widehat{z} )_{L^2(-\tau,0)} + (e^{\varepsilon\tau} - 1)\text{Re}\langle GM\widehat{z}_\tau,\widehat{u} \rangle\}\\ \mathcal{D}_2 & := \alpha (1+|\xi|^2) \{i|\xi| \langle (SA(\omega))_2 \widehat{u},\widehat{u}\rangle + \langle (SL)_1\widehat{u},\widehat{u} \rangle + e^{\varepsilon\tau}\text{Re}\langle SM\widehat{z}_\tau, \widehat{w} \rangle\} \\ \mathcal{D}_3 & := \alpha \beta \{|\xi|^2 \langle (K(\omega)A(\omega))_1 \widehat{u},\widehat{u}\rangle - |\xi| \text{Re}\langle i K(\omega)L\widehat{w},\widehat{u} \rangle - |\xi| e^{\varepsilon\tau} \text{Re}\langle iK(\omega)M\widehat{z}_\tau, \widehat{u} \rangle\}. \end{align*}

One can easily see that there exist constants $\eta > 0$ and $c_{\eta },\, C_\eta > 0$ such that for every $\alpha,\, \beta \in [0,\,\eta ]$, we have

(8.3)\begin{equation} c_{\eta}(|\widehat{u}|^2 + \|\widehat{z}\|^2_{L^2(-\tau,0)}) \leq \mathcal{E} \leq C_{\eta} (|\widehat{u}|^2 + \|\widehat{z}\|^2_{L^2(-\tau,0)}). \end{equation}

Utilizing condition (M) and then making $\varepsilon$ small enough, we obtain

(8.4)\begin{equation} \mathcal{D}_1 \geq C(1 + |\xi|^2)( |\widehat{v}|^2 + |\widehat{z}_\tau|^2 + \|\widehat{z}\|^2_{L^2(-\tau,0)} ). \end{equation}

On the other hand, using conditions (S) and (S)$_s$, we have

(8.5)\begin{align} \mathcal{D}_2 & \geq \alpha(1 + |\xi|^2)( \langle (SL+L)_1 \widehat{w},\widehat{w} \rangle - \langle L_1\widehat{v},\widehat{v} \rangle - \varrho |\widehat{w}|^2 - C_{\varrho}|\widehat{z}_\tau|^2 ) \nonumber\\ & \geq \alpha(1+|\xi|^2)( C_1|\widehat{w}|^2 - C_2|\widehat{v}|^2 - C_2|\widehat{z}_\tau|^2 ) \end{align}

by choosing $\varrho > 0$ small enough. According to the condition (K) and $\widehat {u}(t,\,\xi ) \in \text {Ker}(\varPi _2Q(\omega ))$ we have, after using Young's inequality,

(8.6)\begin{equation} \mathcal{D}_3 \geq \alpha \beta(C_1|\xi|^2|\widehat{u}|^2 - C_2|\widehat{w}|^2 - C_2|\widehat{z}_\tau|^2 ). \end{equation}

Taking $\beta$ sufficiently small and then $\alpha$ small enough, we obtain from (8.3) to (8.6) that

(8.7)\begin{equation} (1 + |\xi|^2)\frac{{\rm d}}{{\rm d}t}\mathcal{E} + c|\xi|^2(|\widehat{u}|^2 + \|\widehat{z}\|^2_{L^2(-\tau,0)}) \leq 0 \end{equation}

for some constant $c > 0$. This inequality sets up the proof of the following theorem.
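For clarity, the Gronwall step implicit in passing from (8.7) to the pointwise bound in the theorem below can be spelled out as follows, with $c_\eta$, $C_\eta$ the constants from (8.3) and $\rho(\xi) := |\xi|^2/(1+|\xi|^2)$:

```latex
% Dividing (8.7) by 1 + |\xi|^2 and using the upper bound in (8.3),
\[
\frac{{\rm d}}{{\rm d}t}\mathcal{E}(t,\xi)
  \leq - c\,\rho(\xi)\bigl(|\widehat{u}|^2 + \|\widehat{z}\|^2_{L^2(-\tau,0)}\bigr)
  \leq - \frac{c}{C_\eta}\,\rho(\xi)\,\mathcal{E}(t,\xi) ,
\]
% so that \mathcal{E}(t,\xi) \leq e^{-(c/C_\eta) t \rho(\xi)} \mathcal{E}(0,\xi);
% a second application of (8.3), this time the lower bound, then yields the
% pointwise estimate in the Fourier space.
```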

Theorem 8.1 Assume that conditions (L), (S), (M), (Q), (K), and (S) $_s$ are satisfied and let $s \geq 1$. Suppose that $(u_0,\,z_0) \in (H^{s}\cap X_c \cap L^1) \times L_\theta ^2(H^s \cap L^1)$. Then, the solution of (4.1) satisfies the pointwise estimate

(8.8)\begin{equation} |\widehat{u}(t,\xi)| + \|\widehat{z}(t,\xi)\|_{L^2(-\tau,0)} \leq C e^{{-}c t\rho(\xi)} (|\widehat{u}_0(\xi)| + \|\widehat{z}_0(\xi)\|_{L^2(-\tau,0)}) \end{equation}

where $\rho (\xi ) := |\xi |^2/(1+ |\xi |^2)$, for some constants $c>0$, $C > 0$ and for every $t \geq 0$ and $\xi \in \mathbb {R}^d$. Moreover, we have the decay estimate

(8.9)\begin{equation} \|\partial_x^\ell u(t)\|_{L^2} + \|\partial_x^\ell z(t)\|_{L_\theta^2(L^2)} \leq CI_1 (1 + t)^{-({d}/{4})- ({\ell}/{2})} + CI_{2,\ell} e^{{-}ct/2} \end{equation}

for every $0 \leq \ell \leq s$ and $t \geq 0$, where $I_1 := \|u_0\|_{L^1} + \|z_0\|_{L^2_\theta (L^1)}$ and $I_{2,\ell } := \|\partial _x^{\ell }u_0\|_{L^2} + \|\partial _x^\ell z_0\|_{L^2_\theta (L^2)}.$ In particular, for every $0 \leq \ell < s$, we have

(8.10)\begin{equation} \|\partial_x^\ell Ru(t)\|_{L^2} \leq C I_1(1 + t)^{-({d}/{4})- ({\ell}/{2}) - ({1}/{2})} + C I_{2,\ell+1} e^{{-}ct/2}. \end{equation}

Proof. The pointwise estimate (8.8) in Fourier space follows immediately from inequality (8.7) and the equivalence (8.3). The proof of (8.9) relies on integrating the square of (8.8) over $\mathbb {R}^d$ and separating the integral into low- and high-frequency parts. For $|\xi | \geq 1$, we have $2\rho (\xi ) \geq 1$ and consequently from (8.8) we infer

\begin{align*} & \int_{|\xi| \geq 1} |\xi|^{2\ell}(|\widehat{u}(t,\xi)|^2 + \|\widehat{z}(t,\xi)\|^2_{L^2(-\tau,0)}){\rm d} \xi\nonumber\\ & \quad\leq C e^{{-}ct}\int_{|\xi| \geq 1} |\xi|^{2\ell}(|\widehat{u}_0(\xi)|^2 + \|\widehat{z}_0(\xi)\|^2_{L^2(-\tau,0)}){\rm d} \xi \leq CI_{2,\ell}^2 e^{{-}ct} \end{align*}

by Plancherel's formula and Fubini's theorem. On the other hand, for $|\xi | \leq 1$, we have $\rho (\xi ) \geq \widetilde {c} |\xi |^2$ for some constant $\widetilde {c} > 0$, and thus,

\begin{align*} & \int_{|\xi| \leq 1} |\xi|^{2\ell}(|\widehat{u}(t,\xi)|^2 + \|\widehat{z}(t,\xi)\|^2_{L^2(-\tau,0)}){\rm d} \xi\\ & \quad\leq C I_1^2 \int_{|\xi|\leq 1} |\xi|^{2\ell}e^{{-}c\widetilde{c}|\xi|^2 t}{\rm d} \xi \leq CI_1^2 (1 + t)^{-({d}/{2}) - {\ell}}. \end{align*}
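The last integral bound is the standard Gaussian estimate; for completeness, here is a sketch via the substitution $\eta = \sqrt{t}\,\xi$ (writing $c' := c\widetilde{c}$):

```latex
% With \eta = \sqrt{t}\,\xi, so that {\rm d}\xi = t^{-d/2}\,{\rm d}\eta, for t \geq 1,
\[
\int_{|\xi|\leq 1} |\xi|^{2\ell} e^{-c'|\xi|^2 t}\,{\rm d}\xi
  = t^{-\frac{d}{2}-\ell} \int_{|\eta|\leq \sqrt{t}} |\eta|^{2\ell} e^{-c'|\eta|^2}\,{\rm d}\eta
  \leq t^{-\frac{d}{2}-\ell} \int_{\mathbb{R}^d} |\eta|^{2\ell} e^{-c'|\eta|^2}\,{\rm d}\eta
  \leq C t^{-\frac{d}{2}-\ell} ,
\]
% while for 0 \leq t \leq 1 the integral is bounded by the volume of the unit ball;
% combining both cases gives the factor C(1+t)^{-\frac{d}{2}-\ell}.
```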

Taking the sum of these estimates and then using Plancherel's formula and Fubini's theorem, we obtain the decay estimate (8.9). Finally, (8.10) follows immediately from the constraint and (8.9).

Corollary 8.2 Suppose that the assumptions of the previous theorem hold. If in addition, (L) $_*$ and (Q) $_*$ hold, then for some $c_0 > 0$ and $C > 0$, there holds

(8.11)\begin{align} & \|\partial_x^\ell w(t)\|_{L^2} + \|\partial_x^\ell z(t)\|_{L_\theta^2(L^2)} \leq CI_1 (1 + t)^{-({d}/{4})- ({\ell}/{2}) - ({1}/{2})} + CI_{2,\ell+1} e^{{-}c_0t} \end{align}
(8.12)\begin{align} & \|\partial_x^\ell Ru(t)\|_{L^2}\leq CI_1 (1 + t)^{-({d}/{4})- ({\ell}/{2}) - 1} + CI_{2,\ell+2} e^{{-}c_0t} \end{align}

for every $0 \leq \ell < s$ in (8.11) and $0 \leq \ell < s-1$ in (8.12).

Proof. The proof of the decay estimate (8.11) is the same as in the proof of theorem 5.6. On the other hand, (8.12) follows from (8.11) and the constraints.

Example 8.3 As an application, we consider the system of wave equations (7.3). If the initial data corresponding to this system when written in the form of (1.1) are integrable with respect to space, then the associated state satisfies the decay estimates (8.8) and (8.11). These results are also valid for the wave system (7.2) and the Timoshenko system (7.1) when $a \neq 1$.

8.2 Regularity-loss decay estimates

In this section, we derive decay estimates for integrable data where condition (S)$_s$ is replaced by the weaker condition (S)$_r$. For systems without delay, we refer to [Reference Ueda, Duan and Kawashima29]. Recent advances for regularity-loss decay estimates can be found in [Reference Chen and Dao4, Reference Liu and Ueda13, Reference Ueda28, Reference Ueda, Duan and Kawashima30] and the references therein.

Let us start with the energy identity

(8.13)\begin{equation} (1 + |\xi|^2)^2\frac{{\rm d}}{{\rm d}t}\widetilde{\mathcal{E}} + (1+|\xi|^2)\mathcal{D}_1 + \mathcal{D}_2 + \mathcal{D}_3= 0 \end{equation}

where $\widetilde {\mathcal {E}} = {1}/{2}( \mathcal {E}_1 + \alpha (1 + |\xi |^2)^{-1} \mathcal {E}_2 + \alpha \beta (1 + |\xi |^2)^{-2}\mathcal {E}_3)$, while $\mathcal {E}_j$ and $\mathcal {D}_j$, for $j = 1,\,2,\,3$, are the same terms as in the previous subsection. The equivalence (8.3) also holds with $\mathcal {E}$ replaced by $\widetilde {\mathcal {E}}$. First, we rewrite

\begin{align*} \langle i(SA(\omega))_2 \widehat{u},\widehat{u}\rangle & = \langle Y(\omega)(\widehat{u} - \widehat{v}),\widehat{u} - \widehat{v} \rangle + \langle Y(\omega) \widehat{v}, \widehat{u} \rangle + \langle Y(\omega) \widehat{u}, \widehat{v} \rangle\\ & \quad + \langle Y(\omega)\widehat{v},\widehat{v}\rangle + \langle i (Q(\omega)^T \varPi_1 WR)_2 \widehat{u}, \widehat{u} \rangle. \end{align*}

According to condition (S)$_r$, the first term on the right-hand side is nonnegative. The last term can be written as $\langle i (Q(\omega )^T \varPi _1 WR)_2 \widehat {u},\, \widehat {u} \rangle = \langle W_1 R\widehat {u},\, \widehat {u} \rangle$, which is nonnegative as well since $W_1$ is nonnegative on the range of $R$. Therefore, by applying Young's inequality, we get

\[ |\xi|(1+|\xi|^2)\langle i(SA(\omega))_2 \widehat{u},\widehat{u}\rangle \geq{-}\, \eta |\xi|^2|\widehat{u}|^2 - C_{\eta}(1+|\xi|^2)^2|\widehat{v}|^2 \]

for every $\eta > 0$, and consequently,

\[ \mathcal{D}_2 \geq \alpha(1+|\xi|^2)( C_1|\widehat{w}|^2 - C_2 |\widehat{v}|^2 - C_2|\widehat{z}_\tau|^2) - \alpha \eta |\xi|^2 |\widehat{u}|^2 - \alpha C_{\eta}(1+|\xi|^2)^2|\widehat{v}|^2. \]

Using this inequality together with (8.4) and (8.5), we have

\begin{align*} & (1+|\xi|^2)\mathcal{D}_1 + \mathcal{D}_2 + \mathcal{D}_3\geq (1 + |\xi|^2)\{ (C-\alpha C_\eta)(1+|\xi|^2) - \alpha C_2 \} |\widehat{v}|^2 \\ & \quad + \{C(1+|\xi|^2)^2 - \alpha C_2 (1 + |\xi|^2) - \alpha \beta C_2\}|\widehat{z}_\tau|^2 + \alpha|\xi|^2(\beta C_1 - \eta )|\widehat{u}|^2 \\ & \quad + C(1+|\xi|^2)^2\|\widehat{z}\|^2_{L^2(-\tau,0)} + \alpha\{ C_1(1+|\xi|^2) - \beta C_2\} |\widehat{w}|^2. \end{align*}

Choosing the constants $\alpha$, $\beta$, and $\eta$ in such a way that $\beta < C_1/C_2$, $\eta < \beta C_1$, and $\alpha < \min \{ C/(C_\eta + C_2),\, C/[C_2(\beta +1)] \}$, and then using them in the energy equation (8.13), we get

(8.14)\begin{equation} (1+|\xi|^2)^2\frac{{\rm d}}{{\rm d}t}\widetilde{\mathcal{E}} + c|\xi|^2(|\widehat{u}|^2 + \|\widehat{z}\|^2_{L^2(-\tau,0)}) \leq 0 \end{equation}

for some constant $c > 0$. With this estimate, we are now ready to establish the following theorem.

Theorem 8.4 Suppose that the conditions of theorem 8.1 hold where (S) $_s$ is replaced by (S) $_r$. The solution of (4.1) satisfies the pointwise estimate

(8.15)\begin{equation} |\widehat{u}(t,\xi)| + \|\widehat{z}(t,\xi)\|_{L^2(-\tau,0)} \leq C e^{{-}ct \varrho(\xi)} (|\widehat{u}_0(\xi)| + \|\widehat{z}_0(\xi)\|_{L^2(-\tau,0)}) \end{equation}

where $\varrho (\xi ) := |\xi |^2/(1+ |\xi |^2)^2$, for some constants $c,\, C > 0$, and for every $t \geq 0$ and $\xi \in \mathbb {R}^d$. In particular, we have the decay estimate

(8.16)\begin{equation} \|\partial_x^\ell u(t)\|_{L^2} + \|\partial_x^\ell z(t)\|_{L_\theta^2(L^2)} \leq C I_1(1 + t)^{-({d}/{4})- ({\ell}/{2})} + CI_{2,\ell+k} (1+t)^{-{k}/{2}} \end{equation}

for every $0 \leq k + \ell \leq s$ and $t \geq 0$, where $I_1 := \|u_0\|_{L^1} + \|z_0\|_{L^2_\theta (L^1)}$ and $I_{2,\ell + k} := \|\partial _x^{\ell + k }u_0\|_{L^2} + \|\partial _x^{\ell + k} z_0\|_{L^2_\theta (L^2)}.$ Moreover, for every $0 \leq \ell + k < s$, we have

\[ \|\partial_x^\ell Ru(t)\|_{L^2}\leq C I_1(1 + t)^{-({d}/{4})- ({\ell}/{2}) -({1}/{2})} + CI_{2,\ell+k+1} (1+t)^{-({k}/{2})-({1}/{2})}. \]

Proof. First, let us notice that we can obtain the same estimate as in the proof of theorem 8.1 at lower frequencies, since $\varrho (\xi ) \geq \widetilde {c}|\xi |^2$ for some constant $\widetilde {c} > 0$ and for all $|\xi | \leq 1$. For $|\xi | \geq 1$, we have $\varrho (\xi ) \geq C|\xi |^{-2}$ for some $C > 0$, and based on this we have the estimate

\begin{align*} & \int_{|\xi| \geq 1} |\xi|^{2\ell}(|\widehat{u}(t,\xi)|^2 + \|\widehat{z}(t,\xi)\|^2_{L^2(-\tau,0)}){\rm d} \xi \nonumber\\ & \quad\leq C \sup_{|\xi| \geq 1} \frac{e^{{-}cCt|\xi|^{{-}2}}}{|\xi|^{2k}}\int_{|\xi| \geq 1} |\xi|^{2(\ell + k)}(|\widehat{u}_0(\xi)|^2 + \|\widehat{z}_0(\xi)\|^2_{L^2(-\tau,0)}){\rm d} \xi\\ & \quad\leq CI_{2,\ell+k}^2 (1+t)^{-k}. \end{align*}
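The supremum over the high frequencies in the last display can be bounded by an elementary one-variable maximization (with $r := |\xi|^{-2}$ and $a := cCt$):

```latex
% The function r \mapsto r^k e^{-ar} attains its global maximum at r = k/a, so
\[
\sup_{|\xi| \geq 1} \frac{e^{-cCt|\xi|^{-2}}}{|\xi|^{2k}}
  = \sup_{0 < r \leq 1} r^k e^{-ar}
  \leq \Bigl(\frac{k}{ea}\Bigr)^{k} \qquad (a > 0,\ k \geq 1).
\]
% Together with the trivial bound r^k e^{-ar} \leq 1 (used for small t, and for
% k = 0), this produces the polynomial factor C(1+t)^{-k}.
```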

When combined with the estimate at lower frequencies, we obtain (8.16). The rest of the theorem can be verified following the comments in the proof of theorem 8.1.

We also have the following result analogous to corollary 5.4.

Corollary 8.5 In the framework of the previous theorem and with the additional conditions (L) $_*$ and (Q) $_*$, we have

(8.17)\begin{align} \|\partial_x^\ell w(t)\|_{L^2} + \|\partial_x^\ell z(t)\|_{L_\theta^2(L^2)} & \leq CI_1 (1 + t)^{-({d}/{4})- ({\ell}/{2}) - ({1}/{2})}\nonumber\\ & \quad + CI_{2,\ell + k +1} (1+t)^{-({k}/{2})- ({1}/{2})} \end{align}
(8.18)\begin{align} \|\partial_x^\ell Ru(t)\|_{L^2}& \leq CI_1 (1 + t)^{-({d}/{4})- ({\ell}/{2}) - 1} + CI_{2,\ell + k +2} (1+t)^{-({k}/{2})-1} \end{align}

for every $0 \leq \ell + k< s$ in (8.17) and $0 \leq \ell + k < s-1$ in (8.18).

Example 8.6 Estimates (8.16) and (8.17) are valid for the Timoshenko system (7.1) with delay when $a = 1$. In this system, recall that $P_L u = (u_1,\,0,\,0,\,u_4)$. Likewise, (8.16)–(8.18) are satisfied by the Euler–Maxwell system (7.4) with delay, and for this system, the state components $w = P_Lu$ and $Ru$ are given in § 7.

Acknowledgements

The author is grateful to Yoshihiro Ueda for the initial discussions on the topic of this paper and to the anonymous referee for the valuable comments relayed during the review process.
