
Macroscopic limit of a Fokker-Planck model of swarming rigid bodies

Published online by Cambridge University Press:  19 April 2024

Pierre Degond*
Affiliation:
Institut de Mathématiques de Toulouse, UMR5219, Université de Toulouse, CNRS UPS, F-31062, Toulouse Cedex 9, France
Amic Frouvelle
Affiliation:
CEREMADE, CNRS, Université Paris Dauphine – PSL, 75016, Paris, France
*
Corresponding author: Pierre Degond; Email: [email protected]

Abstract

We consider self-propelled rigid bodies interacting through local body-attitude alignment modelled by stochastic differential equations. We derive a hydrodynamic model of this system at large spatio-temporal scales and particle numbers in any dimension $n \geq 3$. This goal was already achieved in dimension $n=3$ or in any dimension $n \geq 3$ for a different system involving jump processes. However, the present work requires bridging huge conceptual and technical gaps compared with earlier ones. The key difficulty is to determine an auxiliary but essential object, the generalised collision invariant. We achieve this aim by using the geometrical structure of the rotation group, namely its maximal torus, Cartan subalgebra and Weyl group as well as other concepts of representation theory and Weyl’s integration formula. The resulting hydrodynamic model appears as a hyperbolic system whose coefficients depend on the generalised collision invariant.

Type
Papers
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press

1. Introduction

In this paper, we consider a system of self-propelled rigid bodies interacting through local body-attitude alignment. Such systems describe flocks of birds [Reference Hildenbrandt, Carere and Hemelrijk42] for instance. The model consists of a system of coupled stochastic differential equations describing the positions and body attitudes of the agents. We aim to derive a hydrodynamic model of this system at large spatio-temporal scales and particle numbers. This goal has already been achieved in dimension $n=3$ [Reference Degond, Frouvelle and Merino-Aceituno24–Reference Degond, Frouvelle, Merino-Aceituno and Trescases26] or in any dimension $n \geq 3$ for a different model where the stochastic differential equations are replaced by jump processes [Reference Degond, Diez and Frouvelle19]. In the present paper, we realise this objective for the original system of stochastic differential equations in any dimension $n \geq 3$ . The resulting hydrodynamic model appears as a hyperbolic system of first-order partial differential equations for the particle density and mean body orientation. The model is formally identical to that of [Reference Degond, Diez and Frouvelle19] but for the expressions of its coefficients, whose determination is the main difficulty here. It has been shown in ref. [Reference Degond, Frouvelle, Merino-Aceituno and Trescases27] that this system is hyperbolic, which is a good indication of its (at least local) well-posedness.

The passage from dimension $n=3$ to any dimension $n \geq 3$ involves huge conceptual and technical difficulties. One of them is the lack of an appropriate coordinate system of the $n$ -dimensional rotation group $\mathrm{SO}_n$ , by contrast to dimension $n=3$ where the Rodrigues formula [Reference Degond, Frouvelle and Merino-Aceituno24] and the quaternion representation [Reference Degond, Frouvelle, Merino-Aceituno and Trescases26] are available. This difficulty was already encountered in ref. [Reference Degond, Diez and Frouvelle19] and solved by the use of representation theory [Reference Faraut30, Reference Fulton and Harris36] and Weyl’s integration formula [Reference Simon51]. Here, additional difficulties arise because an auxiliary but essential object, the generalised collision invariant (GCI) which will be defined below, becomes highly non-trivial. Indeed, the GCI is the object that leads to explicit formulas for the coefficients of the hydrodynamic model. In this paper, we will develop a completely new method to determine the GCI relying on the Cartan subalgebra and Weyl group of $\mathrm{SO}_n$ [Reference Fulton and Harris36]. While biological agents live in a three-dimensional space, deriving models in arbitrary dimensions is useful to uncover underlying algebraic or geometric structures that would be otherwise hidden. This has been repeatedly used in physics where large dimensions (e.g. in the theory of glasses) [Reference Parisi, Urbani and Zamponi47, Ch. 1] or zero dimension (e.g. in the replica method) [Reference Castellani and Cavagna14] have provided invaluable information on real systems. In data science, data belonging to large-dimensional manifolds may also be encountered, which justifies the investigation of models in arbitrary dimensions.

Collective dynamics can be observed in systems of interacting self-propelled agents such as locust swarms [Reference Bazazi, Buhl, Hale, Anstey, Sword, Simpson and Couzin5], fish schools [Reference Lopez, Gautrais, Couzin and Theraulaz45] or bacterial colonies [Reference Beér and Ariel6] and manifests itself by coordinated movements and patterns (see e.g. the review [Reference Vicsek and Zafeiris54]). The system of interacting rigid bodies which motivates the present study is only one among many examples of collective dynamics models. Other examples are the three-zone model [Reference Aoki2, Reference Cao, Motsch, Reamy and Theisen12], the Cucker-Smale model [Reference Aceves-Sánchez, Bostan, Carrillo and Degond1, Reference Barbaro, Canizo, Carrillo and Degond3, Reference Barbaro and Degond4, Reference Carrillo, Fornasier, Rosado and Toscani13, Reference Cucker and Smale17, Reference Ha and Liu41, Reference Motsch and Tadmor46], the Vicsek model [Reference Chaté, Ginelli, Grégoire and Raynaud15, Reference Degond, Frouvelle and Liu22, Reference Degond, Frouvelle and Liu23, Reference Frouvelle and Liu35, Reference Toner and Tu52, Reference Vicsek, Czirók, Ben-Jacob, Cohen and Shochet53] (the literature is huge and the proposed citations are for illustration purposes only). Other collective dynamics models involving rigid-body attitudes or geometrically complex objects can be found in [Reference Cho, Ha and Kang16, Reference Fetecau, Ha and Park31, Reference Golse and Ha38, Reference Ha, Ko and Ryoo40, Reference Hildenbrandt, Carere and Hemelrijk42, Reference Sarlette, Bonnabel and Sepulchre48–Reference Sepulchre, Sarlette and Rouchon50].

Collective dynamics can be studied at different scales, each described by a different class of models. At the finest level of description lie particle models which consist of systems of ordinary or stochastic differential equations [Reference Aoki2, Reference Cao, Motsch, Reamy and Theisen12, Reference Chaté, Ginelli, Grégoire and Raynaud15, Reference Cucker and Smale17, Reference Lopez, Gautrais, Couzin and Theraulaz45, Reference Motsch and Tadmor46, Reference Vicsek, Czirók, Ben-Jacob, Cohen and Shochet53]. When the number of particles is large, a statistical description of the system is substituted, which leads to kinetic or mean-field models [Reference Barbaro, Canizo, Carrillo and Degond3, Reference Bertin, Droz and Grégoire7, Reference Carrillo, Fornasier, Rosado and Toscani13, Reference Degond, Diez, Frouvelle and Merino-Aceituno20, Reference Degond, Frouvelle and Liu22, Reference Figalli, Kang and Morales32, Reference Gamba and Kang37, Reference Griette and Motsch39]. At large spatio-temporal scales, the kinetic description can be approximated by fluid models, which describe the evolution of locally averaged quantities such as the density or mean orientation of the particles [Reference Bertin, Droz and Grégoire8, Reference Bertozzi, Kolokolnikov, Sun, Uminsky and Von Brecht9, Reference Degond, Diez and Na21, Reference Degond, Frouvelle, Merino-Aceituno and Trescases27, Reference Toner and Tu52]. The rigorous passage from particle to kinetic models of collective dynamics has been investigated in refs. [Reference Bolley, Cañizo and Carrillo10, Reference Briant, Diez and Merino-Aceituno11, Reference Diez29, Reference Ha and Liu41] while passing from kinetic to fluid models has been formally shown in [Reference Degond, Frouvelle and Liu22, Reference Degond, Frouvelle and Liu23, Reference Degond and Motsch28, Reference Frouvelle33] and rigorously in ref. [Reference Jiang, Xiong and Zhang44]. Phase transitions in kinetic models have also received a great deal of attention [Reference Degond, Diez, Frouvelle and Merino-Aceituno20, Reference Degond, Frouvelle and Liu22, Reference Degond, Frouvelle and Liu23, Reference Frouvelle34, Reference Frouvelle and Liu35].

The paper is organised as follows. After some preliminaries on rotation matrices, Section 2 describes the particle model and its associated kinetic model. The main result, which is the identification of the associated fluid model, is stated in Section 3. The model has the same form as that derived in ref. [Reference Degond, Diez and Frouvelle19], but for the expression of its coefficients. In classical kinetic theory, fluid models are strongly related to the collision invariants of the corresponding kinetic model. However, in collective dynamics models, there are often not enough collision invariants. This has been remedied by the concept of generalised collision invariant (GCI) first introduced in ref. [Reference Degond and Motsch28] for the Vicsek model. In Section 4, the definition and first properties of the GCI are stated and proved. To make the GCI explicit, we need to investigate the geometry of $\mathrm{SO}_n$ in more detail. This is done in Section 5 where the notions of maximal torus, Cartan subalgebra and Weyl group are recalled. After these preparations, we derive an explicit system of equations for the GCI in Section 6 and show its well-posedness. Once the GCI are known, we can proceed to the derivation of the hydrodynamic model in Section 7. This involves again the use of representation theory and the Weyl integration formula as the results from [Reference Degond, Diez and Frouvelle19] cannot be directly applied due to the different shapes of the GCI. Finally, a conclusion is drawn in Section 8. In Appendix A, an alternate derivation of the equations for the GCI is given: while Section 6 uses the variational form of these equations, Appendix A uses their strong form. The two methods rely on different Lie algebra formulas and can be seen as cross-validations of one another.

2. Microscopic model and scaling

2.1. Preliminaries: rotations and the rotation group

Before describing the model, we need to recall some facts about rotation matrices (see [Reference Degond18] for more detail). Throughout this paper, the dimension $n$ will be supposed greater than or equal to $3$ . We denote by $\mathrm{M}_n$ the space of $n \times n$ matrices with real entries and by $\mathrm{SO}_n$ the subset of $\mathrm{M}_n$ consisting of rotation matrices:

\begin{equation*} \mathrm {SO}_n = \big \{ A \in \mathrm {M}_n \quad | \quad A A^T = A^T A = \mathrm {I} \quad \mathrm {and} \quad \mathrm {det}\, A = 1 \big \}, \end{equation*}

where $A^T$ is the transpose of $A$ , $\mathrm{I}$ is the identity matrix of $\mathrm{M}_n$ and $\mathrm{det}$ stands for the determinant. For $x$ , $y \in{\mathbb R}^n$ , we denote by $x \cdot y$ and $|x|$ the Euclidean inner product and norm. Likewise, we define a Euclidean inner product on $\mathrm{M}_n$ as follows:

(2.1) \begin{equation} M \cdot N = \frac{1}{2} \mathrm{Tr} (M^T N) = \frac{1}{2} \sum _{i,j} M_{ij}N_{ij}, \quad \forall M, \, N \in \mathrm{M}_n, \end{equation}

where $\mathrm{Tr}$ is the trace. We note the factor $\frac{1}{2}$ which differs from the conventional definition of the Frobenius inner product, but which is convenient when dealing with rotation matrices. We use the same symbol $\cdot$ for vector and matrix inner products as the context easily waives the ambiguity. The set $\mathrm{SO}_n$ is a compact Lie group, i.e. it is a group for matrix multiplication and an embedded manifold in $\mathrm{M}_n$ for which group multiplication and inversion are $C^\infty$ , and it is compact. Let $A \in \mathrm{SO}_n$ . We denote by $T_A$ the tangent space to $\mathrm{SO}_n$ at $A$ . The tangent space $T_{\mathrm{I}}$ at the identity is the Lie algebra $\mathfrak{so}_n$ of skew-symmetric matrices with real entries endowed with the Lie bracket $[X,Y] = XY-YX$ , $\forall X, \, Y \in \mathfrak{so}_n$ . Let $A \in \mathrm{SO}_n$ . Then,

(2.2) \begin{equation} T_A = A \, \mathfrak{so}_n = \mathfrak{so}_n \, A. \end{equation}

For $M \in \mathrm{M}_n$ , we denote by $P_{T_A} M$ its orthogonal projection onto $T_A$ . It is written:

\begin{equation*} P_{T_A} M = A \frac {A^T M - M^T A}{2} = \frac {M A^T - A M^T}{2} A. \end{equation*}

A Riemannian structure on $\mathrm{SO}_n$ is induced by the Euclidean structure of $\mathrm{M}_n$ following from (2.1). This Riemannian metric is given by defining the inner product of two elements $M$ , $N$ of $T_A$ by $M \cdot N$ . Given that there are $P$ and $Q$ in $\mathfrak{so}_n$ such that $M=AP$ , $N = AQ$ and that $A$ is orthogonal, we have $M \cdot N = P \cdot Q$ . As any Lie group is orientable, this Riemannian structure gives rise to a Riemannian volume form and a measure $\omega$ . This Riemannian measure is left-invariant by group translation [Reference Degond18, Lemma 2.1] and is thus equal to the normalised Haar measure on $\mathrm{SO}_n$ up to a multiplicative constant. We recall that on compact Lie groups, the Haar measure is also right invariant and invariant by group inversion. On Riemannian manifolds, the gradient $\nabla$ , divergence $\nabla \cdot$ and Laplacian $\Delta$ operators can be defined. Given a smooth map $f$ : $\mathrm{SO}_n \to{\mathbb R}$ , the gradient $\nabla f(A) \in T_A$ is defined by

(2.3) \begin{equation} \nabla f (A) \cdot X = df_A(X), \quad \forall X \in T_A, \end{equation}

where $df_A$ is the derivative of $f$ at $A$ and is a linear map $T_A \to{\mathbb R}$ , and $df_A(X)$ is the image of $X$ by $df_A$ . The divergence of a smooth vector field $\phi$ on $\mathrm{SO}_n$ (i.e. a map $\mathrm{SO}_n \to \mathrm{M}_n$ such that $\phi (A) \in T_A$ , $\forall A \in \mathrm{SO}_n$ ) is defined by duality by

\begin{equation*} \int _{\mathrm {SO}_n} \nabla \cdot \varphi \, f \, dA = - \int _{\mathrm {SO}_n} \varphi \cdot \nabla f \, dA, \quad \forall f \in C^{\infty }(\mathrm {SO}_n), \end{equation*}

where $C^{\infty }(\mathrm{SO}_n)$ denotes the space of smooth maps $\mathrm{SO}_n \to{\mathbb R}$ and where we have denoted the Haar measure by $dA$ . The Laplacian of a smooth map $f$ : $\mathrm{SO}_n \to{\mathbb R}$ is defined by $\Delta f = \nabla \cdot (\nabla f)$ .

It is not easy to find a convenient coordinate system on $\mathrm{SO}_n$ to express the divergence and Laplace operators, so we will rather use an alternate expression which uses the matrix exponential $\exp$ : $\mathfrak{so}_n \to \mathrm{SO}_n$ . Let $X \in \mathfrak{so}_n$ . Then, $\varrho (X)$ denotes the map $C^{\infty }(\mathrm{SO}_n) \to C^{\infty }(\mathrm{SO}_n)$ such that

(2.4) \begin{equation} \big ( \varrho (X)(f) \big )(A) = \frac{d}{dt} \big (f(A e^{tX}) \big )|_{t=0}, \quad \forall f \in C^{\infty }(\mathrm{SO}_n), \quad \forall A \in \mathrm{SO}_n. \end{equation}

We note that

(2.5) \begin{equation} \big ( \varrho (X)(f) \big )(A) = df_A(AX) = \nabla f(A) \cdot (AX). \end{equation}
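
As an illustration of (2.3) and (2.5) (this computation will be used again in (3.21) below), consider the function $f(A) = \Gamma \cdot A$ for a fixed $\Gamma \in \mathrm{SO}_n$ . For any $X \in \mathfrak{so}_n$ ,

\begin{equation*} \nabla f (A) \cdot (AX) = \frac{d}{dt} \Big ( \frac{1}{2} \mathrm{Tr} \big ( \Gamma ^T A e^{tX} \big ) \Big ) \Big |_{t=0} = \frac{1}{2} \mathrm{Tr} \big ( \Gamma ^T A X \big ) = (A^T \Gamma ) \cdot X = \Big ( A \, \frac{A^T \Gamma - \Gamma ^T A}{2} \Big ) \cdot (AX), \end{equation*}

where the last equality uses the skew-symmetry of $X$ and the invariance of the inner product (2.1) under left multiplication by $A$ . Since $AX$ spans $T_A$ as $X$ spans $\mathfrak{so}_n$ , this shows that $\nabla f(A) = P_{T_A} \Gamma$ .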

Let $F_{ij}$ be the matrix with entries

(2.6) \begin{equation} (F_{ij})_{k\ell } = \delta _{ik} \delta _{j \ell } - \delta _{i \ell } \delta _{jk}. \end{equation}

We note that $(F_{ij})_{1 \leq i \lt j \leq n}$ forms an orthonormal basis of $\mathfrak{so}_n$ for the inner product (2.1). Then, we have [Reference Degond18, Lemma 2.2]

(2.7) \begin{equation} (\Delta f)(A) = \sum _{1 \leq i \lt j \leq n} \big ( \varrho (F_{ij})^2 f \big ) (A). \end{equation}

The expression remains valid if the basis $(F_{ij})_{1 \leq i \lt j \leq n}$ is replaced by another orthonormal basis of $\mathfrak{so}_n$ .

Finally, let $M \in \mathrm{M}_n^+$ where $\mathrm{M}_n^+$ is the subset of $\mathrm{M}_n$ consisting of matrices with positive determinant. There exists a unique pair $(A, S) \in \mathrm{SO}_n \times{\mathcal S}^+_n$ where ${\mathcal S}^+_n$ denotes the cone of symmetric positive-definite matrices, such that $M = A S$ . The pair $(A, S)$ is the polar decomposition of $M$ and we define a map $\mathcal P$ : $\mathrm{M}_n^+ \to \mathrm{SO}_n$ , $M \mapsto A$ . We note that ${\mathcal P}(M) = (M M^T)^{-1/2} M$ . We recall that a positive-definite matrix $S$ can be written $S = U D U^T$ where $U \in \mathrm{SO}_n$ and $D = \mathrm{diag}(d_1, \ldots, d_n)$ is the diagonal matrix with diagonal elements $d_1, \ldots, d_n$ with $d_i \gt 0$ , $\forall i=1, \ldots, n$ . Then, $S^{-1/2} = U D^{-1/2} U^T$ with $D^{-1/2} = \mathrm{diag}(d_1^{-1/2}, \ldots, d_n^{-1/2})$ .
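
For readers who wish to experiment numerically, the following is a minimal sketch (ours, using NumPy) of the map $\mathcal P$ through the formula ${\mathcal P}(M) = (MM^T)^{-1/2}M$ ; the function name and the test values are arbitrary choices.

```python
import numpy as np

def polar_rotation(M):
    """Rotation factor of the polar decomposition M = A S, via P(M) = (M M^T)^{-1/2} M.

    Assumes det(M) > 0, so that the result lies in SO_n.
    """
    w, U = np.linalg.eigh(M @ M.T)       # M M^T = U diag(w) U^T with w > 0
    inv_sqrt = (U * w ** -0.5) @ U.T     # (M M^T)^{-1/2}
    return inv_sqrt @ M

# quick sanity check on a small perturbation of the identity (determinant > 0)
rng = np.random.default_rng(0)
M = np.eye(4) + 0.1 * rng.standard_normal((4, 4))
A = polar_rotation(M)
print(np.allclose(A @ A.T, np.eye(4)), np.isclose(np.linalg.det(A), 1.0))
```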

2.2. The particle model

We consider a system of $N$ agents moving in an $n$ -dimensional space ${\mathbb R}^n$ . They have positions $(X_k(t))_{k=1}^N$ with $X_k(t) \in{\mathbb R}^n$ at time $t$ . All agents are identical rigid bodies. An agent’s attitude can be described by a moving direct orthonormal frame $(\omega _1^k(t), \ldots, \omega _n^k(t))$ referred to as the agent’s local body frame or body attitude. Let ${\mathbb R}^n$ be endowed with a reference direct orthonormal frame $({\textbf{e}}_1, \ldots,{\textbf{e}}_n)$ . We denote by $A_k(t)$ the unique rotation which maps $({\textbf{e}}_1, \ldots,{\textbf{e}}_n)$ to $(\omega _1^k(t), \ldots, \omega _n^k(t))$ , i.e. $\omega _j^k(t) = A_k(t){\textbf{e}}_j$ . Therefore, the $k$ -th agent’s body attitude can be described equivalently by the local basis $(\omega _1^k(t), \ldots, \omega _n^k(t))$ or by the rotation $A_k(t)$ .

The particle dynamics is as follows: particles move with speed $c_0$ in the direction of the first basis vector $\omega _1^k(t) = A_k(t){\textbf{e}}_1$ of the local body frame, hence leading to the equation

(2.8) \begin{equation} d X_k= c_0 A_k(t){\textbf{e}}_1 \, dt. \end{equation}

Body frames are subject to two processes. The first one tends to relax the particle’s body frame to a target frame which represents the average of the body frames of the neighbouring particles. The second one is diffusion noise. We first describe how the average of the body frames of the neighbouring particles is computed. Define

(2.9) \begin{equation} \tilde J_k(t) = \frac{1}{N \, R^n} \sum _{\ell =1}^N K \Big ( \frac{|X_k(t) - X_\ell (t)|}{R} \Big ) \, A_\ell (t), \end{equation}

where the sensing function $K$ : $[0,\infty ) \to [0,\infty )$ and the sensing radius $R \gt 0$ are given. Here we have assumed that the sensing function is radially symmetric for simplicity. Then, the body frame dynamics is as follows:

(2.10) \begin{equation} d A_k = \nu \, P_{T_{A_k}} \big ({\mathcal P}(\tilde J_k) \big ) \, dt + \sqrt{2D} \, P_{T_{A_k}} \circ dW_t^k, \end{equation}

where $\nu$ and $D$ are positive constants, $dW_t^k$ are independent Brownian motions on $\mathrm{M}_n$ and the symbol $\circ$ indicates that the stochastic differential equation is meant in the Stratonovich sense. Here, it is important to stress that these Brownian motions are defined using the metric induced by the inner product (2.1). In the first term, we note that the matrix $\tilde J_k$ is projected onto the rotation matrix issued from the polar decomposition by the map $\mathcal P$ . For consistency, we need to assume that $\tilde J_k(t)$ remains in $M_n^+$ , which will be true as long as the body frames of neighbouring particles are close to each other. The rotation $\Gamma _k ={\mathcal P}(\tilde J_k(t))$ can be seen as the average body frame of the neighbours to particle $k$ . For (2.10) to define a motion on $\mathrm{SO}_n$ , the right-hand side must be a tangent vector to $\mathrm{SO}_n$ at $A_k(t)$ . This is ensured by the projection operator $P_{T_{A_k}}$ which multiplies each term of the right-hand side of (2.10), and, for the second term, by the fact that the stochastic differential equation is taken in the Stratonovich sense. Indeed, according to [Reference Hsu43], the second term of (2.10) generates a Brownian motion on $\mathrm{SO}_n$ .

Finally, the particle system must be supplemented with initial conditions specifying the values of $(X_k,A_k)(0)$ . As discussed in ref. [Reference Degond, Frouvelle and Merino-Aceituno24], the first term of (2.10) relaxes the $k$ -th particle body frame to the neighbour’s average body frame $\Gamma _k$ and models the agents’ tendency to adopt the same body attitude as their neighbours. The second term of (2.10) is an idiosyncratic noise term that models either errors in the agent’s computation of the neighbour’s average body frame or the agent’s will to detach from the group and explore new environments.
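
For illustration only, here is a crude explicit time-discretisation sketch of (2.8)–(2.10) (an Euler step followed by re-projection onto $\mathrm{SO}_n$ through the polar map). It is our own sketch and is not claimed to be a Stratonovich-consistent or structure-preserving integrator; the Gaussian sensing function, the parameter values and the helper names are arbitrary choices.

```python
import numpy as np

def polar_rotation(M):
    """Rotation factor of the polar decomposition (equals (M M^T)^{-1/2} M when det M > 0)."""
    U, _, Vt = np.linalg.svd(M)
    return U @ Vt

def proj_tangent(A, M):
    """Orthogonal projection of M onto the tangent space T_A of SO_n at A, cf. Section 2.1."""
    return A @ (A.T @ M - M.T @ A) / 2

def step(X, A, dt, c0=1.0, nu=5.0, D=0.1, R=1.0):
    """One crude explicit step of (2.8)-(2.10); X has shape (N, n), A has shape (N, n, n)."""
    N, n = X.shape
    X_new = X + dt * c0 * A[:, :, 0]            # (2.8): move along the first body-frame vector A_k e_1
    A_new = np.empty_like(A)
    for k in range(N):
        d = np.linalg.norm(X - X[k], axis=1)
        w = np.exp(-(d / R) ** 2)               # illustrative smooth sensing function K
        Jk = np.tensordot(w, A, axes=(0, 0)) / (N * R ** n)   # (2.9)
        Gamma_k = polar_rotation(Jk)            # average neighbour attitude (assumes det Jk > 0)
        drift = nu * proj_tangent(A[k], Gamma_k)
        dW = np.sqrt(2 * dt) * np.random.standard_normal((n, n))   # entries ~ N(0, 2 dt), cf. (2.1)
        A_new[k] = polar_rotation(A[k] + dt * drift + np.sqrt(2 * D) * proj_tangent(A[k], dW))
    return X_new, A_new
```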

2.3. The mean-field model

When $N \to \infty$ , the particle system can be approximated by a mean-field kinetic model. Denote by $f(x,A,t)$ the probability distribution of the particles, namely $f(x,A,t)$ is the probability density of the particles at $(x,A) \in{\mathbb R}^n \times \mathrm{SO}_n$ at time $t$ . Then, provided that $K$ satisfies

\begin{equation*} \int _{{\mathbb R}^n} K(|x|) \, dx = 1, \quad \int _{{\mathbb R}^n} K(|x|) \, |x|^2\, dx \lt \infty,\end{equation*}

$f$ is the solution of the following system:

(2.11) \begin{eqnarray} && \partial _t\, f + c_0 \, A{\textbf{e}}_1 \cdot \nabla _x f + \nu \nabla _A \cdot \big ( P_{T_A} ({\mathcal P} (\tilde J_f)) f \big ) - D \Delta _A f = 0, \end{eqnarray}
(2.12) \begin{eqnarray} && \tilde J_f(x,t) = \frac{1}{R^n} \int _{{\mathbb R}^n \times \mathrm{SO}_n} K \Big ( \frac{|x-y|}{R} \Big ) \, f(y, B, t) \, B \, dy \, dB, \end{eqnarray}

where again, in (2.12), the integral over $\mathrm{SO}_n$ is taken with respect to the normalised Haar measure. The operators $\nabla _A \cdot$ and $\Delta _A$ are respectively the divergence and Laplacian on $\mathrm{SO}_n$ as defined in Section 2.1. The index $A$ is there to distinguish them from analog operators acting on the spatial variable $x$ , which will be indicated (as in $\nabla _x f$ ) with an index $x$ . A small remark may be worth making: according to [Reference Hsu43], the stochastic process defined by the second term of (2.10) has infinitesimal generator $D \tilde \Delta$ where for any $f \in C^\infty (\mathrm{SO}_n)$ ,

\begin{equation*} \tilde \Delta f(A) = \sum _{i,j=1}^n (P_{T_A} E_{ij} \cdot \nabla _A)^2 f(A), \qquad \forall A \in \mathrm {SO}_n, \end{equation*}

and where $(E_{ij})_{i,j=1}^n$ is any orthonormal basis (for the inner product (2.1)) of $\mathrm{M}_n$ . For instance, we can take the matrices $E_{ij}$ with entries $(E_{ij})_{k \ell } = \sqrt{2} \delta _{ik} \delta _{j \ell }$ . It is shown in ref. [Reference Degond18, Lemma 2.2] that $\tilde \Delta$ coincides with the Laplacian $\Delta$ defined by (2.7). This gives a justification for the last term in (2.11). We note that (2.11) is a nonlinear Fokker-Planck equation, where the nonlinearity arises in the third term.

The proof of the convergence of the particle system (2.8), (2.10) to the kinetic model (2.11), (2.12) is still open. The difficulty lies in the presence of the polar decomposition map $\mathcal P$ in (2.10), which requires controlling that the determinant of $\tilde J_k$ remains positive. In the Vicsek case, where a similar singular behaviour is observed, a local-in-time convergence result is shown in ref. [Reference Briant, Diez and Merino-Aceituno11]. This result supports the conjecture that System (2.8), (2.10) converges to System (2.11), (2.12) in the limit $N \to \infty$ in small time. We will assume this convergence in what follows.

2.4. Scaling and statement of the problem

Let $t_0$ be a time scale and define the spatial scale $x_0 = c_0 t_0$ . We introduce the following dimensionless parameters:

\begin{equation*} \tilde D = D t_0, \qquad \tilde R = \frac {R}{x_0}, \qquad \kappa = \frac {\nu }{D}. \end{equation*}

Then, we change variables and unknowns to dimensionless variables $x' = x/x_0$ , $t'=t/t_0$ and unknowns $f'(x',A,t') = x_0^n f(x,A,t)$ , $\tilde J'_{f'}(x',t') = x_0^n \tilde J_f(x,t)$ . Inserting these changes into (2.11), (2.12) leads to (omitting the primes for simplicity):

(2.13) \begin{eqnarray} && \partial _t f + A{\textbf{e}}_1 \cdot \nabla _x f + \tilde D \big [ \kappa \nabla _A \cdot \big ( P_{T_A} ({\mathcal P} (\tilde J_f)) f \big ) - \Delta _A f \big ] = 0, \end{eqnarray}
(2.14) \begin{eqnarray} && \tilde J_f(x,t) = \frac{1}{\tilde R^n} \int _{{\mathbb R}^n \times \mathrm{SO}_n} K \Big ( \frac{|x-y|}{\tilde R} \Big ) \, f(y, B, t) \, B\, dy \, dB, \end{eqnarray}

We introduce a small parameter $\varepsilon \ll 1$ and make the scaling assumption $ \frac{1}{\tilde D} = \tilde R = \varepsilon$ , while $\kappa$ is kept of order $1$ . By Taylor’s formula, we have $\tilde J_f = J_f +{\mathcal O}(\varepsilon ^2)$ where $J_f(x,t) = \int _{\mathrm{SO}_n} A \, f(x, A, t) \, dA$ . Since the map $\mathcal P$ is smooth on $\mathrm{M}_n^+$ , we get ${\mathcal P}(\tilde J_f) ={\mathcal P}(J_f) +{\mathcal O}(\varepsilon ^2)$ . Inserting these scaling assumptions and neglecting the ${\mathcal O}(\varepsilon ^2)$ terms in the above expansions (because they would have no influence on the result), we get the following perturbation problem:

(2.15) \begin{eqnarray} && \partial _t f^\varepsilon + A{\textbf{e}}_1 \cdot \nabla _x f^\varepsilon = \frac{1}{\varepsilon } \big [ - \kappa \nabla _A \cdot \big ( P_{T_A} ({\mathcal P} (J_{f^\varepsilon })) f^\varepsilon \big ) + \Delta _A f^\varepsilon \big ], \end{eqnarray}
(2.16) \begin{eqnarray} && J_f(x,t) = \int _{\mathrm{SO}_n} f(x, A, t) \, A\, dA. \end{eqnarray}

The goal of this paper is to provide the formal limit $\varepsilon \to 0$ of this problem. This problem is referred to as the hydrodynamic limit of the Fokker-Planck equation (2.15).

3. Hydrodynamic limit (I): main results and first steps of the proof

3.1. Statement of the results

We will need the following

Definition 3.1 (von Mises distribution). Let $\Gamma \in \mathrm{SO}_n$ and $\kappa \gt 0$ . The function $M_\Gamma$ : $\mathrm{SO}_n \to [0,\infty )$ such that

(3.1) \begin{equation} M_\Gamma (A) = \frac{1}{Z} \exp (\kappa \Gamma \cdot A), \quad Z = \int _{\mathrm{SO}_n} \exp (\kappa \mathrm{Tr} (A)/2) \, dA, \end{equation}

is called the von Mises distribution of orientation $\Gamma$ and concentration parameter $\kappa$ . It is the density of a probability measure on $\mathrm{SO}_n$ .

We note that $\int \exp (\kappa \Gamma \cdot A) \, dA= \int \exp (\kappa \mathrm{Tr} (A)/2) \, dA$ does not depend on $\Gamma$ thanks to the translation invariance of the Haar measure, so that $M_\Gamma$ indeed integrates to $1$ .
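
As a quick numerical sanity check of this remark (our own sketch, not part of the analysis), one can sample Haar-distributed rotations and compare Monte Carlo estimates of $\int \exp (\kappa \, \Gamma \cdot A) \, dA$ and $\int \exp (\kappa \, \mathrm{Tr}(A)/2) \, dA$ ; the parameter values below are arbitrary.

```python
import numpy as np

def haar_so(n, rng):
    """Sample from the normalised Haar measure on SO_n (QR of a Gaussian matrix, sign-corrected)."""
    Q, R = np.linalg.qr(rng.standard_normal((n, n)))
    Q = Q * np.sign(np.diag(R))        # makes Q Haar-distributed on O_n
    if np.linalg.det(Q) < 0:
        Q[:, [0, 1]] = Q[:, [1, 0]]    # swap two columns to land in SO_n
    return Q

kappa, n, S = 1.0, 4, 50_000
rng = np.random.default_rng(1)
samples = np.array([haar_so(n, rng) for _ in range(S)])
Gamma = haar_so(n, rng)                # an arbitrary orientation Gamma
Z_id = np.mean(np.exp(0.5 * kappa * np.trace(samples, axis1=1, axis2=2)))
Z_Gamma = np.mean(np.exp(0.5 * kappa * np.einsum('ij,sij->s', Gamma, samples)))
print(Z_id, Z_Gamma)                   # the two estimates agree up to Monte Carlo error
```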

The first main result of this paper is about the limit of the scaled kinetic System (2.15), (2.16) when $\varepsilon \to 0$ . We will need the following notations: for two vector fields $X=(X_i)_{i=1}^n$ and $Y=(Y_i)_{i=1}^n$ , we define the antisymmetric matrices $X \wedge Y = (X \wedge Y)_{ij}$ and $\nabla _x \wedge X = (\nabla _x \wedge X)_{ij}$ by

\begin{equation*} (X \wedge Y)_{ij} = X_i Y_j - X_j Y_i, \quad (\nabla _x \wedge X)_{ij} = \partial _{x_i} X_j - \partial _{x_j} X_i, \quad \forall i, \, j \in \{1, \ldots n\}. \end{equation*}

Then, we have

Theorem 3.2. We suppose that there is a smooth solution $f^\varepsilon$ to System (2.15), (2.16). We also suppose that $f^\varepsilon \to f^0$ as $\varepsilon \to 0$ as smoothly as needed. Then, there exist two functions $\rho$ : ${\mathbb R}^n \times [0,\infty ) \to [0,\infty )$ and $\Gamma$ : ${\mathbb R}^n \times [0,\infty ) \to \mathrm{SO}_n$ such that

(3.2) \begin{equation} f^0(x,A,t) = \rho (x,t) M_{\Gamma (x,t)}(A). \end{equation}

Furthermore, for appropriate real constants $c_1, \ldots, c_4$ , $\rho$ and $\Gamma$ satisfy the following system of equations:

(3.3) \begin{eqnarray} && \partial _t \rho + \nabla _x \cdot (\rho c_1 \Omega _1) = 0, \end{eqnarray}
(3.4) \begin{eqnarray} && \rho \big ( \partial _t \Gamma + c_2 (\Omega _1 \cdot \nabla _x) \Gamma \big ) ={\mathbb W} \Gamma, \end{eqnarray}

where

(3.5) \begin{equation} \Omega _k(x,t) = \Gamma (x,t){\textbf{e}}_k, \quad k \in \{1, \ldots, n\}, \end{equation}

and where

(3.6) \begin{equation}{\mathbb W} = - c_3 \nabla _x \rho \wedge \Omega _1 - c_4 \rho \big [ \big ( \Gamma (\nabla _x \cdot \Gamma ) \big ) \wedge \Omega _1 + \nabla _x \wedge \Omega _1 \big ]. \end{equation}

The notation $\nabla _x \cdot \Gamma$ stands for the divergence of the matrix $\Gamma$ , i.e. $(\nabla _x \cdot \Gamma )_i = \sum _{j=1}^n \partial _{x_j} \Gamma _{ij}$ and $\Gamma (\nabla _x \cdot \Gamma )$ is the vector arising from multiplying the vector $\nabla _x \cdot \Gamma$ on the left by the matrix $\Gamma$ .

System (3.3), (3.4) has been referred to in previous works [Reference Degond, Diez and Frouvelle19, Reference Degond, Frouvelle, Merino-Aceituno and Trescases27] as the Self-Organised Hydrodynamic model for body-attitude coordination. It consists of coupled first-order partial differential equations for the particle density $\rho$ and the average body-attitude $\Gamma$ and has been shown to be hyperbolic in ref. [Reference Degond, Frouvelle, Merino-Aceituno and Trescases27]. It models the system of interacting rigid bodies introduced in Section 2.2 as a fluid of which (3.3) is the continuity equation. The velocity of the fluid is $c_1 \Omega _1$ . Equation (3.4) is an evolution equation for the averaged body orientation of the particles within a fluid element, described by $\Gamma$ . The left-hand side of (3.4) describes pure transport at velocity $c_2 \Omega _1$ . In general, $c_2 \not = c_1$ , which means that such transport occurs at a velocity different from the fluid velocity. The right-hand side appears as the multiplication of $\Gamma$ itself on the left by the antisymmetric matrix $\mathbb W$ , which is a classical feature of rigid-body dynamics. The first term of (3.6) is the action of the pressure gradient which contributes to rotating $\Gamma$ so as to align $\Omega _1$ with $- \nabla _x \rho$ . The second term has a similar effect with the pressure gradient replaced by the vector $\Gamma (\nabla _x \cdot \Gamma )$ which encodes gradients of the mean body-attitude $\Gamma$ . Finally, the last term encodes self-rotations of the averaged body frame about the self-propulsion velocity $\Omega _1$ . The last two terms do not have counterparts in classical fluid hydrodynamics. We refer to [Reference Degond, Diez and Frouvelle19, Reference Degond, Diez and Na21, Reference Degond, Frouvelle, Merino-Aceituno and Trescases27] for a more detailed interpretation.

Remark 3.1. We stress that the density $\rho$ and the differentiation operator $\varrho$ defined by (2.4) have no relation and are distinguished by the different typography. The notation $\varrho$ for (2.4) is classical (see e.g. [Reference Faraut30]).

The second main result of this paper is to provide explicit formulas for the coefficients $c_1, \ldots, c_4$ . For this, we need to present additional concepts. Let $p = \lfloor \frac{n}{2} \rfloor$ be the integer part of $\frac{n}{2}$ (i.e. $n=2p$ or $n = 2p+1$ ) and

(3.7) \begin{equation} \epsilon _n = \left \{ \begin{array}{ccc} 0 & \mathrm{ if } & n=2p \\[5pt] 1 & \mathrm{ if } & n=2p +1 \end{array} \right. . \end{equation}

Let ${\mathcal T} = [\!-\!\pi, \pi )^p$ . Let $\Theta = (\theta _1, \ldots, \theta _p) \in{\mathcal T}$ . We introduce

(3.8) \begin{eqnarray} u_{2p} (\Theta ) &=& \prod _{1 \leq j\lt k \leq p} \big ( \cos \theta _j - \cos \theta _k \big )^2, \quad \text{ for } p \geq 2, \end{eqnarray}
(3.9) \begin{eqnarray} u_{2p+1} (\Theta ) &=& \prod _{1 \leq j\lt k \leq p} \big ( \cos \theta _j - \cos \theta _k \big )^2 \, \prod _{j=1}^p \sin ^2 \frac{\theta _j}{2}, \quad \text{ for } p \geq 1, \end{eqnarray}

and

(3.10) \begin{equation} m(\Theta ) = \exp \Big ( \frac{\kappa }{2} \big ( 2 \sum _{k=1}^p \cos \theta _k + \epsilon _n \big ) \Big ) \, u_n(\Theta ). \end{equation}

Let $\nabla _\Theta$ (resp. $\nabla _\Theta \cdot$ ) denote the gradient (resp. divergence) operators with respect to $\Theta$ of scalar (resp. vector) fields on $\mathcal T$ . Then, we define $\alpha$ : ${\mathcal T} \to{\mathbb R}^p$ , with $\alpha = (\alpha _i)_{i=1}^p$ as a periodic solution of the following system:

(3.11) \begin{eqnarray} && - \nabla _\Theta \cdot \big ( m \nabla _\Theta \alpha _\ell \big ) + m \sum _{k \not = \ell } \Big ( \frac{\alpha _\ell - \alpha _k}{1 - \cos\!(\theta _\ell - \theta _k)} + \frac{\alpha _\ell + \alpha _k}{1 - \cos\!(\theta _\ell + \theta _k)} \Big ) \nonumber \\[5pt] &&\qquad + \epsilon _n m \frac{\alpha _\ell }{1 - \cos \theta _\ell } = m \, \sin \theta _\ell, \quad \forall \ell \in \{1, \ldots, p \}, \end{eqnarray}

A functional framework which guarantees that $\alpha$ exists, is unique and satisfies an extra invariance property (commutation with the Weyl group) will be provided in Section 6.

Remark 3.2. In the case of $\mathrm{SO}_3$ , we have $p=1$ and a single unknown $\alpha _1(\theta _1)$ . From (3.11), we get that $\alpha _1$ satisfies

\begin{equation*} - \frac {\partial }{\partial \theta _1} \Big ( m \frac {\partial \alpha _1}{\partial \theta _1} \Big ) + \frac {m \, \alpha _1}{1 - \cos \theta _1} = m \sin \theta _1. \end{equation*}

We can compare this equation with [Reference Degond, Frouvelle and Merino-Aceituno24, Eq. (4.16)] and see that $\alpha _1$ coincides with the function $- \sin \theta \, \tilde \psi _0$ of [Reference Degond, Frouvelle and Merino-Aceituno24].
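
For the $\mathrm{SO}_3$ case, this equation is a one-dimensional periodic elliptic problem and is easy to approximate numerically. The following is a minimal finite-difference sketch of ours (the value of $\kappa$ , the grid and the discretisation are arbitrary choices, made for illustration only); note that the factor $\sin ^2 (\theta _1/2)$ in $m$ cancels the apparent singularity of $m/(1-\cos \theta _1)$ .

```python
import numpy as np

kappa = 1.0                       # illustrative concentration parameter
N = 400                           # grid points on [-pi, pi)
h = 2 * np.pi / N
theta = -np.pi + h * np.arange(N)

def m(t):
    # m(theta) in dimension n = 3 (p = 1, eps_n = 1), cf. (3.10)
    return np.exp(0.5 * kappa * (2 * np.cos(t) + 1)) * np.sin(t / 2) ** 2

# zero-order coefficient m/(1 - cos theta): the factor sin^2(theta/2) cancels the singularity
c = 0.5 * np.exp(0.5 * kappa * (2 * np.cos(theta) + 1))

# conservative periodic finite differences for -(m alpha')' + c alpha = m sin(theta)
mp, mm = m(theta + h / 2), m(theta - h / 2)
L = np.zeros((N, N))
for i in range(N):
    L[i, i] = (mp[i] + mm[i]) / h ** 2 + c[i]
    L[i, (i + 1) % N] -= mp[i] / h ** 2
    L[i, (i - 1) % N] -= mm[i] / h ** 2
alpha1 = np.linalg.solve(L, m(theta) * np.sin(theta))   # approximation of alpha_1 on the grid
```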

Thanks to these definitions, we have the

Theorem 3.3. The constants $c_1, \ldots, c_4$ involved in (3.3), (3.4), (3.6) are given by

(3.12) \begin{eqnarray} && c_1 = \frac{1}{n} \frac{\displaystyle \int \Big ( 2 \sum \cos \theta _k + \epsilon _n \Big ) \, m(\Theta ) \, d \Theta }{\displaystyle \int \, m(\Theta ) \, d \Theta }, \end{eqnarray}
(3.13) \begin{eqnarray} && c_2 = - \frac{1}{n^2-4} \times \nonumber \\[5pt] && \frac{\displaystyle \int \Big [ -n \big ( \sum \alpha _k \sin \theta _k \big ) \big ( 2 \sum \cos \theta _k + \epsilon _n \big ) + 4 \big ( \sum \alpha _k \sin \theta _k \cos \theta _k \big ) \Big ] m(\Theta ) d \Theta }{\displaystyle \int \big ( \sum \alpha _k \sin \theta _k \big ) m(\Theta ) d \Theta }, \end{eqnarray}
(3.14) \begin{eqnarray} && c_3 = \frac{1}{\kappa }, \end{eqnarray}
(3.15) \begin{eqnarray} && c_4 = - \frac{1}{n^2-4} \times \nonumber \\[5pt] && \frac{\displaystyle \int \Big [ - \big ( \sum \alpha _k \sin \theta _k \big ) \big ( 2 \sum \cos \theta _k + \epsilon _n \big ) + n \big ( \sum \alpha _k \sin \theta _k \cos \theta _k \big ) \Big ] m(\Theta ) d \Theta }{\displaystyle \int \big ( \sum \alpha _k \sin \theta _k \big ) m(\Theta ) d \Theta }, \end{eqnarray}

where the integrals are over $\Theta = (\theta _1, \ldots, \theta _p) \in{\mathcal T}$ and the sums over $k \in \{1, \ldots, p \}$ .

Remark 3.3. (i) Letting $\alpha _k(\Theta ) = - \sin \theta _k$ , we recover the formulas of [Reference Degond, Diez and Frouvelle19, Theorem 3.1] for the coefficients $c_i$ (see also Remark 6.1).

(ii) Likewise, restricting ourselves to dimension $n=3$ and setting $\alpha _1 (\theta ) = - \sin \theta \tilde \psi _0$ where $\tilde \psi _0$ is defined in [Reference Degond, Frouvelle and Merino-Aceituno24, Prop. 4.6] (see Remark 3.2), the above formulas recover those of [Reference Degond, Frouvelle and Merino-Aceituno24, Theorem 4.1] (noting that in [Reference Degond, Frouvelle and Merino-Aceituno24], what was called $c_2$ is actually our $c_2 - c_4$ ).

(iii) Hence, the results of Theorem 3.3 are consistent with and generalise previous results on either lower dimensions or simpler models.

We note that these formulas make $c_1, \ldots, c_4$ explicitly computable, at the expense of the resolution of System (3.11) and the computation of the integrals involved in the formulas above. In particular, it may be possible to compute numerical approximations of them for not-too-large values of $p$ . For large values of $p$ , analytical approximations will be required. Approximations of $\alpha$ may be obtained by considering the variational formulation (6.40) and restricting the unknown and test function spaces to appropriate (possibly finite-dimensional) subspaces.
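
As an illustration, here is a small quadrature sketch of ours for $c_1$ in dimension $n=3$ ($p=1$ , $\epsilon _n = 1$ ; the value of $\kappa$ is arbitrary). The coefficient $c_3 = 1/\kappa$ is explicit, while $c_2$ and $c_4$ would additionally require the solution $\alpha _1$ of (3.11), e.g. computed as in Remark 3.2.

```python
import numpy as np

kappa, N = 1.0, 2000
theta = -np.pi + 2 * np.pi * np.arange(N) / N                    # uniform grid on [-pi, pi)
m = np.exp(0.5 * kappa * (2 * np.cos(theta) + 1)) * np.sin(theta / 2) ** 2   # (3.10) with n = 3
c1 = ((2 * np.cos(theta) + 1) @ m) / (3 * m.sum())               # rectangle rule for (3.12); the mesh size cancels
print(c1)
```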

The main objective of this paper is to prove Theorems 3.2 and 3.3. While the remainder of Section 3, as well as Section 4 rely on the same framework as [Reference Degond, Frouvelle and Merino-Aceituno24], the subsequent sections require completely new methodologies.

3.2. Equilibria

For $f$ : $\mathrm{SO}_n \to{\mathbb R}$ smooth enough, we define the collision operator

(3.16) \begin{equation} Q(f) = - \kappa \nabla _A \cdot \big ( P_{T_A} ({\mathcal P} (J_f)) f \big ) + \Delta _A f \quad \mathrm{with} \quad J_f = \int _{\mathrm{SO}_n} f \, A \, dA, \end{equation}

so that (2.15) can be recast into

(3.17) \begin{equation} \partial _t f^\varepsilon + A{\textbf{e}}_1 \cdot \nabla _x f^\varepsilon = \frac{1}{\varepsilon } Q(f^\varepsilon ). \end{equation}

It is clear that if $f^\varepsilon \to f^0$ as $\varepsilon \to 0$ strongly as well as all its first-order derivatives with respect to $x$ and second-order derivatives with respect to $A$ , then, we must have $Q(f^0) = 0$ . Solutions of this equation are called equilibria.

We have the following lemma whose proof can be found in ref. [Reference Degond, Diez and Frouvelle19, Appendix 7] and is not reproduced here:

Lemma 3.4. We have

\begin{equation*} \int _{\mathrm {SO}_n} A \, M_\Gamma (A) \, dA = c_1 \Gamma, \quad c_1 = \Big \langle \frac { \mathrm {Tr}(A)}{n} \Big \rangle _{\exp (\kappa \mathrm {Tr} (A)/2)}, \end{equation*}

where for two functions $f$ , $g$ : $\mathrm{SO}_n \to{\mathbb R}$ , with $g \geq 0$ and $g \not \equiv 0$ , we note

\begin{equation*} \langle f(A) \rangle _{g(A)} = \frac {\int _{\mathrm {SO}_n} f(A) \, g(A) \, dA}{\int _{\mathrm {SO}_n} g(A) \, dA}. \end{equation*}

The function $c_1$ : ${\mathbb R} \to{\mathbb R}$ , $\kappa \mapsto c_1(\kappa )$ is a nonnegative, nondecreasing function which satisfies $c_1(0) = 0$ , and $\lim _{\kappa \to \infty } c_1(\kappa ) = 1$ . It is given by (3.12).

From now on, we will write $\Gamma _f ={\mathcal P}(J_f)$ . Different expressions of $Q$ are given in the following

Lemma 3.5. We have

(3.18) \begin{eqnarray} Q(f) &=& - \kappa \nabla _A \cdot \big ( P_{T_A} \Gamma _f f \big ) + \Delta _A f \end{eqnarray}
(3.19) \begin{eqnarray} &=& \nabla _A \cdot \Big [ M_{\Gamma _f} \nabla _A \Big ( \frac{f}{M_{\Gamma _f}} \Big ) \Big ] \end{eqnarray}
(3.20) \begin{eqnarray} &=& \nabla _A \cdot \Big [ f \nabla _A \big (\!-\! \kappa \Gamma _f \cdot A + \log f \big ) \Big ]. \end{eqnarray}

Proof. Formula (3.18) is nothing but (3.16) with $\Gamma _f$ in place of ${\mathcal P}(J_f)$ . To get the other two expressions, we note that

(3.21) \begin{equation} \nabla _A (\Gamma \cdot A) = P_{T_A} \Gamma, \quad \forall A, \, \Gamma \in \mathrm{SO}_n. \end{equation}

Then, (3.20) follows immediately and for (3.19) we have

\begin{eqnarray*} \nabla _A \cdot \Big [ M_{\Gamma _f} \nabla _A \Big ( \frac{f}{M_{\Gamma _f}} \Big ) \Big ] &=& \Delta _A f - \nabla _A \cdot \big [f \nabla _A \big ( \log (M_{\Gamma _f}) \big ) \big ] \\[5pt] &=& \Delta _A f - \kappa \nabla _A \cdot \big [f P_{T_{A}} \Gamma _f \big ] = Q(f), \end{eqnarray*}

which ends the proof.

The following gives the equilibria of $Q$ :

Lemma 3.6. Let $f$ : $\mathrm{SO}_n \to{\mathbb R}$ be smooth enough such that $f \geq 0$ , $\rho _f \gt 0$ and $\mathrm{det} J_f \gt 0$ . Then, we have

\begin{equation*} Q(f) = 0 \Longleftrightarrow \exists (\rho,\Gamma ) \in (0,\infty ) \times \mathrm {SO}_n \text { such that } f = \rho M_{\Gamma }. \end{equation*}

Proof. Suppose $f$ fulfils the assumptions of the lemma and is such that $Q(f)=0$ . Using (3.19) and Stokes’ theorem, this implies:

\begin{equation*} 0 = \int _{\mathrm {SO}_n} Q(f) \frac {f}{M_{\Gamma _f}} \, dA = - \int _{\mathrm {SO}_n} \Big | \nabla _A \Big ( \frac {f}{M_{\Gamma _f}} \Big ) \Big |^2 \, M_{\Gamma _f} \, dA. \end{equation*}

Hence, $f/M_{\Gamma _f}$ is constant. So, there exists $\rho \in (0,\infty )$ such that $f = \rho M_{\Gamma _f}$ which shows that $f$ is of the form $\rho M_{\Gamma }$ for some $(\rho,\Gamma ) \in (0,\infty ) \times \mathrm{SO}_n$ .

Conversely, let $f = \rho M_\Gamma$ for some $(\rho,\Gamma ) \in (0,\infty ) \times \mathrm{SO}_n$ . If we show that $\Gamma = \Gamma _f$ , then, by (3.19) we deduce that $Q(f) = 0$ . By Lemma 3.4, we have $J_{\rho M_\Gamma } = \rho c_1 \Gamma$ with $\rho c_1 \gt 0$ . Thus, $\Gamma _f ={\mathcal P}(\rho c_1 \Gamma ) = \Gamma$ , which ends the proof. Note that knowing that $c_1 \gt 0$ is crucial in that step of the proof.

Corollary 3.7. Assume that $f^\varepsilon \to f^0$ as $\varepsilon \to 0$ strongly as well as all its first-order derivatives with respect to $x$ and second-order derivatives with respect to $A$ , such that $\int _{\mathrm{SO}_n} f^0(x,A,t) \, dA \gt 0$ , $\forall (x,t) \in{\mathbb R}^n \times [0,\infty )$ . Then, there exist two functions $\rho$ : ${\mathbb R}^n \times [0,\infty ) \to (0,\infty )$ and $\Gamma$ : ${\mathbb R}^n \times [0,\infty ) \to \mathrm{SO}_n$ such that (3.2) holds.

Proof. This is an obvious consequence of Lemma 3.6 since, for any given $(x,t) \in{\mathbb R}^n \times [0,\infty )$ , the function $f^0(x,\cdot,t)$ satisfies $Q(f^0(x,\cdot,t)) = 0$ .

Now, we are looking for the equations satisfied by $\rho$ and $\Gamma$ .

3.3. The continuity equation

Proposition 3.8. The functions $\rho$ and $\Gamma$ involved in (3.2) satisfy the continuity equation (3.3).

Proof. By Stokes’s theorem, for all second-order differentiable function $f$ : $\mathrm{SO}_n \to{\mathbb R}$ , we have $ \int _{\mathrm{SO}_n} Q(f) \, dA = 0$ . Therefore, integrating (3.17) with respect to $A$ , we obtain,

(3.22) \begin{equation} \int _{\mathrm{SO}_n} (\partial _t + A{\textbf{e}}_1 \cdot \nabla _x) f^\varepsilon \, dA = 0. \end{equation}

For any distribution function $f$ , we define

\begin{equation*} \rho _f(x,t) = \int _{\mathrm {SO}_n} f(x,A,t) \, dA. \end{equation*}

Thus, with (2.16), Eq. (3.22) can be recast into

(3.23) \begin{equation} \partial _t \rho _{f^\varepsilon } + \nabla _x \cdot (J_{f^\varepsilon }{\textbf{e}}_1) = 0. \end{equation}

Now, given that the convergence of $f^\varepsilon \to f^0$ as $\varepsilon \to 0$ is supposed strong enough, and thanks to Lemma 3.4, we have

\begin{equation*} \rho _{f^\varepsilon } \to \rho _{f^0} = \rho _{\rho M_\Gamma } = \rho, \qquad J_{f^\varepsilon } \to J_{f^0} = J_{\rho M_\Gamma } = \rho c_1 \Gamma. \end{equation*}

Then, passing to the limit $\varepsilon \to 0$ in (3.23) leads to (3.3).

4. Generalised collision invariants: definition and existence

4.1. Definition and first characterisation

Now, we need an equation for $\Gamma$ . We see that the proof of Prop. 3.8 can be reproduced if we can find functions $\psi$ : $\mathrm{SO}_n \to{\mathbb R}$ such that for every second-order differentiable function $f$ : $\mathrm{SO}_n \to{\mathbb R}$ , we have

(4.1) \begin{equation} \int _{\mathrm{SO}_n} Q(f) \, \psi \, dA = 0. \end{equation}

Such a function is called a collision invariant. In the previous proof, the collision invariant $\psi = 1$ was used. Unfortunately, it can be verified that the only collision invariants of $Q$ are the constants. Thus, the previous proof cannot be reproduced to find an equation for $\Gamma$ . In order to find more equations, we have to relax the condition that (4.1) must be satisfied for all functions $f$ . This leads to the concept of generalised collision invariant (GCI). We first introduce a few more definitions.

Given $\Gamma \in \mathrm{SO}_n$ , we define the following linear Fokker-Planck operator, acting on second-order differentiable functions $f$ : $\mathrm{SO}_n \to{\mathbb R}$ :

\begin{equation*} {\mathcal Q}(f,\Gamma ) = \nabla \cdot \Big [ M_\Gamma \nabla \Big ( \frac {f}{M_\Gamma } \Big ) \Big ]. \end{equation*}

For simplicity, in the remainder of the present section as well as in Sections 5 and 6, we will drop the subscript $A$ to the $\nabla$ , $\nabla \cdot$ and $\Delta$ operators as all derivatives will be understood with respect to $A$ .

We note that

(4.2) \begin{equation} Q(f) ={\mathcal Q}(f,\Gamma _f). \end{equation}

Definition 4.1. Given $\Gamma \in \mathrm{SO}_n$ , a GCI associated with $\Gamma$ is a function $\psi$ : $\mathrm{SO}_n \to{\mathbb R}$ such that

(4.3) \begin{equation} \int _{\mathrm{SO}_n}{\mathcal Q}(f,\Gamma ) \, \psi \, dA = 0 \quad \text{for all} \quad f: \, \mathrm{SO}_n \to{\mathbb R} \quad \text{such that} \quad P_{T_\Gamma } J_f = 0. \end{equation}

The set ${\mathcal G}_\Gamma$ of GCI associated with $\Gamma$ is a vector space.

From this definition, we have the following lemma which gives a justification for why this concept is useful for the hydrodynamic limit.

Lemma 4.2. We have

(4.4) \begin{equation} \psi \in{\mathcal G}_{\Gamma _f} \quad \Longrightarrow \quad \int _{\mathrm{SO}_n} Q(f) \, \psi \, dA = 0. \end{equation}

Proof. By (4.2) and (4.3), it is enough to show that $P_{T_{\Gamma _f}} J_f = 0$ . But $\Gamma _f ={\mathcal P}(J_f)$ , so there exists a symmetric positive-definite matrix $S$ such that $J_f = \Gamma _f S$ . So,

\begin{equation*} P_{T_{\Gamma _f}} J_f = \Gamma _f \frac {\Gamma _f^T J_f - J_f^T \Gamma _f}{2} = \Gamma _f \frac {S-S^T}{2} = 0. \end{equation*}

The following lemma provides the equation solved by the GCI.

Lemma 4.3. The function $\psi$ : $\mathrm{SO}_n \to{\mathbb R}$ belongs to ${\mathcal G}_\Gamma$ if and only if $\exists P \in T_\Gamma$ such that

(4.5) \begin{equation} \nabla \cdot \big ( M_\Gamma \nabla \psi \big ) = P \cdot A \, M_\Gamma. \end{equation}

Proof. On the one hand, we can write

\begin{equation*} \int _{\mathrm {SO}_n} {\mathcal Q}(f,\Gamma ) \, \psi \, dA = \int _{\mathrm {SO}_n} f \, {\mathcal Q}^*(\psi,\Gamma ) \, dA, \end{equation*}

where ${\mathcal Q}^*(\cdot,\Gamma )$ is the formal $L^2$ -adjoint to ${\mathcal Q}(\cdot,\Gamma )$ and is given by

\begin{equation*} {\mathcal Q}^*(\psi,\Gamma ) = M_\Gamma ^{-1} \nabla \cdot \big ( M_\Gamma \nabla \psi \big ). \end{equation*}

On the other hand, we have

\begin{eqnarray*} P_{T_\Gamma } J_f =0 & \Longleftrightarrow & P_{T_\Gamma } \int _{\mathrm{SO}_n} f \, A \, dA = 0 \\[5pt] & \Longleftrightarrow & \int _{\mathrm{SO}_n} f \, A \cdot P \, dA = 0, \quad \forall P \in T_\Gamma \\[5pt] & \Longleftrightarrow & f \in \{ A \mapsto A \cdot P \, \, | \, \, P \in T_\Gamma \}^\bot \end{eqnarray*}

where the orthogonality in the last statement is meant in the $L^2$ sense. So, by (4.3), $\psi \in{\mathcal G}_\Gamma$ if and only if

\begin{equation*} \{ A \mapsto A \cdot P \, \, | \, \, P \in T_\Gamma \}^\bot \subset \{ {\mathcal Q}^*(\psi,\Gamma ) \}^\bot, \end{equation*}

or by taking orthogonals again, if and only if

(4.6) \begin{equation} \mathrm{Span} \{{\mathcal Q}^*(\psi,\Gamma ) \} \subset \{ A \mapsto A \cdot P \, \, | \, \, P \in T_\Gamma \}, \end{equation}

because both sets in (4.6) are finite-dimensional, hence closed. Statement (4.6) is equivalent to the statement that there exists $P \in T_\Gamma$ such that (4.5) holds true.

4.2. Existence and invariance properties

We now state an existence result for the GCI. First, we introduce the following spaces: $L^2(\mathrm{SO}_n)$ stands for the space of square integrable functions $f$ : $\mathrm{SO}_n \to{\mathbb R}$ endowed with the usual $L^2$ -norm $\|f\|^2_{L^2} = \int _{\mathrm{SO}_n} |f(A)|^2 \, dA$ . Then, we define $H^1(\mathrm{SO}_n) = \{ f \in L^2(\mathrm{SO}_n) \, \, | \, \, \nabla f \in L^2(\mathrm{SO}_n)\}$ (where $\nabla f$ is meant in the distributional sense), endowed with the usual $H^1$ -norm $\|f\|^2_{H^1} = \|f\|^2_{L^2} + \|\nabla f\|^2_{L^2}$ . Finally, $H^1_0(\mathrm{SO}_n)$ is the space of functions of $H^1(\mathrm{SO}_n)$ with zero mean, i.e. $f \in H^1_0(\mathrm{SO}_n) \Longleftrightarrow f \in H^1(\mathrm{SO}_n)$ and

(4.7) \begin{equation} \int _{\mathrm{SO}_n} f \, dA = 0. \end{equation}

We will solve (4.5) in the variational sense. We note that for $P \in T_\Gamma$ , we have

\begin{equation*} \int _{\mathrm {SO}_n} A \cdot P \, M_\Gamma (A) \, dA = c_1 \Gamma \cdot P = 0, \end{equation*}

i.e. the right-hand side of (4.5) satisfies (4.7). Hence, if $\psi$ is a smooth solution of (4.5) satisfying (4.7), it satisfies

(4.8) \begin{equation} \int _{\mathrm{SO}_n} M_\Gamma \, \nabla \psi \cdot \nabla \chi \, dA = - \int _{\mathrm{SO}_n} M_\Gamma \, A \cdot P \, \chi \, dA, \end{equation}

for all functions $\chi$ satisfying (4.7). This suggests looking for solutions of (4.8) in $H^1_0(\mathrm{SO}_n)$ . Indeed, we have

Proposition 4.4. For a given $P \in T_\Gamma$ , there exists a unique $\psi \in H^1_0(\mathrm{SO}_n)$ such that (4.8) is satisfied for all $\chi \in H^1_0(\mathrm{SO}_n)$ .

Proof. This is a classical application of Lax-Milgram’s theorem. We only need to verify that the bilinear form

\begin{equation*} a(\psi,\chi ) = \int _{\mathrm {SO}_n} M_\Gamma \, \nabla \psi \cdot \nabla \chi \, dA, \end{equation*}

is coercive on $H^1_0(\mathrm{SO}_n)$ . Since $\mathrm{SO}_n$ is compact, there exists $C\gt 0$ such that $M_\Gamma \geq C$ . So,

\begin{equation*} a(\psi,\psi ) \geq C \int _{\mathrm {SO}_n} |\nabla \psi |^2\, dA. \end{equation*}

This is the quadratic form associated with the Laplace operator $- \Delta$ on $\mathrm{SO}_n$ . But the lowest eigenvalue of $- \Delta$ is $0$ and its associated eigenspace consists of the constant functions. Then, there is a spectral gap and the next eigenvalue $\lambda _2$ is positive. Hence, we have

(4.9) \begin{equation} \int _{\mathrm{SO}_n} |\nabla \psi |^2\, dA \geq \lambda _2 \int _{\mathrm{SO}_n} |\psi |^2\, dA, \quad \forall \psi \in H^1_0(\mathrm{SO}_n), \end{equation}

(see e.g. [Reference Degond18, Section 4.3] for more detail). This implies the coercivity of $a$ on $H^1_0(\mathrm{SO}_n)$ and ends the proof.

Remark 4.1. Since the functions $A \mapsto M_\Gamma (A)$ and $A \mapsto M_\Gamma (A) \, A \cdot P$ are $C^\infty$ , by elliptic regularity, the unique solution of Prop. 4.4 actually belongs to $C^\infty (\mathrm{SO}_n)$ .

For any $P \in T_\Gamma$ , there exists $X \in \mathfrak{so}_n$ such that $P = \Gamma X$ . We denote by $\psi _X^\Gamma$ the unique solution of (4.8) in $H^1_0(\mathrm{SO}_n)$ associated with $P = \Gamma X$ . Then, we have the

Corollary 4.5. The space ${\mathcal G}_\Gamma$ is given by

(4.10) \begin{equation}{\mathcal G}_\Gamma = \mathrm{Span} \big ( \{1\} \cup \{\psi _X^\Gamma \, \, | \, \, X \in \mathfrak{so}_n \} \big ), \end{equation}

and we have

(4.11) \begin{equation} \mathrm{dim} \,{\mathcal G}_\Gamma = \mathrm{dim} \, \mathfrak{so}_n + 1 = \frac{n(n-1)}{2}+1. \end{equation}

Proof. If $\psi \in{\mathcal G}_\Gamma$ , then, $\psi - \bar \psi \in{\mathcal G}_\Gamma \cap H^1_0(\mathrm{SO}_n)$ where $\bar \psi = \int _{\mathrm{SO}_n} \psi \, dA$ . Then, $\exists X \in \mathfrak{so}_n$ such that $\psi - \bar \psi = \psi _X^\Gamma$ , which leads to (4.10). Now, the map $\mathfrak{so}_n \to H^1_0(\mathrm{SO}_n)$ , $X \mapsto \psi _X^\Gamma$ is linear and injective. Indeed, suppose $\psi _X^\Gamma = 0$ . Then, inserting it into (4.8), we get that

\begin{equation*} \int _{\mathrm {SO}_n} M_\Gamma (A) \, A \cdot (\Gamma X) \, \chi (A) \, dA = 0, \quad \forall \chi \in H^1_0(\mathrm {SO}_n), \end{equation*}

and by density, this is still true for all $\chi \in L^2(\mathrm{SO}_n)$ . This implies that

\begin{equation*} M_\Gamma (A) \, A \cdot (\Gamma X) = 0, \quad \forall A \in \mathrm {SO}_n, \end{equation*}

and since $M_\Gamma \gt 0$ , that $A \cdot (\Gamma X) = (\Gamma ^T A) \cdot X = 0$ , for all $A \in \mathrm{SO}_n$ . Now the multiplication by $\Gamma ^T$ on the left is a bijection of $\mathrm{SO}_n$ , so we get that $X$ satisfies

\begin{equation*} A \cdot X = 0, \quad \forall A \in \mathrm {SO}_n. \end{equation*}

Then, taking $A = e^{tY}$ with $Y \in \mathfrak{so}_n$ and differentiating with respect to $t$ , we obtain

\begin{equation*} Y \cdot X = 0, \quad \forall Y \in \mathfrak {so}_n, \end{equation*}

which shows that $X=0$ . Hence, ${\mathcal G}_\Gamma$ is finite-dimensional and (4.11) follows.

From now on, we will repeatedly use the following lemma.

Lemma 4.6. (i) Let $g \in \mathrm{SO}_n$ and let $\ell _g$ , $r_g$ and $\xi _g$ be the left and right translations and conjugation maps of $\mathrm{SO}_n$ , respectively:

(4.12) \begin{equation} \ell _g(A) = gA, \quad r_g(A) = Ag, \quad \xi _g = \ell _g \circ r_{g^{-1}} = \ell _g \circ r_{g^T}, \quad \forall A \in \mathrm{SO}_n. \end{equation}

Let $f$ : $\mathrm{SO}_n \to{\mathbb R}$ be smooth. Then, we have for any $X \in \mathfrak{so}_n$ :

(4.13) \begin{eqnarray} \nabla (f \circ \ell _g) (A) \cdot AX &=& \nabla f (gA) \cdot gAX, \end{eqnarray}
(4.14) \begin{eqnarray} \nabla (f \circ \xi _g) (A) \cdot AX &=& \nabla f (gAg^T) \cdot gAXg^T. \end{eqnarray}

(ii) If $f$ and $\varphi$ : $\mathrm{SO}_n \to{\mathbb R}$ are smooth, then,

(4.15) \begin{eqnarray} \nabla (f \circ \ell _g) (A) \cdot \nabla (\varphi \circ \ell _g) (A) &=& \nabla f(gA) \cdot \nabla \varphi (gA), \end{eqnarray}
(4.16) \begin{eqnarray} \nabla (f \circ \xi _g) (A) \cdot \nabla (\varphi \circ \xi _g) (A) &=& \nabla f(gAg^T) \cdot \nabla \varphi (gAg^T), \end{eqnarray}

Proof. (i) We show (4.13). (4.14) is shown in a similar way. By (2.5), we have

\begin{equation*} \nabla (f \circ \ell _g) (A) \cdot AX = \frac {d}{dt} \big ( (f \circ \ell _g)(A e^{tX}) \big )|_{t=0} = \frac {d}{dt} \big ( f(g A e^{tX}) \big )|_{t=0} = \nabla f(g A) \cdot g A X. \end{equation*}

(ii) Again, we show (4.15), the proof of (4.16) being similar. Applying (4.13) twice, we have

\begin{eqnarray*} \nabla f(gA) \cdot \nabla \varphi (gA) &=& \nabla (f \circ \ell _g) (A) \cdot \big (g^T \nabla \varphi (gA) \big ) = \nabla \varphi (gA) \cdot \big ( g \nabla (f \circ \ell _g) (A) \big ) \\[5pt] &=& \nabla (\varphi \circ \ell _g) (A) \cdot \nabla (f \circ \ell _g) (A). \end{eqnarray*}

Proposition 4.7 (translation invariance). We have

(4.17) \begin{equation} \psi _X^{\mathrm{I}}(A) = \psi _X^\Gamma (\Gamma A), \quad \forall A, \, \Gamma \in \mathrm{SO}_n, \quad \forall X \in \mathfrak{so}_n. \end{equation}

Proof. $\psi = \psi _X^\Gamma$ is the unique solution in $H^1_0(\mathrm{SO}_n)$ of the following variational formulation:

\begin{eqnarray*} && \int _{\mathrm{SO}_n} \exp \big (\frac{\kappa }{2} \mathrm{Tr} (\Gamma ^T A) \big ) \, \nabla \psi (A) \cdot \nabla \chi (A) \, dA \\[5pt] &&\quad = - \frac{1}{2} \int _{\mathrm{SO}_n} \exp \big (\frac{\kappa }{2} \mathrm{Tr} (\Gamma ^T A) \big ) \, \mathrm{Tr} (A^T \Gamma X) \, \chi (A) \, dA, \quad \forall \chi \in H^1_0(\mathrm{SO}_n). \end{eqnarray*}

By the change of variables $A' = \Gamma ^T A$ , the translation invariance of the Haar measure and (4.15), we get, dropping the primes for simplicity:

(4.18) \begin{eqnarray} && \int _{\mathrm{SO}_n} \exp (\frac{\kappa }{2} \mathrm{Tr} A ) \, \nabla (\psi \circ \ell _\Gamma ) (A) \cdot \nabla (\chi \circ \ell _\Gamma ) (A) \, dA \nonumber \\[5pt] &&\quad = - \frac{1}{2} \int _{\mathrm{SO}_n} \exp (\frac{\kappa }{2} \mathrm{Tr} A ) \, \mathrm{Tr} (A^T X) \, \chi \circ \ell _\Gamma (A) \, dA, \quad \forall \chi \in H^1_0(\mathrm{SO}_n), \end{eqnarray}

We remark that the mapping $H^1_0(\mathrm{SO}_n) \to H^1_0(\mathrm{SO}_n)$ , $\chi \to \chi \circ \ell _\Gamma$ is a linear isomorphism and an isometry (the proof is analogous to the proof of Prop. 6.4 below and is omitted). Thus, we can replace $\chi \circ \ell _\Gamma$ in (4.18) by any test function $\tilde \chi \in H^1_0(\mathrm{SO}_n)$ , which leads to a variational formulation for $\psi \circ \ell _\Gamma$ which is identical with that of $\psi _X^{\mathrm{I}}$ . By the uniqueness of the solution of the variational formulation, this leads to (4.17) and finishes the proof.

Proposition 4.8 (Conjugation invariance). We have

(4.19) \begin{equation} \psi _X^{\mathrm{I}}(gAg^T) = \psi _{g^TXg}^{\mathrm{I}}(A), \quad \forall A, \, g \in \mathrm{SO}_n. \end{equation}

Proof. The proof is identical to that of Prop. 4.7. We start from the variational formulation for $\psi _X^{\mathrm{I}}$ and make the change of variables $A = g A' g^T$ in the integrals. Thanks to (4.16), it yields a variational formulation for $\psi _X^{\mathrm{I}} \circ \xi _g$ , which is seen to be identical to that of $\psi _{g^TXg}^{\mathrm{I}}$ . By the uniqueness of the solution of the variational formulation, we get (4.19).

From this point onwards, the search for GCI differs significantly from [Reference Degond, Frouvelle and Merino-Aceituno24] where the assumption of dimension $n=3$ was crucial. We will need further concepts about the rotation groups which are summarised in the next section.

5. Maximal torus and Weyl group

If $g \in \mathrm{SO}_n$ , the conjugation map $\xi _g$ given by (4.12) is a group isomorphism. Let $A$ and $B \in \mathrm{SO}_n$ . We say that $A$ and $B$ are conjugate, and we write $A \sim B$ , if and only if there exists $g \in \mathrm{SO}_n$ such that $B = gAg^T$ . It is an equivalence relation. Conjugacy classes can be described as follows. The planar rotation $R_\theta$ for $\theta \in{\mathbb R}/(2 \pi{\mathbb Z})$ is defined by

(5.1) \begin{equation} R_\theta = \left ( \begin{array}{c@{\quad}c} \cos \theta & - \sin \theta \\[5pt] \sin \theta & \cos \theta \end{array} \right ). \end{equation}

The set ${\mathcal T}=[\!-\! \pi, \pi )^p$ , where $p$ is the integer such that $n=2p$ or $n=2p+1$ , will be identified with the torus $({\mathbb R}/(2 \pi{\mathbb Z}))^p$ . For $\Theta \;=\!:\; (\theta _1, \ldots, \theta _p) \in{\mathcal T}$ , we define the matrix $A_\Theta$ blockwise by:

  • in the case $n = 2p$ , $p \geq 2$ ,

    (5.2) \begin{equation} A_\Theta = \left ( \begin{array}{c@{\quad}c@{\quad}c@{\quad}c} R_{\theta _1} & & & 0 \\[5pt] & R_{\theta _2} & & \\[5pt] & & \ddots & \\[5pt] 0 & & & R_{\theta _p} \end{array} \right ) \in \mathrm{SO}_{2p}{\mathbb R}, \end{equation}
  • in the case $n = 2p+1$ , $p \geq 1$ ,

    (5.3) \begin{equation} A_\Theta = \left ( \begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c} R_{\theta _1} & & & 0 & 0 \\[5pt] & R_{\theta _2} & & & \vdots \\[5pt] & & \ddots & & \vdots \\[5pt] 0 & & & R_{\theta _p} & 0 \\[5pt] 0 & \ldots & \ldots & 0 & 1 \end{array} \right ) \in \mathrm{SO}_{2p+1}{\mathbb R}. \end{equation}

By classical matrix reduction theory, any $A \in \mathrm{SO}_n$ is conjugate to $A_\Theta$ for some $\Theta \in{\mathcal T}$ . We define the subset $\mathbb T$ of $\mathrm{SO}_n$ by

\begin{equation*} {\mathbb T} = \{ A_\Theta \, \, | \, \, \Theta \in {\mathcal T} \}. \end{equation*}

$\mathbb T$ is an abelian subgroup of $\mathrm{SO}_n$ and the map ${\mathcal T} \to{\mathbb T}$ , $\Theta \mapsto A_\Theta$ , is a group isomorphism. It can be shown that $\mathbb T$ is a maximal abelian subgroup of $\mathrm{SO}_n$ , and for that reason, $\mathbb T$ is called a maximal torus. $\mathbb T$ is a Lie subgroup of $\mathrm{SO}_n$ , and we denote by $\mathfrak{h}$ its Lie algebra. $\mathfrak{h}$ is a Lie subalgebra of $\mathfrak{so}_n$ given by

\begin{equation*} \mathfrak {h} = \big \{ \sum _{i=1}^p \alpha _i F_{2i-1 \, 2i} \, \, | \, \, (\alpha _1, \ldots, \alpha _p) \in {\mathbb R}^p \big \}, \end{equation*}

where we recall that $F_{ij}$ is defined by (2.6). $\mathfrak{h}$ is an abelian subalgebra (i.e. $[X,Y] = 0$ , $\forall X, \, Y \in \mathfrak{h}$ ) and is actually maximal among abelian subalgebras. In Lie algebra language, $\mathfrak{h}$ is a Cartan subalgebra of $\mathfrak{so}_n$ .
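
For instance, in dimensions $n=4$ and $n=5$ (so $p=2$), this gives

\begin{equation*} \mathfrak {h} = \big \{ \alpha _1 F_{12} + \alpha _2 F_{34} \, \, | \, \, (\alpha _1, \alpha _2) \in {\mathbb R}^2 \big \}, \end{equation*}

i.e. the set of matrices of $\mathfrak{so}_n$ which vanish outside the two diagonal $2 \times 2$ blocks appearing in (5.2) and (5.3).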

Let us describe the elements $g \in \mathrm{SO}_n$ that conjugate an element of $\mathbb T$ into an element of $\mathbb T$ . Such elements form a group, called the normaliser of $\mathbb T$ and denoted by $N({\mathbb T})$ . Since $\mathbb T$ is abelian, elements of $\mathbb T$ conjugate an element of $\mathbb T$ to itself, so we are rather interested in those elements of $N({\mathbb T})$ that conjugate an element of $\mathbb T$ to a different one. In other words, we want to describe the quotient group $N({\mathbb T})/{\mathbb T}$ (clearly, $\mathbb T$ is normal in $N({\mathbb T})$ ), which is a finite group called the Weyl group and denoted by $\mathfrak{W}$ . The Weyl group differs in the odd and even dimension cases. It is generated by the following elements $g$ of $N({\mathbb T})$ (or strictly speaking by the cosets $g{\mathbb T}$ where $g$ are such elements) [Reference Fulton and Harris36]:

  • Case $n=2p$ even:

    • - elements $g = C_{ij}$ , $1 \leq i \lt j \leq p$ where $C_{ij}$ exchanges $e_{2i-1}$ and $e_{2j-1}$ on the one hand, $e_{2i}$ and $e_{2j}$ on the other hand, and fixes all the other basis elements, (where $(e_i)_{i=1}^{2p}$ is the canonical basis of ${\mathbb R}^{2p}$ ). Conjugation of $A_{\Theta }$ by $C_{ij}$ exchanges the blocks $R_{\theta _i}$ and $R_{\theta _j}$ of (5.2), i.e. exchanges $\theta _i$ and $\theta _j$ . It induces an isometry of ${\mathbb R}^p$ (or $\mathcal T$ ), still denoted by $C_{ij}$ by abuse of notation, such that $C_{ij} (\Theta ) = ( \ldots, \theta _j, \ldots \theta _i, \ldots )$ . This isometry is the reflection in the hyperplane $\{e_i - e_j\}^\bot$ ;

    • - elements $g = D_{ij}$ , $1 \leq i \lt j \leq p$ where $D_{ij}$ exchanges $e_{2i-1}$ and $e_{2i}$ on the one hand, $e_{2j-1}$ and $e_{2j}$ on the other hand and fixes all the other basis elements. Conjugation of $A_{\Theta }$ by $D_{ij}$ changes the sign of $\theta _i$ in $R_{\theta _i}$ and that of $\theta _j$ in $R_{\theta _j}$ , i.e. changes $(\theta _i,\theta _j)$ into $(\!-\!\theta _i, -\theta _j)$ . It induces an isometry of ${\mathbb R}^p$ (or $\mathcal T$ ), still denoted by $D_{ij}$ such that $D_{ij}(\Theta ) = ( \ldots, - \theta _i, \ldots - \theta _j, \ldots )$ . The transformation $C_{ij} \circ D_{ij} = D_{ij} \circ C_{ij}$ is the reflection in the hyperplane $\{e_i + e_j\}^\bot$ ;

  • Case $n=2p+1$ odd:

    • - elements $g = C_{ij}$ , $1 \leq i \lt j \leq p$ identical to those of the case $n=2p$ ;

    • - elements $g = D_i$ , $1 \leq i \leq p$ , where $D_i$ exchanges $e_{2i-1}$ and $e_{2i}$ on the one hand, maps $e_{2p+1}$ into $-e_{2p+1}$ on the other hand and fixes all the other basis elements. Conjugation of $A_{\Theta }$ by $D_i$ changes the sign of $\theta _i$ in $R_{\theta _i}$ , i.e. changes $\theta _i$ into $-\theta _i$ . It induces an isometry of ${\mathbb R}^p$ (or $\mathcal T$ ), still denoted by $D_i$ such that $D_i(\Theta ) = ( \ldots, - \theta _i, \ldots )$ . It is the reflection in the hyperplane $\{e_i\}^\bot$ .

The group of isometries of ${\mathbb R}^p$ (or $\mathcal T$ ) generated by $\{C_{ij}\}_{1 \leq i \lt j \leq p} \cup \{D_{ij}\}_{1 \leq i \lt j \leq p}$ in the case $n=2p$ and $\{C_{ij}\}_{1 \leq i \lt j \leq p} \cup \{D_i\}_{i=1}^p$ in the case $n=2p+1$ is still the Weyl group $\mathfrak{W}$ . Note that in the case $n=2p$ , an element of $\mathfrak{W}$ induces only an even number of sign changes of $\Theta$ , while in the case $n=2p+1$ , an arbitrary number of sign changes is allowed. $\mathfrak{W}$ is also generated by the orthogonal symmetries in the hyperplanes $\{e_i \pm e_j\}^\bot$ for $1 \leq i \lt j \leq p$ in the case $n=2p$ . The elements of the set $\{\pm e_i \pm e_j\}_{1 \leq i\lt j \leq p }$ are called the roots of $\mathrm{SO}_{2p}$ , while the roots of $\mathrm{SO}_{2p+1}$ are the elements of the set $\{\pm e_i \pm e_j\}_{1 \leq i\lt j \leq p } \cup \{\pm e_i\}_{i=1}^p$ .
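
In concrete terms, $\mathfrak{W}$ acts on $\Theta$ by permuting the angles $\theta _1, \ldots, \theta _p$ and changing their signs (with an even number of sign changes only in the case $n=2p$), so that

\begin{equation*} \mathrm{Card} (\mathfrak{W}) = \left \{ \begin {array}{l@{\quad}l@{\quad}l} 2^{p-1} \, p! & \mathrm { if } & n=2p \\[5pt] 2^{p} \, p! & \mathrm { if } & n=2p+1 \end {array} \right. . \end{equation*}

For instance, $\mathrm{Card} (\mathfrak{W}) = 4$ for $\mathrm{SO}_4$ and $\mathrm{Card} (\mathfrak{W}) = 8$ for $\mathrm{SO}_5$.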

We also need one more definition. A closed Weyl chamber is the closure of a connected component of the complement of the union of the hyperplanes orthogonal to the roots. The Weyl group acts simply transitively on the closed Weyl chambers [Reference Fulton and Harris36, Sect. 14.1], i.e. for two closed Weyl chambers ${\mathcal W}_1$ and ${\mathcal W}_2$ , there exists a unique $W \in \mathfrak{W}$ such that $W({\mathcal W}_1) ={\mathcal W}_2$ . A distinguished closed Weyl chamber (that is associated with a positive ordering of the roots) is given by [Reference Fulton and Harris36, Sect. 18.1]:

  • Case $n=2p$ even:

    \begin{equation*} {\mathcal W} = \big \{ \Theta \in {\mathbb R}^p \, \, | \, \, \theta _1 \geq \theta _2 \geq \ldots \geq \theta _{p-1} \geq |\theta _p| \geq 0 \big \}, \end{equation*}
  • Case $n=2p+1$ odd:

    \begin{equation*} {\mathcal W} = \big \{ \Theta \in {\mathbb R}^p \, \, | \, \, \theta _1 \geq \theta _2 \geq \ldots \geq \theta _{p-1} \geq \theta _p \geq 0 \big \}, \end{equation*}

all other closed Weyl chambers being of the form $W({\mathcal W})$ for some element $W \in \mathfrak{W}$ . We have

(5.4) \begin{equation}{\mathbb R}^p = \bigcup _{W \in \mathfrak{W}} W \big ({\mathcal W} \big ), \end{equation}

and for any $W_1$ , $W_2 \in \mathfrak{W}$ ,

(5.5) \begin{equation} W_1 \not = W_2 \, \, \Longrightarrow \, \, \mathrm{meas} \Big ( W_1 \big ({\mathcal W} \big ) \cap W_2 \big ({\mathcal W} \big ) \Big )= 0, \end{equation}

(where $\mathrm{meas}$ stands for the Lebesgue measure); the latter relation reflects the fact that the intersection of two distinct closed Weyl chambers is included in a hyperplane. Defining ${\mathcal W}_{\mathrm{per}} ={\mathcal W} \cap [\!-\! \pi, \pi ]^p$ , we have (5.4) and (5.5) with ${\mathbb R}^p$ replaced by $[\!-\! \pi, \pi ]^p$ and $\mathcal W$ replaced by ${\mathcal W}_{\mathrm{per}}$ .
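
For instance, in dimension $n=5$ (so $p=2$), the distinguished closed Weyl chamber is ${\mathcal W} = \{ \Theta \in {\mathbb R}^2 \, \, | \, \, \theta _1 \geq \theta _2 \geq 0 \}$, so that

\begin{equation*} {\mathcal W}_{\mathrm{per}} = \big \{ (\theta _1, \theta _2) \, \, | \, \, \pi \geq \theta _1 \geq \theta _2 \geq 0 \big \}, \end{equation*}

and, in accordance with (5.4) and (5.5), the $8$ images of ${\mathcal W}_{\mathrm{per}}$ under the elements of $\mathfrak{W}$ (the signed permutations of $(\theta _1, \theta _2)$) cover $[\!-\! \pi, \pi ]^2$ with pairwise intersections of measure zero.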

Class functions are functions $f$ : $\mathrm{SO}_n \to{\mathbb R}$ that are invariant by conjugation, i.e. such that $f(gAg^T)=f(A)$ , $\forall \, A, \, g \in \mathrm{SO}_n$ . By the preceding discussion, a class function can be uniquely associated with a function $\varphi _f$ : $\tilde{\mathcal T} \;=\!:\;{\mathcal T}/\mathfrak{W} \to{\mathbb R}$ such that $\varphi _f (\Theta ) = f(A_\Theta )$ . By a function on ${\mathcal T}/\mathfrak{W}$ , we mean a function on $\mathcal T$ which is invariant by any isometry belonging to the Weyl group $\mathfrak{W}$ . The Laplace operator $\Delta$ maps class functions to class functions. Hence, it generates an operator $L$ on $C^\infty (\tilde{\mathcal T})$ such that for any class function $f$ in $C^\infty (\mathrm{SO}_n )$ , we have:

(5.6) \begin{equation} L \varphi _f = \varphi _{\Delta f}. \end{equation}

Expressions of the operator $L$ (called the radial Laplacian) are derived in ref. [Reference Degond18]. They are recalled in Appendix A. Class functions are important because they are amenable to a specific integration formula called the Weyl integration formula which states that if $f$ is an integrable class function on $\mathrm{SO}_n$ , then,

(5.7) \begin{equation} \int _{\mathrm{SO}_n} f(A) \, dA = \gamma _n \frac{1}{(2 \pi )^p} \int _{{\mathcal T}} f(A_\Theta ) \, u_n (\Theta ) \, d \Theta, \end{equation}

with $u_n$ defined in (3.8) and (3.9), and

\begin{equation*} \gamma _n = \left \{ \begin {array}{l@{\quad}l@{\quad}l} \dfrac {2^{(p-1)^2}}{p!} & \mathrm { if } & n=2p \\[12pt] \dfrac {2^{p^2}}{p!} & \mathrm { if } & n=2p+1 \end {array} \right. . \end{equation*}
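
For instance, evaluating this expression for $n = 3, \ldots, 6$ gives

\begin{equation*} \gamma _3 = 2, \qquad \gamma _4 = 1, \qquad \gamma _5 = 8, \qquad \gamma _6 = \frac{8}{3}. \end{equation*}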

We now introduce some additional definitions. The adjoint representation of $\mathrm{SO}_n$ denoted by ‘ $\mathrm{Ad}$ ’ maps $\mathrm{SO}_n$ into the group $\mathrm{Aut}(\mathfrak{so}_n)$ of linear automorphisms of $\mathfrak{so}_n$ as follows:

\begin{equation*} \mathrm {Ad}(A) (Y) = A Y\, A^{-1}, \quad \forall A \in \mathrm {SO}_n, \quad \forall Y \in \mathfrak {so}_n. \end{equation*}

$\mathrm{Ad}$ is a Lie group representation of $\mathrm{SO}_n$ , meaning that

\begin{equation*} \mathrm {Ad}(A) \mathrm {Ad}(B) = \mathrm {Ad}(AB), \quad \forall A, \, B \in \mathrm {SO}_n. \end{equation*}

We have

\begin{equation*} \mathrm {Ad}(A) (X) \cdot \mathrm {Ad}(A) (Y) = X \cdot Y, \quad \forall A \in \mathrm {SO}_n, \quad \forall X, \, Y \in \mathfrak {so}_n, \end{equation*}

showing that the inner product on $\mathfrak{so}_n$ is invariant by $\mathrm{Ad}$ . The following identity, shown in ref. [Reference Faraut30, Section 8.2], will be key to the forthcoming analysis of the GCI. Let $f$ be a function $\mathrm{SO}_n \to V$ , where $V$ is a finite-dimensional vector space over $\mathbb R$ . Then, we have, using the definition (2.4) of $\varrho$ :

(5.8) \begin{equation} \Big (\varrho \big (\mathrm{Ad}(g^{-1}) T - T \big ) f\Big )(g) = \frac{d}{ds} \big ( f(e^{sT}ge^{-sT}) \big ) \big |_{s=0}, \quad \forall g \in \mathrm{SO}_n, \quad \forall T \in \mathfrak{so}_n. \end{equation}
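
As a simple illustration, if $f$ is a (scalar) class function as defined above, then $f(e^{sT}ge^{-sT}) = f(g)$ for all $s$, so that the right-hand side of (5.8) vanishes and

\begin{equation*} \Big (\varrho \big (\mathrm{Ad}(g^{-1}) T - T \big ) f\Big )(g) = 0, \quad \forall g \in \mathrm{SO}_n, \quad \forall T \in \mathfrak{so}_n. \end{equation*}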

We finish with the following identity which will be used repeatedly.

(5.9) \begin{equation} [F_{ij},F_{k \ell }] = \big ( \delta _{jk} F_{i \ell } + \delta _{i \ell } F_{jk} - \delta _{ik} F_{j \ell } - \delta _{j \ell } F_{ik} \big ). \end{equation}
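
For instance, (5.9) gives

\begin{equation*} [F_{12},F_{23}] = F_{13}, \qquad [F_{12},F_{34}] = 0, \end{equation*}

the second identity being consistent with the fact that $\mathfrak{h}$ is abelian.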

6. Generalised collision invariants associated with the identity

6.1. Introduction of $\boldsymbol\alpha = (\boldsymbol\alpha _\boldsymbol{i})_{\boldsymbol{i}=\textbf{1}}^\boldsymbol{p}$ and first properties

This section is devoted to the introduction of the function $\alpha = (\alpha _i)_{i=1}^p$ : ${\mathcal T} \to{\mathbb R}^p$ which will eventually be shown to solve System (3.11).

For simplicity, we denote $\psi _X^{\mathrm{I}}$ simply by $\psi _X$ and $M_{\mathrm{I}}$ by $M$ . Let $A, \, \Gamma \in \mathrm{SO}_n$ be fixed. The map $\mathfrak{so}_n \to{\mathbb R}$ , $X \mapsto \psi _X^\Gamma (A)$ is a linear form. Hence, there exists a map $\mu ^\Gamma$ : $\mathrm{SO}_n \to \mathfrak{so}_n$ such that

(6.1) \begin{equation} \psi _X^\Gamma (A) = \mu ^\Gamma (A) \cdot X, \quad \forall A \in \mathrm{SO}_n, \quad \forall X \in \mathfrak{so}_n. \end{equation}

We abbreviate $\mu ^{\mathrm{I}}$ into $\mu$ .

We define $H^1_0(\mathrm{SO}_n, \mathfrak{so}_n)$ as the space of functions $\chi$ : $\mathrm{SO}_n \to \mathfrak{so}_n$ such that each component of $\chi$ in an orthonormal basis of $\mathfrak{so}_n$ is a function of $H^1_0(\mathrm{SO}_n)$ (with similar notations for $L^2(\mathrm{SO}_n, \mathfrak{so}_n)$ and $H^1(\mathrm{SO}_n, \mathfrak{so}_n)$ ). Obviously, the definition does not depend on the choice of the orthonormal basis. Now, let $\chi \in H^1(\mathrm{SO}_n, \mathfrak{so}_n)$ . Then $\nabla \chi (A)$ (which is defined almost everywhere) can be seen as an element of $\mathfrak{so}_n \otimes T_A$ by the relation

\begin{equation*} \nabla \chi (A) \cdot (X \otimes AY) = \nabla (\chi \cdot X) (A) \cdot (AY), \quad \forall X, \, Y \in \mathfrak {so}_n, \end{equation*}

where we have used (2.2) to express an element of $T_A$ as $AY$ for $Y \in \mathfrak{so}_n$ and where we define the inner product on $\mathfrak{so}_n \otimes T_A$ by

\begin{equation*} \big ( X \otimes AY \cdot X' \otimes AY' \big ) = (X \cdot X') \, (A Y \cdot AY') = (X \cdot X') \, (Y \cdot Y'), \end{equation*}

for all $X, \, Y, \, X', \, Y' \in \mathfrak{so}_n$ . With this identification, if $(\Phi _i)_{i=1}^{\mathcal N}$ and $(\Psi _i)_{i=1}^{\mathcal N}$ (with ${\mathcal N} = \frac{n(n-1)}{2}$ ) are two orthonormal bases of $\mathfrak{so}_n$ , then $(\Phi _i \otimes A \Psi _j)_{i,j = 1}^{\mathcal N}$ is an orthonormal basis of $\mathfrak{so}_n \otimes T_A$ and we can write

(6.2) \begin{equation} \nabla \chi (A) = \sum _{i,j=1}^{\mathcal N} \big ( \nabla (\chi \cdot \Phi _i)(A) \cdot (A \Psi _j) \big ) \, \Phi _i \otimes A \Psi _j. \end{equation}

Consequently, if $\mu \in H^1(\mathrm{SO}_n, \mathfrak{so}_n)$ is another function, we have, thanks to Parseval’s formula and (2.5):

(6.3) \begin{eqnarray} \nabla \mu (A) \cdot \nabla \chi (A) &=& \sum _{i,j=1}^{\mathcal N} \big ( \nabla (\mu \cdot \Phi _i) (A) \cdot A \Psi _j \big ) \, \big ( \nabla (\chi \cdot \Phi _i) (A) \cdot A \Psi _j \big ) \nonumber \\[5pt] &=& \sum _{i,j=1}^{\mathcal N} \Big ( \big ( \varrho (\Psi _j) (\mu \cdot \Phi _i) \big ) (A) \Big ) \, \Big ( \big ( \varrho (\Psi _j) (\chi \cdot \Phi _i) \big ) (A) \Big ). \end{eqnarray}

In general, we will use (6.3) with identical bases $(\Phi _i)_{i=1}^{\mathcal N} = (\Psi _i)_{i=1}^{\mathcal N}$ , but this is not necessary. The construction itself shows that these formulae are independent of the choice of the orthonormal bases of $\mathfrak{so}_n$ .

Now, we have the following properties:

Proposition 6.1 (Properties of $\mu$ ). (i) The function $\mu$ is the unique variational solution in $H^1_0(\mathrm{SO}_n, \mathfrak{so}_n)$ of the equation

(6.4) \begin{equation} M^{-1} \nabla \cdot \big ( M \, \nabla \mu \big ) (A) = \frac{A - A^T}{2}, \quad \forall A \in \mathrm{SO}_n, \end{equation}

where the differential operator on the left-hand side is applied componentwise. The variational formulation of (6.4) is given by

(6.5) \begin{equation} \left \{ \begin{array}{l} \displaystyle \mu \in H^1_0(\mathrm{SO}_n, \mathfrak{so}_n), \\[5pt] \displaystyle \int _{\mathrm{SO}_n} \nabla \mu \cdot \nabla \chi \, M \, dA = - \int _{\mathrm{SO}_n} \frac{\displaystyle A-A^T}{\displaystyle 2} \cdot \chi \, M \, dA, \quad \forall \chi \in H^1_0(\mathrm{SO}_n, \mathfrak{so}_n), \end{array} \right. \end{equation}

with the interpretation (6.3) of the left-hand side of (6.5).

(ii) We have (conjugation invariance):

(6.6) \begin{equation} \mu (gAg^T) = g \mu (A) g^T, \quad \forall A, \, g \in \mathrm{SO}_n. \end{equation}

(iii) We have (translation invariance):

(6.7) \begin{equation} \mu ^\Gamma (A) = \mu (\Gamma ^T A), \quad \forall \Gamma, \, A \in \mathrm{SO}_n. \end{equation}

Proof. (i) Since $X$ is antisymmetric, we have $A \cdot X = \frac{A-A^T}{2} \cdot X$ . Hence, (4.5) (with $\Gamma = \mathrm{I}$ ) can be written

\begin{equation*} \Big ( M^{-1} \nabla \cdot \big ( M \nabla \mu \big ) - \frac {A-A^T}{2} \Big ) \cdot X = 0, \quad \forall X \in \mathfrak {so}_n. \end{equation*}

Since the matrix to the left of the inner product is antisymmetric and the identity is true for all antisymmetric matrices $X$ , we find (6.4). We can easily apply the same arguments to the variational formulation (4.8) (with $\Gamma = \mathrm{I}$ ), which leads to the variational formulation (6.5) for $\mu$ .

(ii) (4.19) reads

\begin{equation*} \mu (gAg^T) \cdot X = \mu (A) \cdot (g^T X g) = \big ( g \mu (A) g^T \big ) \cdot X, \end{equation*}

hence (6.6).

(iii) (4.17) reads

\begin{equation*} \mu (A) \cdot X = \mu ^\Gamma (\Gamma A) \cdot X, \end{equation*}

hence (6.7).

The generic form of a function satisfying (6.6) is given in the next proposition.

Proposition 6.2. Let $\chi$ : $\mathrm{SO}_n \to \mathfrak{so}_n$ be a smooth map satisfying

(6.8) \begin{equation} \chi (gAg^T) = g \chi (A) g^T, \quad \forall A, \, g \in \mathrm{SO}_n. \end{equation}

Define $p$ such that $n=2p$ or $n=2p+1$ .

(i) There exists a $p$ -tuple $\tau = (\tau _i)_{i=1}^p$ of periodic functions $\tau _i$ : ${\mathcal T} \to{\mathbb R}$ such that

(6.9) \begin{equation} \chi (A_\Theta ) = \sum _{k=1}^p \tau _k(\Theta ) F_{2k-1 \, 2k}, \quad \forall \Theta \in{\mathcal T}. \end{equation}

Furthermore, $\tau$ commutes with the Weyl group, i.e.

(6.10) \begin{equation} \tau \circ W = W \circ \tau, \quad \forall W \in \mathfrak{W}. \end{equation}

(ii) $\chi$ has the expression

(6.11) \begin{equation} \chi (A) = \sum _{k=1}^p \tau _k(\Theta ) \, gF_{2k-1 \, 2k}g^T, \end{equation}

where $g \in \mathrm{SO}_n$ and $\Theta \in{\mathcal T}$ are such that $A = g A_\Theta g^T$ .

Proof. (i) Let $A$ , $g \in{\mathbb T}$ . Then, since $\mathbb T$ is abelian, $gAg^T=A$ and (6.8) reduces to

(6.12) \begin{equation} g \chi (A) g^T = \chi (A), \quad \forall A, \, g \in{\mathbb T}. \end{equation}

Fixing $A$ , letting $g = e^{tX}$ with $X \in \mathfrak{h}$ and differentiating (6.12) with respect to $t$ , we get

\begin{equation*} [X, \chi (A)] = 0, \quad \forall X \in \mathfrak {h}. \end{equation*}

This means that $\mathfrak{h} +{\mathbb R} \, \chi (A)$ is an abelian subalgebra of $\mathfrak{so}_n$ . But $\mathfrak{h}$ is a maximal abelian subalgebra of $\mathfrak{so}_n$ . It follows that $\chi (A) \in \mathfrak{h}$ . Hence, there exist functions $\tau _i$ : ${\mathcal T} \to{\mathbb R}$ , $\Theta \mapsto \tau _i(\Theta )$ for $i=1, \ldots, p$ such that (6.9) holds.

Now, we show (6.10) on a set of generating elements of $\mathfrak{W}$ . For this, we use (6.8) with such generating elements $g$ , recalling that $\mathfrak{W} \approx N({\mathbb T})/{\mathbb T}$ as described in Section 5. We distinguish the two parity cases of $n$ .

Case $n=2p$ even. In this case, the Weyl group is generated by $(C_{ij})_{1 \leq i \lt j \leq p}$ and $(D_{ij})_{1 \leq i \lt j \leq p}$ (see Section 5). First, we take $g = C_{ij}$ as defined in Section 5. Then, conjugation by $C_{ij}$ exchanges $F_{2i-1 \, 2i}$ and $F_{2j-1 \, 2j}$ and leaves $F_{2k-1 \, 2k}$ for $k \not = i, \, j$ invariant. On the other hand conjugation by $C_{ij}$ changes $A_\Theta$ into $A_{C_{ij} \Theta }$ where we recall that, by abuse of notation, we also denote by $C_{ij}$ the transformation of $\Theta$ generated by conjugation by $C_{ij}$ . Thus, from (6.8) and (6.9) we get

\begin{equation*} \chi (A_{C_{ij} \Theta }) = \sum _{k \not = i, \, j} \tau _k (\Theta ) F_{2k-1 \, 2k} + \tau _i(\Theta ) F_{2j-1 \, 2j} + \tau _j(\Theta ) F_{2i-1 \, 2i}. \end{equation*}

On the other hand, direct application of (6.9) leads to

\begin{equation*} \chi (A_{C_{ij} \Theta }) = \sum _{k=1}^p \tau _k (C_{ij} \Theta ) F_{2k-1 \, 2k}. \end{equation*}

Equating these two expressions leads to

\begin{eqnarray*} \tau _k \big ( C_{ij} (\Theta )\big ) &=& \tau _k (\Theta ), \quad \forall k \not = i, \, j. \\[5pt] \tau _i \big ( C_{ij} (\Theta )\big ) &=& \tau _j (\Theta ), \qquad \tau _j \big ( C_{ij} (\Theta )\big ) = \tau _i (\Theta ), \end{eqnarray*}

Hence, we get

(6.13) \begin{equation} \tau \big ( C_{ij}(\Theta ) \big ) = C_{ij} \big ( \tau (\Theta ) \big ). \end{equation}

Next, we take $g = D_{ij}$ . Conjugation by $D_{ij}$ changes $F_{2i-1 \, 2i}$ into $-F_{2i-1 \, 2i}$ and $F_{2j-1 \, 2j}$ into $-F_{2j-1 \, 2j}$ and leaves $F_{2k-1 \, 2k}$ for $k \not = i, \, j$ invariant. Besides, conjugation by $D_{ij}$ changes $A_\Theta$ into $A_{D_{ij} \Theta }$ . Thus, using the same reasoning as previously, we get

\begin{eqnarray*} \tau _k \big ( D_{ij} \Theta \big ) &=& \tau _k ( \Theta ), \quad \forall k \not = i, \, j, \\[5pt] \tau _i \big ( D_{ij} \Theta \big ) &=& - \tau _i ( \Theta ), \qquad \tau _j \big ( D_{ij} \Theta \big ) = - \tau _j ( \Theta ). \end{eqnarray*}

Hence, we find

\begin{equation*} \tau \big ( D_{ij}(\Theta ) \big ) = D_{ij} \big ( \tau (\Theta ) \big ). \end{equation*}

Case $n=2p+1$ odd. Here, the Weyl group is generated by $(C_{ij})_{1 \leq i \lt j \leq p}$ and $(D_i)_{i=1}^p$ . Taking $g=C_{ij}$ as in the previous case, we get (6.13) again. Now, taking $g=D_i$ , conjugation by $D_i$ changes $F_{2i-1 \, 2i}$ into $-F_{2i-1 \, 2i}$ and leaves $F_{2k-1 \, 2k}$ for $k \not = i$ invariant. Besides, conjugation by $g$ changes $A_\Theta$ into $A_{D_i \Theta }$ . Thus, we get

\begin{eqnarray*} \tau _k \big ( D_i \Theta \big ) &=& \tau _k (\Theta ), \quad \forall k \not = i, \\[5pt] \tau _i \big ( D_i \Theta \big ) &=& - \tau _i ( \Theta ). \end{eqnarray*}

Thus, we finally get

\begin{equation*} \tau \big ( D_i(\Theta ) \big ) = D_i \big ( \tau (\Theta ) \big ), \end{equation*}

which ends the proof of (i).

(ii) The fact that $\tau$ commutes with the Weyl group guarantees that formula (6.11) is well-defined, i.e. if $(g,\Theta )$ and $(g',\Theta ')$ are two pairs in $\mathrm{SO}_n \times{\mathcal T}$ such that $A = g A_\Theta g^T = g' A_{\Theta '}{g'}^T$ , then the two expressions (6.11) deduced from each pair are the same. Applying (6.8) to (6.9) shows that (6.11) is necessary. It can be directly verified that (6.11) satisfies (6.8), showing that it is also sufficient.

The following corollary is a direct consequence of the previous discussion:

Corollary 6.3. Let $\mu$ be the unique variational solution of (6.4), i.e. the solution of (6.5). Then, there exists $\alpha = (\alpha _i)_{i=1}^p$ : ${\mathcal T} \to{\mathbb R}^p$ such that

(6.14) \begin{equation} \mu (A_\Theta ) = \sum _{k=1}^p \alpha _k(\Theta ) F_{2k-1 \, 2k}, \quad \forall \Theta \in{\mathcal T}, \end{equation}

and $\alpha$ commutes with the Weyl group,

\begin{equation*} \alpha \circ W = W \circ \alpha, \quad \forall W \in \mathfrak {W}. \end{equation*}

Remark 6.1. The context of [Reference Degond, Diez and Frouvelle19] corresponds to $\mu (A) = \frac{A-A^T}{2}$ , i.e. $\alpha _k(\Theta ) = - \sin \theta _k$ , see Remark 3.3.
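
Indeed, in view of (6.14), the statement $\alpha _k(\Theta ) = - \sin \theta _k$ is equivalent to the identity

\begin{equation*} \frac{A_\Theta - A_\Theta ^T}{2} = - \sum _{k=1}^p \sin \theta _k \, F_{2k-1 \, 2k}, \quad \forall \Theta \in{\mathcal T}, \end{equation*}

which can be checked blockwise from (5.2)–(5.3) and the definition (2.6) of the matrices $F_{2k-1 \, 2k}$, each block contributing $\frac{1}{2}(R_{\theta _k} - R_{\theta _k}^T)$.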

Now, we wish to derive a system of PDEs for $\alpha$ . There are two ways to achieve this aim.

  • - The first one consists of deriving the system in strong form by directly computing the differential operators involved in (6.4) at a point $A_\Theta$ of the maximal torus $\mathbb T$ . This method applies the strategy exposed in ref. [Reference Faraut30, Section 8.3] and used in ref. [Reference Degond18] to derive expressions of the radial Laplacian on rotation groups. However, this method does not give information on the well-posedness of the resulting system. We develop this method in Appendix A for the interested reader and as a cross-validation of the following results.

  • - The second method, which is developed below, consists of deriving the system in weak form, using the variational formulation (6.5). We will show that we can restrict the space of test functions $\chi$ to those satisfying the invariance relation (6.8). This will allow us to derive a variational formulation for the system satisfied by $\alpha$ which will lead us to its well-posedness and eventually to the strong form of the equations.

6.2. Reduction to a conjugation-invariant variational formulation

We first define the following spaces:

\begin{eqnarray*} L^2_{\mathrm{inv}}(\mathrm{SO}_n, \mathfrak{so}_n) &=& \big \{ \chi \in L^2(\mathrm{SO}_n, \mathfrak{so}_n) \text{ such that } \\[5pt] && \qquad \chi \text{ satisfies (6.8) a.e. } A \in \mathrm{SO}_n, \, \, \forall g \in \mathrm{SO}_n \big \}, \\[5pt] H^1_{\mathrm{inv}}(\mathrm{SO}_n, \mathfrak{so}_n) &=& \big \{ \chi \in H^1(\mathrm{SO}_n, \mathfrak{so}_n) \text{ such that } \\[5pt] && \qquad \chi \text{ satisfies (6.8) a.e. } A \in \mathrm{SO}_n \, \, \forall g \in \mathrm{SO}_n \big \}, \end{eqnarray*}

where $\mathrm{a.e.}$ stands for ‘for almost every’. Concerning these spaces, we have the following:

Proposition 6.4. (i) $L^2_{\mathrm{inv}}(\mathrm{SO}_n, \mathfrak{so}_n)$ and $H^1_{\mathrm{inv}}(\mathrm{SO}_n, \mathfrak{so}_n)$ are closed subspaces of $L^2(\mathrm{SO}_n, \mathfrak{so}_n)$ and $H^1(\mathrm{SO}_n, \mathfrak{so}_n)$ , respectively, and consequently are Hilbert spaces.

(ii) We have $H^1_{\mathrm{inv}}(\mathrm{SO}_n, \mathfrak{so}_n) \subset H^1_0(\mathrm{SO}_n, \mathfrak{so}_n)$ .

Proof. (i) Let $g \in \mathrm{SO}_n$ . We introduce the conjugation map $\Xi _g$ mapping any function $\chi$ : $\mathrm{SO}_n \to \mathfrak{so}_n$ to another function $\Xi _g \chi$ : $\mathrm{SO}_n \to \mathfrak{so}_n$ such that

\begin{equation*} \Xi _g \chi (A) = g^T \chi (gAg^T) g, \quad \forall A \in \mathrm {SO}_n.\end{equation*}

We prove that $\Xi _g$ is an isometry of $L^2(\mathrm{SO}_n, \mathfrak{so}_n)$ and of $H^1(\mathrm{SO}_n, \mathfrak{so}_n)$ for any $g \in \mathrm{SO}_n$ . The result follows as

\begin{equation*} L^2_{\mathrm {inv}}(\mathrm {SO}_n, \mathfrak {so}_n) = \bigcap _{g \in \mathrm {SO}_n} \mathrm {ker}_{L^2} (\Xi _g - \mathrm {I}), \quad H^1_{\mathrm {inv}}(\mathrm {SO}_n, \mathfrak {so}_n) = \bigcap _{g \in \mathrm {SO}_n} \mathrm {ker}_{H^1} (\Xi _g - \mathrm {I}). \end{equation*}

Thanks to the cyclicity of the trace and the translation invariance of the Haar measure, we have

\begin{eqnarray*} \int _{\mathrm{SO}_n} |\Xi _g \chi (A)|^2 \, dA &=& \int _{\mathrm{SO}_n} \big ( g^T \chi (gAg^T) g \big ) \cdot \big ( g^T \chi (gAg^T) g \big ) \, dA \\[5pt] &=& \int _{\mathrm{SO}_n} |\chi (gAg^T)|^2 \, dA = \int _{\mathrm{SO}_n} |\chi (A)|^2 \, dA, \end{eqnarray*}

where, for $X \in \mathfrak{so}_n$ , we have denoted by $|X| = (X \cdot X)^{1/2}$ the Euclidean norm on $\mathfrak{so}_n$ . This shows that $\Xi _g$ is an isometry of $L^2(\mathrm{SO}_n, \mathfrak{so}_n)$ .

Now, with (6.3) and (6.8), we have

\begin{eqnarray*} |\nabla (\Xi _g \chi ) (A)|^2 &=& \frac{1}{2} \sum _{i,j=1}^n \nabla (\Xi _g \chi )_{ij} \cdot \nabla (\Xi _g \chi )_{ij} \\[5pt] &=& \frac{1}{2} \sum _{i,j,k,\ell,k^{\prime},\ell ^{\prime}=1}^n g_{ki} \, g_{\ell j} \, g_{k^{\prime} i} \, g_{\ell ^{\prime} j} \, \nabla (\chi _{k \ell } \circ \xi _g)(A) \cdot \nabla (\chi _{k^{\prime} \ell ^{\prime}} \circ \xi _g)(A) \\[5pt] &=& \frac{1}{2} \sum _{k,\ell =1}^n |\nabla \chi _{k \ell } (gAg^T)|^2 = |\nabla \chi (gAg^T)|^2, \end{eqnarray*}

where we used that $\sum _{i=1}^n g_{ki} g_{k^{\prime}i} = \delta _{k k^{\prime}}$ and similarly for the sum over $j$ , as well as (4.16). Now, using the translation invariance of the Haar measure, we get

\begin{equation*} \int _{\mathrm {SO}_n} |\nabla (\Xi _g \chi ) (A)|^2 \, dA = \int _{\mathrm {SO}_n} |\nabla \chi (A)|^2 \, dA, \end{equation*}

which shows that $\Xi _g$ is an isometry of $H^1(\mathrm{SO}_n, \mathfrak{so}_n)$ .

(ii) We use the fact that, for any integrable function $f$ : $\mathrm{SO}_n \to \mathfrak{so}_n$ , we have

(6.15) \begin{equation} \int _{\mathrm{SO}_n} f(A) \, dA = \int _{\mathrm{SO}_n} \Big ( \int _{\mathrm{SO}_n} f(gAg^T) \, dg \Big ) dA. \end{equation}

Let $\chi \in H^1_{\mathrm{inv}}(\mathrm{SO}_n, \mathfrak{so}_n)$ . With (6.8), we have

\begin{equation*} \int _{\mathrm {SO}_n} \chi (gAg^T) \, dg = \int _{\mathrm {SO}_n} g \chi (A) g^T\, dg. \end{equation*}

We define the linear map $T$ : $\mathfrak{so}_n \to{\mathbb R}$ , $ X \mapsto \int _{\mathrm{SO}_n} (g \chi (A) g^T) \, dg \cdot X$ . By translation invariance of the Haar measure, we have $T(hXh^T) = T(X)$ , for all $h \in \mathrm{SO}_n$ . Thus, $T$ intertwines the representation $\mathfrak{so}_n$ of $\mathrm{SO}_n$ (i.e. the adjoint representation $\mathrm{Ad}$ ) and its trivial representation $\mathbb R$ . $\mathrm{Ad}$ is irreducible except for dimension $n=4$ where it decomposes into two irreducible representations. Neither of these representations is isomorphic to the trivial representation. Then, by Schur’s Lemma, $T=0$ . This shows that $\int _{\mathrm{SO}_n} \chi (gAg^T) \, dg =0$ and by application of (6.15), that $\int _{\mathrm{SO}_n} \chi (A) \, dA =0$ .

We now show that the space $H^1_{\mathrm{inv}}(\mathrm{SO}_n, \mathfrak{so}_n)$ may replace $H^1_0(\mathrm{SO}_n, \mathfrak{so}_n)$ in the variational formulation giving $\mu$ . More specifically, we have the following:

Proposition 6.5. (i) $\mu$ is the unique solution of the variational formulation (6.5) if and only if it is the unique solution of the variational formulation

(6.16) \begin{equation} \left \{ \begin{array}{l} \displaystyle \mu \in H^1_{\mathrm{inv}}(\mathrm{SO}_n, \mathfrak{so}_n), \\[5pt] \displaystyle \int _{\mathrm{SO}_n} \nabla \mu \cdot \nabla \chi \, M \, dA = - \int _{\mathrm{SO}_n} \frac{A-A^T}{2} \cdot \chi \, M \, dA, \quad \forall \chi \in H^1_{\mathrm{inv}}(\mathrm{SO}_n, \mathfrak{so}_n). \end{array} \right. \end{equation}

(ii) The variational formulation (6.16) can be equivalently written

(6.17) \begin{equation} \left \{ \begin{array}{l} \displaystyle \mu \in H^1_{\mathrm{inv}}(\mathrm{SO}_n, \mathfrak{so}_n), \\[5pt] \displaystyle \int _{{\mathcal T}} (\nabla \mu \cdot \nabla \chi ) (A_\Theta ) \, M(A_\Theta ) \, u_n(\Theta ) \, d\Theta \\[5pt] \displaystyle \quad = - \int _{{\mathcal T}} \frac{A_\Theta -A_\Theta ^T}{2} \cdot \chi (A_\Theta ) \, M(A_\Theta ) \, u_n(\Theta ) \, d\Theta, \quad \forall \chi \in H^1_{\mathrm{inv}}(\mathrm{SO}_n, \mathfrak{so}_n). \end{array} \right. \end{equation}

Proof. (i) We remark that (6.16) has a unique solution. Indeed, (4.9) can be extended componentwise to all $\psi \in H^1_0(\mathrm{SO}_n, \mathfrak{so}_n)$ , and in particular, to all $\psi \in H^1_{\mathrm{inv}}(\mathrm{SO}_n, \mathfrak{so}_n)$ . Thus, the bilinear form on the left-hand side of (6.16) is coercive on $H^1_{\mathrm{inv}}(\mathrm{SO}_n, \mathfrak{so}_n)$ . Existence and uniqueness follow from the Lax-Milgram theorem.

Let $\mu$ be the solution of (6.5). Since $\mu$ satisfies (6.6), it belongs to $H^1_{\mathrm{inv}}(\mathrm{SO}_n, \mathfrak{so}_n)$ . Furthermore, restricting (6.5) to test functions in $H^1_{\mathrm{inv}}(\mathrm{SO}_n, \mathfrak{so}_n)$ , it satisfies (6.16).

Conversely, suppose that $\mu$ is the unique solution of (6.16). We will use (6.15). Let $\chi \in H^1_0(\mathrm{SO}_n, \mathfrak{so}_n)$ . Thanks to (4.16) and to the fact that $M$ is a class function, we have

\begin{eqnarray*} && I(A) \;=\!:\; \int _{\mathrm{SO}_n} M(gAg^T) \, \nabla \mu (gAg^T) \cdot \nabla \chi (gAg^T) \, dg \\[5pt] && = \frac{1}{2} \sum _{i,j=1}^n \int _{\mathrm{SO}_n} M(A) \, \nabla \mu _{ij} (gAg^T) \cdot \nabla \chi _{ij}(gAg^T) \, dg \\[5pt] && = \frac{1}{2} \sum _{i,j=1}^n \int _{\mathrm{SO}_n} M(A) \, \nabla (\mu _{ij} \circ \xi _g) (A) \cdot \nabla (\chi _{ij} \circ \xi _g)(A) \, dg. \end{eqnarray*}

Now, by (6.6), we have $ (\mu _{ij} \circ \xi _g) (A) = \sum _{k,\ell = 1}^n g_{ik} \, g_{j \ell } \, \mu _{k \ell }(A)$ . We deduce that

\begin{eqnarray*} I(A) &=& \frac{1}{2} \sum _{k,\ell = 1}^n M(A) \, \nabla \mu _{k \ell }(A) \cdot \nabla \Big ( \sum _{i,j=1}^n \int _{\mathrm{SO}_n} g_{ik} \, g_{j \ell } \, (\chi _{ij} \circ \xi _g) \, dg \Big )(A)\\[5pt] &=& \frac{1}{2} \sum _{k,\ell = 1}^n M(A) \, \nabla \mu _{k \ell }(A) \cdot \nabla \bar \chi _{k \ell }(A) = M(A) \nabla \mu (A) \cdot \nabla \bar \chi (A), \end{eqnarray*}

where $\bar \chi$ is defined by

(6.18) \begin{equation} \bar \chi (A) = \int _{\mathrm{SO}_n} g^T \chi (gAg^T) g \, dg, \quad \forall A \in \mathrm{SO}_n. \end{equation}

Similarly, we have

\begin{eqnarray*} && J(A) \;=\!:\; \int _{\mathrm{SO}_n} M(gAg^T) \, \Big ( g \frac{A-A^T}{2} g^T \Big ) \cdot \chi (gAg^T) \, dg \\[5pt] && = M(A) \, \frac{A-A^T}{2} \cdot \Big ( \int _{\mathrm{SO}_n} g^T \chi (gAg^T) g \, dg \Big ) = M(A) \, \frac{A-A^T}{2} \cdot \bar \chi (A). \end{eqnarray*}

Applying (6.15), we get that

(6.19) \begin{eqnarray} && \int _{\mathrm{SO}_n} \Big ( \nabla \mu \cdot \nabla \chi + \frac{A-A^T}{2} \cdot \chi \Big ) M \, dA = \int _{\mathrm{SO}_n} (I(A) + J(A)) \, dA \nonumber \\[5pt] &&\qquad = \int _{\mathrm{SO}_n} \Big ( \nabla \mu \cdot \nabla \bar \chi + \frac{A-A^T}{2} \cdot \bar \chi \Big ) M \,dA. \end{eqnarray}

Now, we temporarily assume that

(6.20) \begin{equation} \bar \chi \in H^1_{\mathrm{inv}}(\mathrm{SO}_n, \mathfrak{so}_n), \end{equation}

Then, because $\mu$ is the unique solution of (6.16), the right-hand side of (6.19) is equal to zero. This shows that $\mu$ is the unique solution of (6.5).

Now, we show (6.20). That $\bar \chi$ satisfies the invariance relation (6.8) is obvious. We now show that $ \| \bar \chi \|_{H^1} \leq \| \chi \|_{H^1}$ .

We first show that $ \| \bar \chi \|_{L^2} \leq \| \chi \|_{L^2}$ . By Cauchy-Schwarz inequality, Fubini’s theorem and the translation invariance of the Haar measure, we have

\begin{eqnarray*} \int _{\mathrm{SO}_n} |\bar \chi (A)|^2 \, dA &=& \int _{\mathrm{SO}_n} \Big | \int _{\mathrm{SO}_n} g^T \chi (gAg^T) g \, dg \Big |^2 \, dA \leq \int _{\mathrm{SO}_n} \Big ( \int _{\mathrm{SO}_n} | g^T \chi (gAg^T) g |^2 \, dg \Big ) dA \\[5pt] &=& \int _{\mathrm{SO}_n} \Big ( \int _{\mathrm{SO}_n} | \chi (gAg^T) |^2 \, dA \Big ) dg = \int _{\mathrm{SO}_n} | \chi (A) |^2 \, dA. \end{eqnarray*}

We now show that $ \| \nabla \bar \chi \|_{L^2} \leq \| \nabla \chi \|_{L^2}$ . Differentiating (6.18) with respect to $A$ and using (4.14), we get

\begin{eqnarray*} && |\nabla \bar \chi (A)|^2 = \frac{1}{2} \sum _{k, \ell = 1}^n |\nabla \bar \chi _{k \ell }(A)|^2 = \frac{1}{2} \sum _{k,\ell =1}^n \Big | \int _{\mathrm{SO}_n} \Big (\sum _{i,j=1}^n g_{ik} \, g_{j \ell } \, g^T \nabla \chi _{ij}(gAg^T) g \Big ) \, dg \Big |^2. \end{eqnarray*}

Applying the Cauchy-Schwarz inequality, this leads to

\begin{eqnarray*} && |\nabla \bar \chi (A)|^2 \leq \frac{1}{2} \sum _{k,\ell =1}^n \int _{\mathrm{SO}_n} \Big | \sum _{i,j=1}^n g_{ik} \, g_{j \ell } \, g^T \nabla \chi _{ij}(gAg^T) g \Big |^2 \, dg \\[5pt] && = \frac{1}{2} \sum _{i,j,i^{\prime},j^{\prime},k,\ell =1}^n \int _{\mathrm{SO}_n} g_{ik} \, g_{j \ell } \, g_{i^{\prime}k} \, g_{j^{\prime} \ell } \, \big ( g^T \nabla \chi _{i^{\prime} j^{\prime}}(gAg^T) g \big ) \cdot \big ( g^T \nabla \chi _{ij} (gAg^T) g \big ) \, dg \\[5pt] && = \frac{1}{2} \sum _{i,j=1}^n \int _{\mathrm{SO}_n} \nabla \chi _{i j}(gAg^T) \cdot \nabla \chi _{ij} (gAg^T) \, dg, \end{eqnarray*}

where we have applied that $\sum _{k=1}^n g_{ik} g_{i^{\prime}k} = \delta _{i i^{\prime}}$ and similarly for the sum over $\ell$ . Thus,

\begin{eqnarray*} && \int _{\mathrm{SO}_n} |\nabla \bar \chi (A)|^2 \, dA \leq \frac{1}{2} \sum _{i,j=1}^n \int _{\mathrm{SO}_n} \Big ( \int _{\mathrm{SO}_n} \nabla \chi _{i j}(gAg^T) \cdot \nabla \chi _{ij} (gAg^T) \, dA \Big ) \, dg \\[5pt] && = \frac{1}{2} \sum _{i,j=1}^n \int _{\mathrm{SO}_n} \nabla \chi _{i j}(A) \cdot \nabla \chi _{ij} (A) \, dA = \int _{\mathrm{SO}_n} | \nabla \chi (A)|^2 \, dA, \end{eqnarray*}

which shows the result and ends the proof of (i).

(ii) Let $\chi \in H^1_{\mathrm{inv}}(\mathrm{SO}_n, \mathfrak{so}_n)$ . Then, the functions $A \mapsto M(A) \nabla \mu (A) \cdot \nabla \chi (A)$ and $A \mapsto M \, \frac{A-A^T}{2} \cdot \chi (A)$ are class functions (the proof relies on computations similar to those just made above and is omitted). Then, (6.17) is simply a consequence of Weyl’s integration formula (5.7). This ends the proof.

6.3. Derivation and well-posedness of system (3.11) for $\alpha$

Now, we investigate how the condition $\chi \in H^1_{\mathrm{inv}}(\mathrm{SO}_n, \mathfrak{so}_n)$ translates into conditions on $\tau$ , and define the following spaces:

  • $C_{\mathrm{per}}^{\infty,\mathfrak{W}}({\mathcal T},{\mathbb R}^p)$ is the set of periodic, $C^\infty$ functions ${\mathcal T} \to{\mathbb R}^p$ which commute with the Weyl group, i.e. such that (6.10) is satisfied in $\mathcal T$ , for all $W \in \mathfrak{W}$ ,

  • $\mathcal H$ is the closure of $C_{\mathrm{per}}^{\infty,\mathfrak{W}}({\mathcal T},{\mathbb R}^p)$ for the norm

    \begin{equation*} \| \tau \|_{\mathcal H}^2 \;=\!:\; \sum _{i=1}^p \int _{{\mathcal T}} |\tau _i(\Theta )|^2 \, u_n(\Theta ) \, d \Theta, \end{equation*}
  • $\mathcal V$ is the closure of $C_{\mathrm{per}}^{\infty,\mathfrak{W}}({\mathcal T},{\mathbb R}^p)$ for the norm

    \begin{eqnarray*} && \| \tau \|_{\mathcal V}^2 \;=\!:\; \| \tau \|_{\mathcal H}^2 + \int _{{\mathcal T}} \Big \{ \sum _{i,j=1}^p \Big | \frac{\partial \tau _i}{\partial \theta _j}(\Theta ) \Big |^2 + \sum _{1 \leq i \lt j \leq p} \Big ( \frac{|(\tau _i - \tau _j)(\Theta )|^2}{1 - \cos\!(\theta _i - \theta _j)} \\[5pt] &&\qquad + \frac{|(\tau _i + \tau _j)(\Theta )|^2}{1 - \cos\!(\theta _i + \theta _j)} \Big ) + \epsilon _n \sum _{i=1}^p \frac{|\tau _i(\Theta )|^2}{1 - \cos \theta _i} \Big \} \, u_n(\Theta ) \, d \Theta. \end{eqnarray*}
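
For instance, in dimension $n=3$ (so $p=1$), the sums over pairs $1 \leq i \lt j \leq p$ are empty and the ${\mathcal V}$-norm reduces to

\begin{equation*} \| \tau \|_{\mathcal V}^2 = \int _{{\mathcal T}} \Big ( |\tau _1(\theta _1)|^2 + \Big | \frac{d \tau _1}{d \theta _1}(\theta _1) \Big |^2 + \epsilon _3 \, \frac{|\tau _1(\theta _1)|^2}{1 - \cos \theta _1} \Big ) \, u_3(\theta _1) \, d \theta _1. \end{equation*}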

Remark 6.2. We note that for $\tau \in C_{\mathrm{per}}^{\infty,\mathfrak{W}}({\mathcal T},{\mathbb R}^p)$ , we have $\| \tau \|_{\mathcal V} \lt \infty$ , which shows that the definition of $\mathcal V$ makes sense. Indeed since $\tau$ commutes with $\mathfrak{W}$ , we have

\begin{eqnarray*} && (\tau _i - \tau _j) (\ldots, \theta _i, \ldots, \theta _j \ldots ) = \tau _i (\ldots, \theta _i, \ldots, \theta _j \ldots ) - \tau _i (\ldots, \theta _j, \ldots, \theta _i \ldots ) \\[5pt] && = (\theta _j - \theta _i) \Big ( \big ( \frac{\partial }{\partial \theta _j} - \frac{\partial }{\partial \theta _i} \big ) \tau _i \Big ) (\ldots, \theta _i, \ldots, \theta _i \ldots ) +{\mathcal O} \big ((\theta _j - \theta _i)^2\big ) ={\mathcal O} \big (|\theta _j - \theta _i| \big ), \end{eqnarray*}

as $\theta _j - \theta _i \to 0$ , while

\begin{equation*} 1 - \cos\!(\theta _i - \theta _j) = 2 \sin ^2 \Big ( \frac {\theta _i - \theta _j}{2} \Big ) = \frac {1}{2} |\theta _j - \theta _i|^2 + {\mathcal O} \big (|\theta _j - \theta _i|^4 \big ). \end{equation*}

Thus,

\begin{equation*} \frac {|(\tau _i - \tau _j)(\Theta )|^2}{ 1 - \cos\!(\theta _i - \theta _j)} \lt \infty. \end{equation*}

The same computation holds when $\theta _j + \theta _i \to 0$ and, in the odd-dimensional case, when $\theta _i \to 0$ .

Proposition 6.6. $\chi \in H^1_{\mathrm{inv}}(\mathrm{SO}_n, \mathfrak{so}_n)$ if and only if $\chi$ is given by (6.11) with $\tau \in{\mathcal V}$ .

Proof. Let $\chi \in C^{\infty }_{\mathrm{inv}}(\mathrm{SO}_n, \mathfrak{so}_n)$ be an element of the space of smooth functions satisfying the invariance relation (6.8). Then, it is clear that $\tau$ associated with $\chi$ through (6.9) belongs to $C_{\mathrm{per}}^{\infty,\mathfrak{W}}({\mathcal T},{\mathbb R}^p)$ . For such $\chi$ and $\tau$ , we temporarily assume that

(6.21) \begin{equation} \| \chi \|_{H^1}^2 = \frac{\gamma _n}{(2 \pi )^p} \| \tau \|^2_{\mathcal V}. \end{equation}

We also assume that

(6.22) \begin{equation} C^{\infty }_{\mathrm{inv}}(\mathrm{SO}_n, \mathfrak{so}_n) \, \, \text{ is dense in } \, \, H^1_{\mathrm{inv}}(\mathrm{SO}_n, \mathfrak{so}_n). \end{equation}

Let now $\chi$ be an element of $H^1_{\mathrm{inv}}(\mathrm{SO}_n, \mathfrak{so}_n)$ and $(\chi ^q)_{q \in{\mathbb N}}$ be a sequence of elements of $C^{\infty }_{\mathrm{inv}}(\mathrm{SO}_n, \mathfrak{so}_n)$ which converges to $\chi$ in $H^1_{\mathrm{inv}}(\mathrm{SO}_n, \mathfrak{so}_n)$ . Then, the associated sequence $(\tau ^q)_{q \in{\mathbb N}}$ is such that $\tau ^q \in C_{\mathrm{per}}^{\infty,\mathfrak{W}}({\mathcal T},{\mathbb R}^p)$ and by (6.21) $(\tau ^q)_{q \in{\mathbb N}}$ is a Cauchy sequence in $\mathcal V$ . Thus, there is an element $\tau \in{\mathcal V}$ such that $\tau ^q \to \tau$ in $\mathcal V$ . Now, up to the extraction of subsequences, we have $\chi ^q \to \chi$ , a.e. in $\mathrm{SO}_n$ and $\tau ^q \to \tau$ , a.e. in $\mathcal T$ . Hence,

\begin{equation*} \chi ^q (A_\Theta ) = \sum _{i=1}^p \tau ^q_i (\Theta ) F_{2i-1 \, 2i} \to \sum _{i=1}^p \tau _i(\Theta ) F_{2i-1 \, 2i} = \chi (A_\Theta ) \quad \mathrm {a.e.} \quad \Theta \in {\mathcal T}, \end{equation*}

and, since $\chi$ satisfies (6.8), $\chi (A)$ is given by (6.11). Therefore, if $\chi$ belongs to $H^1_{\mathrm{inv}}(\mathrm{SO}_n, \mathfrak{so}_n)$ , there exists $\tau \in{\mathcal V}$ such that $\chi$ is given by (6.11).

Therefore, Prop. 6.6 will be proved if we prove the converse property, namely

(6.23) \begin{equation} \text{For any } \tau \in{\mathcal V}, \text{ the function } \chi \text{ given by (6.11) belongs to } H^1_{\mathrm{inv}}(\mathrm{SO}_n, \mathfrak{so}_n), \end{equation}

as well as Properties (6.21) and (6.22).

Proof of (6.22). Let $\chi \in H^1_{\mathrm{inv}}(\mathrm{SO}_n, \mathfrak{so}_n)$ . Since $C^\infty (\mathrm{SO}_n, \mathfrak{so}_n)$ is dense in $H^1(\mathrm{SO}_n, \mathfrak{so}_n)$ , there exists a sequence $(\chi ^q)_{q \in{\mathbb N}}$ in $C^\infty (\mathrm{SO}_n, \mathfrak{so}_n)$ such that $\chi ^q \to \chi$ . Now, $\bar \chi ^q$ obtained from $\chi ^q$ through (6.18) belongs to $C^{\infty }_{\mathrm{inv}}(\mathrm{SO}_n, \mathfrak{so}_n)$ . Because the map $\chi \mapsto \bar \chi$ is continuous on $H^1(\mathrm{SO}_n, \mathfrak{so}_n)$ (see the proof of Prop. 6.5), we get $\bar \chi ^q \to \bar \chi$ as $q \to \infty$ . But since $\chi$ satisfies (6.8), we have $\bar \chi = \chi$ , which shows the requested density result.

Proof of (6.21). Let $\chi \in C^{\infty }_{\mathrm{inv}}(\mathrm{SO}_n, \mathfrak{so}_n)$ . Since the function $A \mapsto |\chi (A)|^2$ is a class function, we have, thanks to Weyl’s integration formula (5.7):

(6.24) \begin{equation} \| \chi \|_{L^2}^2 = \frac{\gamma _n}{(2 \pi )^p} \int _{{\mathcal T}} |\chi (A_\Theta )|^2 \, u_n(\Theta ) \, d \Theta = \frac{\gamma _n}{(2 \pi )^p} \int _{{\mathcal T}} \sum _{i=1}^p |\tau _i(\Theta )|^2 \, u_n(\Theta ) \, d \Theta, \end{equation}

which shows that $\| \chi \|^2_{L^2} = \frac{\gamma _n}{(2 \pi )^p} \| \tau \|^2_{\mathcal H}$ .

Now, we show that

(6.25) \begin{eqnarray} \| \nabla \chi \|_{L^2}^2 &=& \frac{\gamma _n}{(2 \pi )^p} \int _{{\mathcal T}} \Big \{ \sum _{i,j=1}^p \Big |\frac{\partial \tau _i}{\partial \theta _j} (\Theta ) \Big |^2 + \sum _{1 \leq i \lt j \leq p} \Big ( \frac{|(\tau _i - \tau _j)(\Theta )|^2}{1 - \cos\!(\theta _i - \theta _j)} + \frac{|(\tau _i + \tau _j)(\Theta )|^2}{1 - \cos\!(\theta _i + \theta _j)} \Big ) \nonumber \\[5pt] && \qquad + \epsilon _n \sum _{i=1}^p \frac{\tau _i^2(\Theta )}{1 - \cos \theta _i} \, \Big \} u_n(\Theta ) \, d \Theta. \end{eqnarray}

The function $A \mapsto |\nabla \chi (A)|^2$ is a class function (the proof relies on similar computations to those made in the proof of Prop. 6.5 and is omitted), and we can use Weyl’s integration formula to evaluate $\| \nabla \chi \|_{L^2}^2$ . For this, we need to compute $|\nabla \chi (A_\Theta )|^2$ . This will be done by means of (6.3), which requires finding a convenient basis of $\mathfrak{so}_n$ and computing the action of the derivation operator $\varrho$ on each term. The latter will be achieved thanks to (5.8). Given that $\chi$ satisfies (6.8), we get, for all $A \in \mathrm{SO}_n$ and $X \in \mathfrak{so}_n$ :

\begin{equation*} \frac {d}{dt} \big ( \chi ( e^{tX} A e^{-tX}) \big ) \big |_{t=0} = \frac {d}{dt} \big ( e^{tX} \chi (A) e^{-tX} \big ) \big |_{t=0} = [X, \chi (A)]. \end{equation*}

When inserted into (5.8), this leads to

(6.26) \begin{equation} \Big (\varrho \big (\mathrm{Ad}(A^{-1}) X - X \big ) \chi \Big )(A) = [X, \chi (A)], \quad \forall A \in \mathrm{SO}_n, \quad \forall X \in \mathfrak{so}_n. \end{equation}

We will also use the formula

(6.27) \begin{eqnarray} && \big ( \varrho (F_{2k-1 \, 2k}) (\chi ) \big )(A_\Theta ) = \frac{d}{dt} \chi (A_\Theta e^{t F_{2k-1 \, 2k}}) \big |_{t=0} = \frac{d}{dt} \chi (A_{(\theta _1, \ldots, \theta _k-t, \ldots, \theta _p)}) \big |_{t=0} \nonumber \\[5pt] && = \sum _{\ell =1}^p \frac{d}{dt} \tau _\ell (\theta _1, \ldots, \theta _k-t, \ldots, \theta _p) \big |_{t=0} \, F_{2\ell -1 \, 2 \ell } = - \sum _{\ell =1}^p \frac{\partial \tau _\ell }{\partial \theta _k}(\Theta ) \, F_{2\ell -1 \, 2 \ell }, \end{eqnarray}

which is a consequence of (6.9).

It will be convenient to first treat the special examples of dimension $n=3$ and $n=4$ , before generalising them to dimensions $n=2p$ and $n=2p+1$ .

Case of $\mathrm{SO}_3$ . This is an odd-dimensional case $n=2p+1$ with $p=1$ . We have $\Theta = \theta _1$ . Then, $A_\Theta$ is given by (5.3) with the single block $R_{\theta _1}$ (where $R_\theta$ is given by (5.1)) and we have $\chi (A_\Theta ) = \tau _1(\theta _1) F_{12}$ . The triple $(F_{12}, G^+, G^-)$ with $G^\pm = \frac{1}{\sqrt{2}} (F_{13} \pm F_{23})$ is an orthonormal basis of $\mathfrak{so}_3$ which we will use to compute (6.3). Thanks to (6.27), we first have

\begin{equation*} \big ( \varrho (F_{12}) \chi \big )(A_\Theta ) = - \frac {d \tau _1}{d \theta _1} (\theta _1) F_{12}. \end{equation*}

Now, we apply (6.26) with $A=A_\Theta$ and $X=G^+$ or $G^-$ . Since $A_\Theta ^{-1} = A_{- \Theta }$ , easy computations (see also [Reference Degond18]) lead to

\begin{eqnarray*} \mathrm{Ad}(A_{-\Theta }) G^+ - G^+ &=& (\cos \theta _1 - 1) G^+ + \sin \theta _1 G^-, \\[5pt] \mathrm{Ad}(A_{-\Theta }) G^- - G^- &=& - \sin \theta _1 G^+ + (\cos \theta _1 - 1) G^-, \end{eqnarray*}

and, using (5.9),

\begin{equation*} [G^+, \chi (A_\Theta )] = - \tau _1(\theta _1) G^-, \qquad [G^-, \chi (A_\Theta )] = \tau _1(\theta _1) G^+. \end{equation*}

Thus,

\begin{eqnarray*} \Big ( \big ( (\cos \theta _1 - 1) \varrho (G^+) + \sin \theta _1 \varrho (G^-) \big ) \chi \Big ) (A_\Theta ) &=& - \tau _1(\theta _1) G^-, \\[5pt] \Big ( \big (\!-\! \sin \theta _1 \varrho (G^+) + (\cos \theta _1 - 1) \varrho (G^-) \big ) \chi \Big ) (A_\Theta ) &=& \tau _1(\theta _1) G^+, \end{eqnarray*}

which leads to

(6.28) \begin{eqnarray} \big ( \varrho (G^+) \chi \big ) (A_\Theta ) &=& - \frac{\tau _1(\theta _1)}{2 (1 - \cos \theta _1)} \big ( \sin \theta _1 G^+ + (\cos \theta _1 - 1) G^- \big ), \end{eqnarray}
(6.29) \begin{eqnarray} \big ( \varrho (G^-) \chi \big ) (A_\Theta ) &=& \frac{\tau _1(\theta _1)}{2 (1 - \cos \theta _1)} \big ( (\cos \theta _1 - 1) G^+ - \sin \theta _1 G^- \big ). \end{eqnarray}

From (6.3), it follows that

\begin{equation*} |\nabla \chi (A_\Theta ) |^2 = \Big |\frac {d \tau _1}{d \theta _1} (\theta _1) \Big |^2 + \frac {\tau _1^2(\theta _1)}{1 - \cos \theta _1}, \end{equation*}

which leads to (6.25) for $n=3$ .

Case of $\mathrm{SO}_4$ : this is an even-dimensional case $n=2p$ with $p=2$ . We have $\Theta = (\theta _1, \theta _2)$ and

\begin{equation*} \chi (A_\Theta ) = \tau _1(\theta _1,\theta _2) F_{12} + \tau _2(\theta _1,\theta _2) F_{34}.\end{equation*}

The system $(F_{12}, F_{34}, H^+, H^-, K^+, K^-)$ with $H^\pm = \frac{1}{\sqrt{2}} (F_{13} \pm F_{24})$ and $K^\pm = \frac{1}{\sqrt{2}} (F_{14} \pm F_{23})$ is an orthonormal basis of $\mathfrak{so}_4$ , which will be used to express (6.3). Then, we have

(6.30) \begin{eqnarray} \big ( \varrho (F_{12}) \chi \big )(A_\Theta ) &=& - \frac{\partial \tau _1}{\partial \theta _1} (\Theta ) F_{12} - \frac{\partial \tau _2}{\partial \theta _1} (\Theta ) F_{34}, \end{eqnarray}
(6.31) \begin{eqnarray} \big ( \varrho (F_{34}) \chi \big )(A_\Theta ) &=& - \frac{\partial \tau _1}{\partial \theta _2} (\Theta ) F_{12} - \frac{\partial \tau _2}{\partial \theta _2} (\Theta ) F_{34}. \end{eqnarray}

Now, we compute:

\begin{eqnarray*} \mathrm{Ad}(A_{-\Theta }) H^+ - H^+ &=& (c_1 c_2 + s_1 s_2 - 1) H^+ - (c_1 s_2 - s_1 c_2) K^-, \\[5pt] \mathrm{Ad}(A_{-\Theta }) H^- - H^- &=& - (c_1 s_2 + s_1 c_2) K^+ + (c_1 c_2 - s_1 s_2 - 1)H^-, \\[5pt] \mathrm{Ad}(A_{-\Theta }) K^+ - K^+ &=& (c_1 c_2 - s_1 s_2 - 1) K^+ + (c_1 s_2 + s_1 c_2) H^-, \\[5pt] \mathrm{Ad}(A_{-\Theta }) K^- - K^- &=& (c_1 s_2 - s_1 c_2) H^+ + (c_1 c_2 + s_1 s_2 - 1)K^-, \end{eqnarray*}

with $c_i = \cos \theta _i$ and $s_i = \sin \theta _i$ , $i=1, \, 2$ . We also compute, using (5.9):

\begin{eqnarray*} [H^+, \chi (A_\Theta )] &=& (\!-\! \tau _1 + \tau _2) K^-, \quad [H^-, \chi (A_\Theta )] = (\tau _1 + \tau _2) K^+, \\[5pt] \mbox{} [K^+, \chi (A_\Theta )] &=& - (\tau _1 + \tau _2) H^-, \quad [K^-, \chi (A_\Theta )] = (\tau _1 - \tau _2) H^+. \end{eqnarray*}

where we omit the dependence of $\tau _i$ on $\Theta$ for simplicity. Applying (6.26), we get two independent linear systems of equations for $((\varrho (H^+)\chi )(A_\Theta ), (\varrho (K^-)\chi )(A_\Theta ))$ on the one hand and $((\varrho (K^+)\chi )(A_\Theta ), (\varrho (H^-)\chi )(A_\Theta ))$ on the other hand, which can both easily be solved, yielding

(6.32) \begin{eqnarray} \big ( \varrho (H^+)\chi \big )(A_\Theta ) &=& \frac{1}{2} (\tau _1-\tau _2) \Big ( K^- - \frac{\sin (\theta _1-\theta _2)}{1-\cos\!(\theta _1-\theta _2)} H^+ \Big ), \end{eqnarray}
(6.33) \begin{eqnarray} \big ( \varrho (K^-)\chi \big )(A_\Theta ) &=& \frac{1}{2} (\tau _1-\tau _2) \Big (\!-\! \frac{\sin (\theta _1-\theta _2)}{1-\cos\!(\theta _1-\theta _2)} K^- - H^+ \Big ), \end{eqnarray}
(6.34) \begin{eqnarray} \big ( \varrho (H^-)\chi \big )(A_\Theta ) &=& \frac{1}{2} (\tau _1+\tau _2) \Big (\!-\! K^+ - \frac{\sin (\theta _1+\theta _2)}{1-\cos\!(\theta _1+\theta _2)} H^- \Big ), \end{eqnarray}
(6.35) \begin{eqnarray} \big ( \varrho (K^+)\chi \big )(A_\Theta ) &=& \frac{1}{2} (\tau _1+\tau _2) \Big (\!-\! \frac{\sin (\theta _1+\theta _2)}{1-\cos\!(\theta _1+\theta _2)} K^+ + H^- \Big ). \end{eqnarray}

Taking the squared norms in $\mathfrak{so}_4$ of (6.30) to (6.35), we get

\begin{equation*}|\nabla \chi (A_\Theta ) |^2 = \sum _{i,j=1}^2 \Big |\frac {\partial \tau _i}{\partial \theta _j} \Big |^2 + \frac {|\tau _1 - \tau _2|^2}{1 - \cos\!(\theta _1 - \theta _2)} + \frac {|\tau _1 + \tau _2|^2}{1 - \cos\!(\theta _1 + \theta _2)}, \end{equation*}

which leads to (6.25) for $n=4$ .

Case of $\mathrm{SO}_{2p}$ : Define $H_{jk}^\pm = \frac{1}{\sqrt{2}} (F_{2j-1 \, 2k-1} \pm F_{2j \, 2k})$ and $K_{jk}^\pm = \frac{1}{\sqrt{2}} (F_{2j-1 \, 2k} \pm F_{2j \, 2k-1})$ for $1 \leq j \lt k \leq p$ . Then, the system $\big ( (F_{2j-1 \, 2j})_{j=1, \ldots, p}, (H_{jk}^+, H_{jk}^-, K_{jk}^+, K_{jk}^-)_{1 \leq j \lt k \leq p} \big )$ is the orthonormal basis of $\mathfrak{so}_{2p}$ which will be used to evaluate (6.3). Then, we remark that the computations of $\mathrm{Ad}(A_{-\Theta }) H_{jk}^\pm$ and $\mathrm{Ad}(A_{- \Theta }) K_{jk}^\pm$ only involve the $4 \times 4$ matrix subblock corresponding to row and column indices belonging to $\{ 2j-1, 2j \} \cup \{ 2k-1, 2k \}$ . Thus, restricted to these $4 \times 4$ matrices, the computations are identical to those done in the case of $\mathrm{SO}_{4}$ . This directly leads to (6.25) for $n=2p$ .

Case of $\mathrm{SO}_{2p+1}$ : this case is similar, adding to the previous basis the elements $G_j^{\pm } = \frac{1}{\sqrt{2}} (F_{2j-1 \, 2p+1} \pm F_{2j \, 2p+1})$ . These additional elements contribute terms analogous to those of the $\mathrm{SO}_3$ case, which leads to (6.25) for $n=2p+1$ .

This finishes the proof of (6.25).

Proof of (6.23). Consider $\tau \in{\mathcal V}$ and a sequence $(\tau ^q)_{q \in{\mathbb N}}$ of elements of $C_{\mathrm{per}}^{\infty,\mathfrak{W}}({\mathcal T},{\mathbb R}^p)$ which converges to $\tau$ in $\mathcal V$ . Let $\chi ^q$ be associated with $\tau ^q$ through (6.11). From (6.9), we see that the function $\Theta \to \chi ^q(A_\Theta )$ belongs to $C^{\infty }({\mathcal T}, \mathfrak{so}_n)$ . However, we cannot deduce directly from it that $\chi ^q$ belongs to $C^{\infty }_{\mathrm{inv}}(\mathrm{SO}_n, \mathfrak{so}_n)$ . Indeed, for a given $A \in \mathrm{SO}_n$ , the pair $(g,\Theta ) \in \mathrm{SO}_n \times{\mathcal T}$ such that $A = g A_\Theta g^T$ is not unique, and consequently, the map $\mathrm{SO}_n \times{\mathcal T} \to \mathrm{SO}_n$ , $(g, \Theta ) \mapsto g A_\Theta g^T$ is not invertible. Therefore, we cannot deduce that $\chi ^q$ is smooth from the smoothness of the map $\mathrm{SO}_n \times{\mathcal T} \to \mathfrak{so}_n$ , $(g, \Theta ) \mapsto g \chi ^q(A_\Theta ) g^T$ . The result is actually true but the proof requires a bit of topology (see Remark 6.3 below). Here, we give a proof of (6.23) that avoids proving that $\chi ^q$ is smooth and which is thus simpler.

Temporarily dropping the superscript $q$ from $\chi ^q$ , we first show that $\chi$ is a differentiable function $\mathrm{SO}_n \to \mathfrak{so}_n$ at any point $A \in{\mathbb T}$ (which means that the derivatives in all directions of the tangent space $T_A = A \mathfrak{so}_n$ of $\mathrm{SO}_n$ at $A \in{\mathbb T}$ are defined, not only the derivatives in the direction of an element of the tangent space $T_A{\mathbb T} = A \mathfrak{h}$ of $\mathbb T$ at $A$ ). Indeed, this is a consequence of the proof of (6.21) above. This proof shows that, for the orthonormal basis $(\Phi _i)_{i=1}^{\mathcal N}$ (with ${\mathcal N} = \mathrm{dim} \, \mathfrak{so}_n$ ) whose precise definition depends on $n$ (see the proof of (6.21)), the derivative $(\varrho (\Phi _i) (\chi ))(A)$ exists for all $A \in{\mathbb T}$ and all $i \in \{1, \ldots, {\mathcal N}\}$ , which is exactly saying that the derivatives of $\chi$ in all the directions of $T_A$ exist for all $A \in{\mathbb T}$ . If we refer to the $n=3$ case for instance, $(\varrho (F_{12}) (\chi ))(A_\Theta )$ exists because $(d \tau _1/d \theta _1)(\theta _1)$ exists since $\tau$ is $C^\infty$ . Then, (6.28) and (6.29) tell us that $(\varrho (G^+) (\chi ))(A_\Theta )$ and $(\varrho (G^-) (\chi ))(A_\Theta )$ exist, except maybe when $\cos \theta _1 = 1$ . But, because $\tau$ commutes with the Weyl group, Eqs. (6.28) and (6.29) lead to finite values of $(\varrho (G^\pm ) (\chi ))(A_\Theta )$ when $\cos \theta _1 = 1$ thanks to a Taylor expansion similar to that made in Remark 6.2. It is straightforward to see that dimensions $n \geq 4$ can be treated in a similar fashion.

From this, we deduce that $\nabla \chi (A)$ exists for all $A \in \mathrm{SO}_n$ . Indeed, by construction, $\chi$ satisfies (6.8). So, we deduce that

(6.36) \begin{equation} \nabla (\chi \circ \xi _g) (A) = \nabla (g \chi g^T) (A), \quad \forall A, \, g \in \mathrm{SO}_n, \end{equation}

where $\xi _g$ is defined by (4.12). Applying (6.2) with the basis $(\Phi _i)_{i=1}^{\mathcal N} = (\Psi _i)_{i=1}^{\mathcal N} = (F_{ij})_{1 \leq i \lt j \leq n}$ , we get, thanks to (4.14),

(6.37) \begin{equation} \nabla (\chi \circ \xi _g) (A) = \sum _{k \lt \ell } \sum _{k^{\prime} \lt \ell ^{\prime}} \big ( \nabla \chi _{k \ell } (g A g^T) \cdot g A F_{k^{\prime} \ell ^{\prime}} g^T \big ) \, F_{k \ell } \otimes A F_{k^{\prime} \ell ^{\prime}}, \end{equation}

while

(6.38) \begin{equation} \nabla (g \chi g^T)(A) = \sum _{k \lt \ell } \sum _{k^{\prime} \lt \ell ^{\prime}} \big ( \nabla \chi _{k \ell }(A) \cdot A F_{k^{\prime} \ell ^{\prime}} \big ) \, (g F_{k \ell } g^T) \otimes A F_{k^{\prime} \ell ^{\prime}}, \end{equation}

where $\sum _{k \lt \ell }$ means the sum over all pairs $(k,\ell )$ such that $1 \leq k \lt \ell \leq n$ and similarly for $\sum _{k^{\prime} \lt \ell ^{\prime}}$ . Thus, using the fact that the basis $(F_{k \ell } \otimes A F_{k^{\prime} \ell ^{\prime}})_{k\lt \ell, k^{\prime}\lt \ell ^{\prime}}$ is an orthonormal basis of $\mathfrak{so}_n \otimes T_A$ and that the two expressions (6.37) and (6.38) are equal thanks to (6.36), we get

(6.39) \begin{equation} \nabla \chi _{k \ell }(gAg^T) \cdot g A F_{k' \ell '} g^T = \sum _{k_1 \lt \ell _1} \big ( \nabla \chi _{k_1 \ell _1} (A) \cdot A F_{k' \ell '} \big ) \big ( g F_{k_1 \ell _1} g^T \cdot F_{k \ell } \big ). \end{equation}

We apply this formula with $A = A_\Theta$ . Because we know that $\nabla \chi (A_\Theta )$ exists for all $\Theta \in{\mathcal T}$ , the right-hand side of (6.39) is well-defined. Thus, the left-hand side of (6.39) tells us that $\nabla \chi (A)$ is defined (by its components on the basis $(F_{k \ell } \otimes A F_{k' \ell '})_{k\lt \ell, k'\lt \ell '}$ ) for all $A$ such that there exists $(g,\Theta ) \in \mathrm{SO}_n \times{\mathcal T}$ with $A = g A_\Theta g^T$ . But of course, such $A$ range over the whole group $\mathrm{SO}_n$ , which shows that $\nabla \chi$ exists everywhere.

Now, we note that formula (6.21) applies to $\chi$ . Indeed, because $\chi$ satisfies (6.8), $|\chi |^2$ and $|\nabla \chi |^2$ are class functions, so they are determined by their values on $\mathbb T$ , which are given by the same formulas in terms of $\tau$ as those found in the proof of (6.21). Hence, the value of $\| \chi \|_{H^1}^2$ is still given by (6.21). In particular, it is finite, and so $\chi \in H^1_{\mathrm{inv}}(\mathrm{SO}_n, \mathfrak{so}_n)$ .

Now, putting the superscript $q$ back, we have $\chi ^q \in H^1_{\mathrm{inv}}(\mathrm{SO}_n, \mathfrak{so}_n)$ and up to an unimportant constant, $\| \chi ^q \|_{H^1} = \| \tau ^q \|_{\mathcal V}$ . Since $\tau ^q \to \tau$ in $\mathcal V$ , $\chi ^q$ is a Cauchy sequence in $H^1_{\mathrm{inv}}(\mathrm{SO}_n, \mathfrak{so}_n)$ . Thus, it converges to an element $\chi \in H^1_{\mathrm{inv}}(\mathrm{SO}_n, \mathfrak{so}_n)$ and, by the same reasoning as when we proved the direct implication, we deduce that $\chi$ and $\tau$ are related to each other by (6.11), which ends the proof.

Remark 6.3. We sketch a direct proof that, for any $\tau \in C_{\mathrm{per}}^{\infty,\mathfrak{W}}({\mathcal T},{\mathbb R}^p)$ , the function $\chi$ given by (6.11) belongs to $C^{\infty }_{\mathrm{inv}}(\mathrm{SO}_n, \mathfrak{so}_n)$ . We consider the mapping $\Pi$ : $\mathrm{SO}_n \times{\mathcal T} \to \mathrm{SO}_n$ , $(g, \Theta ) \mapsto g A_\Theta g^T$ . Suppose $g' \in g{\mathbb T}$ , i.e. $\exists \Upsilon \in{\mathcal T}$ such that $g' = g A_\Upsilon$ . Then, $g' A_\Theta (g')^T = g A_\Theta g^T$ because $\mathbb T$ is abelian. Thus, $\Pi$ defines a map $\tilde \Pi$ : $\mathrm{SO}_n/{\mathbb T} \times{\mathcal T} \to \mathrm{SO}_n$ , $(g{\mathbb T}, \Theta ) \mapsto g A_\Theta g^T$ , where $\mathrm{SO}_n/{\mathbb T}$ is the quotient set, whose elements are the cosets $g{\mathbb T}$ for $g \in \mathrm{SO}_n$ . From the discussion of the Weyl group in Section 5, we know that generically (i.e. for all $\Theta$ except those belonging to the boundary of one of the Weyl chambers), the map $\tilde \Pi$ is a $\mathrm{Card} (\mathfrak{W})$ -sheeted covering of $\mathrm{SO}_n$ (see also [Reference Fulton and Harris36, p. 443]). Thus, it is locally invertible and its inverse is smooth. Let us generically denote by $(\tilde \Pi )^{-1}$ one of these inverses. Now, we introduce $\bar \chi$ : $\mathrm{SO}_n \times{\mathcal T} \to \mathfrak{so}_n$ , $(g, \Theta ) \mapsto \sum _{k=1}^p \tau _k(\Theta ) g F_{2k-1 \, 2k} g^T$ . This map is smooth. Then, it is easy to see that $\bar \chi$ defines a map $\tilde \chi$ : $\mathrm{SO}_n/{\mathbb T} \times{\mathcal T} \to \mathfrak{so}_n$ in the same way as $\Pi$ did and likewise, $\tilde \chi$ is smooth. Then, by construction, we can write locally $\chi = \tilde \chi \circ (\tilde \Pi )^{-1}$ in the neighbourhood of any $A$ such that the associated $\Theta$ ’s do not belong to the boundary of one of the Weyl chambers. Since the maps $\tilde \chi$ and $(\tilde \Pi )^{-1}$ are smooth, we deduce that $\chi$ is smooth. We now need to apply a special treatment when $\Theta$ belongs to the boundary of one of the Weyl chambers, because some of the sheets of the covering $\tilde \Pi$ intersect there. We will skip these technicalities in this remark. The case $\cos \theta _1 = 1$ in dimension $n=3$ that we encountered above and that required a Taylor expansion to show that $\nabla \chi (A_\Theta )$ existed is a perfect illustration of the kind of degeneracy which appears at the boundary of the Weyl chambers.

We can now write the variational formulation obeyed by $\alpha$ in the following:

Proposition 6.7. Let $\mu$ be the unique solution of the variational formulation (6.17). Then, $\mu$ is given by (6.14) where $\alpha = (\alpha _i)_{i=1}^p$ is the unique solution of the following variational formulation

(6.40) \begin{equation} \left \{ \begin{array}{l} \displaystyle \alpha \in{\mathcal V}, \\[5pt] \displaystyle{\mathcal A}(\alpha,\tau ) ={\mathcal L}(\tau ), \quad \forall \tau \in{\mathcal V}, \end{array} \right. \end{equation}

and with

(6.41) \begin{eqnarray}{\mathcal A}(\alpha,\tau ) &=& \int _{{\mathcal T}} \Big \{ \sum _{i,j=1}^p \frac{\partial \alpha _i}{\partial \theta _j} \frac{\partial \tau _i}{\partial \theta _j} + \sum _{1 \leq i \lt j \leq p} \Big ( \frac{(\alpha _i - \alpha _j)(\tau _i - \tau _j)}{1 - \cos\!(\theta _i - \theta _j)} \nonumber \\[5pt] &&\quad + \frac{(\alpha _i + \alpha _j)(\tau _i + \tau _j)}{1 - \cos\!(\theta _i + \theta _j)} \Big ) + \epsilon _n \sum _{i=1}^p \frac{\alpha _i \tau _i}{1 - \cos \theta _i} \Big \} m(\Theta ) \, d \Theta, \end{eqnarray}
(6.42) \begin{eqnarray}{\mathcal L}(\tau ) = \int _{{\mathcal T}} \Big ( \sum _{i=1}^p \sin \theta _i \, \tau _i \Big ) m(\Theta ) \, d \Theta. \end{eqnarray}

We recall that $m(\Theta )$ is given by (3.10).

Proof. $\mathcal A$ and $\mathcal L$ are just the expressions of the left-hand and right-hand sides of (6.17) when $\mu$ and $\chi$ are given the expressions (6.14) and (6.9), respectively. The computation of (6.41) follows closely the computations made in the proof of Prop. 6.6 and is omitted. That of (6.42) follows from

(6.43) \begin{equation} \frac{A_\Theta -A_\Theta ^T}{2} = - \sum _{\ell =1}^p \sin \theta _\ell \, F_{2\ell -1 \, 2\ell }, \end{equation}

which is a consequence of (5.2) and (5.3).
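For concreteness, (6.43), together with the companion identities (7.21) and (7.32) used in Section 7, can be checked directly on the block-diagonal form of $A_\Theta$. The following minimal Python sketch does so, under the conventions we infer from these formulas ($A_\Theta$ made of planar rotation blocks with a trailing $1$ when $n$ is odd, $F_{ij}$ having entries $+1$ at position $(i,j)$ and $-1$ at $(j,i)$, and $\epsilon_n = 1$ for $n$ odd, $0$ for $n$ even):

\begin{verbatim}
import numpy as np

def A_Theta(thetas, n):
    # block-diagonal torus element: one planar rotation per angle, trailing 1 if n is odd
    A = np.eye(n)
    for k, t in enumerate(thetas):
        c, s = np.cos(t), np.sin(t)
        A[2*k:2*k+2, 2*k:2*k+2] = [[c, -s], [s, c]]
    return A

def F(i, j, n):
    M = np.zeros((n, n)); M[i, j], M[j, i] = 1.0, -1.0
    return M

n, thetas = 5, np.array([0.7, -1.3])      # p = 2, eps_5 = 1 (n odd)
A = A_Theta(thetas, n)
eps_n = n % 2
# (6.43): (A - A^T)/2 = - sum_k sin(theta_k) F_{2k-1,2k}
lhs = (A - A.T) / 2
rhs = -sum(np.sin(t) * F(2*k, 2*k+1, n) for k, t in enumerate(thetas))
# (7.32): (A + A^T)/2 = sum_k cos(theta_k)(E_{2k-1,2k-1} + E_{2k,2k}) + eps_n E_{nn}
S = np.diag(np.repeat(np.cos(thetas), 2))
S = np.pad(S, (0, n - 2 * len(thetas))); S[n-1, n-1] = eps_n
print(np.allclose(lhs, rhs),                                        # (6.43)
      np.isclose(np.trace(A), 2 * np.sum(np.cos(thetas)) + eps_n),  # (7.21)
      np.allclose((A + A.T) / 2, S))                                # (7.32)
\end{verbatim}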

To show that the variational formulation (6.40) is well-posed, we apply Lax-Milgram’s theorem. $\mathcal A$ and $\mathcal L$ are clearly continuous bilinear and linear forms on $\mathcal V$ , respectively. To show that $\mathcal A$ is coercive, it is enough to show a Poincaré inequality

\begin{equation*} {\mathcal A} (\tau, \tau ) \geq C \| \tau \|_{\mathcal H}^2, \quad \forall \tau \in {\mathcal V}, \end{equation*}

for some $C \gt 0$ . Suppose that $p \geq 2$ . Since $1 - \cos\!(\theta _i \pm \theta _j) \leq 2$ , we have

\begin{eqnarray*}{\mathcal A} (\tau, \tau ) &\geq & \int _{{\mathcal T}} \sum _{1 \leq i \lt j \leq p} \Big ( \frac{|\tau _i - \tau _j|^2}{1 - \cos\!(\theta _i - \theta _j)} + \frac{|\tau _i + \tau _j|^2}{1 - \cos\!(\theta _i + \theta _j)} \Big ) m(\Theta ) \, d \Theta \\[5pt] & \geq & \frac{1}{2} \sum _{1 \leq i \lt j \leq p} \int _{{\mathcal T}} \big ( | \tau _i-\tau _j |^2 + | \tau _i+\tau _j |^2 \big ) m(\Theta ) \, d \Theta \\[5pt] &=& \sum _{1 \leq i \lt j \leq p} \int _{{\mathcal T}} \big ( | \tau _i |^2 + | \tau _j |^2 \big ) m(\Theta ) \, d \Theta = (p-1) \| \tau \|_{\mathcal H}^2, \end{eqnarray*}

which shows the coercivity in the case $p \geq 2$ . Suppose now that $p=1$ , i.e. $n=3$ . Then, by the same idea,

\begin{equation*} {\mathcal A}(\tau,\tau ) \geq \int _{{\mathcal T}} \frac {|\tau _1|^2}{1 - \cos \theta _1} \, m(\Theta ) \, d \Theta \geq \frac {1}{2} \int _{{\mathcal T}} |\tau _1|^2 \, m(\Theta ) \, d \Theta = \frac {1}{2} \| \tau \|_{\mathcal H}^2, \end{equation*}

which shows the coercivity in the case $p=1$ and ends the proof.

The variational formulation (6.40) gives the equations satisfied by $\alpha$ in weak form. It is desirable to express this system in strong form and show that the latter is given by System (3.11). This is done in the following proposition.

Proposition 6.8. Let $\alpha$ be the unique solution of the variational formulation (6.40). Then, $\alpha$ is a distributional solution of System (3.11) on the open set

\begin{equation*} {\mathcal O} = \big \{ \Theta \in {\mathcal T} \, \, | \, \, \theta _\ell \not = \pm \theta _k, \, \, \forall k \not = \ell \text { and, in the case } n \text { even, } \theta _\ell \not = 0, \, \, \forall \ell \in \{1, \ldots, p \} \big \}. \end{equation*}

Proof. A function $f$ : ${\mathcal T} \to{\mathbb R}$ is invariant by the Weyl group if and only if $f \circ W = f$ , $\forall W \in \mathfrak{W}$ . It is easily checked that the functions inside the integrals defining $\mathcal A$ and $\mathcal L$ in (6.41) and (6.42) are invariant by the Weyl group. If $f \in L^1({\mathcal T})$ is invariant by the Weyl group, we can write, using (5.4) and (5.5) (with ${\mathbb R}^p$ replaced by $\mathcal T$ and $\mathcal W$ by ${\mathcal W}_{\mathrm{per}}$ ):

(6.44) \begin{equation} \int _{{\mathcal T}} f(\Theta ) \, d \Theta = \sum _{W \in \mathfrak{W}} \int _{{\mathcal W}_{\mathrm{per}}} f \circ W (\Theta ) \, d \Theta = \mathrm{Card} (\mathfrak{W}) \int _{{\mathcal W}_{\mathrm{per}}} f (\Theta ) \, d \Theta, \end{equation}

where $\mathrm{Card} (\mathfrak{W})$ stands for the order of $\mathfrak{W}$ . Thus, up to a constant which will factor out from (6.40), the integrals over $\mathcal T$ which are involved in the definitions of $\mathcal A$ and $\mathcal L$ in (6.41) and (6.42) can be replaced by integrals over ${\mathcal W}_{\mathrm{per}}$ .

Now, given $\ell \in \{1, \ldots, p\}$ , we wish to recover the $\ell$ -th equation of System (3.11) by testing the variational formulation (6.40) with a convenient $p$ -tuple of test functions $\tau = (\tau _1, \ldots, \tau _p)$ . A natural choice is to take $\tau _\ell = \varphi (\Theta )$ for a given $\varphi \in C^\infty _c (\mathrm{Int}({\mathcal W}_{\mathrm{per}}))$ (where $C^\infty _c$ stands for the space of infinitely differentiable functions with compact support and $\mathrm{Int}$ for the interior of a set), while $\tau _i = 0$ for $i \not = \ell$ . However, this only defines $\tau$ on ${\mathcal W}_{\mathrm{per}}$ and we have to extend it to the whole domain $\mathcal T$ and show that it defines a valid test function $\tau \in{\mathcal V}$ .

More precisely, we claim that we can construct a unique $\tau \in{\mathcal V}$ such that

(6.45) \begin{equation} \tau _i (\Theta ) = \varphi (\Theta ) \delta _{i \ell }, \quad \forall \Theta \in \mathrm{Int}({\mathcal W}_{\mathrm{per}}), \quad \forall i \in \{1, \ldots p\}. \end{equation}

Indeed, we can check that

\begin{equation*} {\mathcal O} = \bigcup _{W \in \mathfrak {W}} \mathrm {Int} \big (W({\mathcal W}_{\mathrm {per}}) \big ). \end{equation*}

Thus, if $\Theta \in{\mathcal O}$ , there exists a unique pair $(W,\Theta _0)$ with $W \in \mathfrak{W}$ and $\Theta _0 \in \mathrm{Int}({\mathcal W}_{\mathrm{per}})$ such that $\Theta = W(\Theta _0)$ . Then, by the fact that $\tau$ must commute with the elements of the Weyl group, we necessarily have

(6.46) \begin{equation} \tau (\Theta ) = \tau \circ W (\Theta _0) = W \circ \tau (\Theta _0). \end{equation}

Since $\tau (\Theta _0)$ is determined by (6.45), $\tau (\Theta )$ is determined by (6.46) in a unique way. It remains to determine $\tau$ on ${\mathcal T} \setminus{\mathcal O}$ . But since $\varphi$ is compactly supported in $\mathrm{Int}({\mathcal W}_{\mathrm{per}})$ , its value on the boundary $\partial{\mathcal W}_{\mathrm{per}}$ of ${\mathcal W}_{\mathrm{per}}$ is equal to $0$ . By the action of the Weyl group on $\partial{\mathcal W}_{\mathrm{per}}$ , we deduce that $\tau$ must be identically $0$ on ${\mathcal T} \setminus{\mathcal O}$ . In particular, $\tau$ is periodic on $\mathcal T$ . So, the so-constructed $\tau$ is in $C_{\mathrm{per}}^{\infty,\mathfrak{W}}({\mathcal T},{\mathbb R}^p)$ and thus in $\mathcal V$ . The so-constructed $\tau$ is unique because we proceeded by necessary conditions throughout this reasoning.

We use the so-constructed $\tau$ as a test function in (6.40) with $\mathcal T$ replaced by ${\mathcal W}_{\mathrm{per}}$ in (6.41) and (6.42). This leads to

\begin{eqnarray*} && \int _{{\mathcal W}_{\mathrm{per}}} \Big \{ \sum _{k=1}^p \frac{\partial \alpha _\ell }{\partial \theta _k} \frac{\partial \varphi }{\partial \theta _k} + \Big [ \sum _{k \not = \ell } \Big ( \frac{(\alpha _\ell - \alpha _k)}{1 - \cos\!(\theta _\ell - \theta _k)} + \frac{(\alpha _\ell + \alpha _k)}{1 - \cos\!(\theta _\ell + \theta _k)} \Big ) \\[5pt] &&\qquad + \epsilon _n \frac{\alpha _\ell }{1 - \cos \theta _\ell } \Big ] \varphi \Big \} m(\Theta ) \, d \Theta = \int _{{\mathcal W}_{\mathrm{per}}} \sin \theta _\ell \, \varphi \, m(\Theta ) \, d \Theta, \end{eqnarray*}

for all $\ell \in \{1, \ldots, p\}$ , for all $\varphi \in C^\infty _c (\mathrm{Int}({\mathcal W}_{\mathrm{per}}) )$ , which is equivalent to saying that $\alpha$ is a distributional solution of (3.11) on $\mathrm{Int}({\mathcal W}_{\mathrm{per}})$ . Now, in (6.44), ${\mathcal W}_{\mathrm{per}}$ can be replaced by $W({\mathcal W}_{\mathrm{per}})$ for any $W \in \mathfrak{W}$ . This implies that $\alpha$ is a distributional solution of (3.11) on the whole open set $\mathcal O$ , which ends the proof.

Remark 6.4. Because of the singularity of System (3.11), it is delicate to make sense of it on ${\mathcal T} \setminus{\mathcal O}$ . Observe however that since $\mu \in C^\infty (\mathrm{SO}_n, \mathfrak{so}_n)$ (by elliptic regularity), the function $\alpha _i$ : $\Theta \mapsto \mu (A_\Theta ) \cdot F_{2i-1 \, 2i}$ belongs to $C^\infty ({\mathcal T})$ .
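To make the construction of $\alpha$ concrete, the following Python sketch solves the $p=1$ (i.e. $n=3$) instance of the variational problem (6.40) by finite differences on the fundamental domain. Two ingredients are assumptions on our part rather than statements taken from this section: we take $m(\theta) \propto e^{\kappa\cos\theta}(1-\cos\theta)$, which is what (7.22) combined with Weyl's integration formula suggests for $n=3$, and we use that $\alpha$ is odd and $2\pi$-periodic, whence $\alpha(0)=\alpha(\pi)=0$. The printed value of $C_2$ only illustrates how the coefficients of Section 7 (see (7.13) below) can be evaluated once $\alpha$ is known; for $p \geq 2$ the same approach applies with the coupling terms of (6.41) included.

\begin{verbatim}
import numpy as np

kappa, N = 2.0, 2000
theta = np.linspace(0.0, np.pi, N + 1)
h = theta[1] - theta[0]
m = np.exp(kappa * np.cos(theta)) * (1.0 - np.cos(theta))   # assumed n = 3 form of m
th_half = theta[:-1] + h / 2
m_half = np.exp(kappa * np.cos(th_half)) * (1.0 - np.cos(th_half))

# strong form of (6.40) for p = 1: -(m alpha')' + m alpha/(1 - cos) = m sin,
# with Dirichlet conditions alpha(0) = alpha(pi) = 0 (note m/(1 - cos) = e^{kappa cos})
main = (m_half[:-1] + m_half[1:]) / h**2 + np.exp(kappa * np.cos(theta[1:-1]))
off = -m_half[1:-1] / h**2
K = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
alpha = np.zeros(N + 1)
alpha[1:-1] = np.linalg.solve(K, m[1:-1] * np.sin(theta[1:-1]))

# coefficient C_2 of (7.13) with n = 3, p = 1 (trapezoidal rule)
f = alpha * np.sin(theta) * m
C2 = -(1.0 / 3.0) * (0.5 * h * np.sum(f[1:] + f[:-1])) / (0.5 * h * np.sum(m[1:] + m[:-1]))
print("alpha(pi/2) ~", np.interp(np.pi / 2, theta, alpha), "   C2 ~", C2)
\end{verbatim}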

7. Hydrodynamic limit II: final steps of the proof

7.1. Use of the generalised collision invariant

Here, we note a slight confusion we have made so far between two different concepts. The reference frame $({\textbf{e}}_1, \ldots,{\textbf{e}}_n)$ is used to define a rotation (temporarily noted as $\gamma$ ) which maps this frame to the average body frame $(\Omega _1, \ldots, \Omega _n)$ (i.e. $\Omega _j = \gamma ({\textbf{e}}_j)$ , see (3.5)). Now to identify the rotation $\gamma$ with a rotation matrix $\Gamma$ , we can use a coordinate basis $({\textbf f}_1, \ldots,{\textbf f}_n)$ which is different from the reference frame $({\textbf{e}}_1, \ldots,{\textbf{e}}_n)$ . The rotation matrix $\Gamma$ is defined by $\gamma ({\textbf f}_j) = \sum _{i=1}^n \Gamma _{ij}{\textbf f}_i$ . So far, we have identified the rotation $\gamma$ and the matrix $\Gamma$ , but of course, this requires specifying the coordinate basis $({\textbf f}_1, \ldots,{\textbf f}_n)$ . Note that $\Gamma$ can be recovered from $({\textbf{e}}_1, \ldots,{\textbf{e}}_n)$ and $(\Omega _1, \ldots, \Omega _n)$ by $\Gamma = T S^T$ where $S$ and $T$ are the transition matrices from $({\textbf f}_1, \ldots,{\textbf f}_n)$ to $({\textbf{e}}_1, \ldots,{\textbf{e}}_n)$ and $(\Omega _1, \ldots, \Omega _n)$ , respectively.
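The relation $\Gamma = T S^T$ is elementary linear algebra; the following Python sketch (with arbitrary randomly generated frames, purely for illustration) checks that the matrix so defined indeed maps the coordinates of ${\textbf e}_j$ to those of $\Omega_j$ and belongs to $\mathrm{SO}_n$:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)

def rand_frame(n=3):
    # columns = an orthonormal, positively oriented frame, in the coordinate basis (f_1,...,f_n)
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    q = q @ np.diag(np.sign(np.diag(r)))
    if np.linalg.det(q) < 0:
        q[:, [0, 1]] = q[:, [1, 0]]
    return q

S = rand_frame()      # S[:, j] = coordinates of e_j in the basis (f_i)
T = rand_frame()      # T[:, j] = coordinates of Omega_j in the basis (f_i)
Gamma = T @ S.T       # matrix of the rotation gamma (which maps e_j to Omega_j) in the basis (f_i)
print(np.allclose(Gamma @ S, T),                    # gamma(e_j) = Omega_j for all j
      np.allclose(Gamma.T @ Gamma, np.eye(3)),      # Gamma is orthogonal ...
      np.isclose(np.linalg.det(Gamma), 1.0))        # ... with determinant +1
\end{verbatim}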

We denote by $e_{1 \, m}$ the $m$ -th coordinate of ${\textbf{e}}_1$ in the coordinate basis $({\textbf f}_1, \ldots,{\textbf f}_n)$ , and we define a matrix $\mathbb P$ and a rank-four tensor $\mathbb S$ as follows:

(7.1) \begin{eqnarray}{\mathbb P} &=& \Gamma ^T (\nabla _x \rho \otimes{\textbf{e}}_1) - (\nabla _x \rho \otimes{\textbf{e}}_1)^T \Gamma + \kappa \rho \Gamma ^T \partial _t \Gamma, \end{eqnarray}
(7.2) \begin{eqnarray}{\mathbb S}_{i j m q} &=& \frac{1}{2} \sum _{k, \ell = 1}^n \Gamma _{ki} \frac{\partial \Gamma _{kj}}{\partial x_\ell } \big (\Gamma _{\ell m} e_{1 \, q} + \Gamma _{\ell q} e_{1 \, m} \big ), \end{eqnarray}

where in (7.2), $\mathbb S$ is read in the coordinate basis $({\textbf f}_1, \ldots,{\textbf f}_n)$ . The symbol $\otimes$ denotes the tensor product of two vectors. Hence, the matrix $\nabla _x \rho \otimes{\textbf{e}}_1$ has entries $(\nabla _x \rho \otimes{\textbf{e}}_1)_{ij} = (\nabla _x \rho )_i e_{1 \, j}$ . We note that $\mathbb P$ is an antisymmetric matrix while $\mathbb S$ is antisymmetric with respect to $(i,j)$ and symmetric with respect to $(m,q)$ . We denote by ${\mathcal S}_n$ the space of $n \times n$ symmetric matrices. Now, we define two maps $L$ and $B$ as follows:

  • $L$ is a linear map $\mathfrak{so}_n \to \mathfrak{so}_n$ such that

    (7.3) \begin{equation} L(P) = \int _{\mathrm{SO}_n} (A \cdot P) \, \mu (A) \, M(A) \, dA, \quad \forall P \in \mathfrak{so}_n. \end{equation}
  • $B$ : $\mathfrak{so}_n \times \mathfrak{so}_n \to{\mathcal S}_n$ is bilinear and defined by

    (7.4) \begin{equation} B(P,Q) = \int _{\mathrm{SO}_n} (A \cdot P) \, (\mu (A) \cdot Q) \, \frac{A + A^T}{2} \, M(A) \, dA, \quad \forall P, \, Q \in \mathfrak{so}_n. \end{equation}

We can now state a first result about the equation satisfied by $\Gamma$ .

Proposition 7.1. The functions $\rho$ and $\Gamma$ involved in (3.2) satisfy the following equations:

(7.5) \begin{equation} L({\mathbb P})_{rs} + \kappa \rho \sum _{m,q = 1}^n B_{m q}({\mathbb S}_{\cdot \cdot m q}, F_{rs}) = 0, \quad \forall r, s \in \{1, \ldots, n \}, \end{equation}

where ${\mathbb S}_{\cdot \cdot m q}$ stands for the antisymmetric matrix $({\mathbb S}_{\cdot \cdot m q})_{ij} ={\mathbb S}_{i j m q}$ , $F_{rs}$ is given by (2.6) and $B_{m q}(P,Q)$ is the $(m,q)$ -th entry of the symmetric matrix $B(P,Q)$ (for any $P, \, Q \in \mathfrak{so}_n$ ).

Proof. Recalling the definition (6.1) of $\mu ^\Gamma$ , we have, thanks to (4.4)

\begin{equation*} \int _{\mathrm {SO}_n} Q(f^\varepsilon ) \, \mu ^{\Gamma _{f^\varepsilon }} \, dA = 0. \end{equation*}

It follows from (3.17) that

\begin{equation*} \int _{\mathrm {SO}_n} \Big [ \partial _t f^\varepsilon + (A {\textbf{e}}_1) \cdot \nabla _x f^\varepsilon \Big ] \, \mu ^{\Gamma _{f^\varepsilon }} \, dA = 0. \end{equation*}

Letting $\varepsilon \to 0$ and noting that $\mu ^\Gamma$ is a smooth function of $\Gamma$ and that $\Gamma _{f^\varepsilon } \to \Gamma$ , we get

(7.6) \begin{equation} \int _{\mathrm{SO}_n} \Big [ \partial _t f^0 + (A{\textbf{e}}_1) \cdot \nabla _x f^0 \Big ] \, \mu ^{\Gamma } \, dA = 0. \end{equation}

With (3.21), we compute

\begin{equation*} \partial _t f^0 + (A {\textbf{e}}_1) \cdot \nabla _x f^0 = M_\Gamma \Big \{ \partial _t \rho + (A {\textbf{e}}_1) \cdot \nabla _x \rho + \kappa \rho P_{T_\Gamma } A \cdot \big [ \partial _t \Gamma + \big ((A {\textbf{e}}_1) \cdot \nabla _x \big ) \Gamma \big ] \Big \}, \end{equation*}

where $(A{\textbf{e}}_1) \cdot \nabla _x$ stands for the operator $\sum _{i=1}^n (A{\textbf{e}}_1)_i \partial _{x_i}$ . We insert this expression into (7.6), and we make the change of variables $A' = \Gamma ^T A$ with $dA = dA'$ owing to the translation invariance of the Haar measure. With (6.7) and the fact that $M_\Gamma (A) = M(\Gamma ^T A)$ and $P_{T_\Gamma } (\Gamma A) = \Gamma \frac{A-A^T}{2}$ , we get (dropping the primes for clarity):

\begin{equation*} \int _{\mathrm {SO}_n} \Big \{ \partial _t \rho + (\Gamma A {\textbf{e}}_1) \cdot \nabla _x \rho + \kappa \rho \frac {A-A^T}{2} \cdot \Big ( \Gamma ^T \big [ \partial _t \Gamma + \big ((\Gamma A {\textbf{e}}_1) \cdot \nabla _x \big ) \Gamma \big ] \Big ) \Big \} \, M(A) \, \mu (A) \, dA = 0. \end{equation*}

Now, changing $A$ to $A^T$ in (6.4), we notice that $\mu (A^T)$ and $- \mu (A)$ satisfy the same variational formulation. By uniqueness of its solution, we deduce that $\mu (A^T) = - \mu (A)$ .

Then, making the change of variables $A' = A^T= A^{-1}$ and remarking that the Haar measure on $\mathrm{SO}_n$ is invariant by group inversion, we have, thanks to the fact that $M(A) = M(A^T)$ ,

\begin{equation*} \int _{\mathrm {SO}_n} \Big \{ \partial _t \rho + (\Gamma A^T {\textbf{e}}_1) \cdot \nabla _x \rho - \kappa \rho \frac {A-A^T}{2} \cdot \Big ( \Gamma ^T \big [ \partial _t \Gamma + \big ((\Gamma A^T {\textbf{e}}_1) \cdot \nabla _x \big ) \Gamma \big ] \Big ) \Big \} \, M(A) \, \mu (A) \, dA = 0. \end{equation*}

Subtracting the last two equations and halving the result, we get

(7.7) \begin{eqnarray} && 0= \int _{\mathrm{SO}_n} \Big \{ (\Gamma \frac{A-A^T}{2}{\textbf{e}}_1) \cdot \nabla _x \rho + \kappa \rho \frac{A-A^T}{2} \cdot (\Gamma ^T \partial _t \Gamma ) \nonumber \\[5pt] && + \kappa \rho \frac{A-A^T}{2} \cdot \Big ( \Gamma ^T \big [ \big ( (\Gamma \frac{A+A^T}{2}{\textbf{e}}_1) \cdot \nabla _x \big ) \Gamma \big ] \Big ) \Big \} \, M(A) \, \mu (A) \, dA \;=\!:\;\unicode{x24D0} +\unicode{x24D1} + \unicode{x24D2}. \end{eqnarray}

We have

\begin{eqnarray*} (\Gamma \frac{A-A^T}{2}{\textbf{e}}_1) \cdot \nabla _x \rho &=& (\frac{A-A^T}{2}{\textbf{e}}_1) \cdot (\Gamma ^T \nabla _x \rho ) = 2 \frac{A-A^T}{2} \cdot \big ( \Gamma ^T (\nabla _x \rho \otimes{\textbf{e}}_1) \big ) \\[5pt] &=& \frac{A-A^T}{2} \cdot \big ( \Gamma ^T (\nabla _x \rho \otimes{\textbf{e}}_1) - (\nabla _x \rho \otimes{\textbf{e}}_1)^T \Gamma \big ), \end{eqnarray*}

where the first two dots denote inner products of vectors of ${\mathbb R}^n$ and the last two ones are matrix inner products. The factor $2$ in the last expression of the first line arises because of the definition (2.1) of the matrix inner product. Finally, the second line is just a consequence of the fact that $A - A^T$ is an antisymmetric matrix. Thus, the first two terms in (7.7) can be written as

(7.8) \begin{eqnarray}\unicode{x24D0} +\unicode{x24D1} &=& \int _{\mathrm{SO}_n} \frac{A-A^T}{2} \cdot \big [ \Gamma ^T (\nabla _x \rho \otimes{\textbf{e}}_1) - (\nabla _x \rho \otimes{\textbf{e}}_1)^T \Gamma + \kappa \rho \Gamma ^T \partial _t \Gamma \Big ] \, M(A) \, \mu (A) \, dA \nonumber \\[5pt] &=& \int _{\mathrm{SO}_n} (A \cdot{\mathbb P}) \, M(A) \, \mu (A) \, dA = L({\mathbb P}). \end{eqnarray}

To compute $\unicode{x24D2}$ , we first state the following lemma:

Lemma 7.2. For a vector $w \in{\mathbb R}^n$ , we have

(7.9) \begin{equation} \big ( \Gamma ^T (w \cdot \nabla _x) \Gamma \big )_{ij} = \sum _{k, \ell = 1}^n \Gamma _{ki} w_\ell \frac{ \partial \Gamma _{kj}}{\partial x_\ell }. \end{equation}

This lemma is not totally obvious as the differentiation is that of a matrix in $\mathrm{SO}_n$ . It is proved in ref. [Reference Degond18] in its adjoint form (with $\Gamma ^T$ to the right of the directional derivative). The proof of the present result is similar and is skipped.
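The property that really matters in the sequel is that $\Gamma^T (w\cdot\nabla_x)\Gamma$ is an antisymmetric matrix (differentiating within $\mathrm{SO}_n$ pulls the derivative back into the Lie algebra $\mathfrak{so}_n$); it is used repeatedly in the computations of Section 7.3. A quick finite-difference check in Python, with an arbitrary $\mathrm{SO}_3$-valued field which is not taken from the model:

\begin{verbatim}
import numpy as np

def R12(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def R23(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def Gamma(x):
    # an arbitrary smooth SO_3-valued field (for illustration only)
    return R12(x[0] + 0.3 * x[1]) @ R23(x[1] - 0.5 * x[2])

x0 = np.array([0.4, -0.2, 0.7])
w = np.array([1.0, 2.0, -1.0])
h = 1e-6
dG = (Gamma(x0 + h * w) - Gamma(x0 - h * w)) / (2 * h)   # (w . grad_x) Gamma by central differences
M = Gamma(x0).T @ dG
# the defect is close to zero (finite-difference error only): Gamma^T (w.grad)Gamma lies in so_3
print("antisymmetry defect:", np.linalg.norm(M + M.T))
\end{verbatim}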

Then, applying this lemma, we have

\begin{eqnarray*} && \Big ( \Gamma ^T \big [ \big ( (\Gamma \frac{A+A^T}{2}{\textbf{e}}_1) \cdot \nabla _x \big ) \Gamma \big ] \Big )_{ij} = \sum _{k=1}^n \Gamma _{ki} \sum _{\ell, m, q = 1}^n \Gamma _{\ell m} \frac{A_{mq}+ A_{qm}}{2} e_{1 \, q} \frac{\partial \Gamma _{kj}}{\partial x_\ell } \\[5pt] &&\qquad = \sum _{m, q = 1}^n \frac{A_{mq}+ A_{qm}}{2} \, \tilde{\mathbb S}_{ijmq} = \sum _{m, q = 1}^n \frac{A_{mq}+ A_{qm}}{2} \,{\mathbb S}_{ijmq}, \end{eqnarray*}

with

\begin{equation*} \tilde {\mathbb S}_{ijmq} = \sum _{k, \ell = 1}^n \Gamma _{ki} \Gamma _{\ell m} e_{1 \, q} \frac {\partial \Gamma _{kj}}{\partial x_\ell } \quad \mathrm { and } \quad {\mathbb S}_{ijmq} = \frac {\tilde {\mathbb S}_{ijmq} + \tilde {\mathbb S}_{ijqm}}{2}. \end{equation*}

Thus,

\begin{equation*} \unicode{x24D2} = \kappa \rho \sum _{i,j,m,q = 1}^n \int _{\mathrm {SO}_n} \Big ( \frac {A - A^T}{2} \Big )_{ij} \Big ( \frac {A + A^T}{2} \Big )_{mq} \, M(A) \, \mu (A) \, dA \, \, {\mathbb S}_{ijmq}. \end{equation*}

Now, because $\mu (A)_{rs} = (\mu (A) \cdot F_{rs})$ and ${\mathbb S}_{ijmq}$ is antisymmetric with respect to $(i,j)$ , the $(r,s)$ entry of the matrix $\unicode{x24D2}$ is given by

(7.10) \begin{eqnarray}\unicode{x24D2}_{rs} &=& \kappa \rho \sum _{m,q = 1}^n \int _{\mathrm{SO}_n} (A \cdot{\mathbb S}_{\cdot \cdot mq}) \, (\mu (A) \cdot F_{rs}) \, \Big ( \frac{A + A^T}{2} \Big )_{mq} \, M(A) \, dA \nonumber \\[5pt] &=& \kappa \rho \sum _{m,q = 1}^n B_{mq}({\mathbb S}_{\cdot \cdot mq}, F_{rs}). \end{eqnarray}

Now, combining (7.8) and (7.10), we get (7.5).

7.2. Expressions of the linear map $L$ and bilinear map $B$

Now, we give expressions of $L$ and $B$ defined in (7.3) and (7.4).

Proposition 7.3. (i) We have

(7.11) \begin{equation} L(P) = C_2 \, P, \quad \forall P \in \mathfrak{so}_n, \end{equation}

with

(7.12) \begin{eqnarray} && C_2 = \frac{2}{n(n-1)} \int _{\mathrm{SO}_n} (\mu (A) \cdot A) \, M(A) \, dA \end{eqnarray}
(7.13) \begin{eqnarray} && = - \frac{2}{n(n-1)} \frac{ \displaystyle \int _{{\mathcal T}} \Big ( \sum _{k=1}^p \alpha _k(\Theta ) \sin \theta _k \Big ) \, m(\Theta ) \, d \Theta }{ \displaystyle \int _{{\mathcal T}} m(\Theta ) \, d \Theta }, \end{eqnarray}

where $m$ is given by (3.10).

(ii) We have

(7.14) \begin{equation} B(P,Q) = C_3 \mathrm{Tr}(PQ) \mathrm{I} + C_4 \Big ( \frac{PQ+QP}{2} - \frac{1}{n} \mathrm{Tr}(PQ) \mathrm{I} \Big ) \,\quad \forall P, \, Q \in \mathfrak{so}_n, \end{equation}

with

(7.15) \begin{eqnarray} && C_3 = - \frac{1}{n^2(n-1)} \int _{\mathrm{SO}_n} (\mu (A) \cdot A) \, \mathrm{Tr} (A) \, M(A) \, dA \end{eqnarray}
(7.16) \begin{eqnarray} && =\frac{1}{n^2(n-1)} \frac{ \displaystyle \int _{{\mathcal T}} \Big ( \sum _{k=1}^p \alpha _k(\Theta ) \sin \theta _k \Big ) \, \Big ( 2 \sum _{k=1}^p \cos \theta _k + \epsilon _n \Big ) \, m(\Theta ) \, d \Theta }{ \displaystyle \int _{{\mathcal T}} m(\Theta ) \, d \Theta }, \nonumber \\[5pt] && \end{eqnarray}

and

(7.17) \begin{equation} C_4 = \frac{2n}{n^2-4} \big (\!-\! 2 C_3 + C'_{\!\!4} \big ), \end{equation}

where

(7.18) \begin{eqnarray} && C'_{\!\!4} = \frac{1}{n(n-1)} \int _{\mathrm{SO}_n} \mathrm{Tr} \Big \{ \mu (A) \Big ( \frac{A+A^T}{2} \Big ) \Big (\frac{A-A^T}{2} \Big ) \Big \} \, M(A) \, dA \end{eqnarray}
(7.19) \begin{eqnarray} && =\frac{2}{n(n-1)} \, \frac{ \displaystyle \int _{{\mathcal T}} \Big ( \sum _{k=1}^p \alpha _k(\Theta ) \sin \theta _k \cos \theta _k \Big ) \, m(\Theta ) \, d \Theta }{ \displaystyle \int _{{\mathcal T}} m(\Theta ) \, d \Theta }, \end{eqnarray}

and where $\epsilon _n$ is given by (3.7).

Proof. The proof of (7.11) and (7.14) relies on Lie-group representations and Schur’s Lemma. We refer to [Reference Degond, Diez and Frouvelle19, Sect. 6] for a list of group representation concepts which will be useful for what follows. The proof will follow closely [Reference Degond, Diez and Frouvelle19, Sections 8 and 9] but with some differences which will be highlighted when relevant.

(i) Proof of (7.11). Using (6.6) and the translation invariance (on both left and right) of the Haar measure, we easily find that

(7.20) \begin{equation} L(gPg^T) = g L(P) g^T, \quad \forall P \in \mathfrak{so}_n, \quad \forall g \in \mathrm{SO}_n. \end{equation}

Denote by $V ={\mathbb C}^n$ the standard complex representation of $\mathrm{SO}_n$ and by $\Lambda ^2(V) = V \wedge V$ its exterior square. We recall that $\Lambda ^2(V) = \mathrm{Span}_{\mathbb C} \{ v \wedge w \, | \, (v,w) \in V^2 \}$ where $v \wedge w = v \otimes w - w \otimes v$ is the antisymmetrised tensor product of $v$ and $w$ . Clearly, $\Lambda ^2(V) = \mathfrak{so}_n{\mathbb C}$ where $\mathfrak{so}_n{\mathbb C}$ is the complexification of $\mathfrak{so}_n$ . So, by linearity, we can extend $L$ into a linear map $\Lambda ^2(V) \to \Lambda ^2(V)$ , which still satisfies (7.20) (with now $P \in \Lambda ^2(V)$ ). Thus, $L$ intertwines the representation $\Lambda ^2(V)$ with itself.

  • For $n \geq 3$ and $n \not = 4$ , $\Lambda ^2(V)$ is an irreducible representation of $\mathrm{SO}_n$ . Thus, by Schur's Lemma, we have (7.11) (details can be found in [Reference Degond18, Sect. 8]; a numerical illustration is sketched after this list).

  • For $n=4$ , $\Lambda ^2(V)$ is not an irreducible representation of $\mathrm{SO}_n$ . Still, (7.11) remains true but its proof requires additional arguments developed in Section 7.4.1.
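As an illustration of (7.11), the following Python sketch evaluates $L(P)$ of (7.3) by Monte Carlo, replacing the generalised collision invariant by the surrogate $\mu(A) = (A-A^T)/2$, which satisfies the same equivariance (6.6) but is otherwise unrelated to the actual $\mu$, and taking $M(A)\propto \exp\big(\tfrac{\kappa}{2}\mathrm{Tr}\,A\big)$, the class function consistent with (7.22); the normalisation of the matrix inner product is the one we infer from (2.1). Up to Monte Carlo noise, $L(P)$ is indeed a multiple of $P$:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
n, kappa, N = 5, 1.0, 50000

def rand_SOn(n):
    # exact Haar sampling on SO(n): Haar on O(n) via QR, rejecting the det = -1 component
    while True:
        q, r = np.linalg.qr(rng.standard_normal((n, n)))
        q = q @ np.diag(np.sign(np.diag(r)))
        if np.linalg.det(q) > 0:
            return q

def dot(X, Y):
    return 0.5 * np.trace(X.T @ Y)        # matrix inner product (assumed normalisation of (2.1))

mu = lambda A: (A - A.T) / 2              # surrogate with the equivariance (6.6); NOT the actual GCI
P = np.zeros((n, n)); P[0, 1], P[1, 0] = 1.0, -1.0    # F_12

num, den = np.zeros((n, n)), 0.0
for _ in range(N):
    A = rand_SOn(n)
    w = np.exp(0.5 * kappa * np.trace(A)) # unnormalised M(A); the constant Z cancels in the ratio
    num += dot(A, P) * mu(A) * w
    den += w
L_P = num / den                           # Monte Carlo estimate of L(P) from (7.3)

C2 = dot(L_P, P) / dot(P, P)
print("C2 ~", C2)
print("off-axis residual |L(P) - C2 P| =", np.linalg.norm(L_P - C2 * P))  # Monte Carlo noise level
\end{verbatim}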

Now, we show (7.12) and (7.13). Taking the matrix inner product of (7.11) with $P$ , we get

\begin{equation*} \int _{\mathrm {SO}_n} (A \cdot P) \, (\mu (A) \cdot P) \, M(A) \, dA = C_2 (P \cdot P). \end{equation*}

For $P=F_{ij}$ with $i \not = j$ , this gives

\begin{equation*} C_2 = \int _{\mathrm {SO}_n} \frac {A_{ij} - A_{ji}}{2} \, \mu (A)_{ij} \, M(A) \, dA. \end{equation*}

Averaging this formula over all pairs $(i,j)$ with $i \not = j$ leads to (7.12). Now, thanks to (6.6), the function $A \mapsto (A \cdot \mu (A)) \, M(A)$ is a class function. So, we can apply Weyl’s integration formula (5.7). For $A = A_\Theta$ , we have,

(7.21) \begin{equation} \mathrm{Tr} (A_\Theta ) = 2 \sum _{k=1}^p \cos \theta _k + \epsilon _n, \end{equation}

so that

(7.22) \begin{equation} M(A_\Theta ) = \frac{1}{Z} \exp \Big ( \frac{\kappa }{2} \big ( 2 \sum _{k=1}^p \cos \theta _k + \epsilon _n \big ) \Big ). \end{equation}

Besides, dotting (6.14) with (6.43), we get

(7.23) \begin{equation} \big ( A_\Theta \cdot \mu (A_\Theta ) \big ) = - \sum _{k=1}^p \alpha _k(\Theta ) \, \sin \theta _k. \end{equation}

Expressing the integral involved in $Z$ (see (3.1)) using Weyl’s integration formula (5.7) as well and collecting (7.22) and (7.23) into (7.12), we get (7.13).
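The way Weyl's integration formula is used here can be illustrated numerically. In the following Python sketch (for $n=4$, where $m(\Theta) = e^{\kappa(\cos\theta_1+\cos\theta_2)}(\cos\theta_1-\cos\theta_2)^2$, as recalled in (7.45) below, and with $M(A)\propto e^{(\kappa/2)\mathrm{Tr}\,A}$ as above), the average of the class function $\mathrm{Tr}\,A$ against $M$ is computed both by Monte Carlo over $\mathrm{SO}_4$ and by quadrature on the maximal torus; all normalising constants cancel in the ratios. The same comparison applies to the integrands of (7.13), (7.16) and (7.19) once $\alpha$ is available.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
kappa, n, N = 1.0, 4, 50000

def rand_SOn(n):
    while True:
        q, r = np.linalg.qr(rng.standard_normal((n, n)))
        q = q @ np.diag(np.sign(np.diag(r)))
        if np.linalg.det(q) > 0:
            return q

# Monte Carlo over Haar of <Tr A>_M with M(A) proportional to exp((kappa/2) Tr A)
tr = np.array([np.trace(rand_SOn(n)) for _ in range(N)])
w = np.exp(0.5 * kappa * tr)
mc = np.sum(tr * w) / np.sum(w)

# quadrature on the maximal torus against m(Theta) (normalisation cancels in the ratio)
th = np.linspace(-np.pi, np.pi, 400, endpoint=False)
T1, T2 = np.meshgrid(th, th, indexing='ij')
m = np.exp(kappa * (np.cos(T1) + np.cos(T2))) * (np.cos(T1) - np.cos(T2)) ** 2
quad = np.sum(2 * (np.cos(T1) + np.cos(T2)) * m) / np.sum(m)

print(mc, quad)   # the two estimates agree up to Monte Carlo error
\end{verbatim}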

(ii) Proof of (7.14). By contrast to [Reference Degond18, Sect. 9], the bilinear form $B$ is not symmetric. Thus, we decompose

(7.24) \begin{eqnarray} && B(P,Q) = B_s(P,Q) + B_a(P,Q), \nonumber \\[5pt] && B_s(P,Q) = \frac{1}{2} \big ( B(P,Q) + B(Q,P) \big ), \quad B_a(P,Q) = \frac{1}{2} \big ( B(P,Q) - B(Q,P) \big ). \end{eqnarray}

Again, using (6.6), we get

(7.25) \begin{equation} B(gPg^T, gQg^T) = g B(P,Q) g^T, \quad \forall P, \, Q \in \mathfrak{so}_n, \quad \forall g \in \mathrm{SO}_n, \end{equation}

and similarly for $B_s$ and $B_a$ . Both $B_s$ and $B_a$ can be extended by bilinearity to the complexifications of $\mathfrak{so}_n$ and ${\mathcal S}_n$ , which are $\Lambda ^2(V)$ and $\vee ^2(V)$ , respectively (where $\vee ^2(V) = V \vee V$ is the symmetric tensor square of $V$ , spanned by elements of the type $v \vee w = v \otimes w + w \otimes v$ , for $v$ , $w$ in $V$ ). By the universal property of the symmetric and exterior products, $B_s$ and $B_a$ generate linear maps $\tilde B_s$ : $\vee ^2(\Lambda ^2(V)) \to \vee ^2(V)$ and $\tilde B_a$ : $\Lambda ^2(\Lambda ^2(V)) \to \vee ^2(V)$ which intertwine the $\mathrm{SO}_n$ representations. By decomposing $\vee ^2(\Lambda ^2(V))$ , $\Lambda ^2(\Lambda ^2(V))$ and $\vee ^2(V)$ into irreducible $\mathrm{SO}_n$ representations and applying Schur’s lemma, we are able to provide generic expressions of $\tilde B_s$ and $\tilde B_a$ .

For $\tilde B_s$ , this decomposition was done in ref. [Reference Degond, Diez and Frouvelle19, Sect. 9] and we get the following result.

  • For $n \geq 3$ and $n \not = 4$ , there exist real constants $C_3$ and $C_4$ such that

    (7.26) \begin{equation} B_s(P,Q) = C_3 \mathrm{Tr}(PQ) \mathrm{I} + C_4 \Big ( \frac{PQ+QP}{2} - \frac{1}{n} \mathrm{Tr}(PQ) \mathrm{I} \Big ) \,\quad \forall P, \, Q \in \mathfrak{so}_n. \end{equation}
  • For $n=4$ , we still have (7.26) but this requires additional arguments developed in Section 7.4.2.

For $\tilde B_a$ , thanks to Pieri’s formula [Reference Fulton and Harris36, Exercise 6.16] we have $\Lambda ^2(\Lambda ^2(V)) ={\mathbb S}_{(2,1,1)}$ where ${\mathbb S}_{(2,1,1)}$ denotes the Schur functor (or Weyl module) associated to partition $(2,1,1)$ of $4$ . As a Schur functor, ${\mathbb S}_{(2,1,1)}$ is irreducible over $\mathfrak{sl}_n{\mathbb C}$ , the Lie algebra of the group of unimodular matrices $\mathrm{SL}_n{\mathbb C}$ . We apply Weyl’s contraction method [Reference Fulton and Harris36, Sect. 19.5] to decompose it into irreducible representations over $\mathfrak{so}_n{\mathbb C}$ .

It can be checked that all contractions with respect to any pair of indices of $\Lambda ^2(\Lambda ^2(V))$ are either $0$ or coincide (up to a sign) with the single contraction $\mathcal K$ defined as follows:

\begin{eqnarray*}{\mathcal K}: \, \, \Lambda ^2(\Lambda ^2(V)) &\to & \Lambda ^2(V) \\[5pt] (v_1 \wedge v_2) \wedge (v_3 \wedge v_4) & \mapsto & (v_1 \cdot v_3) v_2 \wedge v_4 + (v_2 \cdot v_4) v_1 \wedge v_3 \\[5pt] && - (v_1 \cdot v_4) v_2 \wedge v_3 - (v_2 \cdot v_3) v_1 \wedge v_4, \quad \forall (v_1, \ldots, v_4) \in V^4. \end{eqnarray*}

$\mathcal K$ is surjective as soon as $n \geq 3$ . Indeed, let $(e_i)_{i=1}^n$ be the canonical basis of $V$ . Then,

\begin{equation*} {\mathcal K} \big ( (e_i \wedge e_j) \wedge (e_i \wedge e_k) \big ) = e_j \wedge e_k, \quad \forall i,j,k \quad \text {all distinct}. \end{equation*}

We have $\ker{\mathcal K} ={\mathbb S}_{[2,1,1]}$ where ${\mathbb S}_{[2,1,1]}$ denotes the intersection of ${\mathbb S}_{(2,1,1)}$ with all the kernels of contractions with respect to pairs of indices (see [Reference Fulton and Harris36, Sect. 19.5]) and consequently

\begin{equation*} \Lambda ^2(\Lambda ^2(V)) \cong {\mathbb S}_{[2,1,1]} \oplus \Lambda ^2(V), \end{equation*}

is a decomposition of $\Lambda ^2(\Lambda ^2(V))$ into subrepresentations. We must discuss the irreducibility of ${\mathbb S}_{[2,1,1]}$ and $\Lambda ^2(V)$ according to the dimension.

  • If $n \geq 7$ , by [Reference Fulton and Harris36, Theorem 19.22], ${\mathbb S}_{[2,1,1]}$ and $\Lambda ^2(V)$ are respectively the irreducible representations of $\mathfrak{so}_n{\mathbb C}$ of highest weights $2{\mathcal L}_1 +{\mathcal L}_2 +{\mathcal L}_3$ and ${\mathcal L}_1 +{\mathcal L}_2$ (where $({\mathcal L}_i)_{i=1}^p$ is the basis of the weight space, i.e. is the dual basis (in $\mathfrak{h}^*_{\mathbb C}$ ) of the basis $(F_{2i-1,2i})_{i=1}^p$ of $\mathfrak{h}_{\mathbb C}$ , the complexification of $\mathfrak{h}$ ). On the other hand, $\vee ^2(V) ={\mathbb S}_{[2]} \oplus{\mathbb C} \, \mathrm{I}$ (where ${\mathbb S}_{[2]}$ is the space of symmetric trace-free matrices) is the decomposition of $\vee ^2(V)$ into irreducible representations over $\mathfrak{so}_n{\mathbb C}$ and corresponds to decomposing a symmetric matrix into a trace-free matrix and a scalar one. We note that ${\mathbb S}_{[2]}$ has highest weight $2{\mathcal L}_1$ while ${\mathbb C} \, \mathrm{I}$ has highest weight $0$ . By [Reference Fulton and Harris36, Prop. 26.6 & 27.7] the corresponding representations of $\mathfrak{so}_n$ are irreducible and real. Hence, they are irreducible representations of $\mathrm{SO}_n$ . Since the weights of ${\mathbb S}_{[2,1,1]}$ and $\Lambda ^2(V)$ and those of ${\mathbb S}_{[2]}$ and ${\mathbb C} \, \mathrm{I}$ are different, no irreducible subrepresentation of $\Lambda ^2(\Lambda ^2(V))$ can be isomorphic to an irreducible subrepresentation of $\vee ^2(V)$ . Consequently, by Schur’s Lemma we have

    (7.27) \begin{equation} B_a(P,Q) = 0, \quad \forall P, \, Q \in \mathfrak{so}_n. \end{equation}
  • Case $n \in \{3, \ldots, 6 \}$ .

    • - Case $n=3$ . We have $\Lambda ^2(V) \cong V$ by the isomorphism $\eta$ : $\Lambda ^2(V) \to V$ such that $(\eta (v \wedge w) \cdot z) = \mathrm{det}(v,w,z)$ , $\forall (v,w,z) \in V^3$ . Consequently $\Lambda ^2(\Lambda ^2(V)) \cong V$ as well. But $V$ is an irreducible representation of $\mathrm{so}_3{\mathbb C}$ with highest weight ${\mathcal L}_1$ . Hence, it can be isomorphic to neither ${\mathbb S}_{[2]}$ nor ${\mathbb C} \, \mathrm{I}$ . Therefore, by Schur’s lemma, (7.27) is true again.

    • - Case $n=4$ . By [Reference Fulton and Harris36, p. 297], the partitions $(2,1,1)$ and $(2)$ are associated in the sense of Weyl. Hence ${\mathbb S}_{[2,1,1]} \cong{\mathbb S}_{[2]}$ as $\mathfrak{so}_4{\mathbb C}$ representations. Since $\vee ^2(V)$ decomposes into irreducible representations according to $\vee ^2(V) ={\mathbb S}_{[2]} \oplus{\mathbb C} \mathrm{I}$ , we see that Schur’s lemma allows the possibility of a non-zero $\tilde B_a$ mapping the component ${\mathbb S}_{[2,1,1]}$ of $\Lambda ^2(\Lambda ^2(V))$ into the component ${\mathbb S}_{[2]}$ of $\vee ^2(V)$ . Likewise, $\Lambda ^2(V)$ is reducible. To show that (7.27) is actually true requires additional arguments which are developed in Section 7.4.3.

    • - Case $n=5$ . As in the case $n=4$ , we find that the partitions $(2,1,1)$ and $(2,1)$ are associated and thus, ${\mathbb S}_{[2,1,1]} \cong{\mathbb S}_{[2,1]}$ . The latter is an irreducible representation of $\mathfrak{so}_5{\mathbb C}$ of highest weight $2{\mathcal L}_1 +{\mathcal L}_2$ . Likewise, $\Lambda ^2(V)$ is an irreducible representation of highest weight ${\mathcal L}_1 +{\mathcal L}_2$ . By [Reference Fulton and Harris36, Prop. 26.6] these are real irreducible representations of $\mathfrak{so}_5$ . Thus, they are irreducible representations of $\mathrm{SO}_5$ . Having different highest weights from those of ${\mathbb S}_{[2]}$ or ${\mathbb C}{\mathrm I}$ , by the same argument as in the case $n \geq 7$ , (7.27) holds true.

    • - Case $n=6$ . In this case, the partition $(2,1,1)$ is self-associated. By [Reference Fulton and Harris36, Theorem 19.22 (iii)], ${\mathbb S}_{[2,1,1]}$ decomposes into the direct sum of two non-isomorphic representations of $\mathfrak{so}_6{\mathbb C}$ of highest weights $2{\mathcal L}_1 +{\mathcal L}_2 +{\mathcal L}_3$ and $2{\mathcal L}_1 +{\mathcal L}_2 -{\mathcal L}_3$ . On the other hand, $\Lambda ^2(V)$ is an irreducible representation of highest weight ${\mathcal L}_1 +{\mathcal L}_2$ . By [Reference Fulton and Harris36, 27.7] the corresponding representations of $\mathfrak{so}_6$ are irreducible and real and thus generate irreducible representations of $\mathrm{SO}_6$ . Having different weights from those of ${\mathbb S}_{[2]}$ and ${\mathbb C}{\mathrm I}$ , the same reasoning applies again and (7.27) holds true.

Finally, by adding (7.26) and (7.27), we get (7.14), which finishes the proof of Point (ii).

We now show (7.15) and (7.16). Taking the trace of (7.14), we get

\begin{equation*} C_3 = \frac { \mathrm {Tr} \big ( B(P,Q) \big ) }{n \, \mathrm {Tr} (PQ) }, \quad \forall P, \, Q \in \mathfrak {so}_n. \end{equation*}

Taking $P = Q = F_{ij}$ for $i \not = j$ and owing to the fact that $\mathrm{Tr} (F_{ij}^2) = -2 F_{ij} \cdot F_{ij} = -2$ , we find

\begin{equation*} C_3 = - \frac {1}{2n} \int _{\mathrm {SO}_n} \frac {A_{ij} - A_{ji}}{2} \, \mu (A)_{ij} \, \mathrm {Tr}(A) \, M(A) \, dA. \end{equation*}

Now, averaging this formula over all pairs $(i,j)$ with $i \not = j$ leads to (7.15). Again, the function $A \mapsto (A \cdot \mu (A)) \, \mathrm{Tr}(A) \, M(A)$ is a class function and Weyl’s integration formula (5.7) can be applied. Inserting (7.21), (7.22), (7.23) into (7.15) leads to (7.16).

We finish by showing (7.17)–(7.19). We now insert $P=F_{ij}$ and $Q=F_{i \ell }$ with $i \not = j$ , $i \not = \ell$ and $j \not = \ell$ . Then, $F_{ij} \cdot F_{i \ell } = 0$ , so that $\mathrm{Tr}(F_{ij} F_{i \ell }) = 0$ . A small computation shows that

\begin{equation*} \frac {F_{ij} F_{i \ell } + F_{i \ell } F_{ij}}{2} = - \frac {E_{j \ell } + E_{\ell j}}{2}, \end{equation*}

where $E_{j \ell }$ is the matrix with $(j,\ell )$ entry equal to $1$ and the other entries equal to $0$ . It follows that

(7.28) \begin{equation} - C_4 \frac{E_{j \ell } + E_{\ell j}}{2} = \int _{\mathrm{SO}_n} \Big ( \frac{A-A^T}{2} \Big )_{ij} \, \mu (A)_{i \ell } \, \frac{A + A^T}{2} \, M(A) \, dA. \end{equation}
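The small products of the matrices $F_{ij}$ used here and just below can be confirmed directly; a minimal Python check (with the convention $F_{ij} = E_{ij} - E_{ji}$, which is the one consistent with $\mathrm{Tr}(F_{ij}^2) = -2$):

\begin{verbatim}
import numpy as np

def E(i, j, n=4):
    M = np.zeros((n, n)); M[i, j] = 1.0
    return M

def F(i, j, n=4):
    return E(i, j, n) - E(j, i, n)

i, j, l = 0, 1, 2
print(np.allclose((F(i, j) @ F(i, l) + F(i, l) @ F(i, j)) / 2, -(E(j, l) + E(l, j)) / 2),
      np.allclose(F(i, j) @ F(i, j), -(E(i, i) + E(j, j))),
      np.isclose(np.trace(F(i, j) @ F(i, j)), -2.0))
\end{verbatim}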

Now, taking $P=Q=F_{ij}$ with $i \not = j$ , and noting that $F_{ij}^2 = -(E_{ii}+E_{jj})$ and $\mathrm{Tr} (F_{ij}^2) = -2$ , we get

(7.29) \begin{equation} - 2 C_3 \mathrm{I} + C_4 \Big (\!-\! (E_{ii} + E_{jj}) + \frac{2}{n} \mathrm{I} \Big ) = \int _{\mathrm{SO}_n} \Big ( \frac{A-A^T}{2} \Big )_{ij} \, \mu (A)_{ij} \, \frac{A + A^T}{2} \, M(A) \, dA. \end{equation}

Take $i$ , $j$ with $i \not = j$ fixed. For any $\ell \not \in \{i, j\}$ , taking the $(\ell,j)$ -th entry of (7.28), we get

(7.30) \begin{equation} \frac{C_4}{2} = \int _{\mathrm{SO}_n} \mu (A)_{i \ell } \Big (\frac{A+A^T}{2} \Big )_{\ell j} \Big (\frac{A-A^T}{2}\Big )_{ji} \, M(A) \, dA. \end{equation}

Likewise, taking the $(j,j)$ -th entry of (7.29), we get

(7.31) \begin{equation} 2 C_3 - C_4 \Big (\!-\! 1 + \frac{2}{n} \Big ) = \int _{\mathrm{SO}_n} \mu (A)_{ij} \Big (\frac{A+A^T}{2} \Big )_{jj} \Big (\frac{A-A^T}{2}\Big )_{ji} \, M(A) \, dA. \end{equation}

Now, summing (7.30) over $\ell \not \in \{i, j\}$ and adding (7.31), we find

\begin{equation*} 2 C_3 + C_4 \frac {n^2 - 4}{2n} = \sum _{\ell =1}^n \int _{\mathrm {SO}_n} \mu (A)_{i \ell } \Big (\frac {A+A^T}{2} \Big )_{\ell j} \Big (\frac {A-A^T}{2}\Big )_{ji} \, M(A) \, dA. \end{equation*}

Then, averaging this equation over all pairs $(i,j)$ such that $i \not = j$ , we get (7.17) with $C'_{\!\!4}$ given by (7.18). Again, one can check that the function $A \mapsto \mathrm{Tr} \{ \mu (A) ( \frac{A+A^T}{2} ) (\frac{A-A^T}{2}) \} M(A)$ is a class function. Thanks to (6.14), (6.43) and to the fact that

(7.32) \begin{equation} \frac{A_\Theta + A_\Theta ^T}{2} = \sum _{k=1}^p \cos \theta _k (E_{2k-1 \, 2k-1} + E_{2k \, 2k}) +\epsilon _n E_{n\, n}, \end{equation}

we get

\begin{equation*} \mu (A_\Theta ) \Big ( \frac {A_\Theta +A_\Theta ^T}{2} \Big ) \Big ( \frac {A_\Theta -A_\Theta ^T}{2} \Big ) = \sum _{k=1}^p \alpha _k(\Theta ) \, \cos \theta _k \, \sin \theta _k \, (E_{2k-1 \, 2k-1} + E_{2k \, 2k}), \end{equation*}

and thus,

\begin{equation*} \mathrm {Tr} \Big \{ \mu (A_\Theta ) \Big ( \frac {A_\Theta +A_\Theta ^T}{2} \Big ) \Big ( \frac {A_\Theta -A_\Theta ^T}{2} \Big ) \Big \} = 2 \sum _{k=1}^p \alpha _k(\Theta ) \, \cos \theta _k \, \sin \theta _k, \end{equation*}

which leads to (7.19).

7.3. Final step: establishment of (3.4)

Now, we can finish with the following

Proposition 7.4. The functions $\rho$ and $\Gamma$ involved in (3.2) satisfy (3.4). The constants $c_2$ and $c_4$ are given by

(7.33) \begin{eqnarray} && c_2 = - \frac{2}{C_2} \Big (C_3 - \frac{C_4}{n} \Big ) \end{eqnarray}
(7.34) \begin{eqnarray} && = \frac{1}{n^2-4} \frac{\displaystyle \int \Big [ n (\mu (A) \cdot A) \mathrm{Tr}(A) + 2 \mathrm{Tr} \Big (\mu (A) \frac{A+A^T}{2} \frac{A-A^T}{2} \Big ) \Big ] M(A) dA}{\displaystyle \int (\mu (A) \cdot A) M(A) dA} \end{eqnarray}
(7.35) \begin{eqnarray} && c_4 = \frac{C_4}{2 C_2} \end{eqnarray}
(7.36) \begin{eqnarray} && = \frac{1}{2(n^2-4)} \frac{\displaystyle \int \Big [ 2 (\mu (A) \cdot A) \mathrm{Tr}(A) + n \mathrm{Tr} \Big (\mu (A) \frac{A+A^T}{2} \frac{A-A^T}{2} \Big ) \Big ] M(A) dA}{\displaystyle \int (\mu (A) \cdot A) M(A) dA} \end{eqnarray}

where $C_2$ , $C_3$ and $C_4$ are given in Prop. 7.3. Then, $c_2$ , $c_3$ and $c_4$ are given by (3.13), (3.14) and (3.15), respectively.

Proof. We simplify (7.5) in light of (7.11) and (7.14). From (7.11), we have $L({\mathbb P}) = C_2{\mathbb P}$ . With (7.1) and recalling that $\Omega _1$ is defined by (3.5), this leads to

(7.37) \begin{eqnarray} \rho \partial _t \Gamma &=& \frac{1}{\kappa } \Big [ - \big ( ( \nabla _x \rho \otimes{\textbf{e}}_1 ) \Gamma ^T - \Gamma ({\textbf{e}}_1 \otimes \nabla _x \rho ) \big ) + \frac{1}{C_2} \, \Gamma L({\mathbb P}) \Gamma ^T \Big ] \, \Gamma \nonumber \\[5pt] &=& \Big [ - \frac{1}{\kappa } \nabla _x \rho \wedge \Omega _1 + \frac{1}{C_2 \kappa } \, \Gamma L({\mathbb P}) \Gamma ^T \Big ] \, \Gamma. \end{eqnarray}

Now, we compute $\Gamma L({\mathbb P}) \Gamma ^T$ thanks to (7.5) and the expressions (7.14) of $B$ and (7.2) of $\mathbb S$ . We have

\begin{equation*} B(P,Q) = C'_3 \mathrm {Tr}(PQ) \mathrm {I} + C_4 \frac {PQ+QP}{2}, \quad \forall P, \, Q \in \mathfrak {so}_n, \end{equation*}

with $C'_3 = C_3 - \frac{C_4}{n}$ . This leads to

(7.38) \begin{equation} \sum _{m,q=1}^n B_{mq} ({\mathbb S}_{\cdot \cdot m q},F_{rs}) = \sum _{m,q=1}^n \Big [ C'_3 \mathrm{Tr}({\mathbb S}_{\cdot \cdot m q}F_{rs}) \delta _{mq} + C_4 \frac{ \big ({\mathbb S}_{\cdot \cdot m q} \, F_{rs} + F_{rs} \,{\mathbb S}_{\cdot \cdot m q} \big )_{mq}}{2} \Big ]. \end{equation}

We compute, for $m$ , $q$ , $r$ , $s$ , $u$ , $v$ in $\{1, \ldots, n\}$ :

\begin{equation*}\big ({\mathbb S}_{\cdot \cdot m q} \, F_{rs}\big )_{uv} = {\mathbb S}_{u r m q} \delta _{sv} - {\mathbb S}_{u s mq} \delta _{rv}, \quad \big (F_{rs} \, {\mathbb S}_{\cdot \cdot m q}\big )_{uv} = {\mathbb S}_{s v m q} \delta _{ru} - {\mathbb S}_{r v m q} \delta _{su}. \end{equation*}

Hence, $ \mathrm{Tr}({\mathbb S}_{\cdot \cdot m q}F_{rs}) = - 2{\mathbb S}_{r s m q}$ and

(7.39) \begin{eqnarray} \sum _{m,q=1}^n \mathrm{Tr}({\mathbb S}_{\cdot \cdot m q}F_{rs}) \delta _{mq} &=& -2 \sum _{m=1}^n{\mathbb S}_{r s m m} = - 2 \sum _{m,k,\ell =1}^n \Gamma _{kr} \frac{\partial \Gamma _{ks}}{\partial x_\ell } \Gamma _{\ell m} e_{1 \, m} \nonumber \\[5pt] &=& - 2 \sum _{k,\ell =1}^n \Gamma _{kr} \frac{\partial \Gamma _{ks}}{\partial x_\ell } \Omega _{1 \ell } = - 2 \sum _{k=1}^n \Gamma _{kr} \, (\Omega _1 \cdot \nabla _x) \Gamma _{ks}, \end{eqnarray}

where we have used Lemma 7.2. On the other hand, we have

(7.40) \begin{eqnarray} && \sum _{m,q=1}^n \frac{ \big ({\mathbb S}_{\cdot \cdot m q} \, F_{rs} + F_{rs} \,{\mathbb S}_{\cdot \cdot m q} \big )_{mq}}{2} = \frac{1}{2} \sum _{m,q=1}^n \big ({\mathbb S}_{m r m q} \delta _{sq} -{\mathbb S}_{m s m q} \delta _{rq} +{\mathbb S}_{s q m q} \delta _{rm} -{\mathbb S}_{r q m q} \delta _{sm} \big ) \nonumber \\[5pt] && = \frac{1}{2} \sum _{m=1}^n \big ({\mathbb S}_{m r m s} -{\mathbb S}_{m s m r} +{\mathbb S}_{s m r m} -{\mathbb S}_{r m s m} \big ) = \sum _{m=1}^n \big ({\mathbb S}_{m r m s} -{\mathbb S}_{m s m r} \big ) \nonumber \\[5pt] && = \frac{1}{2} \sum _{k, \ell, m=1}^n \Big ( \Gamma _{km} \frac{\partial \Gamma _{kr}}{\partial x_\ell } \big ( \Gamma _{\ell m} e_{1 \, s} + \Gamma _{\ell s} e_{1 \, m} \big ) - \Gamma _{km} \frac{\partial \Gamma _{ks}}{\partial x_\ell } \big ( \Gamma _{\ell m} e_{1 \, r} + \Gamma _{\ell r} e_{1 \, m} \big ) \Big ). \end{eqnarray}

Inserting (7.39) and (7.40) into (7.38) and using (7.5) and Lemma 7.2 leads to

\begin{eqnarray*} && L({\mathbb P})_{rs} = - \kappa \rho \Big \{ -2 C'_3 \sum _{k=1}^n \Gamma _{kr} \, (\Omega _1 \cdot \nabla _x) \Gamma _{ks} \\[5pt] && + \frac{C_4}{2} \sum _{k, \ell, m=1}^n \Big ( \Gamma _{km} \frac{\partial \Gamma _{kr}}{\partial x_\ell } \big ( \Gamma _{\ell m} e_{1 \, s} + \Gamma _{\ell s} e_{1 \, m} \big ) - \Gamma _{km} \frac{\partial \Gamma _{ks}}{\partial x_\ell } \big ( \Gamma _{\ell m} e_{1 \, r} + \Gamma _{\ell r} e_{1 \, m} \big ) \Big ) \Big \}. \end{eqnarray*}

Thus,

\begin{eqnarray*} && \big ( \Gamma L({\mathbb P}) \Gamma ^T \big )_{ij} = \sum _{r, s = 1}^n \Gamma _{ir} L({\mathbb P})_{rs} \Gamma _{js} \\[5pt] && = - \kappa \rho \Big \{ - 2 C'_3 \sum _{s=1}^n (\Omega _1 \cdot \nabla _x) \Gamma _{is} \, \Gamma _{js} + \frac{C_4}{2} \Big ( \sum _{k, r, s=1}^n \big ( \frac{\partial \Gamma _{kr}}{\partial x_k} \Gamma _{ir} \Gamma _{j s} e_{1 \, s} - \frac{\partial \Gamma _{ks}}{\partial x_k} \Gamma _{ir} \Gamma _{j s} e_{1 \, r} \big ) \\[5pt] &&\qquad + \sum _{k, m, r=1}^n \big ( \Gamma _{km} \frac{\partial \Gamma _{kr}}{\partial x_j} \Gamma _{ir} e_{1 \, m} - \Gamma _{km} \frac{\partial \Gamma _{kr}}{\partial x_i} \Gamma _{jr} e_{1 \, m} \big ) \Big ) \Big \}, \end{eqnarray*}

where we have used Lemma 7.2 again as well as that $\sum _{r=1}^n \Gamma _{jr} \Gamma _{ir} = \delta _{ij}$ and similar identities. We note that

\begin{equation*} \sum _{k, r = 1}^n \frac {\partial \Gamma _{kr}}{\partial x_k} \Gamma _{ir} = \big ( \Gamma (\nabla _x \cdot \Gamma ) \big )_i, \end{equation*}

with $\nabla _x \cdot \Gamma$ being the divergence of the tensor $\Gamma$ defined in the statement of Prop. 7.4. On the other hand, since

\begin{equation*} \sum _{k=1}^n \Gamma _{km} \frac {\partial \Gamma _{kr}}{\partial x_j} = - \sum _{k=1}^n \frac {\partial \Gamma _{km}}{\partial x_j} \Gamma _{kr}, \end{equation*}

we have

\begin{equation*} \sum _{k, m, r=1}^n \Gamma _{km} \frac {\partial \Gamma _{kr}}{\partial x_j} \Gamma _{ir} e_{1 \, m} = - \sum _{m=1}^n \frac {\partial \Gamma _{im}}{\partial x_j} e_{1 \, m} = - \frac {\partial \Omega _{1 \, i}}{\partial x_j}, \end{equation*}

because ${\textbf{e}}_1$ depends neither on space nor on time. Thus, we get

(7.41) \begin{equation} \Gamma L({\mathbb P}) \Gamma ^T = - \kappa \rho \Big \{ - 2 C'_3 (\Omega _1 \cdot \nabla _x) \Gamma \, \Gamma ^T+ \frac{C_4}{2} \Big ( \big ( \Gamma (\nabla _x \cdot \Gamma ) \big ) \wedge \Omega _1 + \nabla _x \wedge \Omega _1 \Big ) \Big \}. \end{equation}

Inserting (7.41) into (7.37) yields (3.4) with (3.6) and formulas (7.33), (3.14), (7.35) for the coefficients. Formulas (7.34), (3.13), (7.36), (3.15) follow with a little bit of algebra from the formulas of Prop. 7.3.

7.4. Case of dimension $\boldsymbol{n}=\textbf{4}$

In this subsection, we complete the proof of Prop. 7.3 in the special case $n=4$ .

7.4.1. Proof of (7.11)

If $n=4$ , there exists an automorphism $\beta$ of $\Lambda ^2(V)$ (with $V ={\mathbb C}^4$ ) characterised by the following relation (see [Reference Degond, Diez and Frouvelle19, Sect. 8])

(7.42) \begin{equation} \big (\beta (v_1 \wedge v_2)\big ) \cdot (v_3 \wedge v_4) = \mathrm{det} (v_1, v_2, v_3, v_4), \quad \forall (v_1, \ldots, v_4) \in V^4, \end{equation}

and where the dot product in $\Lambda ^2(V)$ extends that of $\mathfrak{so}_4$ , namely $(v_1 \wedge v_2) \cdot (v_3 \wedge v_4) = (v_1 \cdot v_3) (v_2 \cdot v_4) - (v_1 \cdot v_4) (v_2 \cdot v_3)$ . In addition, $\beta$ intertwines $\Lambda ^2(V)$ and itself as $\mathrm{SO}_4$ representations. It is also an involution (i.e. $\beta ^{-1} = \beta$ ) with

(7.43) \begin{equation} \beta (F_{12}) = F_{34}, \quad \beta (F_{13}) = -F_{24}, \quad \beta (F_{14}) = F_{23}. \end{equation}
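One concrete realisation of $\beta$ consistent with (7.42) and (7.43) is the Hodge-star operation $\beta(P)_{k\ell} = \tfrac12\sum_{i,j}\varepsilon_{k\ell ij}P_{ij}$, where $\varepsilon$ is the Levi-Civita symbol. The following Python sketch uses this realisation to check (7.43) and the intertwining property $\beta(gPg^T) = g\,\beta(P)\,g^T$ for a sample $g\in\mathrm{SO}_4$:

\begin{verbatim}
import numpy as np
from itertools import permutations

# Levi-Civita symbol on four indices
eps = np.zeros((4, 4, 4, 4))
for perm in permutations(range(4)):
    par = sum(perm[a] > perm[b] for a in range(4) for b in range(a + 1, 4))
    eps[perm] = (-1) ** par

def beta(P):
    # Hodge-star realisation of (7.42): beta(P)_{kl} = (1/2) sum_{ij} eps_{klij} P_{ij}
    return 0.5 * np.einsum('klij,ij->kl', eps, P)

def F(i, j):
    M = np.zeros((4, 4)); M[i, j], M[j, i] = 1.0, -1.0
    return M

# (7.43): beta(F_12) = F_34, beta(F_13) = -F_24, beta(F_14) = F_23 (indices shifted to 0-based)
print(np.allclose(beta(F(0, 1)), F(2, 3)),
      np.allclose(beta(F(0, 2)), -F(1, 3)),
      np.allclose(beta(F(0, 3)), F(1, 2)))

# beta intertwines the conjugation action of SO_4: beta(g P g^T) = g beta(P) g^T
rng = np.random.default_rng(1)
q, r = np.linalg.qr(rng.standard_normal((4, 4)))
g = q @ np.diag(np.sign(np.diag(r)))
if np.linalg.det(g) < 0:
    g[:, [0, 1]] = g[:, [1, 0]]
P = F(0, 2) + 0.5 * F(1, 3)
print(np.allclose(beta(g @ P @ g.T), g @ beta(P) @ g.T))
\end{verbatim}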

The map $\beta$ has eigenvalues $\pm 1$ with associated eigenspaces $\Lambda _\pm$ such that

\begin{equation*} \Lambda _\pm = \mathrm {Span} \{ F_{12} \pm F_{34}, \, F_{13} \mp F_{24}, \, F_{14} \pm F_{23} \}. \end{equation*}

Thus, $\mathrm{dim} \Lambda _\pm = 3$ , each $\Lambda _\pm$ is an irreducible representation of $\mathrm{SO}_4$ and $\beta$ is the orthogonal symmetry with respect to $\Lambda _+$ . Thanks to this, it was shown in ref. [Reference Degond, Diez and Frouvelle19, Sect. 8] that for a map $L$ : $\mathfrak{so}_4 \to \mathfrak{so}_4$ satisfying (7.20), there exist two real constants $C_2$ , $C'_2$ such that

\begin{equation*} L(P)= C_2 \, P + C'_2 \, \beta (P), \quad \forall P \in \mathfrak {so}_4. \end{equation*}

In ref. [Reference Degond, Diez and Frouvelle19, Sect. 8], it was used that $L$ commutes, not only with conjugations by elements of $\mathrm{SO}_4$ (inner automorphisms) through (7.20), but also with those of $\mathrm{O}_4 \setminus \mathrm{SO}_4$ (outer automorphisms). This simple observation allowed us to conclude that $C'_2 = 0$ . However, here, it is not true anymore that $L$ commutes with outer automorphisms. Hence, we have to look for a different argument to infer that $C'_2 = 0$ . This is what we develop now.

Taking the inner product of $L(P)$ with $\beta (P)$ and using (7.3), we get

\begin{equation*} \int _{\mathrm {SO}_4} (A \cdot P) \, (\mu (A) \cdot \beta (P)) \, M(A) \, dA = C_2 (P \cdot \beta (P)) + C'_2 (\beta (P) \cdot \beta (P)). \end{equation*}

We apply this equality with $P=F_{ij}$ for $i \not = j$ , and we note that $(\beta (F_{ij}) \cdot F_{ij})=0$ , $\forall i, j$ , and that $(\beta (P) \cdot \beta (P)) = (P \cdot P)$ and $(\mu (A) \cdot \beta (P)) = (\beta \circ \mu (A) \cdot P)$ due to the fact that $\beta$ is an orthogonal self-adjoint transformation of $\mathfrak{so}_4$ . This leads to

\begin{equation*} C'_2 = \int _{\mathrm {SO}_4} \frac {A_{ij} - A_{ji}}{2} \, (\beta \circ \mu (A))_{ij} \, M(A) \, dA. \end{equation*}

Averaging over $(i,j)$ such that $i \not = j$ , we get

(7.44) \begin{equation} C'_2 = \frac{1}{6} \int _{\mathrm{SO}_4} \Big ( \frac{A-A^T}{2} \cdot \big (\beta \circ \mu (A)\big ) \Big ) \, M(A) \, dA. \end{equation}

Due to the fact that $\beta$ is an intertwining map, the function $A \mapsto ( A \cdot (\beta \circ \mu (A)) ) \, M(A)$ is a class function. Thus, we can apply Weyl’s integration formula (5.7). Using (6.43), (6.9), (7.22), (7.43) and the fact that $(F_{ij})_{i\lt j}$ forms an orthonormal basis of $\mathfrak{so}_4$ , we get

(7.45) \begin{equation} C'_2 = - \, \frac{\displaystyle \int _{{\mathcal T}_2} \big ( \sin \theta _1 \alpha _2 (\Theta ) + \sin \theta _2 \alpha _1 (\Theta ) \big ) \, m(\Theta ) \, d \Theta }{\displaystyle 6 \int _{{\mathcal T}_2} m(\Theta ) \, d \Theta }, \end{equation}

with $\Theta = (\theta _1, \theta _2)$ and $ m(\Theta ) = e^{\kappa (\cos \theta _1 + \cos \theta _2)} \, (\cos \theta _1 - \cos \theta _2)^2$ . Now, we define

(7.46) \begin{equation} \tilde \alpha _1 (\theta _1, \theta _2) = - \alpha _1 (\!-\!\theta _1, \theta _2), \quad \tilde \alpha _2 (\theta _1, \theta _2) = \alpha _2 (\!-\!\theta _1, \theta _2). \end{equation}

We can check that $\tilde \alpha = (\tilde \alpha _1, \tilde \alpha _2)$ belongs to $\mathcal V$ . Furthermore, using $\tilde \tau$ as a test function where $\tilde \tau$ is obtained from $\tau$ by the same formulas (7.46), we realise that $\tilde \alpha$ is another solution of the variational formulation (6.40). Since the solution of (6.40) is unique and equal to $\alpha$ , we deduce that $\tilde \alpha = \alpha$ , hence,

(7.47) \begin{equation} \alpha _1 (\!-\! \theta _1, \theta _2) = - \alpha _1(\theta _1, \theta _2), \quad \alpha _2 (\!-\! \theta _1, \theta _2) = \alpha _2(\theta _1, \theta _2). \end{equation}

Thus, changing $\theta _1$ into $-\theta _1$ in the integrals, the numerator of (7.45) is changed into its opposite, while the denominator is unchanged. It follows that $C'_2=0$ , which ends the proof.
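This antisymmetry argument can be visualised numerically: in the following Python sketch, $\alpha$ is replaced by sample functions which merely have the parity (7.47) (they are not the actual solution of (6.40)), and the ratio appearing in (7.45) is seen to vanish up to round-off:

\begin{verbatim}
import numpy as np

kappa = 1.5
th = np.linspace(-np.pi, np.pi, 801)          # grid symmetric about theta_1 = 0
T1, T2 = np.meshgrid(th, th, indexing='ij')
m = np.exp(kappa * (np.cos(T1) + np.cos(T2))) * (np.cos(T1) - np.cos(T2)) ** 2
a1 = np.sin(T1) * np.cos(T2)                  # odd in theta_1, like alpha_1 in (7.47)
a2 = np.cos(T1) + np.cos(T2)                  # even in theta_1, like alpha_2 in (7.47)
num = np.sum((np.sin(T1) * a2 + np.sin(T2) * a1) * m)    # numerator of (7.45), up to dTheta
den = np.sum(m)
print(abs(num / den))                          # vanishes up to round-off, whence C'_2 = 0
\end{verbatim}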

7.4.2. Proof of (7.26)

In ref. [Reference Degond, Diez and Frouvelle19, Sect. 9] a symmetric bilinear map $B_s$ : $\mathfrak{so}_4 \times \mathfrak{so}_4 \to{\mathcal S}_4$ satisfying (7.25) is shown to be of the form

(7.48) \begin{equation} B_s(P,Q) = C_3 \mathrm{Tr}(PQ) \mathrm{I} + C_4 \Big ( \frac{PQ+QP}{2} - \frac{1}{4} \mathrm{Tr}(PQ) \mathrm{I} \Big ) + C_5 (\beta (P) \cdot Q) \mathrm{I}, \quad \forall P, \, Q \in \mathfrak{so}_4, \end{equation}

where $C_3$ , $C_4$ and $C_5$ are real constants and $\beta$ is the map defined by (7.42). To show that $C_5=0$ , we adopt the same method as in the previous section. Taking the trace of (7.48) and applying the resulting formula with $P=F_{ij}$ and $Q = \beta (F_{ij})$ , we get, with (7.24) and (7.4):

\begin{eqnarray*} && \frac{1}{2} \int _{\mathrm{SO}_4} \Big \{ \big ( \frac{A-A^T}{2} \cdot F_{ij} \big ) \big ( \beta \circ \mu (A) \cdot F_{ij} \big ) \\[5pt] &&\qquad + \Big ( \beta \big ( \frac{A-A^T}{2} \big ) \cdot F_{ij} \Big ) \big ( \mu (A) \cdot F_{ij} \big ) \Big \} \mathrm{Tr}(A) \, M(A) \, dA = 4 C_5. \end{eqnarray*}

Averaging the result over $i$ , $j$ , such that $i \not = j$ , we get

\begin{eqnarray*} 8 C_5 &=& \frac{1}{6} \int _{\mathrm{SO}_4} \Big \{ \Big ( \frac{A-A^T}{2} \cdot \beta \circ \mu (A) \Big ) + \Big ( \beta \big ( \frac{A-A^T}{2} \big ) \cdot \mu (A) \Big ) \Big \} \, \mathrm{Tr}(A) \, M(A) \, dA \\[5pt] &=& \frac{1}{3} \int _{\mathrm{SO}_4} \Big ( \frac{A-A^T}{2} \cdot \beta \circ \mu (A) \Big ) \, \mathrm{Tr}(A) \, M(A) \, dA. \end{eqnarray*}

This formula is similar to (7.44), up to the additional factor $\mathrm{Tr}(A)$ . In the analogue of (7.45), this extra factor contributes an additional term which is an even function of the $\theta _k$ . Thus, the conclusion remains that the corresponding integral vanishes by antisymmetry, and this leads to $C_5=0$ .

7.4.3. Proof of (7.27)

We have already shown that $\Lambda ^2(\Lambda ^2(V)) = W \oplus Z$ , where $W \cong{\mathbb S}_{[2]}$ and $Z \cong \Lambda ^2(V)$ . If $n=4$ , $\Lambda ^2(V)$ is not irreducible. Instead, it decomposes into two irreducible representations of highest weights ${\mathcal L_1} +{\mathcal L_2}$ and ${\mathcal L_1} -{\mathcal L_2}$ . On the other hand, $\vee ^2(V)$ decomposes into the irreducible representations ${\mathbb S}_{[2]}$ and ${\mathbb C} \, \mathrm{I}$ , which have highest weights $2{\mathcal L}_1$ and $0$ . Thus, by Schur's lemma, a non-zero intertwining map $\tilde B_a$ : $\Lambda ^2(\Lambda ^2(V)) \to \vee ^2(V)$ must be an isomorphism between $W$ and ${\mathbb S}_{[2]}$ and equal to zero on the complement $Z$ of $W$ in $\Lambda ^2(\Lambda ^2(V))$ . We now identify such a map.

If $n=4$ , there exists an isomorphism $\zeta$ : $\Lambda ^3(V) \to V$ such that $\zeta (v_1 \wedge v_2 \wedge v_3) \cdot v_4 = \mathrm{det}(v_1, v_2, v_3, v_4)$ , $\forall (v_1, \ldots, v_4) \in V^4$ . Now, we define the map $\mathcal R$ as follows:

\begin{eqnarray*}{\mathcal R}\; : \, \, \Lambda ^2(\Lambda ^2(V)) &\to & \vee ^2(V) \\[5pt] (v_1 \wedge v_2) \wedge (v_3 \wedge v_4) & \mapsto & \zeta (v_1 \wedge v_2 \wedge v_3) \vee v_4 - \zeta (v_1 \wedge v_2 \wedge v_4) \vee v_3 \\[5pt] && - \zeta (v_3 \wedge v_4 \wedge v_1) \vee v_2 + \zeta (v_3 \wedge v_4 \wedge v_2) \vee v_1, \quad \forall (v_1, \ldots, v_4) \in V^4. \end{eqnarray*}

$\mathcal R$ intertwines the representations $\Lambda ^2(\Lambda ^2(V))$ and $\vee ^2(V)$ of $\mathrm{so}_4{\mathbb C}$ . Thus, its kernel and image are subrepresentations of $\Lambda ^2(\Lambda ^2(V))$ and $\vee ^2(V)$ of $\mathrm{so}_4{\mathbb C}$ respectively. One has

\begin{eqnarray*} \mathrm{Tr} \big \{{\mathcal R} \big ( (v_1 \wedge v_2) \wedge (v_3 \wedge v_4) \big ) \big \} &=& 2 \{ \zeta (v_1 \wedge v_2 \wedge v_3) \cdot v_4 - \zeta (v_1 \wedge v_2 \wedge v_4) \cdot v_3 \\[5pt] && \quad - \zeta (v_3 \wedge v_4 \wedge v_1) \cdot v_2 + \zeta (v_3 \wedge v_4 \wedge v_2) \cdot v_1 \big \} \\[5pt] &=& 4 \big ( \mathrm{det} (v_1, v_2, v_3, v_4) - \mathrm{det} (v_3, v_4, v_1, v_2) \big ) = 0. \end{eqnarray*}

Hence, $\mathrm{im}({\mathcal R}) \subset{\mathbb S}_{[2]}$ . Furthermore $\mathrm{im}({\mathcal R}) \not = \{0\}$ . Indeed,

\begin{equation*} {\mathcal R} \big ( (e_1 \wedge e_2) \wedge (e_3 \wedge e_4) \big ) = - (e_1 \vee e_1 + e_2 \vee e_2) + e_3 \vee e_3 + e_4 \vee e_4 \not = 0, \end{equation*}

where $(e_i)_{i=1}^4$ is the canonical basis of $V ={\mathbb C}^4$. Since ${\mathbb S}_{[2]}$ is irreducible and $\mathrm{im}({\mathcal R})$ is a non-trivial subrepresentation of ${\mathbb S}_{[2]}$, we have $\mathrm{im}({\mathcal R}) ={\mathbb S}_{[2]}$. We have $\mathrm{dim} \, \Lambda ^2(\Lambda ^2(V)) = 15$ and $\mathrm{dim} \,{\mathbb S}_{[2]} = 9$; hence, by the rank–nullity theorem, $\mathrm{dim} (\mathrm{ker} \,{\mathcal R}) = 6 = \mathrm{dim} \, \Lambda ^2(V)$. Thus, $\mathrm{ker} \,{\mathcal R} = Z$ (indeed, $\mathrm{ker} \,{\mathcal R}$ has to be a subrepresentation of $\Lambda ^2(\Lambda ^2(V))$ and $Z \cong \Lambda ^2(V)$ is the only such representation which has the right dimension). Consequently, $\mathcal R$ is an isomorphism from the complement $W$ of $Z$ in $\Lambda ^2(\Lambda ^2(V))$ onto ${\mathbb S}_{[2]}$ and vanishes on $Z$. This shows that $\mathcal R$ is the map we sought to identify. By Schur's lemma, there is a constant $C_6 \in{\mathbb C}$ such that $\tilde B_a = C_6{\mathcal R}$.
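These algebraic facts can be checked independently by a short numerical computation. The following sketch is not part of the proof: the helper names (`zeta`, `vee`, `R`) and the representation of $u \vee v$ by the symmetric matrix $u v^T + v u^T$ are illustrative choices rather than notation from the paper. With this normalisation, the evaluation at $(e_1 \wedge e_2) \wedge (e_3 \wedge e_4)$ comes out as $\mathrm{diag}(-2,-2,2,2)$, which agrees with the expression displayed above up to the normalisation of $\vee$; the computation also confirms that the image of $\mathcal R$ is trace-free and has dimension $9 = \mathrm{dim} \,{\mathbb S}_{[2]}$.

```python
import numpy as np
from itertools import combinations

e = np.eye(4)  # canonical basis e_1,...,e_4 of C^4 (real coordinates suffice here)

def zeta(v1, v2, v3):
    # zeta(v1 ^ v2 ^ v3): the unique vector such that zeta(...) . v4 = det(v1, v2, v3, v4)
    return np.array([np.linalg.det(np.column_stack([v1, v2, v3, e[m]])) for m in range(4)])

def vee(u, v):
    # u 'vee' v, represented here by the symmetric matrix u v^T + v u^T
    return np.outer(u, v) + np.outer(v, u)

def R(v1, v2, v3, v4):
    # R((v1 ^ v2) ^ (v3 ^ v4)) following the defining formula above
    return (vee(zeta(v1, v2, v3), v4) - vee(zeta(v1, v2, v4), v3)
            - vee(zeta(v3, v4, v1), v2) + vee(zeta(v3, v4, v2), v1))

M = R(e[0], e[1], e[2], e[3])        # R((e_1 ^ e_2) ^ (e_3 ^ e_4))
print(np.diag(M), np.trace(M))       # [-2. -2.  2.  2.]  0.0 (trace-free, i.e. lies in S_[2])

# Rank of R on the 15 basis elements (e_a ^ e_b) ^ (e_c ^ e_d) of Lambda^2(Lambda^2(V))
pairs = list(combinations(range(4), 2))
images = [R(e[a], e[b], e[c], e[d]).ravel() for (a, b), (c, d) in combinations(pairs, 2)]
print(np.linalg.matrix_rank(np.array(images)))   # 9 = dim S_[2]; hence dim ker R = 15 - 9 = 6
```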

We have

\begin{equation*} \zeta (e_i \wedge e_j \wedge e_k) = \sum _{m=1}^4 \varepsilon _{ijkm} e_m, \end{equation*}

where $\varepsilon _{ijkm}$ is equal to zero if at least two of the indices $i,j,k,m$ coincide, and equal to the signature of the permutation

\begin{equation*} \left ( \begin {array}{cccc} 1&2&3&4 \\[5pt] i&j&k&m \end {array} \right ), \end{equation*}

otherwise. Then,

\begin{equation*} {\mathcal R}\big ( (e_i \wedge e_j) \wedge (e_k \wedge e_\ell ) \big ) = \sum _{m=1}^4 \big [ \varepsilon _{ijkm} e_m \vee e_\ell - \varepsilon _{ij\ell m} e_m \vee e_k - \varepsilon _{k \ell im} e_m \vee e_j + \varepsilon _{k \ell jm} e_m \vee e_i \big ]. \end{equation*}

It follows that

\begin{equation*} B_a(F_{ij},F_{ik})_{i \ell } = C_6 {\mathcal R}\big ( (e_i \wedge e_j) \wedge (e_i \wedge e_k) \big )_{i \ell } = 2 C_6 \varepsilon _{ijk\ell }. \end{equation*}

Thus,

\begin{equation*} \sum _{i,j,k, \ell } \epsilon _{i j k \ell } B_a(F_{ij},F_{ik})_{i \ell } = 2 C_6 \sum _{i,j,k, \ell } \epsilon _{i j k \ell } \varepsilon _{ijk\ell } = 2 C_6 \mathrm {Card}(\mathfrak {S}_4) = 48 C_6. \end{equation*}
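The value $\mathrm{Card}(\mathfrak{S}_4) = 24$ used in the last equality is simply the number of quadruples $(i,j,k,\ell)$ of pairwise distinct indices, each contributing $\varepsilon _{ijk\ell }^2 = 1$. As a sanity check (not part of the argument), this count can be confirmed by direct enumeration; the sketch below computes the four-index signature symbol from its definition.

```python
from itertools import product

def eps(i, j, k, l):
    # Four-index signature symbol: 0 if two indices coincide, otherwise the
    # signature of the permutation sending (1,2,3,4) to (i,j,k,l)
    idx = (i, j, k, l)
    if len(set(idx)) < 4:
        return 0
    inversions = sum(idx[a] > idx[b] for a in range(4) for b in range(a + 1, 4))
    return -1 if inversions % 2 else 1

total = sum(eps(i, j, k, l) ** 2 for i, j, k, l in product(range(1, 5), repeat=4))
print(total)  # 24 = Card(S_4), so the left-hand side above equals 2 * C_6 * 24 = 48 C_6
```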

On the other hand,

\begin{eqnarray*} && \sum _{i,j,k, \ell } \epsilon _{i j k \ell } B_a(F_{ij},F_{ik})_{i \ell } = \frac{1}{2} \int _{\mathrm{SO}_4} \sum _{i,j,k, \ell } \epsilon _{i j k \ell } \Big [ \Big ( \frac{A-A^T}{2} \Big )_{ij} \mu (A)_{ik} \\[5pt] &&\qquad - \mu (A)_{ij} \Big ( \frac{A-A^T}{2} \Big )_{ik} \Big ] \Big ( \frac{A+A^T}{2} \Big )_{i\ell } \, M(A) \, dA \\[5pt] && = - \int _{\mathrm{SO}_4} \sum _{j,k, \ell, i} \epsilon _{j k \ell i} \Big ( \frac{A-A^T}{2} e_i \Big )_j \big (\mu (A) e_i \big )_k \Big ( \frac{A+A^T}{2} e_i \Big )_\ell \, M(A) \, dA \\[5pt] && = - \int _{\mathrm{SO}_4} \sum _i \zeta \Big ( \frac{A-A^T}{2} e_i \, \wedge \, \mu (A) e_i \, \wedge \, \frac{A+A^T}{2} e_i \Big ) \cdot e_i \, \, M(A) \, dA \\[5pt] && = - \int _{\mathrm{SO}_4} \sum _i \mathrm{det} \Big ( \frac{A-A^T}{2} e_i, \, \mu (A) e_i, \, \frac{A+A^T}{2} e_i, \, e_i \Big ) \, M(A) \, dA, \end{eqnarray*}

where we have used that

\begin{equation*} \zeta (u \wedge v \wedge w) = \sum _{i,j,k,\ell } \epsilon _{ijk\ell } u_i v_j w_k e_\ell, \end{equation*}

with $u_i$ the $i$-th coordinate of $u$ in the basis $(e_i)_{i=1}^n$ and similarly for $v_j$ and $w_k$. Now, we use (6.15) and get

(7.49) \begin{eqnarray} &&{\mathcal D}_i \;=\!:\; \int _{\mathrm{SO}_4} \mathrm{det} \Big ( \frac{A-A^T}{2} e_i, \, \mu (A) e_i, \, \frac{A+A^T}{2} e_i, \, e_i \Big ) \, M(A) \, dA \nonumber \\[5pt] && = \int _{{\mathcal T}} \int _{\mathrm{SO}_4} \mathrm{det} \Big ( g \frac{A_\Theta -A_\Theta ^T}{2} g^T e_i, \, g \mu (A_\Theta ) g^T e_i, \, g \frac{A_\Theta +A_\Theta ^T}{2} g^T e_i, \, e_i \Big ) \, dg \, m(\Theta ) \, d \Theta \nonumber \\[5pt] && = \int _{{\mathcal T}} \int _{\mathrm{SO}_4} \mathrm{det} \Big ( \frac{A_\Theta -A_\Theta ^T}{2} g^T e_i, \, \mu (A_\Theta ) g^T e_i, \, \frac{A_\Theta +A_\Theta ^T}{2} g^T e_i, \, g^T e_i \Big ) \, dg \, m(\Theta ) \, d \Theta \nonumber \\[5pt] && = \int _{{\mathcal T}} \int _{\mathrm{SO}_4} \sum _{j,k,\ell,m} g_{ij} \, g_{ik} \, g_{i\ell } \, g_{im} \, \nonumber \\[5pt] &&\qquad \mathrm{det} \Big ( \frac{A_\Theta -A_\Theta ^T}{2} e_j, \, \mu (A_\Theta ) e_k, \, \frac{A_\Theta +A_\Theta ^T}{2} e_\ell, \, e_m \Big ) \, dg \, m(\Theta ) \, d \Theta . \end{eqnarray}

Now, thanks to (6.43), (6.14), (7.32), we notice that the matrices $\frac{A_\Theta -A_\Theta ^T}{2}$ , $\mu (A_\Theta )$ , $\frac{A_\Theta +A_\Theta ^T}{2}$ are sparse. Hence, the determinant in (7.49) is equal to zero for many values of the quadruple $(j,k,\ell,m) \in \{1, \ldots, 4\}^4$ . The non-zero values of this determinant are given in Table 1.

Table 1. Table of the non-zero values of the determinant in (7.49) (called ‘det’ in the table) as a function of $(j,k,\ell,m)$ . The quantities $\alpha _q$ , $c_q$ and $s_q$ for $q = 1, \, 2$ refer to $\alpha (\theta _q)$ , $\cos\!(\theta _q)$ , $\sin (\theta _q)$

It results that

(7.50) \begin{eqnarray} &&{\mathcal D}_i = \int _{{\mathcal T}} \big ( \alpha _2(\Theta ) \sin \theta _1 - \alpha _1(\Theta ) \sin \theta _2 \big ) \big ( \cos \theta _1 - \cos \theta _2 \big ) \, m(\Theta ) \, d \Theta \nonumber \\[5pt] &&\qquad \times \int _{\mathrm{SO}_4} ( g_{i1}^2 + g_{i2}^2 ) ( g_{i3}^2 + g_{i4}^2 ) \, dg, \end{eqnarray}

and the first integral in (7.50) is equal to $0$ by the symmetry relations (7.47). Hence ${\mathcal D}_i = 0$ for all $i$, and therefore $C_6 = 0$.

8. Conclusion

In this paper, we have derived a fluid model for a system of stochastic differential equations modelling rigid bodies interacting through body-attitude alignment in arbitrary dimension. This follows earlier work in which the derivation was carried out either in dimension $3$ only, or for simpler jump processes. The extension was far from straightforward. The main output of this work is to highlight the importance of concepts from Lie-group theory, such as the maximal torus, Cartan subalgebra and Weyl group, in this derivation. We anticipate that these concepts (which were hidden, although obviously present, in earlier works) may be key to more general collective dynamics models in which the control variables of the agents, i.e. the variables that determine their trajectories, belong to a Lie group or to a homogeneous space (which can be regarded as the quotient of a Lie group by one of its subgroups). At least when these Lie groups or homogeneous spaces are compact, we may expect that concepts similar to those developed in this paper are at play. Extensions to non-compact Lie groups or homogeneous spaces are likely to be even more delicate.

Acknowledgements

PD holds a visiting professor association with the Department of Mathematics, Imperial College London, UK. AF acknowledges support from the Project EFI ANR-17-CE40-0030 of the French National Research Agency. AF thanks the Laboratoire de Mathématiques et Applications (LMA, CNRS), Université de Poitiers, where part of this research was conducted, for its hospitality.

Competing interests

The authors declare none.

Appendix

A. Direct derivation of the strong form of the equations satisfied by $(\alpha _k)_{k=1}^p$

As a preliminary, we give an expression of the radial Laplacian defined by (5.6):

(A.1) \begin{eqnarray} && L \varphi = \sum _{j=1}^p \Big (\frac{\partial ^2 \varphi }{\partial \theta _j^2} + \epsilon _n \frac{\sin \theta _j}{1 - \cos \theta _j} \, \frac{\partial \varphi }{\partial \theta _j} \Big ) \nonumber \\[5pt] &&\quad + \sum _{1 \leq j \lt k \leq p} \frac{2}{\cos \theta _k - \cos \theta _j} \, \Big ( \sin \theta _j \, \frac{\partial \varphi }{\partial \theta _j} - \sin \theta _k \, \frac{\partial \varphi }{\partial \theta _k} \Big ) \end{eqnarray}
(A.2) \begin{eqnarray} &&= \sum _{j=1}^p \Big [ \frac{\partial ^2 \varphi }{\partial \theta _j^2} + \Big ( \sum _{k \not = j} \frac{2}{\cos \theta _k - \cos \theta _j} + \frac{\epsilon _n}{1 - \cos \theta _j} \Big ) \sin \theta _j \frac{\partial \varphi }{\partial \theta _j} \Big ]. \end{eqnarray}
(A.3) \begin{eqnarray} && = \frac{1}{u_n} \nabla _\Theta \cdot \big ( u_n \nabla _\Theta \varphi \big ), \end{eqnarray}

with $\epsilon _n$ given by (3.7).
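The passage from (A.1) to (A.2) only regroups the cross terms of the second sum; the diagonal terms are untouched. As a sanity check (not part of the derivation), the following short symbolic computation, written in Python with sympy, verifies this regrouping for $p = 3$; the variable names are illustrative.

```python
import sympy as sp

p = 3  # small example; the regrouping works identically for any p
theta = sp.symbols(f'theta1:{p + 1}')
phi = sp.Function('phi')(*theta)
c = [sp.cos(t) for t in theta]
s = [sp.sin(t) for t in theta]
d = [sp.diff(phi, t) for t in theta]

# Cross term of (A.1): sum over pairs j < k
cross_A1 = sum(2 / (c[k] - c[j]) * (s[j] * d[j] - s[k] * d[k])
               for j in range(p) for k in range(j + 1, p))

# Same term regrouped as in (A.2): one coefficient per j multiplying sin(theta_j) dphi/dtheta_j
cross_A2 = sum(sum(2 / (c[k] - c[j]) for k in range(p) if k != j) * s[j] * d[j]
               for j in range(p))

print(sp.simplify(cross_A1 - cross_A2))  # 0
```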

In this section, we give a direct derivation of the strong form of the equations satisfied by $(\alpha _k)_{k=1}^p$. For this, we use a strategy based on [Reference Faraut30, Section 8.3] (see also ref. [Reference Degond18]). It relies on the following formula. Let $f$ be a function $\mathrm{SO}_n \to V$, where $V$ is a finite-dimensional vector space over $\mathbb R$. Then, we have

(A.4) \begin{equation} \varrho \big ( \mathrm{Ad}(A^{-1}) X -X \big )^2 f(A) - \varrho \big ( [\mathrm{Ad}(A^{-1}) X, X ] \big ) f(A) = \frac{d^2}{dt^2} \big ( f( e^{tX} A e^{-tX}) \big ) \big |_{t=0}, \end{equation}

for all $X \in \mathfrak{so}_n$ and all $A \in \mathrm{SO}_n$ , where $\varrho$ is defined at (2.4). We note in passing that $\varrho$ is a Lie algebra representation of $\mathfrak{so}_n$ into $C^\infty (\mathrm{SO}_n,V)$ , i.e. it is a linear map $\mathfrak{so}_n \to{\mathcal L}(C^\infty (\mathrm{SO}_n,V))$ (where ${\mathcal L}(C^\infty (\mathrm{SO}_n,V))$ is the space of linear maps of $C^\infty (\mathrm{SO}_n,V)$ into itself) which satisfies

\begin{equation*} \varrho ([X,Y]) f = [\varrho (X), \varrho (Y)] f, \quad \forall X, Y \in \mathfrak {so}_n, \quad \forall f \in C^\infty (\mathrm {SO}_n,V). \end{equation*}

In the previous formula, the bracket on the left is the usual Lie bracket in $\mathfrak{so}_n$ while the bracket on the right is the commutator of two elements of ${\mathcal L}(C^\infty (\mathrm{SO}_n,V))$ .

The following proposition gives the equation satisfied by $(\alpha _k)_{k=1}^p$ in strong form.

Proposition A.1. (i) The functions $(\alpha _k)_{k=1}^p$ defined by (6.9) satisfy the following system of partial differential equations:

(A.5) \begin{eqnarray} && L \alpha _\ell - \kappa \Big ( \sum _{k=1}^p \sin \theta _k \frac{\partial }{\partial \theta _k} \Big ) \alpha _\ell - \sum _{k \not = \ell } \Big ( \frac{\alpha _\ell - \alpha _k}{1 - \cos\!(\theta _\ell - \theta _k)} + \frac{\alpha _\ell + \alpha _k}{1 - \cos\!(\theta _\ell + \theta _k)} \Big ) \nonumber \\[5pt] &&\quad - \epsilon _n \frac{\alpha _\ell }{1 - \cos \theta _\ell } + \sin \theta _\ell = 0, \quad \forall \ell = 1, \ldots, p \end{eqnarray}

(ii) System (A.5) is identical to (3.11).

Proof. (i) Eq. (6.4) can be equivalently written

(A.6) \begin{equation} \Delta \mu (A) + (\nabla \log M \cdot \nabla \mu )(A) = \frac{A-A^T}{2}. \end{equation}

We evaluate (A.6) at $A = A_\Theta$ for $\Theta \in{\mathcal T}$ . We recall the expression (6.43) of $\frac{A_\Theta -A_\Theta ^T}{2}$ . Besides, with (3.1) and (3.21), we get

\begin{equation*} \nabla \log M = \kappa \nabla (A \cdot \mathrm {I}) = \kappa P_{T_A} \mathrm {I} = \kappa A \frac {A^T - A}{2}, \end{equation*}

which, thanks to (6.43) and (2.5), leads to

(A.7) \begin{eqnarray} (\nabla \log M \cdot \nabla \mu )(A_\Theta ) &=& \kappa \sum _{k=1}^p \sin \theta _k \, (\nabla \mu )(A_\Theta ) \cdot (A_\Theta F_{2k-1 \, 2k}) \nonumber \\[5pt] &=& \kappa \sum _{k=1}^p \sin \theta _k \, \big ( \varrho (F_{2k-1 \, 2k}) (\mu ) \big )(A_\Theta ). \end{eqnarray}

We recall (6.27). The following identity is proved in the same manner:

(A.8) \begin{equation} \big ( \varrho (F_{2k-1 \, 2k})^2 (\mu ) \big )(A_\Theta ) = \sum _{\ell =1}^p \frac{\partial ^2 \alpha _\ell }{\partial \theta _k^2}(A_\Theta ) \, F_{2\ell -1 \, 2\ell }. \end{equation}

Inserting (6.27) into (A.7), we eventually get

(A.9) \begin{equation} (\nabla \log M \cdot \nabla \mu )(A_\Theta ) = \sum _{\ell =1}^p \Big (\!-\! \kappa \sum _{k=1}^p \sin \theta _k \frac{\partial \alpha _\ell }{\partial \theta _k}(A_\Theta ) \Big ) \, F_{2\ell -1 \, 2\ell }. \end{equation}

It remains to find an expression of $\Delta \mu (A_\Theta )$. We use Formula (2.7) for $\Delta$. For $A=A_\Theta$, the quantities $\varrho (F_{2k-1 \, 2k})^2 (\mu )$ for $k=1, \ldots, p$ are explicit thanks to (A.8). On the other hand, the quantities $\varrho (F_{ij})^2(\mu )$ for $(i,j) \not \in \{ (2k-1,2k), \, k=1, \ldots, p \}$ are not. To compute them, we apply the same strategy as in [Reference Faraut30, Section 8.3] and in ref. [Reference Degond18], based on (A.4). Given (6.6), we get, after a few lines of computation,

\begin{equation*} \frac {d^2}{dt^2} \big ( \mu ( e^{tX} A e^{-tX}) \big ) \big |_{t=0} = \frac {d^2}{dt^2} \big ( e^{tX} \mu (A) e^{-tX} \big ) \big |_{t=0} = \big [X, [X,\mu (A)] \big ], \end{equation*}

and so, (A.4) leads to

(A.10) \begin{equation} \varrho \big ( \mathrm{Ad}(A^{-1}) X -X \big )^2 \mu (A) - \varrho \big ( [\mathrm{Ad}(A^{-1}) X, X ] \big ) \mu (A) = \big [X, [X,\mu (A)] \big ]. \end{equation}
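The double-commutator identity appearing on the right-hand side of (A.10), i.e. $\frac{d^2}{dt^2} \big ( e^{tX} \mu (A) e^{-tX} \big ) \big |_{t=0} = \big [X, [X,\mu (A)] \big ]$, can be checked numerically with an arbitrary matrix standing in for $\mu(A)$. The following is a minimal sketch (not part of the proof) using a finite-difference approximation of the second derivative; the random matrices and step size are illustrative choices.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4)); X = X - X.T      # a random element of so_4 (antisymmetric)
M0 = rng.standard_normal((4, 4))                  # stands in for mu(A); any matrix works

def conj(t):
    # t -> e^{tX} M0 e^{-tX}
    return expm(t * X) @ M0 @ expm(-t * X)

h = 1e-4
second_derivative = (conj(h) - 2 * conj(0.0) + conj(-h)) / h**2   # d^2/dt^2 at t = 0
bracket = X @ M0 - M0 @ X                                         # [X, M0]
double_bracket = X @ bracket - bracket @ X                        # [X, [X, M0]]
print(np.max(np.abs(second_derivative - double_bracket)))  # small (finite-difference error)
```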

We adopt the same proof outline as in ref. [Reference Degond18] for the determination of the radial Laplacian. As in the proof of Prop. 6.6, we successively treat the cases of $\mathrm{SO}_3$ , $\mathrm{SO}_4$ , $\mathrm{SO}_{2p}$ , $\mathrm{SO}_{2p+1}$ . We refer to the proof of Prop. 6.6 for the notations.

Case of $\mathrm{SO}_3$. We use (A.10) with $A=A_\Theta$ and with $X=G^+$ and $X=G^-$ successively, and we add up the resulting equations. The details of the computation of the left-hand side of (A.10) can be found in ref. [Reference Degond18]. On the other hand, after easy computations using (5.9), the right-hand side gives

\begin{equation*} \big [G^+, [G^+,\mu (A_\Theta )] \big ] + \big [G^-, [G^-,\mu (A_\Theta )] \big ] = - 2 \alpha _1(\theta _1) \, F_{12}. \end{equation*}

Thus, we get:

\begin{equation*} 2 (1-\cos \theta _1) \Big ( \big (\varrho (G^+)^2 +\varrho (G^-)^2 \big ) \mu \Big )(A_\Theta ) + 2 \sin \theta _1 \big ( \varrho (F_{12}) \mu \big ) (A_\Theta ) = - 2 \alpha _1(\theta _1) \, F_{12}.\end{equation*}

With (6.27) and (A.8), this yields

\begin{eqnarray*} \Delta \mu (A_\Theta ) &=& \Big ( \frac{\partial ^2 \alpha _1}{\partial \theta _1^2}(\theta _1) + \frac{\sin \theta _1}{1 - \cos \theta _1} \frac{\partial \alpha _1}{\partial \theta _1}(\theta _1) - \frac{1}{1 - \cos \theta _1} \alpha _1(\theta _1) \Big ) \, F_{12} \\[5pt] &=& \Big ( (L \alpha _1)(\theta _1) - \frac{1}{1 - \cos \theta _1} \alpha _1(\theta _1) \Big ) F_{12}. \end{eqnarray*}

Case of $\mathrm{SO}_4$ . We use (A.10) with $A=A_\Theta$ and with $X=H^+$ and $X=K^-$ successively, and we add up the resulting equations. We do similarly with $X = H^-$ and $X = K^+$ . Again, what results from the left-hand sides of (A.10) can be found in ref. [Reference Degond18], while the right-hand sides, using (5.9), give:

\begin{eqnarray*} \big [H^+, [H^+,\mu (A_\Theta )] \big ] + \big [K^-, [K^-,\mu (A_\Theta )] \big ] &=& 2 \big (\!-\! \alpha _1 + \alpha _2 \big ) (\Theta ) \, \big ( F_{12} - F_{34} \big ), \\[5pt] \big [H^-, [H^-,\mu (A_\Theta )] \big ] + \big [K^+, [K^+,\mu (A_\Theta )] \big ] &=& - 2 \big ( \alpha _1 + \alpha _2 \big ) (\Theta ) \, \big ( F_{12} + F_{34} \big ). \end{eqnarray*}

We get

\begin{eqnarray*} && 2 \big ( 1-\cos\!(\theta _2 - \theta _1) \big ) \Big ( \big (\varrho (H^+)^2 +\varrho (K^-)^2 \big ) \mu \Big )(A_\Theta ) \\[5pt] &&\quad - 2 \sin (\theta _2-\theta _1) \Big ( \big ( \varrho (F_{12}) - \varrho (F_{34}) \big ) \mu \Big ) (A_\Theta ) = 2 \big (\!-\! \alpha _1 + \alpha _2 \big ) (\Theta ) \, \big ( F_{12} - F_{34} \big ), \\[5pt] && 2 \big ( 1-\cos\!(\theta _1 + \theta _2) \big ) \Big ( \big (\varrho (H^-)^2 +\varrho (K^+)^2 \big ) \mu \Big )(A_\Theta ) \\[5pt] &&\quad + 2 \sin (\theta _1+\theta _2) \Big ( \big ( \varrho (F_{12}) + \varrho (F_{34}) \big ) \mu \Big ) (A_\Theta ) = - 2 \big (\alpha _1 + \alpha _2 \big ) (\Theta ) \, \big ( F_{12} + F_{34} \big ), \end{eqnarray*}

Collecting the results and using (6.27) and (A.8), we get

\begin{eqnarray*} \Delta \mu (A_\Theta ) &=& \Big \{ (L \alpha _1)(\Theta ) - \Big ( \frac{(\alpha _1 - \alpha _2)(\Theta )}{1 - \cos\!(\theta _1 - \theta _2)} + \frac{(\alpha _1 + \alpha _2)(\Theta )}{1 - \cos\!(\theta _1 + \theta _2)} \Big ) \Big \} F_{12} \\[5pt] &&+ \Big \{ (L \alpha _2)(\Theta ) - \Big ( \frac{(\alpha _2 - \alpha _1)(\Theta )}{1 - \cos\!(\theta _2 - \theta _1)} + \frac{(\alpha _2 + \alpha _1)(\Theta )}{1 - \cos\!(\theta _2 + \theta _1)} \Big ) \Big \} F_{34}. \end{eqnarray*}

Case of $\mathrm{SO}_{2p}$. The computations are straightforward extensions of those done in the case of $\mathrm{SO}_{4}$ and lead to

(A.11) \begin{equation} \Delta \mu (A_\Theta ) = \sum _{\ell = 1}^p \Big \{ L \alpha _\ell (\Theta ) - \sum _{k \not = \ell } \Big ( \frac{(\alpha _\ell - \alpha _k)(\Theta )}{1 - \cos\!(\theta _\ell - \theta _k)} + \frac{(\alpha _\ell + \alpha _k)(\Theta )}{1 - \cos\!(\theta _\ell + \theta _k)} \Big ) \Big \} F_{2 \ell -1 \, 2 \ell }. \end{equation}

Case of $\mathrm{SO}_{2p+1}$ . In this case, we combine the computations done for the cases $\mathrm{SO}_{2p}$ and $\mathrm{SO}_3$ . They lead to

(A.12) \begin{eqnarray} \Delta \mu (A_\Theta ) &=& \sum _{\ell = 1}^p \Big \{ L \alpha _\ell (\Theta ) - \sum _{k \not = \ell } \Big ( \frac{(\alpha _\ell - \alpha _k)(\Theta )}{1 - \cos\!(\theta _\ell - \theta _k)} + \frac{(\alpha _\ell + \alpha _k)(\Theta )}{1 - \cos\!(\theta _\ell + \theta _k)} \Big ) \nonumber \\[5pt] && \qquad - \epsilon _n \frac{\alpha _\ell (\Theta )}{1 - \cos \theta _\ell } \Big \} F_{2 \ell -1 \, 2 \ell }. \end{eqnarray}

Now, collecting (6.43), (A.9) and (A.11) or (A.12) (according to the parity of $n$ ) and inserting them into (A.6) gives a matrix identity which is decomposed on the basis vectors $(F_{2 \ell -1 \, 2 \ell })_{\ell = 1}^p$ of $\mathfrak{h}$ . Hence, it must be satisfied componentwise, which leads to (A.5) and ends the proof of Point (i).

(ii) We have

(A.13) \begin{equation} m^{-1} \nabla _\Theta \cdot \big ( m \nabla _\Theta \alpha _\ell \big ) = \Delta _\Theta \alpha _\ell + \nabla _\Theta \log m \cdot \nabla _\Theta \alpha _\ell, \end{equation}

where $\Delta _\Theta$ is the Laplacian with respect to $\Theta$ (i.e. $\Delta _\Theta \varphi = \nabla _\Theta \cdot (\nabla _\Theta \varphi )$ for any smooth function $\varphi$ on ${\mathcal T}$). Then, from [Reference Degond18, Eq. (4.6) & (4.7)], we have

\begin{equation*} \frac {\partial \log m}{\partial \theta _j} = - \kappa \sin \theta _j + \frac {\partial \log u_n}{\partial \theta _j} = \sin \theta _j \Big ( 2 \sum _{k \not = j} \frac {\displaystyle 1}{\displaystyle \cos \theta _k - \cos \theta _j} + \frac {\displaystyle \epsilon _n}{\displaystyle 1 - \cos \theta _j} - \kappa \Big ). \end{equation*}

Inserting this into (A.13) and using (A.2), we find that the first term on the left-hand side of (3.11) corresponds to the first two terms of (A.5). The other terms of (3.11) correspond exactly to the remaining terms of (A.5), which shows that the two systems are identical.

References

Aceves-Sánchez, P., Bostan, M., Carrillo, J.-A. & Degond, P. (2019) Hydrodynamic limits for kinetic flocking models of Cucker–Smale type. Math. Biosci. Eng. 16(6), 7883–7910.
Aoki, I. (1982) A simulation study on the schooling mechanism in fish. Bull. Jpn. Soc. Sci. Fish. 48(8), 1081–1088.
Barbaro, A. B., Canizo, J. A., Carrillo, J. A. & Degond, P. (2016) Phase transitions in a kinetic flocking model of Cucker–Smale type. Multiscale Model. Simul. 14(3), 1063–1088.
Barbaro, A. B. & Degond, P. (2014) Phase transition and diffusion among socially interacting self-propelled agents. Discrete Contin. Dyn. Syst. Ser. B 19(3), 1249–1278.
Bazazi, S., Buhl, C., Hale, J. J., Anstey, M. L., Sword, G. A., Simpson, S. J. & Couzin, I. D. (2008) Collective motion and cannibalism in locust migratory bands. Curr. Biol. 18(10), 735–739.
Beér, A. & Ariel, G. (2019) A statistical physics view of swarming bacteria. Mov. Ecol. 7(1), 1–17.
Bertin, E., Droz, M. & Grégoire, G. (2006) Boltzmann and hydrodynamic description for self-propelled particles. Phys. Rev. E 74(2), 022101.
Bertin, E., Droz, M. & Grégoire, G. (2009) Hydrodynamic equations for self-propelled particles: Microscopic derivation and stability analysis. J. Phys. A 42(44), 445001.
Bertozzi, A. L., Kolokolnikov, T., Sun, H., Uminsky, D. & Von Brecht, J. (2015) Ring patterns and their bifurcations in a nonlocal model of biological swarms. Commun. Math. Sci. 13(4), 955–985.
Bolley, F., Cañizo, J. A. & Carrillo, J. A. (2012) Mean-field limit for the stochastic Vicsek model. Appl. Math. Lett. 25(3), 339–343.
Briant, M., Diez, A. & Merino-Aceituno, S. (2022) Cauchy theory for general kinetic Vicsek models in collective dynamics and mean-field limit approximations. SIAM J. Math. Anal. 54(1), 1131–1168.
Cao, F., Motsch, S., Reamy, A. & Theisen, R. (2020) Asymptotic flocking for the three-zone model. Math. Biosci. Eng. 17(6), 7692–7707.
Carrillo, J. A., Fornasier, M., Rosado, J. & Toscani, G. (2010) Asymptotic flocking dynamics for the kinetic Cucker–Smale model. SIAM J. Math. Anal. 42(1), 218–236.
Castellani, T. & Cavagna, A. (2005) Spin-glass theory for pedestrians. J. Stat. Mech. Theory Exp. 2005(05), P05012.
Chaté, H., Ginelli, F., Grégoire, G. & Raynaud, F. (2008) Collective motion of self-propelled particles interacting without cohesion. Phys. Rev. E 77(4), 046113.
Cho, H., Ha, S.-Y. & Kang, M. (2023) Continuum limit of the lattice Lohe group model and emergent dynamics. Math. Methods Appl. Sci. 46(8), 9783–9818.
Cucker, F. & Smale, S. (2007) Emergent behavior in flocks. IEEE Trans. Automat. Control 52(5), 852–862.
Degond, P. (2023) Radial Laplacian on rotation groups. In: Carlen, E., Goncalves, P. & Soares, A.-J. (editors), Particle Systems and Partial Differential Equations X, Springer Proceedings in Mathematics and Statistics (to appear).
Degond, P., Diez, A. & Frouvelle, A. (2021) Body-attitude coordination in arbitrary dimension, arXiv preprint arXiv:2111.05614.
Degond, P., Diez, A., Frouvelle, A. & Merino-Aceituno, S. (2020) Phase transitions and macroscopic limits in a BGK model of body-attitude coordination. J. Nonlinear Sci. 30(6), 2671–2736.
Degond, P., Diez, A. & Na, M. (2022) Bulk topological states in a new collective dynamics model. SIAM J. Appl. Dyn. Syst. 21(2), 1455–1494.
Degond, P., Frouvelle, A. & Liu, J.-G. (2013) Macroscopic limits and phase transition in a system of self-propelled particles. J. Nonlinear Sci. 23(3), 427–456.
Degond, P., Frouvelle, A. & Liu, J.-G. (2015) Phase transitions, hysteresis, and hyperbolicity for self-organized alignment dynamics. Arch. Ration. Mech. Anal. 216(1), 63–115.
Degond, P., Frouvelle, A. & Merino-Aceituno, S. (2017) A new flocking model through body attitude coordination. Math. Models Methods Appl. Sci. 27(06), 1005–1049.
Degond, P., Frouvelle, A., Merino-Aceituno, S. & Trescases, A. (2017) Alignment of self-propelled rigid bodies: From particle systems to macroscopic equations. In: International Workshop on Stochastic Dynamics out of Equilibrium, Springer, pp. 28–66.
Degond, P., Frouvelle, A., Merino-Aceituno, S. & Trescases, A. (2018) Quaternions in collective dynamics. Multiscale Model. Simul. 16(1), 28–77.
Degond, P., Frouvelle, A., Merino-Aceituno, S. & Trescases, A. (2023) Hyperbolicity and non-conservativity of a hydrodynamic model of swarming rigid bodies. Quart. Appl. Math. 82(1), 35–64.
Degond, P. & Motsch, S. (2008) Continuum limit of self-driven particles with orientation interaction. Math. Models Methods Appl. Sci. 18(supp01), 1193–1215.
Diez, A. (2020) Propagation of chaos and moderate interaction for a piecewise deterministic system of geometrically enriched particles. Electron. J. Probab. 25, 1–38.
Faraut, J. (2008) Analysis on Lie Groups, an Introduction, Cambridge University Press, Cambridge, UK.
Fetecau, R. C., Ha, S.-Y. & Park, H. (2022) Emergent behaviors of rotation matrix flocks. SIAM J. Appl. Dyn. Syst. 21(2), 1382–1425.
Figalli, A., Kang, M.-J. & Morales, J. (2018) Global well-posedness of the spatially homogeneous Kolmogorov–Vicsek model as a gradient flow. Arch. Ration. Mech. Anal. 227(3), 869–896.
Frouvelle, A. (2012) A continuum model for alignment of self-propelled particles with anisotropy and density-dependent parameters. Math. Models Methods Appl. Sci. 22(07), 1250011.
Frouvelle, A. (2021) Body-attitude alignment: First order phase transition, link with rodlike polymers through quaternions, and stability. In: Recent Advances in Kinetic Equations and Applications, Springer, pp. 147–181.
Frouvelle, A. & Liu, J.-G. (2012) Dynamics in a kinetic model of oriented particles with phase transition. SIAM J. Math. Anal. 44(2), 791–826.
Fulton, W. & Harris, J. (2004) Representation Theory: A First Course, Springer, New York.
Gamba, I. M. & Kang, M.-J. (2016) Global weak solutions for Kolmogorov–Vicsek type equations with orientational interactions. Arch. Ration. Mech. Anal. 222(1), 317–342.
Golse, F. & Ha, S.-Y. (2019) A mean-field limit of the Lohe matrix model and emergent dynamics. Arch. Ration. Mech. Anal. 234(3), 1445–1491.
Griette, Q. & Motsch, S. (2019) Kinetic equations and self-organized band formations. In: Active Particles, Vol. 2, Springer, pp. 173–199.
Ha, S.-Y., Ko, D. & Ryoo, S. W. (2017) Emergent dynamics of a generalized Lohe model on some class of Lie groups. J. Stat. Phys. 168(1), 171–207.
Ha, S.-Y. & Liu, J.-G. (2009) A simple proof of the Cucker–Smale flocking dynamics and mean-field limit. Commun. Math. Sci. 7(2), 297–325.
Hildenbrandt, H., Carere, C. & Hemelrijk, C. K. (2010) Self-organized aerial displays of thousands of starlings: A model. Behav. Ecol. 21(6), 1349–1359.
Hsu, E. P. (2002) Stochastic Analysis on Manifolds, Number 38, American Mathematical Society, Providence, RI.
Jiang, N., Xiong, L. & Zhang, T.-F. (2016) Hydrodynamic limits of the kinetic self-organized models. SIAM J. Math. Anal. 48(5), 3383–3411.
Lopez, U., Gautrais, J., Couzin, I. D. & Theraulaz, G. (2012) From behavioural analyses to models of collective motion in fish schools. Interface Focus 2(6), 693–707.
Motsch, S. & Tadmor, E. (2011) A new model for self-organized dynamics and its flocking behavior. J. Stat. Phys. 144(5), 923–947.
Parisi, G., Urbani, P. & Zamponi, F. (2020) Theory of Simple Glasses: Exact Solutions in Infinite Dimensions, Cambridge University Press, Cambridge, UK.
Sarlette, A., Bonnabel, S. & Sepulchre, R. (2010) Coordinated motion design on Lie groups. IEEE Trans. Automat. Control 55(5), 1047–1058.
Sarlette, A., Sepulchre, R. & Leonard, N. E. (2009) Autonomous rigid body attitude synchronization. Automatica J. IFAC 45(2), 572–577.
Sepulchre, R., Sarlette, A. & Rouchon, P. (2010) Consensus in non-commutative spaces. In: 49th IEEE Conference on Decision and Control (CDC), IEEE, pp. 6596–6601.
Simon, B. (1996) Representations of Finite and Compact Groups, Number 10 in Graduate Studies in Mathematics, American Mathematical Society.
Toner, J. & Tu, Y. (1998) Flocks, herds, and schools: A quantitative theory of flocking. Phys. Rev. E 58(4), 4828–4858.
Vicsek, T., Czirók, A., Ben-Jacob, E., Cohen, I. & Shochet, O. (1995) Novel type of phase transition in a system of self-driven particles. Phys. Rev. Lett. 75(6), 1226–1229.
Vicsek, T. & Zafeiris, A. (2012) Collective motion. Phys. Rep. 517(3-4), 71–140.