
Flocking dynamics of agents moving with a constant speed and a randomly switching topology

Published online by Cambridge University Press:  30 April 2024

Hyunjin Ahn
Affiliation:
Department of Mathematics, Myongji University, Seoul, Gyeonggi-do, Republic of Korea
Woojoo Shim*
Affiliation:
Department of Mathematics Education, Kyungpook National University, Daegu, Republic of Korea
*
Corresponding author: Woojoo Shim; Email: [email protected]

Abstract

In this paper, we present a sufficient framework to exhibit the sample path-wise asymptotic flocking dynamics of the Cucker–Smale model with unit-speed constraint and the randomly switching network topology. We employ a matrix formulation of the given equation, which allows us to evaluate the diameter of velocities with respect to the adjacency matrix of the network. Unlike the previous result on the randomly switching Cucker–Smale model, the unit-speed constraint disallows the system to be considered as a nonautonomous linear ordinary differential equation on velocity vector, which forces us to get a weaker form of the flocking estimate than the result for the original Cucker–Smale model.

Type: Papers
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press

1. Introduction

The term flocking describes a phenomenon wherein agents within a self-propelled system organise themselves into cohesive groups and demonstrate structured motion based on local information and simple governing principles. After the seminal work by Vicsek et al. [Reference Vicsek, Czirók, Ben-Jacob, Cohen and Schochet32], the mathematical community has also shown keen interest in developing mathematical models to elucidate flocking dynamics over the past decades. Among them, the Cucker–Smale (CS) model, initially introduced in [Reference Cucker and Smale12], is considered one of the simplest and most successful mathematical representations of flocking behaviour. Notable directions include studies on the CS model on a general digraph [Reference Dong and Qiu16], the unit-speed constraint [Reference Choi and Ha6], the impact of time delay [Reference Cho, Dong and Ha4, Reference Dong, Ha and Kim14], the temperature field [Reference Ha and Ruggeri22], collision avoidance [Reference Choi, Kalsie, Peszek and Peters9, Reference Mucha and Peszek27], the emergence of bi-cluster flocking [Reference Cho, Ha, Huang, Jin and Ko11], the Riemannian manifold framework [Reference Ha, Kim and Schlöder17], the mean-field limit [Reference Ha and Liu19], time discretisation [Reference Dong, Ha and Kim15], hydrodynamic descriptions [Reference Karper, Mellet and Trivisa23], hierarchical rooted leadership structures [Reference Ha, Li, Slemrod and Xue21, Reference Li and Xue26, Reference Pignotti and Vallejo28, Reference Shen30] and alternating leaders [Reference Ha and Li20]. We also refer to the comprehensive survey paper [Reference Choi, Ha, Li, Bellomo, Degond and Tadmor8] for readers interested in this topic.

In this paper, we are interested in generalising the flocking model with the unit-speed constraint presented in [Reference Choi and Ha6] to a more realistic setting. In [Reference Choi and Ha6], the unit-speed constrained model was given by the following second-order ordinary differential equations (ODEs) for the position–velocity configuration $\{(x_i, v_i)\}_{i=1}^{N}$ , motivated by the well-known CS model:

(1.1) \begin{equation} \begin{cases} \displaystyle \frac{d{x}_i}{dt} =v_i,\quad t\gt 0,\quad i\in [N]\;:\!=\; \{1,\ldots,N\},\\ \displaystyle \frac{dv_i}{dt}=\displaystyle \frac{1}{N}\sum _{j=1}^{N}\phi (\|x_i-x_j\|)\left (v_j-\frac{\langle v_j,v_i\rangle v_i}{\|v_i\|^2}\right ),\\ \displaystyle (x_i(0),v_i(0))=(x_i^0,v_i^0)\in \mathbb{R}^{d}\times \mathbb{S}^{d-1}, \end{cases} \end{equation}

where $N$ is the number of agents, $\langle \cdot,\cdot \rangle$ is the standard inner product on $\mathbb{R}^d$ and $\|\cdot \|$ is the standard $\ell ^2$ -norm. In addition, we assume that the communication weight $\phi : \mathbb{R}_{+}\to \mathbb{R}_{+}(\;:\!=\; [0,\infty ))$ is a locally Lipschitz continuous function satisfying

\begin{align*} 0\leq \phi (r)\leq \phi (0),\quad (\phi (r_1)-\phi (r_2))(r_1-r_2)\leq 0,\quad r,r_1,r_2\geq 0. \end{align*}

The manifold $\mathbb{S}^{d-1}$ denotes the $(d-1)$ -dimensional unit sphere, isometrically embedded in $\mathbb{R}^d$ , that is,

\begin{equation*}\mathbb {S}^{d-1}\;:\!=\; \left \{x\;:\!=\; (x^1,\ldots,x^d)\,\bigg |\,\sum _{i=1}^d |x^i|^2=1\right \}.\end{equation*}
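For illustration, the dynamics (1.1) can be integrated numerically. The sketch below (illustrative only, not the authors' scheme) uses forward Euler with a per-step renormalisation, since the exact flow preserves $\|v_i\|=1$ but a discrete step does so only up to $O(\Delta t)$ error:

```python
import numpy as np

def simulate_unit_speed_cs(x0, v0, phi, dt=1e-2, steps=500):
    """Forward-Euler sketch of the unit-speed CS model (1.1).

    x0: (N, d) positions; v0: (N, d) unit velocities; phi: vectorised
    communication weight. Velocities are renormalised every step because
    Euler preserves ||v_i|| = 1 only approximately.
    """
    x, v = x0.copy(), v0.copy()
    N = x.shape[0]
    for _ in range(steps):
        w = phi(np.linalg.norm(x[:, None, :] - x[None, :, :], axis=2))  # phi(||x_i - x_j||)
        inner = v @ v.T                                   # inner[i, j] = <v_i, v_j>
        # proj[i, j] = v_j - <v_j, v_i> v_i, the tangential part at v_i
        proj = v[None, :, :] - inner[:, :, None] * v[:, None, :]
        dv = (w[:, :, None] * proj).sum(axis=1) / N
        x = x + dt * v
        v = v + dt * dv
        v /= np.linalg.norm(v, axis=1, keepdims=True)     # restore unit speed
    return x, v
```

For nearly aligned initial velocities the velocity diameter shrinks, consistent with the flocking behaviour discussed below; the renormalisation is purely a numerical device, as the continuous dynamics conserves each speed exactly.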

Similar to the CS model proposed in [Reference Cucker and Smale12], the model (1.1) and its variants have garnered considerable attention. For instance, a study on bi-cluster flocking was presented in [Reference Cho, Ha, Huang, Jin and Ko10], and multi-cluster flocking and the critical coupling strength were discussed in [Reference Ha, Ko and Zhang18]. The time-delay effect was considered in [Reference Choi and Ha5, Reference Choi and Seo7], and the temperature field has been explored in [Reference Ahn1, Reference Ahn, Byeon and Ha2]. Moreover, the work in [Reference Ru, Li, Liu and Wang29] addressed the system within the context of a general digraph. In [Reference Ru, Li, Liu and Wang29], the authors introduced a static network topology into (1.1) and provided a sufficient framework for asymptotic flocking. The objective was to analyse how the connectivity among the system’s agents influences its behaviour, where the adjacency matrix of the network topology was given by

\begin{align*} \begin{aligned} \chi _{ij}={\begin{cases} 1,& j\mbox{-}\text{th agent conveys information to}\ i\mbox{-}\text{th agent or } j=i,\\ 0,& \text{otherwise}. \end{cases}} \end{aligned} \end{align*}

Then, they provided a sufficient framework, pertaining to initial data and system parameters, that facilitates the emergence of flocking dynamics when the network topology $(\chi _{ij})$ contains a directed spanning tree.

However, even if the interaction network is expressed as a directed graph, there is still room to model the behaviour of actual flocks more realistically. Namely, the interaction network in (1.1) is assumed to be constant in time, which may be a rather unrealistic assumption considering that unpredictable external factors can interfere with the interaction. Since we want to observe whether the flocking phenomenon occurs asymptotically over infinite time, it is natural to consider a mathematical model that allows the network to change over time. Beyond the Cucker–Smale model, the introduction of random effects into the network topology of a multi-agent system has been explored in various works, such as the studies on random geometric graphs [Reference Chen, Liu and Guo3] and random link failures [Reference Kar and Moura24, Reference Li, Liao, Huang, Zhu and Liu25]. Among them, we introduce a formulation of a many-body system with a randomly switching network topology inspired by [Reference Dong, Ha, Jung and Kim13], building upon the structure defined in (1.1). This system is governed by the Cauchy problem

(1.2) \begin{equation} \begin{cases} \displaystyle \frac{d{x}_i}{dt} =v_i,\quad t\gt 0,\quad i\in [N],\\ \displaystyle \frac{dv_i}{dt}=\displaystyle \frac{1}{N}\sum _{j=1}^{N}\chi ^{\sigma }_{ij}\phi (\|x_i-x_j\|)\left (v_j-\frac{\langle v_j,v_i\rangle v_i}{\|v_i\|^2}\right ),\\ \displaystyle (x_i(0,w),v_i(0,w))=(x_i^0(w),v_i^0(w))\in \mathbb{R}^{d}\times \mathbb{S}^{d-1}, \end{cases} \end{equation}

where the assumptions on $N$ , $\phi$ , $\langle \cdot,\cdot \rangle$ , $\|\cdot \|$ and $\mathbb{S}^{d-1}$ remain consistent with those in (1.1). In [Reference Dong, Ha, Jung and Kim13], the authors generalised the static network topology to a right-continuous stochastic process

\begin{equation*}\sigma\; :\;\mathbb {R}_+\times \Omega \to [N_G]=\{1,2,\ldots,N_G\}.\end{equation*}

Then, $\chi ^\sigma$ denotes the adjacency matrix of $\mathcal{G}_\sigma$ , where each $\mathcal{G}_i$ is a directed graph with vertices $\mathcal{V}=\{\beta _1,\ldots,\beta _N\}$ . Due to this right continuity condition, the authors were able to constrain the process $\sigma$ to have a piecewise continuous sample path that gives the opportunity to change its value at some random switching instants $0=t_0\lt t_1\lt t_2\lt \cdots$ , so that $\sigma (\cdot,\omega ):[t_k,t_{k+1})\to [N_G]$ is constant for each $k\in \mathbb{N}\cup \{0\}$ .

In this paper, we adopt the above constructions on $\sigma$ to the proposed system (1.2), and we are mainly concerned with the following issue:

With what probability does the system (1.2) exhibit flocking?

More specifically, the asymptotic flocking, for which we want to find sufficient conditions in terms of initial data and system parameters, is defined as follows:

Definition 1.1. Let $(\Omega,\mathcal{F},\mathbb{P})$ be a generic probability space and $Z=\{(x_i,v_i)\}_{i=1}^N$ be the solution process of the system (1.2).

  1. The configuration $Z$ exhibits group formation for $w\in \Omega$ if

    \begin{equation*} \sup _{t\in \mathbb {R}_+} \max _{i,j\in [N]} \|x_i(t,w)-x_j(t,w)\|\lt \infty . \end{equation*}
  2. The configuration $Z$ exhibits asymptotic velocity alignment for $w\in \Omega$ if

    \begin{equation*} \lim _{t \to \infty } \max _{i,j\in [N]}\|v_i(t,w)- v_j(t,w)\|=0. \end{equation*}
  3. The configuration $Z$ exhibits asymptotic flocking for $w\in \Omega$ if

    \begin{align*} \begin{aligned} \sup _{t\in \mathbb{R}_+} \max _{i,j\in [N]} \|x_i(t,w)-x_j(t,w)\|\lt \infty,\quad \lim _{t \to \infty } \max _{i,j\in [N]} \|v_i(t,w)-v_j(t,w)\|=0. \end{aligned} \end{align*}

The rest of the paper is organised as follows. In Section 2, we reformulate the proposed system (1.2) into a matrix-valued ODE and briefly introduce some theoretical background related to our work. In addition, we provide preparatory estimates which will be crucially used to derive the desired flocking of the system (1.2) in Section 3. In Section 3, we demonstrate several sufficient frameworks concerning initial data and system parameters to exhibit the asymptotic flocking of the system (1.2). Finally, Section 4 is devoted to summarising the main results of this paper and discussing future work.

Notation. Throughout the paper, we employ the following notation for simplicity:

\begin{align*} &X\;:\!=\; (x_1,\ldots,x_N)^T,\quad V\;:\!=\; (v_1,\ldots,v_N)^T,\quad{A}_V\;:\!=\; \min _{i,j\in [N]}\langle v_i,v_j \rangle,\quad \mathbb{R}_+\;:\!=\; [0,\infty ),\\ & D_{Z}\;:\!=\; \max _{i,j\in [N]}\|z_i-z_j\| \quad \text{for}\quad Z=(z_1,\ldots,z_N)\in \{X,V\},\quad [N]\;:\!=\; \{1,\ldots,N\},\\ & M_{ij}\;:\!=\; \text{the}\,(i,j)\text{-th entry of}\,M\in \mathbb{R}^{m\times n},\quad x^i\;:\!=\; \text{the $i$-th component of $x\in \mathbb{R}^d$},\\ &A\geq B,\quad A,B\in \mathbb{R}^{m\times n} \iff A_{ij}\geq B_{ij}\quad \text{for all $i\in [m]$, $j\in [n]$}. \end{align*}

2. Preliminaries

In this section, we reformulate the system (1.2) into a suitable matrix-valued ODE and introduce some theoretical backgrounds related to matrix analysis of graphs.

2.1. Graph theory

We first briefly introduce basic notions in graph theory. A directed graph (in short, a digraph) $\mathcal{G}=(\mathcal{V}(\mathcal{G}),\mathcal{E}(\mathcal{G}))$ consists of a finite set $\mathcal{V}(\mathcal{G})=\{\beta _1,\ldots,\beta _N\}$ of vertices and a set $\mathcal{E}(\mathcal{G})\subset \mathcal{V}(\mathcal{G})\times \mathcal{V}(\mathcal{G})$ of arcs. If a pair $(\beta _j,\beta _i)$ is contained in $ \mathcal{E}(\mathcal{G})$ , $\beta _j$ is said to be a neighbour of $\beta _i$ , and we denote the neighbour set of the vertex $\beta _i$ by $\mathcal{N}_i\;:\!=\; \{j:(\beta _j,\beta _i)\in \mathcal{E}(\mathcal{G})\}$ . For a given digraph $\mathcal{G}$ , we define its corresponding adjacency matrix $\chi (\mathcal{G})=(\chi _{ij}(\mathcal{G}))$ as

\begin{align*} \begin{aligned} \chi _{ij}={\begin{cases} 1& \text{if}\quad j\in \mathcal{N}_i\cup \{i\},\\ 0& \text{otherwise}. \end{cases}} \end{aligned} \end{align*}

Since this correspondence is clearly one-to-one, given a matrix $A$ consisting of zeros and ones whose diagonal components are all equal to one, we can uniquely determine its corresponding digraph, which we write as $\mathcal{G}(A)$ . A path in a digraph $\mathcal{G}$ from $\beta _{i_0}$ to $\beta _{i_p}$ is a finite sequence $\{\beta _{i_0},\ldots,\beta _{i_p}\}$ of distinct vertices such that each successive pair of vertices is an arc of $\mathcal{G}$ . The integer $p$ , the number of arcs in the path, is said to be the length of the path. If there is a path from $\beta _i$ to $\beta _j$ , then the vertex $\beta _j$ is said to be reachable from the vertex $\beta _i$ . We say that $\mathcal{G}$ contains a spanning tree if there exists a vertex from which every other vertex of $\mathcal{G}$ is reachable. In this case, this vertex is said to be a root.
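For illustration, the spanning-tree condition can be tested computationally by checking reachability from each candidate root. The following sketch (illustrative only) performs a breadth-first search, using the convention above that $\chi_{ij}=1$ encodes the arc $(\beta_j,\beta_i)$, that is, information flows from vertex $j$ to vertex $i$:

```python
from collections import deque

def has_spanning_tree(chi):
    """Check whether the digraph G(chi) contains a spanning tree, i.e. some
    root from which every other vertex is reachable.

    chi[i][j] = 1 encodes the arc (beta_j, beta_i), so from vertex j we may
    move to any i with chi[i][j] = 1.
    """
    n = len(chi)
    for root in range(n):
        seen = {root}
        queue = deque([root])
        while queue:
            j = queue.popleft()
            for i in range(n):
                if chi[i][j] and i not in seen:   # traverse arc (beta_j, beta_i)
                    seen.add(i)
                    queue.append(i)
        if len(seen) == n:                        # every vertex reachable from root
            return True
    return False
```

For example, a directed chain (with the mandatory self-loops on the diagonal) has a spanning tree rooted at its first vertex, while a disconnected digraph does not.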

2.2. Conservation and dissipation law

In this subsection, we show that the system (1.2) conserves the speed of each agent and dissipates the velocity diameter.

Lemma 2.1. (Conservation of speeds) Suppose that $(X,V)$ is a solution to the system (1.2). Then, it follows that for each $w\in \Omega$ ,

\begin{equation*}\|v_i(t,w)\|=1,\quad t\geq 0,\quad i\in [N].\end{equation*}

Proof. Once we take the inner product of (1.2) $_2$ with $v_i$ , the following relation holds:

\begin{equation*}\left \langle v_j-\frac {\langle v_j,v_i\rangle v_i}{\|v_i\|^2},v_i\right \rangle =0,\quad i,j\in [N].\end{equation*}

Therefore, we have $\frac{d}{dt}\|v_i\|^2=0$ for all $i\in [N]$ , which is our desired result.

This means that the difference in velocities is determined entirely by the angle between them. Moreover, one can verify that for each $w\in \Omega$ , the minimal inner product (corresponding to the maximal angle between velocities)

\begin{equation*}A_V(t,w)\;:\!=\; \min _{i,j\in [N]}\langle v_i(t,w),v_j(t,w) \rangle \end{equation*}

is monotonically increasing in $t\geq 0$ , provided that $\displaystyle A_V(0,w)\;:\!=\; \min _{i,j\in [N]}\langle v_i^0(w),v_j^0(w) \rangle$ is strictly positive.

Lemma 2.2. (Monotonicity of $A_V$ ) Let $(X,V)$ be a solution to the system (1.2) satisfying

\begin{equation*}{A}_V(0,w)\gt 0\end{equation*}

for some $w\in \Omega$ . Then, the smallest inner product between velocities $A_V(\cdot,w)$ is monotonically increasing.

Proof. For fixed $t\geq 0$ , we denote by $i_t,j_t\in [N]$ the two indices satisfying

\begin{equation*}A_V(t,w)\;:\!=\; \langle v_{i_t}(t,w),v_{j_t}(t,w)\rangle .\end{equation*}

If we define the set $\mathcal{S}$ as

\begin{equation*}\mathcal {S}\;:\!=\; \{t\geq 0\,|\,A_V(t,w)\leq 0\},\end{equation*}

it follows from the condition $A_V(0,w)\gt 0$ and the continuity of $A_V(\cdot,w)$ that there exists $T_1\in (0,\infty ]$ satisfying

\begin{equation*}[0,T_1)\cap \mathcal {S}=\emptyset .\end{equation*}

Now, define $\mathcal{T}\;:\!=\; \inf \mathcal{S}$ and suppose we have $\mathcal{T}\lt \infty$ . Then, from the continuity of $A_V$ , we have

(2.1) \begin{equation} \lim _{t\to \mathcal{T}-}A_V(t,w)=0. \end{equation}

On the other hand, the locally Lipschitz function $A_V(\cdot,w)$ is differentiable at almost every $t\in (0,\mathcal{T})$ . Then, we use (1.2) $_2$ to obtain

\begin{align*} \frac{dA_V}{dt}&=\langle \dot{v}_{i_t},v_{j_t}\rangle +\langle \dot{v}_{j_t},v_{i_t}\rangle \\ &=\frac{1}{N}\sum _{k=1}^N\chi _{i_tk}^\sigma \phi (\|x_k-x_{i_t}\|)\left (\langle v_k,v_{j_t} \rangle -\langle v_k,v_{i_t}\rangle \langle v_{i_t},v_{j_t}\rangle \right )\\ &\quad +\frac{1}{N}\sum _{k=1}^N\chi _{j_tk}^\sigma \phi (\|x_k-x_{j_t}\|)\left (\langle v_k,v_{i_t} \rangle -\langle v_k,v_{j_t}\rangle \langle v_{i_t},v_{j_t}\rangle \right )\\ &\geq 0, \end{align*}

where we used

\begin{equation*} \begin{aligned} &1\geq \langle v_k,v_{i_t} \rangle \geq 0 \quad \text{and} \quad \langle v_k,v_{j_t} \rangle \geq \langle v_{i_t},v_{j_t}\rangle \geq 0,\\ &1\geq \langle v_k,v_{j_t} \rangle \geq 0 \quad \text{and} \quad \langle v_k,v_{i_t} \rangle \geq \langle v_{i_t},v_{j_t}\rangle \geq 0, \end{aligned} \end{equation*}

in the last inequality. Hence, $A_V(\cdot,w)$ is monotonically increasing on $[0, \mathcal{T})$ , so that $\lim _{t\to \mathcal{T}-}A_V(t,w)\geq A_V(0,w)\gt 0$ , which contradicts (2.1). Therefore, we have $\mathcal{T}=\infty$ and obtain the monotone increasing property of $A_V(\cdot,w)$ .

As a direct consequence of Lemma 2.2, the velocity diameter $D_V$ is monotonically decreasing, due to the relation between $D_V$ and $A_V$ .

Corollary 2.1. (Monotonicity of $D_V$ ) Assume that $(X,V)$ is a solution to the system (1.2) with

\begin{equation*}D_V(0,w)\lt \sqrt {2}\end{equation*}

for some $w\in \Omega$ . Then, the velocity diameter $D_V(\cdot,w)$ is monotonically decreasing in time.

Proof. From Lemma 2.1, we have

\begin{equation*}\|v_i-v_j\|^2=2-2\langle v_i,v_j \rangle .\end{equation*}

Therefore, one can obtain the monotone decreasing property of

\begin{equation*}D_V^2=2-2A_V\end{equation*}

by using Lemma 2.2.
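The identity $\|v_i-v_j\|^2=2-2\langle v_i,v_j\rangle$ for unit vectors, which links $D_V$ and $A_V$, is easy to confirm numerically; a small illustrative helper (not part of the paper):

```python
import numpy as np

def diameter_and_min_inner(v):
    """For unit rows v_i of v, return (D_V, A_V); then D_V^2 = 2 - 2*A_V,
    since the pair maximising ||v_i - v_j|| minimises <v_i, v_j>."""
    inner = v @ v.T                                   # Gram matrix of inner products
    n = len(v)
    d_v = max(np.linalg.norm(v[i] - v[j]) for i in range(n) for j in range(n))
    return d_v, inner.min()
```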

2.3. Matrix formulation

Now, we reformulate the system (1.2) in matrix form. To this end, we first fix $w\in \Omega$ and set

\begin{equation*}X\;:\!=\; (x_1,\ldots,x_N)^T,\quad V\;:\!=\; (v_1,\ldots,v_N)^T,\end{equation*}

and we use the result of Lemma 2.1, that is,

\begin{equation*}\|v_i(t,w)\|=1,\quad t\gt 0,\quad i\in [N].\end{equation*}

Then, the right-hand side of (1.2) $_2$ can be rewritten as

\begin{align*} &\sum _{j=1}^{N}\chi ^{\sigma }_{ij}\phi (\|x_i-x_j\|)\left (v_j-\langle v_j,v_i\rangle v_i\right )\\ &=\sum _{j=1}^{N}\chi ^{\sigma }_{ij}\phi (\|x_i-x_j\|)\left (v_j-v_i\right )+\sum _{j=1}^{N}\chi ^{\sigma }_{ij}\phi (\|x_i-x_j\|)\left (v_i-\langle v_j,v_i\rangle v_i\right )\\ &=\sum _{j=1}^{N}\chi ^{\sigma }_{ij}\phi (\|x_i-x_j\|)\left (v_j-v_i\right )+\frac{1}{2}\sum _{j=1}^{N}\chi ^{\sigma }_{ij}\phi (\|x_i-x_j\|)\|v_i-v_j\|^2v_i. \end{align*}

On the other hand, we also consider the Laplacian matrices

\begin{equation*}\mathcal {L}_l\;:\!=\; \mathcal {D}_l-\mathcal {A}_l, \quad 1\leq l\leq N_G,\end{equation*}

where $\mathcal{A}_l\in \mathbb{R}^{N\times N}$ and $\mathcal{D}_l\in \mathbb{R}^{N\times N}$ are defined as

\begin{equation*}(\mathcal {A}_l)_{ij}\;:\!=\; \chi ^{l}_{ij}\phi (\|x_i-x_j\|),\quad \mathcal {D}_l\;:\!=\; \text{diag}(d^l_1,\ldots, d^l_N),\quad d_i^l\;:\!=\; \sum _{j=1}^{N}\chi ^{l}_{ij}\phi (\|x_i-x_j\|),\quad i,j\in [N].\end{equation*}

Then, (1.2) can be represented by the following matrix form:

(2.2) \begin{equation} \begin{cases} \displaystyle \dot{X}=V,\\ \displaystyle \dot{V}=-\frac{1}{N}\mathcal{L}_{\sigma }V+\frac{1}{N}\mathcal{R}_{\sigma }, \end{cases} \end{equation}

where the vector $\mathcal{R}_{l}$ is defined as

\begin{equation*}\mathcal {R}_{l}\;:\!=\; (r^l_1,\ldots, r^l_N)^T\quad \text {and} \quad r^l_i\;:\!=\; \frac {1}{2}\sum _{j=1}^{N}\chi ^{l}_{ij}\phi (\|x_i-x_j\|)\|v_i-v_j\|^2v_i,\quad 1\leq l\leq N_G.\end{equation*}
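As a consistency check on the matrix formulation (2.2), the following sketch (illustrative, with an assumed vectorised weight `phi`) assembles $\mathcal{L}_l$ and $\mathcal{R}_l$ and compares $-\frac{1}{N}\mathcal{L}_\sigma V+\frac{1}{N}\mathcal{R}_\sigma$ with the agent-wise right-hand side of (1.2), using $\|v_i\|=1$:

```python
import numpy as np

def matrix_rhs(x, v, chi, phi):
    """Velocity RHS of (2.2): -(1/N) L_l V + (1/N) R_l for a fixed topology chi."""
    N = x.shape[0]
    w = chi * phi(np.linalg.norm(x[:, None, :] - x[None, :, :], axis=2))  # (A_l)_{ij}
    lap = np.diag(w.sum(axis=1)) - w                                      # L_l = D_l - A_l
    sq = np.linalg.norm(v[:, None, :] - v[None, :, :], axis=2) ** 2       # ||v_i - v_j||^2
    r = 0.5 * (w * sq).sum(axis=1)[:, None] * v                           # rows r_i^l of R_l
    return (-lap @ v + r) / N

def agentwise_rhs(x, v, chi, phi):
    """Velocity RHS of (1.2) computed agent by agent (||v_i|| = 1 assumed)."""
    N = x.shape[0]
    dv = np.zeros_like(v)
    for i in range(N):
        for j in range(N):
            c = chi[i, j] * phi(np.linalg.norm(x[i] - x[j]))
            dv[i] += c * (v[j] - np.dot(v[j], v[i]) * v[i])
    return dv / N
```

The two agree because $v_j-\langle v_j,v_i\rangle v_i=(v_j-v_i)+\tfrac12\|v_i-v_j\|^2 v_i$ for unit vectors, exactly the decomposition used above.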

2.4. Matrix analysis

Now, we review some basic concepts in matrix analysis to be used in Section 3.

Definition 2.1. Let $A\in \mathbb{R}^{N\times N}$ be a matrix whose entries are all non-negative.

  1. $A$ is called a stochastic matrix if each of its row sums equals one:

    \begin{equation*}\sum _{j=1}^N A_{ij}=1,\quad i\in [N].\end{equation*}
  2. $A$ is called a scrambling matrix if for all $i,j\in [N]$ , there exists $k\in [N]$ such that

    \begin{equation*}A_{ik}\gt 0\quad \text {and} \quad A_{jk}\gt 0.\end{equation*}
  3. $A$ is called an adjacency matrix of a digraph $\mathcal{G}=\mathcal{G}(A)$ if

    \begin{equation*}A_{ij}\gt 0 \iff (j,i)\in \mathcal {E}(\mathcal {G}).\end{equation*}

In addition, the following ergodicity coefficient $\mu$ of $A\in \mathbb{R}^{N\times N}$ also plays a key role in the analysis in Section 3:

\begin{equation*}\mu (A)\;:\!=\; \min _{i,j\in [N]}\sum _{k=1}^N\min (A_{ik},A_{jk}).\end{equation*}

For instance, one can easily verify the following two facts on the ergodicity coefficient:

  1. $A\in \mathbb{R}^{N\times N}$ is a scrambling matrix $\iff$ $\mu (A)\gt 0$ .

  2. Assume that $A,B\in \mathbb{R}^{N\times N}$ . Then, one has

    \begin{equation*}A\geq B \geq 0 \Longrightarrow \mu (A)\geq \mu (B).\end{equation*}
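A direct implementation of the ergodicity coefficient makes both facts easy to test (an illustrative helper, not part of the paper):

```python
import numpy as np

def ergodicity_coefficient(a):
    """mu(A) = min_{i,j} sum_k min(A_ik, A_jk); A is scrambling iff mu(A) > 0."""
    n = len(a)
    return min(np.minimum(a[i], a[j]).sum() for i in range(n) for j in range(n))
```

For instance, the identity matrix is stochastic but has $\mu(I)=0$, hence is not scrambling, and scaling a non-negative matrix down can only decrease $\mu$, illustrating the monotonicity property.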

In what follows, we offer two results concerning stochastic matrices and scrambling matrices, which will be used to derive the asymptotic flocking of (1.2). The proof of the following lemma starts by applying the ideas of [Reference Dong, Ha and Kim15]. An improvement over the result in [Reference Dong, Ha and Kim15] is that the remainder term $B$ now enters through $D_B$ (the diameter between its rows) rather than the Frobenius norm $\|B\|$ , which allows us to apply Lemma 2.3 recursively. Consequently, we can lower the order of $N$ by one in the sufficient framework $(\mathcal{F}5)-(\mathcal{F}6)$ , compared to the result obtained by using the lemma in [Reference Dong, Ha and Kim15] directly.

Lemma 2.3. Let $A\in \mathbb{R}^{N\times N}$ be a non-negative stochastic matrix and let $B,Z,W\in \mathbb{R}^{N\times d}$ be matrices satisfying

(2.3) \begin{equation} W=AZ+B. \end{equation}

Then, for every norm $\|\cdot \|$ on $\mathbb{R}^d$ , we have

\begin{equation*}D_W\leq (1-\mu (A))D_Z+D_B,\end{equation*}

where $D_W, D_Z$ and $D_B$ are defined as

\begin{align*} &B\;:\!=\; (b_1,\ldots, b_N)^T,\quad Z\;:\!=\; (z_1,\ldots, z_N)^T,\quad W\;:\!=\; (w_1,\ldots,w_N)^T,\\ &b_i\;:\!=\; (b_i^1,\ldots,b_i^d)^T,\quad z_i\;:\!=\; (z_i^1,\ldots,z_i^d)^T, \quad w_i\;:\!=\; (w_i^1,\ldots,w_i^d)^T,\\ &D_W\;:\!=\; \max _{i,k\in [N]}\|w_i-w_k\|,\quad D_Z\;:\!=\; \max _{i,k\in [N]}\|z_i-z_k\|,\quad D_B\;:\!=\; \max _{i,k\in [N]}\|b_i-b_k\|. \end{align*}

Proof. First of all, the condition that $A$ is a stochastic matrix implies

(2.4) \begin{equation} \sum _{j=1}^{N}\max \{0,a_{ij}-a_{kj}\}+\sum _{j=1}^{N}\min \{0,a_{ij}-a_{kj}\}=\sum _{j=1}^{N}(a_{ij}-a_{kj})=0. \end{equation}

Then, the direct calculation from (2.3) yields

(2.5) \begin{eqnarray} \begin{aligned} \|w_i-w_k\|&=\left \|\sum _{j=1}^{N}a_{ij}z_j+b_i-\sum _{j=1}^{N}a_{kj}z_j-b_k \right \|\\ &=\left \|\sum _{j=1}^{N}(a_{ij}-a_{kj})z_j+(b_i-b_k) \right \|\\ &\leq \left \|\sum _{j=1}^{N}(a_{ij}-a_{kj})z_j \right \|+\|b_i-b_k\|\\ &=\sup _{\substack{\phi \in (\mathbb{R}^d)^*\\\|\phi \|=1}}\sum _{j=1}^{N}(a_{ij}-a_{kj})\phi (z_j) +\|b_i-b_k\|\\ &=\sup _{\substack{\phi \in (\mathbb{R}^d)^*\\\|\phi \|=1}}\left [\sum _{j=1}^{N}\max \{0,a_{ij}-a_{kj}\}\phi (z_j)+\sum _{j=1}^{N}\min \{0,a_{ij}-a_{kj}\}\phi (z_j)\right ] +\|b_i-b_k\|\\ &\leq \sup _{\substack{\phi \in (\mathbb{R}^d)^*\\\|\phi \|=1}}\left [\sum _{j=1}^{N}\max \{0,a_{ij}-a_{kj}\}\max _{n\in [N]}\phi (z_n)+\sum _{j=1}^{N}\min \{0,a_{ij}-a_{kj}\}\min _{n\in [N]}\phi (z_n)\right ]\\ &\quad +\|b_i-b_k\|. \end{aligned} \end{eqnarray}

Now, we substitute (2.4) into (2.5) to get

\begin{equation*} \begin{aligned} \|w_i-w_k\|&\leq \sup _{\substack{\phi \in (\mathbb{R}^d)^*\\\|\phi \|=1}}\sum _{j=1}^{N}\max \{0,a_{ij}-a_{kj}\}\max _{n,m\in [N]}\phi (z_n-z_m)+\|b_i-b_k\|\\ &=\sum _{j=1}^{N}\max \{0,a_{ij}-a_{kj}\}\max _{n,m\in [N]}\|z_n-z_m\|+\|b_i-b_k\|\\ &=\sum _{j=1}^{N}\left (a_{ij}-\min \{a_{ij},a_{kj}\}\right )\max _{n,m\in [N]}\|z_n-z_m\|+\|b_i-b_k\|\\ &\leq (1-\mu (A))D_Z+D_B, \end{aligned} \end{equation*}

where we used the definition of $\mu (A)$ and the fact

\begin{equation*}\sum _{j=1}^{N}a_{ij}=1\end{equation*}

in the last inequality, which implies our desired result.

Lemma 2.4. [Reference Wu33] For each $i\in [N-1]$ , let $A_i\in \mathbb{R}^{N\times N}$ be a non-negative matrix whose diagonal components are all strictly positive and whose digraph $\mathcal{G}(A_i)$ has a spanning tree. Then, $A_1\cdots A_{N-1}$ is a scrambling matrix.
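Lemma 2.4 can be checked numerically on a concrete family: a directed chain with self-loops contains a spanning tree but is not scrambling by itself, while the product of $N-1$ copies is scrambling. A self-contained illustrative sketch, using the ergodicity coefficient $\mu$ as the scrambling test:

```python
import numpy as np

N = 4
# Chain digraph with self-loops: agent i-1 conveys information to agent i.
# Its diagonal is positive and it has a spanning tree rooted at vertex 0.
chain = np.eye(N)
for i in range(1, N):
    chain[i, i - 1] = 1.0

def mu(a):
    """Ergodicity coefficient; a matrix is scrambling iff mu > 0."""
    n = len(a)
    return min(np.minimum(a[i], a[j]).sum() for i in range(n) for j in range(n))

product = np.linalg.multi_dot([chain] * (N - 1))   # A_1 ... A_{N-1}
```

Here $\mu(\text{chain})=0$ (rows $0$ and $N-1$ share no positive column), whereas $\mu(\text{product})>0$ since every row of the product has a positive entry in column $0$, exactly as the lemma predicts.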

Finally, we review several properties of the state-transition matrix. Let $t_0\in \mathbb{R}$ and let $A\;:\; [t_0,\infty )\to \mathbb{R}^{N\times N}$ be a right-continuous matrix-valued function. We consider a linear ODE governed by the following Cauchy problem:

(2.6) \begin{equation} \frac{d\mathcal{\xi }(t)}{dt}=A(t)\mathcal{\xi }(t),\quad t\gt t_0. \end{equation}

Then, the solution of (2.6) can be written as

(2.7) \begin{equation} \mathcal{\xi }(t)=\Phi (t,t_0)\mathcal{\xi }(t_0),\quad t\geq t_0, \end{equation}

where the $A$ -dependent matrix $\Phi (t,t_0)$ is called the state-transition matrix, which can be represented by the Peano–Baker series [Reference Sontag31]:

(2.8) \begin{equation} \Phi (t,t_0)=I_{N}+ \sum _{k=1}^\infty \int _{t_0}^t\int _{t_0}^{s_1}\cdots \int _{t_0}^{s_{k-1}}A(s_1)\cdots A(s_{k})ds_k\cdots ds_1. \end{equation}

Now, fix $c\in \mathbb{R}$ and consider the following two Cauchy problems:

(2.9) \begin{equation} \frac{d\mathcal{\xi }(t)}{dt}=A(t)\mathcal{\xi }(t),\quad \frac{d\mathcal{\xi }(t)}{dt}=(A(t)+cI_{N})\mathcal{\xi }(t),\quad t\gt t_0. \end{equation}

If $\Phi (t,t_0)$ and $\Psi (t,t_0)$ are the state-transition matrices for the two linear ODEs in (2.9), respectively, then the authors of [Reference Dong, Ha and Kim15] obtained the following relation between $\Phi (t,t_0)$ and $\Psi (t,t_0)$ :

(2.10) \begin{equation} \Phi (t,t_0)=\exp ({-}c(t-t_0))\Psi (t,t_0),\quad \,t\gt t_0. \end{equation}
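Both the Peano–Baker representation (2.8) and the relation (2.10) can be probed numerically. The sketch below (an illustrative truncation of (2.8), approximating the iterated integrals by trapezoid sums on a uniform grid) computes $\Phi(t,t_0)$ for a callable $A(\cdot)$:

```python
import numpy as np

def peano_baker(A, t0, t, terms=25, grid=400):
    """Truncated Peano-Baker series (2.8); A is a callable s -> (N, N) matrix.
    The iterated integrals are approximated by cumulative trapezoid sums."""
    s = np.linspace(t0, t, grid + 1)
    ds = (t - t0) / grid
    N = A(t0).shape[0]
    Phi = np.eye(N)
    I_prev = np.broadcast_to(np.eye(N), (grid + 1, N, N)).copy()  # I_0 = Identity
    for _ in range(terms):
        # integrand of the next iterated integral: A(u) I_{k-1}(u) on the grid
        f = np.array([A(u) @ I_prev[m] for m, u in enumerate(s)])
        I_cur = np.concatenate([np.zeros((1, N, N)),
                                np.cumsum(0.5 * (f[:-1] + f[1:]), axis=0) * ds])
        Phi = Phi + I_cur[-1]   # add the k-th term evaluated at time t
        I_prev = I_cur
    return Phi
```

For a constant rotation generator this reproduces the matrix exponential, and shifting $A$ by $cI_N$ rescales the result by $e^{-c(t-t_0)}$, in agreement with (2.10).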

3. Emergence of stochastic flocking

In this section, we present the sufficient framework for the flocking model with unit-speed constraint and randomly switching network topology to exhibit asymptotic flocking. Recall that the model we are interested in this paper is

(3.1) \begin{equation} \begin{cases} \displaystyle \frac{d{x}_i}{dt} =v_i,\quad t\gt 0,\quad i\in [N],\\ \displaystyle \frac{dv_i}{dt}=\displaystyle \frac{1}{N}\sum _{j=1}^{N}\chi ^{\sigma }_{ij}\phi (\|x_i-x_j\|)\left (v_j-\frac{\langle v_j,v_i\rangle v_i}{\|v_i\|^2}\right ),\\ \displaystyle (x_i(0,w),v_i(0,w))=(x_i^0(w),v_i^0(w))\in \mathbb{R}^{d}\times \mathbb{S}^{d-1}. \end{cases} \end{equation}

3.1. Sufficient frameworks

In this subsection, we describe suitable sufficient frameworks in terms of initial data and system parameters to guarantee the asymptotic flocking of the system (3.1). For this, motivated by the methodologies studied in [Reference Dong, Ha, Jung and Kim13], we assume the following conditions:

  • $(\mathcal{F}1)$ There exists a probability space $(\Omega,\mathcal{F},\mathbb{P})$ and a sequence of independent and identically distributed (i.i.d) random variables $\tau _i: \Omega \to \mathbb{R},\,i\in \mathbb{N}$ such that

    \begin{equation*}\exists \,m,M\gt 0,\quad \mathbb {P}(m\leq \tau _i\leq M)=1,\quad i\in \mathbb {N}. \end{equation*}
  • $(\mathcal{F}2)$ Define the sequence of random variables $\{t_n: \Omega \to \mathbb{R}\}_{n\in \mathbb{N}\cup \{0\}}$ as

    \begin{equation*}t_n=\begin {cases} 0 & n=0,\\ \sum _{i=1}^n\tau _i&n\geq 1. \end {cases} \end{equation*}
    Then for each $\omega \in \Omega$ and $n\in \mathbb{N}$ , the sample path $\sigma (\cdot,\omega )$ satisfies
    \begin{equation*}\sigma (t,\omega )=\sigma (t_n(\omega ),\omega )\in [N_G],\quad \forall \,t\in [t_n(\omega ),t_{n+1}(\omega )), \end{equation*}
    and this implies that the process $\sigma$ is right continuous. In addition, we use the following notation for simplicity:
    \begin{equation*}\sigma _{t_n}(\omega )\;:\!=\; \sigma (t_n(\omega ),\omega ),\quad \omega \in \Omega,\quad n\in \mathbb {N}\cup \{0\}. \end{equation*}
  • $(\mathcal{F}3)$ $\{\sigma _{t_n}:\Omega \to [N_G]\}_{n\in \mathbb{N}\cup \{0\}}$ is the sequence of i.i.d random variables, where

    \begin{equation*}\mathcal {P}_l\;:\!=\; \mathbb {P}(\sigma _{t_n}=l)\gt 0\quad \text {for all}\quad l\in [N_G]\quad \text {and}\quad n\in \mathbb {N}\cup \{0\}.\end{equation*}
  • $(\mathcal{F}4)$ The union of all admissible network topologies $\{\mathcal{G}_1,\ldots,\mathcal{G}_{N_G}\}$ contains a spanning tree with vertices $\mathcal{V}=\{\beta _1,\ldots,\beta _N\}$ .

  • $(\mathcal{F}5)$ There exist $R\gt 1$ and $\bar{n}\in \mathbb{N}$ such that

    \begin{equation*} \begin{aligned} &r\;:\!=\; \frac{R}{\displaystyle \min _{1\leq l\leq N_G}\{-\log (1-\mathcal{P}_l)\}}\lt \frac{1}{M(N-1)\phi (0)},\quad \sum _{l=1}^{N_G}(1-\mathcal{P}_l)^{\bar{n}}\leq 1-\frac{1}{R},\\ &\left (\frac{m\phi (0)\exp ({-}\phi (0)M\bar{n})(N-1)^{-rM\phi (0)}}{N}\right )^{N-1}\leq \log 2, \end{aligned} \end{equation*}
  • $(\mathcal{F}6)$ There exist $\overline{M}\gt 0$ and a sufficiently large number $D_X^\infty \gt 0$ such that for $\mathbb{P}$ -almost every $w\in \Omega$ ,

    \begin{equation*} \begin {aligned} D_V(0,w)\lt \min \left \{\frac {N\log \overline {M}}{(N-1)\phi (0)\overline {M}C_0},\sqrt {2}\right \},\, D_X(0,w)+\overline {M}C_0D_V(0,w)\lt D_X^\infty,\\ \end {aligned}\end{equation*}
    where we set
    \begin{equation*} \begin{aligned} &C\;:\!=\; \left (\frac{m\phi (D_X^\infty )\exp ({-}\phi (0)M\bar{n})(N-1)^{-rM\phi (0)}}{N}\right )^{N-1},\\ &C_0\;:\!=\; M(N-1)\sum _{l=1}^\infty \Bigg [\big (\bar{n}+r\log l(N-1)\big )\exp \bigg ({-}\frac{C\left (l^{1-rM(N-1)\phi (0)}-1\right )}{1-rM(N-1)\phi (0)}\bigg )\Bigg ]. \end{aligned} \end{equation*}

Here, conditions $(\mathcal{F}1)-(\mathcal{F}4)$ mean that the interaction network is randomly and repeatedly redrawn after a period of time within an appropriate range, and that the probability of each network topology being selected is the same regardless of time. Each of these network topologies may not have sufficient connectivity on its own, but the fact that the union of the selectable graphs contains a spanning tree allows all agents to send or receive information without being isolated in the long run. In addition, $(\mathcal{F}5)-(\mathcal{F}6)$ quantitatively represent the conditions on the initial data that ensure the flocking phenomenon. In summary, they mean that the essential supremum of the initial velocity diameter ( $\displaystyle =\textrm{ess}\,{\rm sup}_{\omega \in \Omega }D_V(0,\omega )$ ) has to be sufficiently small.
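The switching mechanism in $(\mathcal{F}1)$–$(\mathcal{F}3)$ can be sketched in code. In the sketch below (illustrative only), the dwell times $\tau_i$ are drawn uniformly from $[m,M]$ and the labels $\sigma_{t_n}$ uniformly from $[N_G]$; these are merely one admissible choice, since the framework only requires bounded i.i.d. dwell times and i.i.d. labels with positive probabilities:

```python
import numpy as np

def sample_switching_path(m, M, N_G, horizon, rng):
    """Sample one path: switching instants t_n with tau_i ~ Uniform[m, M]
    and topology labels sigma_{t_n} ~ Uniform{1, ..., N_G}."""
    t, times, labels = 0.0, [0.0], [int(rng.integers(1, N_G + 1))]
    while t < horizon:
        t += rng.uniform(m, M)                   # tau_i in [m, M] almost surely
        times.append(t)
        labels.append(int(rng.integers(1, N_G + 1)))
    return np.array(times), np.array(labels)

def sigma_at(t, times, labels):
    """Right-continuous evaluation: sigma(t) equals the label on [t_n, t_{n+1})."""
    return labels[np.searchsorted(times, t, side='right') - 1]
```

Note that `side='right'` in the lookup realises exactly the right continuity of $\sigma$: at a switching instant $t_n$, the new label is already in force.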

Note that for $\sigma _t(w)\in [N_G]$ , the adjacency matrix $\chi _{ij}^{\sigma _t(w)}$ is defined by

\begin{align*} \begin{aligned} \chi _{ij}^{\sigma _t(w)}={\begin{cases} 1,\quad \text{if } (j,i)\in \mathcal{E}(\mathcal{G}_{\sigma _t(w)}),\\ 0,\quad \text{if } (j,i)\notin \mathcal{E}(\mathcal{G}_{\sigma _t(w)}), \end{cases}} \end{aligned} \end{align*}

and in particular, we have

\begin{equation*}(\beta _i,\beta _i)\in \mathcal {E}(\mathcal {G}_{k}),\quad i\in [N],\end{equation*}

so that each vertex $\beta _i$ in the graph $\mathcal{G}_k$ has a self-loop.

Remark 3.1. If we consider the network topology as a graph which connects each pair of points sufficiently close to each other (as in the Vicsek model), it is natural to assume that the scale of time interval $\tau _i\in [m,M]$ between network switching becomes smaller when the number of particles $N$ becomes larger. For example, it is widely known that the mean free time of gas molecules in a fixed volume of space is proportional to $1/N$ .

3.2. Asymptotic flocking dynamics

First of all, we consider the union of the network topologies on time intervals of the form $[a,b)$ , which we denote by

\begin{equation*}\mathcal {G}([a,b))(w)\;:\!=\; \left (\mathcal {V},\bigcup _{t\in [a,b)} \mathcal {E}(\mathcal {G}_{\sigma (t,w)})\right ).\end{equation*}

In addition, we also define a sequence $\{n_k=n_k(\bar{n})\}_{k\in \mathbb{N}\cup \{0\}}$ as

\begin{equation*} \begin{aligned} n_k(\bar{n})\;:\!=\; k\bar{n}+\sum _{l=1}^{k}\lfloor r\log l\rfloor,\quad k\in \mathbb{N}\cup \{0\}, \end{aligned} \end{equation*}

where $\lfloor a\rfloor$ denotes the largest integer less than or equal to $a$ . Then, the following lemma provides a lower bound estimate for the probability that the random digraph

\begin{equation*}w\mapsto \mathcal {G}([t_{n_k},t_{n_{k+1}}))(w)\end{equation*}

has a spanning tree for all $k\in \mathbb{N} \cup \{0\}$ .
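Before stating the lemma, the index sequence $n_k(\bar{n})$ can be computed directly; the short sketch below uses hypothetical values $\bar{n}=5$ and $r=2$ purely for illustration.

```python
import math

def n_k(k, n_bar, r):
    """Compute n_k = k*n_bar + sum_{l=1}^{k} floor(r*log(l))."""
    return k * n_bar + sum(math.floor(r * math.log(l)) for l in range(1, k + 1))

# hypothetical parameters, for illustration only
n_bar, r = 5, 2.0
print([n_k(k, n_bar, r) for k in range(6)])  # [0, 5, 11, 18, 25, 33]
```

Note that the gaps $n_{k+1}-n_k=\bar{n}+\lfloor r\log (k+1)\rfloor$ grow only logarithmically in $k$ .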

Lemma 3.1. Let $(X,V)$ be a solution process to (3.1) satisfying $(\mathcal{F}1)-(\mathcal{F}5)$ . Then, the following probability estimate holds:

\begin{align*} &\mathbb{P}\left (\bigcap _{k=0}^{\infty }\bigg \{w\;:\;\mathcal{G}([t_{n_k},t_{n_{k+1}}))(w)\,\textrm{has a spanning tree}\bigg \}\right )\geq \exp \left ({-}\frac{R^2 \log R}{(R-1)^2} \sum _{l=1}^{N_G}(1-\mathcal{P}_l)^{\bar{n}-1}\right ). \end{align*}

Proof. Since $\{\sigma _{t_n}\}_{n\in \mathbb{N}\cup \{0\}}$ is an i.i.d. sequence of random variables, we have

\begin{align*} &\mathbb{P}\left (w\;:\;\mathcal{G}([t_{n_k},t_{n_{k+1}}))(w)\,\text{does not contain a spanning tree}\right )\\[4pt] &\leq \mathbb{P}\left (w\;:\;\exists \,l\in \{1,\ldots,N_G\}\quad \text{such that}\,\,\sigma _{t_{n_k+i}}(w)\neq l\,\,\text{for all}\,\,0\leq i\lt n_{k+1}-n_{k}\right )\\[4pt] &\leq \sum _{l=1}^{N_G}\mathbb{P}\left (w\;:\;\sigma _{t_{n_k+i}}(w)\neq l \,\,\text{for all} \,\,0\leq i\lt n_{k+1}-n_{k}\right )\\[4pt] &=\sum _{l=1}^{N_G}(1-\mathcal{P}_l)^{n_{k+1}-n_{k}}=\sum _{l=1}^{N_G}(1-\mathcal{P}_l)^{\bar{n}+\lfloor r\log (k+1)\rfloor }\leq \sum _{l=1}^{N_G}(1-\mathcal{P}_l)^{\bar{n}}\leq 1-\frac{1}{R}. \end{align*}

Then, by the independence of the switching on disjoint time blocks, the probability that every union graph contains a spanning tree satisfies

(3.2) \begin{equation} \begin{aligned} &\mathbb{P}\left (\bigcap _{k=0}^{\infty }\bigg \{w\;:\;\mathcal{G}([t_{n_k},t_{n_{k+1}}))(w)\,\text{has a spanning tree}\bigg \}\right )\\[5pt] &\quad \geq \prod _{k=0}^\infty \left (1-\sum _{l=1}^{N_G}(1-\mathcal{P}_l)^{\bar{n}+\lfloor r\log (k+1)\rfloor }\right )\\[5pt] &\quad \geq \prod _{k=0}^\infty \exp \left ({-\frac{R\log R }{R-1}\sum _{l=1}^{N_G}(1-\mathcal{P}_l)^{\bar{n}+\lfloor r\log (k+1)\rfloor }}\right )\\[5pt] &\quad =\exp \left ({-}\frac{R \log R}{R-1} \sum _{l=1}^{N_G}(1-\mathcal{P}_l)^{\bar{n}}\sum _{k=0}^\infty (1-\mathcal{P}_l)^{\lfloor r\log (k+1)\rfloor }\right ), \end{aligned} \end{equation}

where we used the following relation for $x\in [0,1-\frac{1}{R}]$ :

\begin{align*} 1-x\geq R^{-\frac{Rx}{R-1}}=\exp \left ({-}\frac{R\log R}{R-1}x \right ). \end{align*}

In addition, since we have

\begin{align*} 1\lt R=\min _{1\leq l\leq N_G}\{-r\log (1-\mathcal{P}_l)\}=-r\max _{1\leq l\leq N_G}\log (1-\mathcal{P}_l), \end{align*}

the convergence of $p$ -series yields

(3.3) \begin{equation} \begin{aligned} \sum _{k=0}^\infty (1-\mathcal{P}_l)^{\lfloor r\log (k+1)\rfloor }&\leq \sum _{k=0}^\infty (1-\mathcal{P}_l)^{ r\log (k+1)-1}\\ &=\frac{1}{1-\mathcal{P}_l}\sum _{k=0}^\infty \frac{1}{(k+1)^{-r\log (1-\mathcal{P}_l)}}\\ &\lt \frac{1}{1-\mathcal{P}_l}\cdot \frac{-r\log (1-\mathcal{P}_l)}{-r\log (1-\mathcal{P}_l)-1}\\ &\leq \frac{1}{1-\mathcal{P}_l}\cdot \frac{R}{R-1}, \end{aligned} \end{equation}

where we used

\begin{equation*} \begin{aligned} &(1-\mathcal{P}_l)^{ r\log (k+1)}=\exp \left (r\log (k+1)\log (1-\mathcal{P}_l) \right )=(k+1)^{r\log (1-\mathcal{P}_l)},\\ &\sum _{k=0}^\infty \frac{1}{(k+1)^{-r\log (1-\mathcal{P}_l)}}\lt 1+\int _{1}^{\infty }\frac{1}{x^{-r\log (1-\mathcal{P}_l)}} dx\\ &\quad =1+\frac{1}{-r\log (1-\mathcal{P}_l)-1}{\left [-\frac{1}{x^{-r\log (1-\mathcal{P}_l)-1}} \right ]}_1^{\infty }\\ &\quad =1+\frac{1}{-r\log (1-\mathcal{P}_l)-1}, \end{aligned} \end{equation*}

in the equality and the second inequality, respectively. Finally, we apply the inequality (3.3) to (3.2) to obtain the desired result:

\begin{align*} \mathbb{P}\left (\bigcap _{k=0}^{\infty }\bigg \{w\;:\;\mathcal{G}([t_{n_k},t_{n_{k+1}}))(w)\,\text{has a spanning tree}\bigg \}\right )\geq \exp \left ({-}\frac{R^2 \log R}{(R-1)^2} \sum _{l=1}^{N_G}(1-\mathcal{P}_l)^{\bar{n}-1}\right ). \end{align*}
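As a numerical sanity check on this proof (not part of the original argument), one can verify the elementary inequality $1-x\geq \exp \left ({-}\frac{R\log R}{R-1}x \right )$ on $[0,1-\frac{1}{R}]$ and evaluate the final lower bound; the switching probabilities $\mathcal{P}_l$ , $r$ and $\bar{n}$ below are hypothetical.

```python
import math

def flocking_prob_lower_bound(probs, n_bar, r):
    """exp(-R^2*log(R)/(R-1)^2 * sum_l (1-P_l)^(n_bar-1)) with R = min_l -r*log(1-P_l)."""
    R = min(-r * math.log(1.0 - p) for p in probs)
    assert R > 1.0, "the framework requires R > 1"
    s = sum((1.0 - p) ** (n_bar - 1) for p in probs)
    return math.exp(-R * R * math.log(R) / (R - 1.0) ** 2 * s)

probs, r, n_bar = [1/3, 1/3, 1/3], 6.0, 20  # hypothetical: N_G = 3, uniform selection
R = min(-r * math.log(1.0 - p) for p in probs)

# 1 - x is the chord of the convex function exp(-R*log(R)/(R-1)*x) on [0, 1-1/R],
# so the inequality holds on the whole interval
for i in range(101):
    x = (1.0 - 1.0 / R) * i / 100
    assert 1.0 - x >= math.exp(-R * math.log(R) / (R - 1.0) * x) - 1e-12

print(flocking_prob_lower_bound(probs, n_bar, r))  # close to 1 for large n_bar
```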

Now, we recall from (2.2) the matrix formulation of (3.1):

(3.4) \begin{equation} \begin{cases} \displaystyle \dot{X}^{(1)}=V^{(1)},\\ \displaystyle \dot{V}^{(1)}=-\frac{1}{N}\mathcal{L}_{\sigma }({X}^{(1)})V^{(1)}+\frac{1}{N}\mathcal{R}_{\sigma }(X^{(1)},V^{(1)}). \end{cases} \end{equation}

Here, we take into account the homogeneous part of (3.4), which becomes

(3.5) \begin{equation} \dot{V}^{(2)}=-\frac{1}{N}\mathcal{L}_{\sigma }(X^{(1)})V^{(2)}. \end{equation}

If we denote the state-transition matrix corresponding to (3.5) as $\overline{\Phi }$ , then it follows from (2.7) that for $a\geq b\geq 0$ ,

(3.6) \begin{equation} V^{(2)}(a)=\overline{\Phi }(a,b)V^{(2)}(b). \end{equation}

Then, the variation-of-constants formula yields the following explicit form of the solution to (3.4):

(3.7) \begin{equation} V^{(1)}(a)=\overline{\Phi }(a,b)V^{(1)}(b)+\frac{1}{N}\int _{b}^{a}\overline{\Phi }(a,s)\mathcal{R}_{\sigma _s}(X^{(1)}(s),V^{(1)}(s))ds. \end{equation}

In the following lemma, we present a lower bound estimate of the ergodicity coefficient for the state-transition matrix $\overline{\Phi }$ when the sample path $(X,V)(\omega )$ satisfies

(3.8) \begin{equation} \mathcal{G}([t_{n_k},t_{n_{k+1}}))(w)\, \text{has a spanning tree for every}\, k\in \mathbb{N} \cup \{0\}, \end{equation}

to apply Lemma 2.3 to (3.7). For this, we assume the following a priori conditions for the moment: there exist a non-negative number $D_X^\infty$ and a constant $\overline{M}\gt 1$ such that

(3.9) \begin{equation} \begin{aligned} &\circ \,\sup _{t\in \mathbb{R}_+}D_X(t,w)\leq D_X^\infty \lt \infty,\\ &\circ \, \frac{(N-1)^2 \phi (0)M}{N} \sup _{k\in \mathbb{N}}\sum _{l=1}^k\big (\bar{n}+r\log (N-1)+r\log l\big )D_V(t_{n_{(l-1)(N-1)}})\leq \log \overline{M}. \end{aligned} \end{equation}

Then, the following lemma allows us to analyse the flocking dynamics of (3.1).

Lemma 3.2. Let $w\in \Omega$ be a sample point satisfying (3.8), and assume the sample path $(X,V)(\omega )$ of the system (3.1) satisfies $(\mathcal{F}1)-(\mathcal{F}5)$ and (3.9) $_1$ . Then, we obtain the following assertions:

  1. For every $k\in \mathbb{N}\cup \{0\}$ , the ergodicity coefficient of $\overline{\Phi }(t_{n_{k(N-1)}},t_{n_{(k-1)(N-1)}})$ satisfies

    \begin{equation*}\mu (\overline {\Phi }(t_{n_{k(N-1)}},t_{n_{(k-1)(N-1)}}))\geq \left (\frac {m\phi (D_X^\infty )}{N}\right )^{N-1}\exp \left ({-\phi (0)\left (t_{n_{k(N-1)}}-t_{n_{(k-1)(N-1)}}\right )}\right ).\end{equation*}
  2. For every $T_1 \geq T_2\geq 0$ , $\overline{\Phi }(T_1,T_2)$ is a stochastic matrix.

Proof. (1) We employ the method used in [Reference Dong, Ha, Jung and Kim13]. First, for $k\in \mathbb{N}\cup \{0\}$ and $q\in [N-1]$ , we denote by $\{k_{q_a}\}_{a=1}^{N_q+1}$ the unique increasing integer-valued sequence such that

\begin{equation*} \begin{aligned} &n_{(k-1)(N-1)+q-1}=k_{q_1}\lt \cdots \lt k_{q_{N_q+1}}=n_{(k-1)(N-1)+q},\\ &\sigma _t=\sigma _{t_{k_{q_a}}}\neq \sigma _{t_{k_{q_{a+1}}}},\quad \forall \,t\in [t_{k_{q_a}},t_{k_{q_{a+1}}}),\quad a\in [N_q], \end{aligned} \end{equation*}

and we use the following notation for simplicity:

\begin{equation*}g_{q_a}\;:\!=\; \sigma _{t_{k_{q_a}}}\in [N_G],\quad \forall \,a\in [N_q].\end{equation*}

Now, we apply (2.6)–(2.10) to write the state-transition matrix for (3.5)–(3.6) in terms of the Laplacian matrices $\mathcal{L}_l$ . Since $\phi$ is monotonically decreasing and $\chi ^{l}_{ij}\in \{0,1\}$ , the condition (3.9) $_1$ implies

\begin{equation*}\phi (\|x_i-x_j\|)\geq \phi (D_X^\infty ) \quad \text {and}\quad 0\leq d_i^l\leq N\phi (0),\quad i,j\in [N]. \end{equation*}

Then, we have

(3.10) \begin{equation} -\frac{1}{N}\mathcal{L}_{g_{q_a}}=\frac{1}{N}\left (\mathcal{A}_{g_{q_a}}-\mathcal{D}_{g_{q_a}}\right )\geq \frac{1}{N}\underline{\mathcal{A}}_{g_{q_a}}-\phi (0)I_N, \end{equation}

where each $(i,j)$ -th entry of the matrix $\underline{\mathcal{A}}_{g_{q_a}}$ is given as

\begin{align*} (\underline{\mathcal{A}}_{g_{q_a}})_{ij} \;:\!=\; \begin{cases} \displaystyle \chi ^{g_{q_a}}_{ij}\phi (D_X^\infty ),\quad &i\neq j,\\ \displaystyle \phi (0),\quad &i=j. \end{cases} \end{align*}

Thus, if $\overline{\Psi }(t_{k_{q_{a+1}}},t_{k_{q_{a}}})$ is the state-transition matrix corresponding to $-\frac{1}{N}\mathcal{L}_{g_{q_a}}+\phi (0)I_N$ , the relation (2.10) yields

\begin{equation*}\overline {\Phi }(t_{k_{q_{a+1}}},t_{k_{q_{a}}})=\exp ({-}\phi (0)(t_{k_{q_{a+1}}}-t_{k_{q_{a}}}))\overline {\Psi }(t_{k_{q_{a+1}}},t_{k_{q_{a}}}),\end{equation*}

and we apply $(\mathcal{F}1)$ and (3.10) to the Peano–Baker series representation for $\overline{\Psi }(t_{k_{q_{a+1}}},t_{k_{q_{a}}})$ to obtain

(3.11) \begin{eqnarray} \begin{aligned} \overline{\Psi }(t_{k_{q_{a+1}}},t_{k_{q_a}})&= I_N+\sum _{k=1}^\infty \int _{t_{k_{q_a}}}^{t_{k_{q_{a+1}}}}\int _{t_{k_{q_a}}}^{s_1}\cdots \int _{t_{k_{q_a}}}^{s_{k-1}}\prod _{b=1}^k \left ({-}\frac{1}{N}\mathcal{L}_{g_{q_a}}(s_b)+\phi (0)I_N \right )ds_k\cdots ds_1\\ &\geq I_N+\sum _{k=1}^\infty \int _{t_{k_{q_a}}}^{t_{k_{q_{a+1}}}}\int _{t_{k_{q_a}}}^{s_1}\cdots \int _{t_{k_{q_a}}}^{s_{k-1}} \left (\frac{1}{N}\underline{\mathcal{A}}_{g_{q_a}} \right )^kds_k\cdots ds_1\\ &= I_N+\sum _{k=1}^\infty \frac{1}{k!}(t_{k_{q_{a+1}}}-t_{k_{q_{a}}})^k\left (\frac{1}{N}\underline{\mathcal{A}}_{g_{q_a}} \right )^k\\ &=\sum _{k=0}^\infty \frac{1}{k!}(t_{k_{q_{a+1}}}-t_{k_{q_{a}}})^k\left (\frac{1}{N}\underline{\mathcal{A}}_{g_{q_a}} \right )^k\\ &= \exp \left (\frac{1}{N}(t_{k_{q_{a+1}}}-t_{k_{q_{a}}})\underline{\mathcal{A}}_{g_{q_a}}\right )\\ &\geq \exp \left (\frac{m}{N}\underline{\mathcal{A}}_{g_{q_a}}\right ). \end{aligned} \end{eqnarray}

Therefore, the state-transition matrix $\overline{\Phi }(t_{k_{q_{N_q+1}}},t_{k_{q_{1}}})=\overline{\Phi }(t_{n_{(k-1)(N-1)}+q},t_{n_{(k-1)(N-1)}+q-1})$ has a following lower bound:

(3.12) \begin{equation} \begin{aligned} \overline{\Phi }(t_{k_{q_{N_q+1}}},t_{k_{q_{1}}})&=\prod _{a=1}^{N_q}\overline{\Phi }(t_{k_{q_{a+1}}},t_{k_{q_a}})\\ &\geq \exp ({-}\phi (0)(t_{k_{q_{N_q+1}}}-t_{k_{q_{1}}}))\prod _{a=1}^{N_q}\exp \left (\frac{m}{N}\underline{\mathcal{A}}_{g_{q_a}}\right )\\ &\geq \frac{m}{N}\exp ({-}\phi (0)(t_{k_{q_{N_q+1}}}-t_{k_{q_{1}}}))\sum _{a=1}^{N_q}\underline{\mathcal{A}}_{g_{q_a}}\\ &\geq \frac{m}{N}\phi (D_X^\infty )\exp ({-}\phi (0)(t_{k_{q_{N_q+1}}}-t_{k_{q_{1}}}))A_q, \end{aligned} \end{equation}

where $A_q$ is the adjacency matrix of $\mathcal{G}([t_{k_{q_1}},t_{k_{q_{N_q+1}}}))$ for each $q\in [N-1]$ . Accordingly, we multiply (3.12) for $q=1,2,\ldots,N-1$ and obtain

(3.13) \begin{equation} \begin{aligned} &\overline{\Phi }(t_{n_{k(N-1)}},t_{n_{(k-1)(N-1)}})\\ &=\prod _{q=1}^{N-1}\overline{\Phi }(t_{n_{(k-1)(N-1)}+q},t_{n_{(k-1)(N-1)}+q-1})\\ &\geq \left (\frac{m}{N}\right )^{N-1}\phi (D_X^\infty )^{N-1}\exp \left ({-}\phi (0)(t_{n_{k(N-1)}}-t_{n_{(k-1)(N-1)}})\right )\prod _{q=1}^{N-1}A_q. \end{aligned} \end{equation}

Since $\mathcal{G}(A_q)$ contains a spanning tree for each $q\in [N-1]$ , Lemma 2.4 implies that $\prod _{q=1}^{N-1}A_q$ is a scrambling matrix and, in particular,

\begin{equation*}\mu \left (\prod _{q=1}^{N-1}A_q\right )\geq 1.\end{equation*}

Therefore, we apply $A\geq B\geq 0 \Longrightarrow \mu (A)\geq \mu (B)$ to (3.13) to obtain

\begin{equation*}\mu (\overline {\Phi }(t_{n_{k(N-1)}},t_{n_{(k-1)(N-1)}}))\geq \left (\frac {m}{N}\right )^{N-1}\phi (D_X^\infty )^{N-1}\exp \left ({-}\phi (0)\left (t_{n_{k(N-1)}}-t_{n_{(k-1)(N-1)}}\right )\right ).\end{equation*}

(2) Observe that the constant vector $V^{(2)}=[1,\ldots,1]^T$ is a solution to (3.5) regardless of $X^{(1)}$ . Then, by (3.6), one has

\begin{equation*}[1,\ldots,1]^T=\overline {\Phi }(T_1,T_2)[1,\ldots,1]^T,\quad \forall \,T_1\geq T_2\geq 0.\end{equation*}

Finally, we combine the above relation with the non-negativity of $\overline{\Psi }(T_1,T_2)$ , obtained from the Peano–Baker series representation as in (3.11), to see that

\begin{equation*}\overline {\Phi }(T_1,T_2)=\exp ({-}\phi (0)(T_1-T_2))\overline {\Psi }(T_1,T_2)\geq 0\end{equation*}

is a non-negative matrix whose row sums are all equal to one, that is, a stochastic matrix, which is our desired result.
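The mechanism behind part (2) can be illustrated on a toy example (not the actual state-transition matrix of the system): any constant matrix with zero row sums and non-negative off-diagonal entries, as is the case for $-\frac{1}{N}\mathcal{L}$ , has a stochastic matrix exponential.

```python
def mat_exp(A, terms=40):
    """Truncated power series for exp(A); adequate for small, mildly scaled matrices."""
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[sum(term[i][p] * A[p][j] for p in range(n)) / k for j in range(n)]
                for i in range(n)]
        result = [[result[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return result

# toy matrix shaped like -L/N: zero row sums, non-negative off-diagonal entries
A = [[-0.5, 0.3, 0.2],
     [0.1, -0.1, 0.0],
     [0.4, 0.4, -0.8]]
P = mat_exp(A)
assert all(abs(sum(row) - 1.0) < 1e-9 for row in P)     # row sums equal one
assert all(entry >= 0.0 for row in P for entry in row)  # entrywise non-negative
```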

Then, we apply Lemma 3.2 to (3.1) $_2$ to obtain the velocity alignment of the system (3.1) under the a priori assumptions and well-prepared initial data.

Lemma 3.3. (Velocity alignment) Let $w\in \Omega$ be a sample point satisfying (3.8), and assume the sample path $(X,V)(\omega )$ of the system (3.1) satisfies $(\mathcal{F}1)-(\mathcal{F}5)$ and (3.9) $_1$ . If we further assume

\begin{equation*} D_V(0,w)\lt \sqrt {2},\end{equation*}

the following inequality holds for all $t\in [t_{n_{k(N-1)}},t_{n_{(k+1)(N-1)}})$ and $k\in \mathbb{N}\cup \{0\}$ :

\begin{align*} D_V(t,w)\leq \overline{M}D_V(0,w)\exp \Bigg ({-}\frac{C\left ((k+1)^{1-rM(N-1)\phi (0)}-1\right )}{1-rM(N-1)\phi (0)}\Bigg ), \end{align*}

where $C$ is the constant defined in $(\mathcal{F}6)$ , that is,

\begin{equation*}C\;:\!=\; \left (\frac {m\phi (D_X^\infty )\exp ({-}\phi (0)M\bar {n})(N-1)^{-rM\phi (0)}}{N}\right )^{N-1}.\end{equation*}

Proof. In this proof, we suppress the $w$ -dependence to simplify the notation. For every $k\in \mathbb{N}$ , we apply the explicit formula (3.7) with $a=t_{n_{k(N-1)}}$ and $b=t_{n_{(k-1)(N-1)}}$ to obtain

\begin{equation*}V(t_{n_{k(N-1)}})=\overline {\Phi }(t_{n_{k(N-1)}},t_{n_{(k-1)(N-1)}})V(t_{n_{(k-1)(N-1)}})+\frac {1}{N}\int _{t_{n_{(k-1)(N-1)}}}^{t_{n_{k(N-1)}}}\overline {\Phi }(t_{n_{k(N-1)}},s)\mathcal {R}_{\sigma _s}(s)ds.\end{equation*}

Since $\overline{\Phi }(t_{n_{k(N-1)}},t_{n_{(k-1)(N-1)}})$ is a stochastic matrix by Lemma 3.2, we apply Lemmas 2.3 and 3.2 to get the following estimate for $D_V$ :

\begin{equation*} \begin {aligned} &D_V(t_{n_{k(N-1)}})\\ &\,\,\leq \left (1-\mu (\overline {\Phi }(t_{n_{k(N-1)}},t_{n_{(k-1)(N-1)}}))\right )D_V(t_{n_{(k-1)(N-1)}})+D_B\\ &\,\,\leq \left (1-\left (\frac {m}{N}\right )^{N-1}\phi (D_X^\infty )^{N-1}\exp \left ({-}\phi (0)\left (t_{n_{k(N-1)}}-t_{n_{(k-1)(N-1)}}\right )\right )\right )D_V(t_{n_{(k-1)(N-1)}})+D_B\\ &\,\,\;=\!:\;\mathcal {I}+D_B, \end {aligned} \end{equation*}

where the matrix $B$ is defined as

\begin{equation*}B\;:\!=\; \frac {1}{N}\int _{t_{n_{(k-1)(N-1)}}}^{t_{n_{k(N-1)}}}\overline {\Phi }(t_{n_{k(N-1)}},s)\mathcal {R}_{\sigma _s}(s)ds. \end{equation*}

In what follows, we estimate $\mathcal{I}$ and $D_B$ one by one.

$\bullet$ (Estimate of $\mathcal{I}$ ): By using $(\mathcal{F}1)$ and the definition of $n_k(\bar{n})$ , we have

\begin{align*} \mathcal{I}&\leq \left (1-\left (\frac{m\phi (D_X^\infty )}{N}\right )^{N-1}\exp \bigg ({-}\phi (0)M\bigg ((N-1)\bar{n}+\sum _{l=(k-1)(N-1)+1}^{k(N-1)}\lfloor r\log l\rfloor \bigg )\bigg )\right )\\ &\qquad \times D_V(t_{n_{(k-1)(N-1)}}). \end{align*}

$\bullet$ (Estimate of $D_B$ ): Since $D_B$ is the maximum distance between row vectors of $B$ , one can easily verify that

\begin{equation*}D_B\leq \frac {1}{N}\int _{t_{n_{(k-1)(N-1)}}}^{t_{n_{k(N-1)}}}D_{\overline {\Phi }(t_{n_{k(N-1)}},s)\mathcal {R}_{\sigma _s}(s)}ds. \end{equation*}

Then, since $\overline{\Phi }(t_{n_{k(N-1)}},s)$ is stochastic, we apply Lemma 2.3 to the integrand $D_{\overline{\Phi }(t_{n_{k(N-1)}},s)\mathcal{R}_{\sigma _s}(s)}$ to obtain

\begin{equation*} \begin{aligned} D_B&\leq \frac{1}{N}\int _{t_{n_{(k-1)(N-1)}}}^{t_{n_{k(N-1)}}}D_{\overline{\Phi }(t_{n_{k(N-1)}},s)\mathcal{R}_{\sigma _s}(s)}ds\\[5pt] &\leq \frac{1}{N}\int _{t_{n_{(k-1)(N-1)}}}^{t_{n_{k(N-1)}}}(1-\mu (\overline{\Phi }(t_{n_{k(N-1)}},s)))D_{\mathcal{R}_{\sigma _s}(s)}ds\\[5pt] &\leq \frac{1}{N}\int _{t_{n_{(k-1)(N-1)}}}^{t_{n_{k(N-1)}}}D_{\mathcal{R}_{\sigma _s}(s)}ds. \end{aligned} \end{equation*}

Hence, it suffices to find an upper bound for $D_{\mathcal{R}_{\sigma _s}(s)}$ , where the matrix $\mathcal{R}_{\sigma _s}(s)$ is given by

\begin{align*} \mathcal{R}_{{\sigma _s}}\;:\!=\; (r^{\sigma _s}_1,\ldots, r^{\sigma _s}_N)^T\quad \text{and} \quad r^{{\sigma _s}}_i\;:\!=\; \frac{1}{2}\sum _{j=1}^{N}\chi ^{{\sigma _s}}_{ij}\phi (\|x_i-x_j\|)\|v_i-v_j\|^2v_i. \end{align*}

To do this, we use Lemma 2.1, $\phi (\cdot )\leq \phi (0)$ , $\chi ^{\sigma _s}_{ij}\in \{0,1\}$ and Corollary 2.1 to deduce that for $s\in [t_{n_{(k-1)(N-1)}},t_{n_{k(N-1)}})$ ,

\begin{align*} \|r^{\sigma _s}_i\|&\leq \frac{1}{2}\sum _{j=1}^{N}\chi ^{\sigma _s}_{ij}\phi (\|x_i-x_j\|)\|v_i-v_j\|^2\|v_i\|\leq \frac{1}{2}\sum _{j=1}^{N}\phi (0)\|v_i-v_j\|^2\|v_i\|\\[5pt] &\leq \frac{1}{2}\sum _{j=1}^{N}\phi (0)\|v_i-v_j\|^2=\frac{1}{2}\sum _{\substack{1\leq j\leq N\\j\neq i}}\phi (0)\|v_i-v_j\|^2\\[5pt] &\leq \frac{1}{2}\sum _{\substack{1\leq j\leq N\\j\neq i}}\phi (0)D_V^2(s)\\[5pt] &\leq \frac{1}{2}\sum _{\substack{1\leq j\leq N\\j\neq i}}\phi (0)D_V(t_{n_{(k-1)(N-1)}})^2\\[5pt] &= \frac{(N-1)}{2}\phi (0)D_V(t_{n_{(k-1)(N-1)}})^2. \end{align*}

Therefore, the diameter $D_B$ has the following upper bound:

\begin{equation*} \begin {aligned} D_B &\leq \frac {1}{N}\int _{t_{n_{(k-1)(N-1)}}}^{t_{n_{k(N-1)}}}D_{\mathcal {R}_{\sigma _s}(s)}ds\\ &\leq \frac {1}{N}\int _{t_{n_{(k-1)(N-1)}}}^{t_{n_{k(N-1)}}}(N-1)\phi (0)D_V(t_{n_{(k-1)(N-1)}})^2ds \\ &= \frac {N-1}{N}\phi (0)(t_{n_{k(N-1)}}-t_{n_{(k-1)(N-1)}})D_V(t_{n_{(k-1)(N-1)}})^2\\ &\leq \frac {N-1}{N}\phi (0)M\left ((N-1)\bar {n}+\sum _{l=(k-1)(N-1)+1}^{k(N-1)}\lfloor r\log l\rfloor \right )D_V(t_{n_{(k-1)(N-1)}})^2. \end {aligned} \end{equation*}

Thus, we combine two estimates of $\mathcal{I}$ and $D_B$ to obtain

\begin{align*} &D_V(t_{n_{k(N-1)}})\\ &\leq \left (1-\left (\frac{m\phi (D_X^\infty )}{N}\right )^{N-1}\exp \bigg ({-}\phi (0)M\bigg ((N-1)\bar{n}+\sum _{l=(k-1)(N-1)+1}^{k(N-1)}\lfloor r\log l\rfloor \bigg )\bigg )\right )\\ &\quad \times D_V(t_{n_{(k-1)(N-1)}})\\ &\quad +\frac{N-1}{N}\phi (0)M\left ((N-1)\bar{n}+\sum _{l=(k-1)(N-1)+1}^{k(N-1)}\lfloor r\log l\rfloor \right )D_V(t_{n_{(k-1)(N-1)}})^2\\ &\leq \left (1-\left (\frac{m\phi (D_X^\infty )}{N}\right )^{N-1}\exp \bigg ({-}\phi (0)M\bigg ((N-1)\bar{n}+\sum _{l=(k-1)(N-1)+1}^{k(N-1)}r\log (k(N-1))\bigg )\bigg )\right )\\ &\qquad \times D_V(t_{n_{(k-1)(N-1)}})\\ &\quad +\frac{N-1}{N}\phi (0)M\left ((N-1)\bar{n}+\sum _{l=(k-1)(N-1)+1}^{k(N-1)} r\log (k(N-1))\right )D_V(t_{n_{(k-1)(N-1)}})^2\\ &= \left (1-\left (\frac{m\phi (D_X^\infty )}{N}\right )^{N-1}\exp \bigg ({-}\phi (0)M(N-1)\bigg (\bar{n}+r\log (k(N-1))\bigg )\bigg )\right ) D_V(t_{n_{(k-1)(N-1)}})\\ &\quad +\frac{(N-1)}{N}\phi (0)M\left ((N-1)\bigg (\bar{n}+r\log (k(N-1))\bigg )\right )D_V(t_{n_{(k-1)(N-1)}})^2\\ &= \left (1-\left (\frac{m\phi (D_X^\infty )\exp ({-}\phi (0)M\bar{n})}{N}\right )^{N-1}(k(N-1))^{-rM(N-1)\phi (0)}\right )D_V(t_{n_{(k-1)(N-1)}})\\ &\quad +\frac{(N-1)^2}{N}\phi (0)M\bigg (\bar{n}+r\log (k(N-1))\bigg )D_V(t_{n_{(k-1)(N-1)}})^2\\ &=\Bigg [1+\frac{(N-1)^2}{N}\phi (0)M\bigg (\bar{n}+r\log (N-1)+r\log k\bigg )D_V(t_{n_{(k-1)(N-1)}})\\ &\quad -\left (\frac{m\phi (D_X^\infty )\exp ({-}\phi (0)M\bar{n})(N-1)^{-rM\phi (0)}}{N}\right )^{N-1}k^{-rM(N-1)\phi (0)}\Bigg ]D_V(t_{n_{(k-1)(N-1)}})\\ &\leq \exp \left (\frac{(N-1)^2}{N}\phi (0)M\bigg (\bar{n}+r\log (N-1)+r\log k\bigg )D_V(t_{n_{(k-1)(N-1)}})\right )\\ &\quad \times \exp \left ({-}Ck^{-rM(N-1)\phi (0)}\right )D_V(t_{n_{(k-1)(N-1)}}), \end{align*}

where we used the following relation (a consequence of $1+y\leq e^y$ ) in the last inequality:

\begin{equation*}(1+\alpha )-x \leq \exp (\alpha -x),\quad \forall x\geq 0.\end{equation*}

By iterating the above process, we apply the a priori condition (3.9) $_2$ to obtain the following inequality for $t\in [t_{n_{k(N-1)}},t_{n_{(k+1)(N-1)}})$ :

\begin{align*} D_V(t)&\leq D_V(0)\exp \left (\frac{(N-1)^2}{N}\phi (0)M\sum _{l=1}^k\big (\bar{n}+r\log (N-1)+r\log l\big )D_V(t_{n_{(l-1)(N-1)}})\right )\\ &\quad \times \exp \left ({-}C\sum _{l=1}^k l^{-rM(N-1)\phi (0)}\right )\\ &\leq \overline{M}D_V(0)\exp \Bigg ({-}C\frac{(k+1)^{1-rM(N-1)\phi (0)}-1}{1-rM(N-1)\phi (0)}\Bigg ), \end{align*}

where we used

\begin{equation*}\sum _{l=1}^k l^{-rM(N-1)\phi (0)}\geq \int _{1}^{k+1} x^{-rM(N-1)\phi (0)} dx=\frac {(k+1)^{1-rM(N-1)\phi (0)}-1}{1-rM(N-1)\phi (0)} \end{equation*}

in the last inequality, which completes the proof.
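The algebraic-in- $k$ envelope obtained in Lemma 3.3 can be tabulated numerically. All parameter values in the sketch below are hypothetical; they only need to satisfy $rM(N-1)\phi (0)\in (0,1)$ , as required in $(\mathcal{F}5)$ .

```python
import math

def velocity_diameter_bound(k, N, m, M, r, n_bar, phi0, phiDX, M_bar, DV0):
    """Evaluate M_bar*D_V(0)*exp(-C*((k+1)^(1-a)-1)/(1-a)) with a = r*M*(N-1)*phi0."""
    a = r * M * (N - 1) * phi0
    assert 0.0 < a < 1.0
    C = (m * phiDX * math.exp(-phi0 * M * n_bar) * (N - 1) ** (-r * M * phi0) / N) ** (N - 1)
    return M_bar * DV0 * math.exp(-C * ((k + 1) ** (1.0 - a) - 1.0) / (1.0 - a))

# hypothetical parameters for illustration only
params = dict(N=3, m=1.0, M=1.2, r=0.1, n_bar=2, phi0=1.0, phiDX=0.5,
              M_bar=math.e, DV0=0.1)
vals = [velocity_diameter_bound(k, **params) for k in range(5)]
assert all(vals[i] > vals[i + 1] for i in range(4))  # the bound decreases in k
```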

Remark 3.2. In [Reference Dong, Ha, Jung and Kim13], the authors used a similar argument to provide a sufficient framework for the Cucker–Smale model to exhibit asymptotic flocking. For the Cucker–Smale model, the $D_B$ term in Lemma 3.3 does not exist, so the sufficient framework does not need to constrain the upper bound of the initial $D_V$ with respect to $\bar{n}$ . Therefore, it was possible to take $\bar{n}\to \infty$ to show that the probability of exhibiting asymptotic flocking is $1$ for some well-prepared initial data and system parameters. In the model (3.1), however, the sufficient framework needs to constrain $D_V$ with respect to $\bar{n}$ (via $C_0$ in this paper) due to the existence of this $D_B$ term (see Lemma 2.3).

From Lemma 3.3, we can estimate the decay rate of velocity diameter $D_V$ in terms of $\overline{M},m,M,r,N$ . Therefore, we can determine a suitable sufficient condition in terms of initial data and system parameters to make the assumption (3.9) $_2$ imply the assumption (3.9) $_1$ .

Lemma 3.4. (Group formation) Let $w\in \Omega$ be a sample point satisfying (3.8), and assume the sample path $(X,V)(\omega )$ of the system (3.1) satisfies $(\mathcal{F}1)-(\mathcal{F}5)$ and (3.9) $_2$ . If we further assume

(3.14) \begin{equation} \begin{aligned} &D_V(0,w)\lt \sqrt{2},\\ &D_X(0,w)+\overline{M}C_0D_V(0,w)\lt D_X^\infty,\\ \end{aligned} \end{equation}

the first a priori assumption (3.9) $_1$ also holds, that is,

\begin{equation*}\sup _{t\in \mathbb {R}_+}D_X(t,w)\leq D_X^\infty,\end{equation*}

where $C_0$ is the constant defined in $(\mathcal{F}6)$ .

Proof. First, we define $S$ as

\begin{equation*}S\;:\!=\; \{t\gt 0\,|\,D_X(s,w)\leq D_X^\infty,\, \forall \,s\in [0,t)\},\end{equation*}

and claim:

\begin{equation*} t^*\;:\!=\; \sup S=+\infty . \end{equation*}

To see this, suppose that the contrary holds, that is, $t^*\lt +\infty$ . Since $D_X$ is continuous in $t$ and (3.14) implies $D_X(0,\omega )\lt D_X^\infty$ , we have $t^*\gt 0$ and

\begin{equation*}D_X(t^*-,w)=D_X^\infty .\end{equation*}

Then, we integrate the result of Lemma 3.3 over $[0,t^*]$ and apply (3.14) to obtain, for $t\in [0,t^*)$ ,

\begin{align*} D_X(t,w)&\leq D_X(0,w)+ \int _0^{t^*} D_V(s,w) ds\\[5pt] &\leq D_X(0,w)+ \int _0^{\infty } D_V(s,w) ds\\[5pt] &\leq D_X(0,w)+\overline{M}D_V(0,w)\sum _{k=0}^\infty \Bigg [(t_{n_{(k+1)(N-1)}}-t_{n_{k(N-1)}})\\[5pt] &\qquad \times \exp \bigg ({-}\frac{C\left ((k+1)^{1-rM(N-1)\phi (0)}-1\right )}{1-rM(N-1)\phi (0)}\bigg )\Bigg ]\\[5pt] &\leq D_X(0,w)+\overline{M}M(N-1)D_V(0,w)\sum _{k=0}^\infty \Bigg [\big (\bar{n}+r\log ((k+1)(N-1))\big )\\[5pt] &\qquad \times \exp \bigg ({-}\frac{C\left ((k+1)^{1-rM(N-1)\phi (0)}-1\right )}{1-rM(N-1)\phi (0)}\bigg )\Bigg ]\\[5pt] &=D_X(0,w)+\overline{M}C_0D_V(0,w)\lt D_X^\infty . \end{align*}

By taking $t\to t^*$ , this inequality yields

\begin{equation*}D_X(t^*-,w)=\lim _{t \to t^*-}D_X(t,\omega )\leq D_X(0,w)+\overline {M}C_0D_V(0,w)\lt D_X^\infty,\end{equation*}

which contradicts $D_X(t^*-,w)=D_X^\infty$ . Therefore, we conclude that $t^*=+\infty$ , which is our desired result.

Finally, we show that the condition $(\mathcal{F}6)$ implies the assumption (3.9) $_2$ , so that asymptotic flocking occurs for every $\omega \in \Omega$ satisfying (3.8).

Lemma 3.5. Let $w\in \Omega$ be a sample point satisfying (3.8), and assume the sample path $(X,V)(\omega )$ of the system (3.1) satisfies $(\mathcal{F}1)-(\mathcal{F}6)$ . Then, the assumption (3.9) $_2$ holds, that is,

\begin{equation*}\frac {(N-1)^2\phi (0)M}{N} \cdot \sup _{k\in \mathbb {N}}\Bigg [\sum _{l=1}^k\big (\bar {n}+r\log l(N-1)\big )D_V(t_{n_{(l-1)(N-1)}})\Bigg ]\leq \log \overline {M}.\end{equation*}

Proof. First, define $\mathcal{S}$ as the subset of $\mathbb{N}$ given by

\begin{align*} \mathcal{S}\;:\!=\;&\Bigg \{k\,\Bigg |\,\frac{(N-1)^2\phi (0)M}{N}\sum _{l=1}^k\big (\bar{n}+r\log l(N-1)\big )D_V(t_{n_{(l-1)(N-1)}})\leq \log \overline{M}\Bigg \}. \end{align*}

Since $(\mathcal{F}6)$ immediately implies $1\in \mathcal{S}$ , we can define $s^*\;:\!=\; \sup \mathcal{S}\in \mathbb{N}\cup \{\infty \}$ . Then, we claim

\begin{equation*} s^*=+\infty . \end{equation*}

To see this, suppose we have $s^*\lt +\infty$ . Then,

(3.15) \begin{equation} \mathcal{J}\;:\!=\; \frac{(N-1)^2\phi (0)M}{N}\sum _{l=1}^{s^*+1}\big (\bar{n}+r\log l(N-1)\big )D_V(t_{n_{(l-1)(N-1)}})\gt \log \overline{M}. \end{equation}

On the other hand, we can apply Corollary 2.1, Lemma 3.3 and $(\mathcal{F}6)$ to get

(3.16) \begin{equation} \begin{aligned} \mathcal{J}&\leq \frac{(N-1)^2\phi (0)M\overline{M}D_V(0)}{N}\\ &\quad \times \Bigg (\sum _{l=1}^{s^*}\big (\bar{n}+r\log l(N-1)\big )\exp \bigg ({-}\frac{C\left (l^{1-rM(N-1)\phi (0)}-1\right )}{1-rM(N-1)\phi (0)}\bigg )\\ &\quad +\big (\bar{n}+r\log (s^*+1)(N-1) \big )\exp \bigg ({-}\frac{C\left ((s^*)^{1-rM(N-1)\phi (0)}-1\right )}{1-rM(N-1)\phi (0)}\bigg )\Bigg ). \end{aligned} \end{equation}

By using $1-rM(N-1)\phi (0)\in (0,1)$ and $C\leq \log 2$ in $(\mathcal{F}5)$ , we have

(3.17) \begin{equation} \begin{aligned} &\sum _{l=s^*+1}^{\infty }\big (\bar{n}+r\log l(N-1)\big )\exp \bigg ({-}\frac{C\left (l^{1-rM(N-1)\phi (0)}-1\right )}{1-rM(N-1)\phi (0)}\bigg )\\ &\quad \geq \big (\bar{n}+r\log (s^*+1)(N-1)\big )\exp \bigg ({-}\frac{C\left ((s^*)^{1-rM(N-1)\phi (0)}-1\right )}{1-rM(N-1)\phi (0)}\bigg )\\ &\qquad \times \sum _{l=s^*+1}^{\infty }\exp \bigg ({-}\frac{C\left (l^{1-rM(N-1)\phi (0)}-(s^*)^{1-rM(N-1)\phi (0)}\right )}{1-rM(N-1)\phi (0)}\bigg )\\ &\quad \geq \big (\bar{n}+r\log (s^*+1)(N-1)\big )\exp \bigg (-\frac{C\left ((s^*)^{1-rM(N-1)\phi (0)}-1\right )}{1-rM(N-1)\phi (0)}\bigg )\\ &\qquad \times \sum _{l=s^*+1}^{\infty }\exp (-C(l-s^*))\\ &\quad \geq \big (\bar{n}+r\log (s^*+1)(N-1)\big )\exp \bigg (-\frac{C\left ((s^*)^{1-rM(N-1)\phi (0)}-1\right )}{1-rM(N-1)\phi (0)}\bigg )\times \sum _{l=1}^{\infty }2^{-l}\\ &\quad =\big (\bar{n}+r\log (s^*+1)(N-1)\big )\exp \bigg (-\frac{C\left ((s^*)^{1-rM(N-1)\phi (0)}-1\right )}{1-rM(N-1)\phi (0)}\bigg ). \end{aligned} \end{equation}

Therefore, we combine (3.16) and (3.17) to obtain

\begin{align*} \mathcal{J} &\leq \frac{(N-1)^2\phi (0)M\overline{M}D_V(0)}{N}\\ &\quad \times \Bigg [\sum _{l=1}^{\infty }\big (\bar{n}+r\log l(N-1)\big )\exp \bigg (-\frac{C\left (l^{1-rM(N-1)\phi (0)}-1\right )}{1-rM(N-1)\phi (0)}\bigg )\Bigg ]\\ &\lt \log \overline{M}, \end{align*}

which leads to a contradiction with (3.15). Hence, $s^*$ must be infinite, which means that the assumption (3.9) $_2$ holds.

Now, we are ready to state our main result. By combining Lemmas 3.1–3.5, we can deduce the following result.

Theorem 3.1. (Probability of asymptotic flocking) Suppose that $(X,V)$ is a solution process of the system (3.1) satisfying $(\mathcal{F}1)-(\mathcal{F}6)$ . Then, we have

\begin{equation*}\mathbb {P}\left (w\in \Omega :\,(X,V)(\omega )\,\textit {exhibits asymptotic flocking}\right )\geq \exp \left (-\frac {R^2 \log R}{(R-1)^2} \sum _{l=1}^{N_G}(1-\mathcal {P}_l)^{\bar {n}-1}\right ).\end{equation*}

Proof. Lemma 3.1 shows that the probability to satisfy (3.8) is greater than or equal to $\displaystyle \exp \left (-\frac{R^2 \log R}{(R-1)^2} \sum _{l=1}^{N_G}(1-\mathcal{P}_l)^{\bar{n}-1}\right )$ . Then, we apply Lemmas 3.3–3.5 to obtain the desired result.

To check whether this result is meaningful, we compare the expected behaviour of the trivial solution with the result in Theorem 3.1. On the one hand, one can easily verify that the solution of (3.1) becomes uniform linear motion of all agents with the same velocity when the event $\omega$ satisfies $D_V(0,\omega )=0$ . On the other hand, the following corollary shows that the probability of exhibiting asymptotic flocking converges to $1$ when $\displaystyle \textrm{ess}\,{\rm sup}_{\omega \in \Omega }D_V(0,\omega )$ converges to $0$ , which implies that the result in Theorem 3.1 is consistent with the uniform linear motion of the trivial solution.

Corollary 3.1. Suppose that $(X^{(n)},V^{(n)})$ is a sequence of solution processes of the system (3.1) satisfying $(\mathcal{F}1)-(\mathcal{F}5)$ and

(3.18) \begin{equation} \sup _{n\in \mathbb{N}}\textrm{ess}\,{\rm sup}_{\omega \in \Omega }D_{X^{(n)}}(0,\omega )\lt \infty,\quad \lim _{n\to \infty }\textrm{ess}\,{\rm sup}_{\omega \in \Omega }D_{V^{(n)}}(0,\omega )=0. \end{equation}

Then, we have

\begin{equation*}\lim _{n\to \infty }\mathbb {P}\left (w\in \Omega :\,(X^{(n)},V^{(n)})(\omega )\,{\rm exhibits\,asymptotic\,flocking}\right )=1.\end{equation*}

Proof. Note that the initial velocity diameter $D_V(0,\omega )$ only affects $(\mathcal{F}6)$ . To meet the condition $(\mathcal{F}6)$ , we set

\begin{equation*}\overline {M}=e,\quad D_{X}^\infty =\sup _{n\in \mathbb {N}}\textrm {ess}\,{\rm sup}_{\omega \in \Omega }D_{X^{(n)}}(0,\omega )+\frac {1}{\phi (0)}. \end{equation*}

Then, $(\mathcal{F}6)$ holds true for $(X^{(n)},V^{(n)})$ if

(3.19) \begin{equation} \textrm{ess}\,{\rm sup}_{\omega \in \Omega }D_{V^{(n)}}(0,\omega )\lt \min \left \{\frac{1}{e\phi (0)C_0},\sqrt{2} \right \}, \end{equation}

where $C_0$ is the number determined by $\bar{n}$ :

\begin{equation*} \begin{aligned} &C=\left (\frac{m\phi (D_X^\infty )\exp (-\phi (0)M\bar{n})(N-1)^{-rM\phi (0)}}{N}\right )^{N-1},\\ &C_0=M(N-1)\sum _{l=1}^\infty \Bigg [\big (\bar{n}+r\log l(N-1)\big )\exp \bigg (-\frac{C\left (l^{1-rM(N-1)\phi (0)}-1\right )}{1-rM(N-1)\phi (0)}\bigg )\Bigg ]. \end{aligned} \end{equation*}

In fact, every sufficiently large $\bar{n}$ is allowed in the condition $(\mathcal{F}5)$ , and $C_0=C_0(m,M,\phi,D_X^\infty,\bar{n})$ can be considered as an increasing function of $\bar{n}$ . By using the condition (3.18), one can see that for every sufficiently large $\bar{n}$ , there exists $n_0\in \mathbb{N}$ such that (3.19) holds for all $n\geq n_0$ . Therefore, we apply Theorem 3.1 to get

\begin{equation*} \begin{aligned} &\liminf _{n\to \infty }\mathbb{P}\left (w\in \Omega :\,(X^{(n)},V^{(n)})(\omega )\,\text{exhibits asymptotic flocking}\right )\\ &\geq \lim _{\bar{n}\to \infty } \exp \left (-\frac{R^2 \log R}{(R-1)^2} \sum _{l=1}^{N_G}(1-\mathcal{P}_l)^{\bar{n}-1}\right )\\ &=1, \end{aligned} \end{equation*}

which implies our desired result.

4. Numerical simulation

In this section, we perform a numerical simulation of the Cauchy problem (3.1), focusing on cases where theoretical predictions are relatively easy to obtain due to the simple structure of the interaction network.

Consider a system with three points, as shown in Figure 1, where particle 2 only affects particles 1 and 3, and no other interaction exists. Additionally, assume the following deterministic initial data so that we can control the simulation results more easily:

\begin{equation*} \begin{aligned} &x_1(0)=(0,1),\, x_2(0)=(0,0),\, x_3(0)=(0,-1),\\ &v_1(0)=(\cos \varepsilon,\sin \varepsilon ),\, v_2(0)=(1,0),\, v_3(0)=(\cos \varepsilon,-\sin \varepsilon ),\quad \varepsilon \in (0,1). \end{aligned} \end{equation*}

Then, we have

(4.1) \begin{equation} \begin{cases} \displaystyle \frac{dv_1}{dt}=\frac{1}{3}\chi _{12}^{\sigma }\phi (\|x_1-x_2\|)\left (v_2-\langle v_1,v_2\rangle v_1\right ),\\[5pt]\displaystyle \frac{dv_2}{dt}=0,\\[5pt] \displaystyle \frac{dv_3}{dt}=\frac{1}{3}\chi _{32}^{\sigma }\phi (\|x_3-x_2\|)\left (v_2-\langle v_3,v_2\rangle v_3\right ), \end{cases} \end{equation}

and the derivative of the inner product of velocities can be calculated as follows:

(4.2) \begin{equation} \frac{d}{dt}\langle v_i,v_2\rangle =\frac{1}{3}\chi _{i2}^{\sigma }\phi (\|x_i-x_2\|)(1-\langle v_i,v_2\rangle ^2),\quad i=1,3. \end{equation}

By using the primitive $\Phi (x)=\int _{0}^{x}\phi (y)dy$ , the following simple inequality can be obtained from (4.2):

(4.3) \begin{equation} \begin{aligned} \frac{d}{dt}\|v_i-v_2\|&=\frac{1}{2\|v_i-v_2\|}\frac{d}{dt}\|v_i-v_2\|^2\\ &=\frac{1}{2\|v_i-v_2\|}\frac{d}{dt}\left (2-2\langle v_i,v_2\rangle \right )\\ &=-\frac{1}{3\|v_i-v_2\|}\chi _{i2}^{\sigma }\phi (\|x_i-x_2\|)(1-\langle v_i,v_2\rangle ^2)\\ &=-\frac{1}{12}\chi _{i2}^{\sigma }\phi (\|x_i-x_2\|)\|v_i-v_2\|(4-\|v_i-v_2\|^2)\\ &\leq -\frac{1}{12}\chi _{i2}^{\sigma }(4-\|v_i-v_2\|^2)\phi (\|x_i-x_2\|)(v_i-v_2)\cdot \left (\frac{x_i-x_2}{\|x_i-x_2\|}\right )\\ &=-\frac{1}{12}\chi _{i2}^{\sigma }(4-\|v_i-v_2\|^2)\frac{d}{dt}\Phi (\|x_i-x_2\|). \end{aligned} \end{equation}

If there were no random selection of the digraph $\mathcal{G}$, so that $\chi _{12}^{\sigma }\equiv \chi _{32}^{\sigma }\equiv 1$ for all $t$, then (4.3) yields

(4.4) \begin{equation} \frac{d}{dt}\left (\log \frac{2+\|v_i-v_2\|}{2-\|v_i-v_2\|}+\frac{1}{3}\Phi (\|x_i-x_2\|) \right )\leq 0, \end{equation}

and if the initial data satisfy

(4.5) \begin{equation} \frac{1}{3}\int _{\|x_i(0)-x_2(0)\|}^{\infty }\phi (x)dx\gt \log \frac{2+\|v_i(0)-v_2(0)\|}{2-\|v_i(0)-v_2(0)\|}, \end{equation}

we have

\begin{equation*} \begin{aligned} \limsup _{t\to \infty } \frac{1}{3}\Phi (\|x_i(t)-x_2(t)\|)&\leq \log \frac{2+\|v_i(0)-v_2(0)\|}{2-\|v_i(0)-v_2(0)\|}+\frac{1}{3}\Phi (\|x_i(0)-x_2(0)\|)\\ &\lt \frac{1}{3}\lim _{x\to \infty }\Phi (x), \end{aligned} \end{equation*}

which implies the existence of a finite upper bound $D_X^\infty$ for $\|x_i-x_2\|$. One can then apply (4.3) to obtain

\begin{equation*} \begin{aligned} \frac{d}{dt}\log \|v_i-v_2\|&=-\frac{1}{12}\phi (\|x_i-x_2\|)(4-\|v_i-v_2\|^2)\\ &\leq -\frac{1}{12}\phi (D_X^\infty )(4-\|v_i(0)-v_2(0)\|^2), \end{aligned} \end{equation*}

which shows the exponential convergence of $\|v_i-v_2\|$, so that asymptotic flocking emerges. If $\|x_i(0)-x_2(0)\|=1$ and $\phi (x)=\frac{1}{(1+x^2)^2}$, the left-hand side of (4.5) is

\begin{equation*}\frac {1}{3}\cdot {\left [\frac {1}{2}\left (\frac {x}{x^2+1}+\arctan x\right ) \right ]}_{1}^{\infty }=\frac {\pi -2}{24}\simeq 0.047566, \end{equation*}

and (4.5) is equivalent to

\begin{align*} 2-\frac{4}{1+\exp \left (\frac{\pi -2}{24}\right )}\gt \|v_i(0)-v_2(0)\|=2\sin \frac{\varepsilon }{2}\Longleftrightarrow \varepsilon \lt 0.0475618\ldots. \end{align*}
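This numerical threshold can be reproduced directly; the following lines are only a mechanical check of the closed-form bound above, not part of the model.

```python
import math

# (1/3) * integral of phi from 1 to infinity, phi(x) = 1/(1+x^2)^2, in closed form
lhs = (1.0 / 3.0) * 0.5 * (math.pi / 2 - (0.5 + math.pi / 4))
assert abs(lhs - (math.pi - 2) / 24) < 1e-15

# Largest admissible ||v_i(0) - v_2(0)||, and the resulting bound on epsilon
d_max = 2 - 4 / (1 + math.exp((math.pi - 2) / 24))
eps_max = 2 * math.asin(d_max / 2)   # since ||v_i(0) - v_2(0)|| = 2 sin(eps/2)
print(f"{eps_max:.7f}")              # 0.0475619
```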

Below, Figure 2 shows the trajectories of the three particles when $\chi _{12}^{\sigma }\equiv \chi _{32}^{\sigma }\equiv 1$, $\|x_i(0)-x_2(0)\|=1$ and $\phi (x)=\frac{1}{(1+x^2)^2}$. For the numerical experiment, we simply used the first-order (forward) Euler method and plotted trajectories for a total of 100,000 s with a time step $\Delta t=\,$ 0.1 s. Although the horizontal axis in the plot is the $x$-coordinate rather than time, it can be read as the evolution of the $y$-coordinates in time, since the velocities of the three particles remain close to $(1,0)$. From these results, we can see that our theoretical prediction of the sufficient condition for flocking is nearly optimal, even in the presence of numerical error.

Figure 1. Interaction network.

Figure 2. Trajectories of three particles, $\chi_{12}^{\sigma}\equiv \chi_{32}^{\sigma}\equiv 1$ .
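The deterministic experiment of Figure 2 can be sketched in a few lines. The step size and final time below are illustrative (much shorter than the 100,000 s run reported above), and the velocities are renormalised at each step as a numerical safeguard for the unit-speed constraint; this safeguard is our addition, not part of the scheme described in the text.

```python
import numpy as np

def phi(r):
    return 1.0 / (1.0 + r * r) ** 2

def simulate(eps, T=200.0, dt=0.01, chi=lambda t: (1.0, 1.0)):
    """Forward-Euler integration of the three-particle system (4.1).

    chi(t) returns the current pair (chi_12, chi_32); the default keeps
    both interactions switched on for all time, as in Figure 2.
    """
    x = np.array([[0.0, 1.0], [0.0, 0.0], [0.0, -1.0]])
    v = np.array([[np.cos(eps), np.sin(eps)], [1.0, 0.0],
                  [np.cos(eps), -np.sin(eps)]])
    for n in range(round(T / dt)):
        c12, c32 = chi(n * dt)
        dv = np.zeros_like(v)
        for i, c in ((0, c12), (2, c32)):
            w = c * phi(np.linalg.norm(x[i] - x[1])) / 3.0
            dv[i] = w * (v[1] - np.dot(v[i], v[1]) * v[i])
        x += dt * v
        v += dt * dv
        v /= np.linalg.norm(v, axis=1, keepdims=True)  # keep unit speed numerically
    return x, v

x, v = simulate(eps=0.02)   # well below the threshold of roughly 0.0476
```

For $\varepsilon = 0.02$, for which (4.5) guarantees flocking, the velocity diameter contracts by several orders of magnitude over this short run, consistent with the exponential decay derived above.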

On the other hand, the least connected way for the union of graphs to have a spanning tree is $N_G=2$ and $P_1+P_2=1$, where $\mathcal{G}_1$ and $\mathcal{G}_2$ contain only the single edge $(2\to 1)$ and $(2\to 3)$, respectively. In this case, since each $\chi _{ij}^{\sigma }$ is a component of the adjacency matrix of $\mathcal{G}_{\sigma }$, the sum of $\chi _{12}^{\sigma (t,\omega )}$ and $\chi _{32}^{\sigma (t,\omega )}$ must be 1 for all $t$ and $\omega$. However, even if all constants are fixed, it is very difficult to estimate the exact value of $C_0$ in $(\mathcal{F}6)$ because the series defining $C_0$ converges very slowly. For example, if we take $N=3, P_1=P_2=\frac{1}{2}$ and $m=M=0.05$, then the conditions we obtain from $(\mathcal{F}5)$ are

\begin{equation*} \begin{aligned} r=\frac{R}{\log 2}\lt \frac{1}{0.05\cdot 2\cdot 1},\,\frac{1}{2^{\bar{n}-1}}\leq 1-\frac{1}{R},\, \left (\frac{0.05\cdot 1\cdot \exp \left (-1\cdot 0.05\cdot \bar{n} \right )\cdot 2^{-r\cdot 0.05\cdot 1}}{3}\right )^2\leq \log 2, \end{aligned} \end{equation*}

where the last condition holds for every $r\gt 0$ and $\bar{n}\in \mathbb{N}$. If we set $R = 5\log 2$, any integer $\bar{n}\geq 2$ satisfies the condition $(\mathcal{F}5)$, and then the probability guaranteed in Theorem 3.1 can be maximised by choosing the largest $\bar{n}$ that satisfies $(\mathcal{F}6)$.
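The three displayed inequalities can be checked mechanically. The sketch below only substitutes the constants fixed above ($m=M=0.05$, $P_1=P_2=\frac12$, $R=5\log 2$) and searches for admissible $\bar{n}$.

```python
import math

m = M = 0.05
R = 5 * math.log(2)
r = R / math.log(2)                       # r = 5

# First condition: r < 1/(0.05 * 2 * 1) = 10
assert r < 1 / (0.05 * 2 * 1)

# Second condition: 1/2^(n-1) <= 1 - 1/R; smallest admissible n
n_bar = next(n for n in range(1, 100) if 0.5 ** (n - 1) <= 1 - 1 / R)
print(n_bar)                              # 2

# Third condition holds for every r > 0 and n >= 1, since the LHS is tiny
lhs = ((0.05 * 1 * math.exp(-1 * 0.05 * n_bar) * 2 ** (-r * 0.05 * 1)) / 3) ** 2
assert lhs <= math.log(2)
```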

In Figure 3, we show the trajectories of the three particles when $\chi _{12}^{\sigma _{t_n}}=1-\chi _{32}^{\sigma _{t_n}}$ is the $n$th sample from the distribution $\text{Bernoulli}(\frac{1}{2})$, with $\|x_i(0)-x_2(0)\|=1$ and $\phi (x)=\frac{1}{(1+x^2)^2}$ as in Figure 2. Unlike in Figure 2, we cannot explicitly find the exact value of $\varepsilon$ at which the long-time behaviour changes, but we can at least observe that the distance between particles diverges to infinity at $\varepsilon =0.04$, while asymptotic flocking occurs at $\varepsilon =0.02$.

Figure 3. Trajectories of three particles, $\chi_{12}^{\sigma_{t_n}}=1-\chi_{32}^{\sigma_{t_n}}\overset{\text{i.i.d}}{\sim}\text{Bernoulli}(\frac{1}{2})$ .
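The switching experiment of Figure 3 can be sketched by resampling the active edge at regular intervals. The resampling interval below matches the scale $m=M=0.05$ used above, while the seed and final time are illustrative; this is one reproducible sample path, not the full ensemble of runs reported in the figures.

```python
import numpy as np

def phi(r):
    return 1.0 / (1.0 + r * r) ** 2

rng = np.random.default_rng(0)             # fixed seed: one reproducible sample path
eps, dt, T, tau = 0.02, 0.01, 400.0, 0.05  # tau: resampling interval

x = np.array([[0.0, 1.0], [0.0, 0.0], [0.0, -1.0]])
v = np.array([[np.cos(eps), np.sin(eps)], [1.0, 0.0],
              [np.cos(eps), -np.sin(eps)]])

chi12 = 1.0
for n in range(round(T / dt)):
    if n % round(tau / dt) == 0:           # draw which single edge is active, so that
        chi12 = float(rng.integers(2))     # chi_12 + chi_32 = 1 at every time
    dv = np.zeros_like(v)
    for i, c in ((0, chi12), (2, 1.0 - chi12)):
        w = c * phi(np.linalg.norm(x[i] - x[1])) / 3.0
        dv[i] = w * (v[1] - np.dot(v[i], v[1]) * v[i])
    x += dt * v
    v += dt * dv
    v /= np.linalg.norm(v, axis=1, keepdims=True)  # keep unit speed numerically
```

By (4.2), $\|v_i-v_2\|$ is non-increasing along every sample path; whether it decays to zero (flocking) or the positions diverge depends on the realisation, which is why the result of Theorem 3.1 is probabilistic.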

The first feature visible in the repeated experiments at $\varepsilon =0.02$ is that asymptotic flocking occurs with a higher probability than the theory predicts. Although not shown in Figure 4, flocking in fact occurred in every single run of our experiments. The second feature is that the diameter of the three particles always converged to approximately $4.5$, regardless of whether particle 1 or particle 3 moved farther away from particle 2. This is natural: the more often a particle's interaction is switched off, the farther it drifts from particle 2, and the sum of $\chi _{12}^\sigma$ and $\chi _{32}^\sigma$ is identically equal to 1 in this system.

Figure 4. Three different simulations at $\varepsilon =0.02$ .

In Figure 5, we vary the size of $M=m$ while keeping all other conditions the same and plot the resulting trajectories. From these three experiments, we observe that flocking tends to be harder to guarantee for larger $M=m$. This has a theoretical interpretation: a particle whose interaction is switched off for a duration $M$ moves away from the other particle for a long time without any alignment force, and once the two particles have drifted apart, the interaction becomes too weak to reduce their velocity difference enough to cause flocking. Therefore, to guarantee flocking for $M \gt 0$, the interaction must be stronger than in the deterministic example in Figure 2, and this effect grows as $M$ increases. The sufficient condition $(\mathcal{F}5)$ we presented also imposes an upper bound on the value of $M$ that allows flocking, namely $M\lt \frac{\log 2}{2}$ in the current setting. Although flocking actually occurred even at the larger value $M=0.5$, it is clear that the size of $M$ affects whether flocking occurs.

Figure 5. Three different simulations at different $M$ .

Finally, we present experimental results demonstrating that flocking can either occur or fail to occur depending on the sampling, so that computing the probability of flocking, as in this paper, is indeed the appropriate form of result. In Figure 6, we ran three experiments with all parameters set as in Figure 4 except for $\varepsilon$, which was set to $0.022$. Although flocking occurs under conditions considerably more lenient than the sufficient conditions $(\mathcal{F}1)-(\mathcal{F}6)$ we have presented, we leave open the possibility that this is not merely a technical limitation but also a special property of the examples used in our numerical experiments.

Figure 6. Three different simulations at $\varepsilon =0.022$ .

5. Conclusion

In this paper, we presented a sufficient framework, in terms of initial data and system parameters, for the asymptotic flocking of the Cucker–Smale model with a unit-speed constraint and randomly switching topology. For this, we wrote the given dynamical system in explicit form by using the state-transition matrix of its homogeneous counterpart. We then used the relation between the ergodicity coefficient and the velocity diameter to show that asymptotic flocking occurs whenever the event that the union of the network topologies over some time interval contains a spanning tree occurs infinitely many times. Subsequently, we provided a lower bound on the probability of this event, which therefore becomes a lower bound on the probability of asymptotic flocking. In particular, we verified that the probability of asymptotic flocking converges to $1$ when the sufficient framework $(\mathcal{F}1)-(\mathcal{F}5)$ holds and $\displaystyle \sup _{\omega \in \Omega }D_V(0,\omega )$ converges to $0$.

Acknowledgements

The work of H. Ahn was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (2022R1C12007321).

Competing interests

All authors declare that they have no conflicts of interest.

References

Ahn, H. (2023) Emergent behaviors of thermodynamic Cucker–Smale ensemble with a unit-speed constraint. Discrete Contin. Dyn. Syst. Ser. B 28(9), 4800–4825.
Ahn, H., Byeon, J. & Ha, S.-Y. (2023) Interplay of unit-speed constraint and singular communication in the thermodynamic Cucker–Smale model. Chaos 33, Paper no. 123132, 20 pp.
Chen, G., Liu, Z. & Guo, L. (2014) The smallest possible interaction radius for synchronization of self-propelled particles. SIAM Rev. 56(3), 499–521.
Cho, H., Dong, J.-G. & Ha, S.-Y. (2021) Emergent behaviors of a thermodynamic Cucker–Smale flock with a time delay on a general digraph. Math. Methods Appl. Sci. 45(1), 164–196.
Choi, S.-H. & Ha, S.-Y. (2018) Interplay of the unit-speed constraint and time-delay in Cucker–Smale flocking. J. Math. Phys. 59(8), 082701.
Choi, S.-H. & Ha, S.-Y. (2016) Emergence of flocking for a multi-agent system moving with constant speed. Commun. Math. Sci. 14, 953–972.
Choi, S.-H. & Seo, H. (2023) Unit speed flocking model with distance-dependent time delay. Appl. Anal. 102, 2338–2364.
Choi, Y.-P., Ha, S.-Y. & Li, Z. (2017) Emergent dynamics of the Cucker–Smale flocking model and its variants. In Bellomo, N., Degond, P. & Tadmor, E. (editors), Active Particles Vol. I: Theory, Models, Applications, Modeling and Simulation in Science and Technology, Springer, Birkhäuser.
Choi, Y.-P., Kalsie, D., Peszek, J. & Peters, A. (2019) A collisionless singular Cucker–Smale model with decentralized formation control. SIAM J. Appl. Dyn. Syst. 18, 1954–1981.
Cho, J., Ha, S.-Y., Huang, F., Jin, C. & Ko, D. (2016) Emergence of bi-cluster flocking for agent-based models with unit speed constraint. Anal. Appl. (Singap.) 14, 39–73.
Cho, J., Ha, S.-Y., Huang, F., Jin, C. & Ko, D. (2016) Emergence of bi-cluster flocking for the Cucker–Smale model. Math. Models Methods Appl. Sci. 26, 1191–1218.
Cucker, F. & Smale, S. (2007) Emergent behavior in flocks. IEEE Trans. Automat. Control 52(5), 852–862.
Dong, J.-G., Ha, S.-Y., Jung, J. & Kim, D. (2020) On the stochastic flocking of the Cucker–Smale flock with randomly switching topologies. SIAM J. Control Optim. 58(4), 2332–2353.
Dong, J.-G., Ha, S.-Y. & Kim, D. (2019) Interplay of time delay and velocity alignment in the Cucker–Smale model on a general digraph. Discrete Contin. Dyn. Syst. Ser. B 24, 5569–5596.
Dong, J.-G., Ha, S.-Y. & Kim, D. (2019) Emergent behaviors of continuous and discrete thermomechanical Cucker–Smale models on general digraphs. Math. Models Methods Appl. Sci. 29(4), 589–632.
Dong, J.-G. & Qiu, L. (2017) Flocking of the Cucker–Smale model on general digraphs. IEEE Trans. Automat. Control 62(10), 5234–5239.
Ha, S.-Y., Kim, D. & Schlöder, F. W. (2021) Emergent behaviors of Cucker–Smale flocks on Riemannian manifolds. IEEE Trans. Automat. Control 66(7), 3020–3035.
Ha, S.-Y., Ko, D. & Zhang, Y. (2018) Remarks on the coupling strength for the Cucker–Smale with unit speed. Discrete Contin. Dyn. Syst. 38(6), 2763–2793.
Ha, S.-Y. & Liu, J.-G. (2009) A simple proof of Cucker–Smale flocking dynamics and mean-field limit. Commun. Math. Sci. 7(2), 297–325.
Ha, S.-Y. & Li, Z. (2015) On the Cucker–Smale flocking with alternating leaders. Quart. Appl. Math. 73(4), 693–709.
Ha, S.-Y., Li, Z., Slemrod, M. & Xue, X. (2014) Flocking behavior of the Cucker–Smale model under rooted leadership in a large coupling limit. Quart. Appl. Math. 72(4), 689–701.
Ha, S.-Y. & Ruggeri, T. (2017) Emergent dynamics of a thermodynamically consistent particle model. Arch. Ration. Mech. Anal. 223(3), 1397–1425.
Karper, T. K., Mellet, A. & Trivisa, K. (2015) Hydrodynamic limit of the kinetic Cucker–Smale flocking model. Math. Models Methods Appl. Sci. 25(1), 131–163.
Kar, S. & Moura, J. M. F. (2010) Distributed consensus algorithms in sensor networks: Quantized data and random link failures. IEEE Trans. Signal Process. 58(3), 1383–1400.
Li, H., Liao, X., Huang, T., Zhu, W. & Liu, Y. (2015) Second-order global consensus in multiagent networks with random directional link failure. IEEE Trans. Neural Netw. Learn. Syst. 26(3), 565–575.
Li, Z. & Xue, X. (2010) Cucker–Smale flocking under rooted leadership with fixed and switching topologies. SIAM J. Appl. Math. 70(8), 3156–3174.
Mucha, P. B. & Peszek, J. (2018) The Cucker–Smale equation: Singular communication weight, measure-valued solutions and weak-atomic uniqueness. Arch. Ration. Mech. Anal. 227(1), 273–308.
Pignotti, C. & Vallejo, I. R. (2018) Flocking estimates for the Cucker–Smale model with time lag and hierarchical leadership. J. Math. Anal. Appl. 464(2), 1313–1332.
Ru, L., Li, X., Liu, Y. & Wang, X. (2021) Flocking of Cucker–Smale model with unit speed on general digraphs. Proc. Am. Math. Soc. 149(10), 4397–4409.
Shen, J. (2007) Cucker–Smale flocking under hierarchical leadership. SIAM J. Appl. Math. 68(3), 694–719.
Sontag, E. D. (1998) Mathematical Control Theory, 2nd ed., Texts in Applied Mathematics, Vol. 6, Springer-Verlag.
Vicsek, T., Czirók, A., Ben-Jacob, E., Cohen, I. & Schochet, O. (1995) Novel type of phase transition in a system of self-driven particles. Phys. Rev. Lett. 75(6), 1226–1229.
Wu, C. W. (2006) Synchronization and convergence of linear dynamics in random directed networks. IEEE Trans. Automat. Control 51(7), 1207–1210.