
Distribution of consensus in a broadcast-based consensus algorithm with random initial opinions

Published online by Cambridge University Press:  09 May 2023

Shigeo Shioda*
Affiliation:
Chiba University
Dai Kato*
Affiliation:
Chiba University
*
*Postal address: Graduate School of Engineering, Chiba University, 1-33, Yayoi, Inage, Chiba, 263-8522 Japan.

Abstract

We study the distribution of the consensus formed by a broadcast-based consensus algorithm for cases in which the initial opinions of agents are random variables. We first derive two fundamental equations for the time evolution of the average opinion of agents. Using the derived equations, we then investigate the distribution of the consensus in the limit in which agents do not have any mutual trust, and show that the consensus without mutual trust among agents is in sharp contrast to the consensus with complete mutual trust in the statistical properties if the initial opinion of each agent is integrable. Next, we provide the formulation necessary to mathematically discuss the consensus in the limit in which the number of agents tends to infinity, and derive several results, including a central limit theorem concerning the consensus in this limit. Finally, we study the distribution of the consensus when the initial opinions of agents follow a stable distribution, and show that the consensus also follows a stable distribution in the limit in which the number of agents tends to infinity.

Type
Original Article
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

Consensus formation is a problem in which agents initially having different opinions mutually exchange and thereby update their opinions using a distributed algorithm in order to achieve a consensus [Reference DeGroot5]. In the past, this problem has appeared in a number of different contexts, including distributed computation [Reference Tsitsiklis18], load balancing in computer networks [Reference Muthukrishnan, Ghosh and Schultz11, Reference Rabani, Sinclair and Wanka14], distributed data fusion [Reference Xiao, Boyd and Lall19] and clock synchronization [Reference Li and Rus10] in sensor networks, coordinate control of mobile agents [Reference Olfati-Saber and Murray13], and opinion formation on social networks [Reference Acemoğlu, Como, Fagnani and Ozdaglar1–Reference Altafini and Lini3, Reference Frasca, Ravazzi, Tempo and Ishii8].

In the present paper, we consider a specific consensus algorithm, namely, the broadcast-based consensus algorithm. In this algorithm, at each step one agent is chosen at random, with a given probability, to broadcast its opinion to its neighboring agents. Agents receiving an opinion from their common neighbor compute the weighted average of their own opinions and the received opinion. The broadcast-based algorithm is very amenable to implementation because it can exploit the broadcast nature of the wireless communication environment and does not require bidirectional communication among agents (nodes) [Reference Yang and Blum20]. The broadcast-based algorithm also has high affinity with social networks because, for example, Twitter users very frequently broadcast a received tweet to all of their followers, which is usually referred to as retweeting. In addition to these features, it has been shown that the broadcast-based algorithm yields a consensus with probability one, and the obtained consensus value is, as expected, equal to the average of the initial opinions of the agents if the agents broadcast their opinions with equal frequency [Reference Aysal, Yildiz, Sarwate and Scaglione4, Reference Fagnani and Zampieri7]. Therefore the broadcast-based consensus-forming algorithm has recently been investigated in several studies [Reference Aysal, Yildiz, Sarwate and Scaglione4, Reference Fagnani and Zampieri7, Reference Silvestre, Hespanha and Silvestre17, Reference Yang and Blum20].

Note that the consensus achieved in the algorithm is not generally a constant, but instead largely varies depending on the order in which agents broadcast. For application of a consensus algorithm to sensor data fusion or coordinate control of mobile agents, the obtained consensus should, as far as possible, be kept around the average of the initial sensor data or the initial coordinates of the agents. In addition, in the formation of consensus on social networks, it is desirable that the obtained consensus is approximately equal to the average of the initial opinions of all users without being strongly influenced by ‘influencers’. Several studies [Reference Aysal, Yildiz, Sarwate and Scaglione4, Reference Fagnani and Zampieri7] investigated the mean deviation of the obtained consensus value from the average of the initial opinions as a metric of the performance of the consensus-forming algorithm. However, little is known about the probability laws of the obtained consensus, not only for broadcast-based algorithms, but also for other general consensus-forming algorithms.

Previously, we investigated the distribution of the consensus obtained in the broadcast-based consensus algorithm and found that the obtained consensus has a complicated probabilistic behavior (e.g. it may follow a Cantor distribution), even if the number of agents is two [Reference Katoh and Shioda9, Reference Shioda15]. We also numerically found that if the initial opinion of the agents follows a stable distribution, such as a Gaussian distribution or a Lévy distribution, the distribution of the consensus approaches the distribution of the initial opinion (with different parameter values) as the number of agents increases. In the present paper, we formally extend our previous results to the case in which the initial opinion of each agent is a random variable. This extension is very natural when applying a consensus algorithm to real engineering issues, such as sensor-data fusion and opinion formation on social networks. For example, the data collected by a sensor, corresponding to the initial opinion of an agent, is usually a random variable because of the random noise included in the data. The opinion of each user of social network services is also a random variable that varies depending on the feelings of the user and the social environment at that time.

In the present paper, after presenting the problem formulation, we first derive two fundamental equations for the time evolution of the weighted average of the opinions of the agents (Proposition 3.1 and Corollary 3.1). The derived equations reveal, for example, that the consensus is expressed as a weighted average of the initial opinions of agents. Based on the derived equations, we then investigate the distribution of the consensus in the limit where agents do not have any mutual trust (Theorem 4.1), and show that the consensus obtained is in sharp contrast, in its statistical properties, to that with complete mutual trust if the initial opinion of each agent is integrable. Next, we provide the formulation necessary to mathematically discuss the consensus in the limit in which the number of agents tends to infinity, and derive several results concerning the consensus in this limit (Theorem 5.1 and Corollary 5.1). We also show that a central limit theorem for the consensus holds in the limit of an increasing number of agents together with decreasing mutual trust between agents (Theorem 5.2). If the initial opinions of agents follow a common stable distribution, then the consensus is expected to follow the same stable distribution because the consensus is expressed as a weighted average of the initial opinions of agents. Thus we finally investigate the distribution of the consensus when each agent's initial opinion follows a stable distribution, and derive an analytical expression for the consensus (Theorem 6.2). The derived expression reveals that the consensus does not necessarily follow a stable distribution, even if the initial opinions do. However, we find a simple relationship between the statistical dispersion of the consensus and that of the initial opinion (Corollary 6.1). We finally show that the consensus follows a stable distribution in the limit of an infinite number of agents (Theorem 6.2 and Corollary 6.1). The last result was demonstrated numerically in previous studies [Reference Katoh and Shioda9, Reference Shioda15]; the present paper provides a mathematically formal proof.

The remainder of the present paper is organized as follows. In Section 2 we present the problem formulation. Then, in Section 3, we derive two fundamental equations for the time evolution of the weighted average of the opinions of the agents. In Section 4 we consider the distribution of the consensus in the limit where agents do not have any mutual trust. In Section 5 we provide the formulation necessary to mathematically discuss consensus in the limit in which the number of agents tends to infinity, and derive several results concerning the consensus in this limit. In Section 6 we consider the case in which the initial opinions of agents follow a common stable distribution. In Section 7 we conclude the paper with some comments on unresolved issues and prospects for their resolution.

2. Problem formulation

We consider N agents interacting on a complete graph. The agents are numbered from 1 to N, and we write ${\mathcal N}=\{1,\ldots,N\}$. The agents have their own opinions, and the opinion of each agent is expressed as a real number. Let $X_n(t)$ denote the opinion of agent $n\in{\mathcal N}$ at time $t$. The initial opinions of agents $\{X_n(0)\}_{n\in {\mathcal N}}$ are assumed to be statistically independent and identically distributed, and they are finite with probability one. The agents broadcast their opinions to each other as well as update their opinions at discrete times $t\in\natural$ in the following manner. At each discrete time, one of the agents broadcasts its opinion to its neighbors. The probability that agent n broadcasts its opinion at each discrete time is denoted by $p_n$ . We assume that $p_n>0$ for all $n\in {\mathcal N}$ and $\sum_{n\in {\mathcal N}}p_n=1$ . At each discrete time t, agent n updates its opinion based on the following equation:

(2.1) \begin{equation}X_n(t+1) = X_n(t) + \sum_{m\in {\mathcal N}} \delta_{m,e_t} r(X_m(t)-X_n(t)),\end{equation}

where $e_t\in{\mathcal N}$ is a random variable denoting the agent broadcasting its opinion at time t and

\begin{align*} \delta_{m,n}=\begin{cases}1,&m=n, \\[5pt] 0,&\text{otherwise.}\end{cases}\end{align*}

Note that $r\in (0,1]$ is a trust parameter that represents the confidence that each agent places on the opinion of other agents. We assume that $\{e_t\}_{t=0}^\infty$ are statistically independent and identically distributed random variables. We also assume that $\{e_t\}_{t=0}^\infty$ and $\{X_n(0)\}_{n\in {\mathcal N}}$ are mutually independent. Equation (2.1) can be expressed in matrix form as follows:

\begin{align*} {\boldsymbol{X}}(t+1)^{\textsf{T}}=Q^{(e_t)}{\boldsymbol{X}}(t)^{\textsf{T}},\end{align*}

where $Q^{(k)}=\{q^{(k)}_{mn}\}$ is a matrix expressing the opinion updates when agent k is broadcasting its opinion, and its elements are given as

\begin{align*} q^{(k)}_{mn}=\begin{cases}1, &m=n=k, \\[5pt] 1-r, &m= n\neq k, \\[5pt] r, & m\neq n,\ n=k, \\[5pt] 0,&\text{otherwise.}\end{cases}\end{align*}

Note that $Q^{(k)}$ is a stochastic matrix, that is, the sum of the elements of each row vector of $Q^{(k)}$ is equal to 1.
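As a concrete illustration, the update (2.1) and the matrices $Q^{(k)}$ can be checked numerically. The following is a minimal Python sketch of one update step (the helper name broadcast_matrix is ours, not from the paper):

```python
import numpy as np

def broadcast_matrix(N, k, r):
    """Q^{(k)}: update matrix applied when agent k (0-indexed) broadcasts."""
    Q = (1.0 - r) * np.eye(N)
    Q[:, k] += r  # every row gains weight r on the broadcaster's opinion
    return Q      # row k becomes the unit vector e_k: the broadcaster keeps its opinion

N, r = 4, 0.3
rng = np.random.default_rng(0)
X = rng.normal(size=N)          # initial opinions X_n(0)
k = 2                            # agent 2 broadcasts at t = 0
Q = broadcast_matrix(N, k, r)

# each row of Q^{(k)} sums to 1 (stochastic matrix)
assert np.allclose(Q.sum(axis=1), 1.0)

# matrix form X(1)^T = Q^{(k)} X(0)^T agrees with the update rule (2.1)
X1 = Q @ X
assert np.allclose(X1, X + r * (X[k] - X))
```

The assertions confirm that the matrix form reproduces the componentwise update and that each $Q^{(k)}$ is row-stochastic.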

We define $Q=\mathbb{E}\bigl[Q^{(e_t)}\bigr]=\sum_{k\in {\mathcal N}}p_k Q^{(k)}$. The mn-th element of Q is given as

\begin{align*} q_{mn}=\begin{cases}rp_n,& m\neq n, \\[5pt] 1-r(1-p_n),& m=n.\end{cases}\end{align*}

In the present paper, we mainly study $\bar X(t)=\boldsymbol{\pi}{\boldsymbol{X}}(t)^{\textsf{T}}$, where $\boldsymbol{\pi}=(p_1,\ldots,p_N)$, rather than studying the opinion vector ${\boldsymbol{X}}(t)=(X_1(t),\ldots,X_{N}(t))$ . Note that $\boldsymbol{\pi}$ is the left eigenvector of Q corresponding to an eigenvalue of one. Here $\bar X(t)$ can be considered as the weighted average of opinions of the agents because

\begin{align*} \bar X(t) = \sum_{n\in {\mathcal N}} p_n X_n(t),\end{align*}

and $p_n$ , which is the probability that agent n broadcasts its opinion, expresses the strength of the influence of agent n in the formation of the consensus. There are two reasons why we focus on $\bar X(t)$ rather than handling the opinion vector ${\boldsymbol{X}}(t)=(X_1(t),\ldots,X_{N}(t))$ . One reason is that using $\bar X(t)$ simplifies the discussion. The other reason comes from the properties of $\bar X(t)$ , which are appropriate for discussing the consensus. First, $\bar X(t)$ converges to the consensus as t goes to infinity, and thus the consensus can be known from $\bar X(t)$ . Second, $\bar X(t)$ is a martingale if $X_1(0)$ is integrable, because if $t\ge s$ ,

\begin{align*}\mathbb{E}[\bar X(t)\mid {\boldsymbol{X}}(s)]&=\mathbb{E}[\boldsymbol{\pi}{\boldsymbol{X}}(t)^{\textsf{T}}\mid {\boldsymbol{X}}(s)] \\[5pt] &=\mathbb{E}[\boldsymbol{\pi} Q^{(e_{t-1})}\times\cdots\times Q^{(e_s)}{\boldsymbol{X}}(s)^{\textsf{T}}\mid {\boldsymbol{X}}(s)] \\[5pt] &=\boldsymbol{\pi} \mathbb{E}[Q^{(e_{t-1})}]\times\cdots\times \mathbb{E}[Q^{(e_s)}]{\boldsymbol{X}}(s)^{\textsf{T}} \\[5pt] &=\boldsymbol{\pi} Q^{t-s}{\boldsymbol{X}}(s)^{\textsf{T}}\\[5pt] &=\boldsymbol{\pi} {\boldsymbol{X}}(s)^{\textsf{T}}\\[5pt] &=\bar{X}(s),\end{align*}

and thus

\begin{align*} \mathbb{E}[\bar X(t)\mid \bar X(s)]=\mathbb{E}[\mathbb{E}[\bar X(t)\mid {\boldsymbol{X}}(s)]\mid \bar X(s)]=\mathbb{E}[\bar X(s)\mid \bar X(s)]=\bar X(s).\end{align*}

Because of this property, the conditional expectation of the consensus given $\bar X(t)$ is equal to $\bar X(t)$ for all $t\ge 0$ , which will be shown at the end of this section.
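Both the eigenvector property of $\boldsymbol{\pi}$ and the martingale property of $\bar X(t)$ can be verified numerically. The following Python sketch (our own illustration, under the assumptions of this section) checks $\boldsymbol{\pi}Q=\boldsymbol{\pi}$ exactly and the identity $\mathbb{E}[\bar X(t)\mid {\boldsymbol{X}}(0)]=\bar X(0)$ by Monte Carlo simulation:

```python
import numpy as np

rng = np.random.default_rng(1)
N, r, T = 5, 0.4, 6
p = rng.dirichlet(np.ones(N))          # broadcast probabilities p_n (sum to 1)

# Q = E[Q^{(e_t)}]: q_{mn} = r p_n for m != n, q_{nn} = 1 - r(1 - p_n)
Q = r * np.tile(p, (N, 1))
np.fill_diagonal(Q, 1.0 - r * (1.0 - p))
assert np.allclose(p @ Q, p)           # pi is a left eigenvector with eigenvalue 1

# Monte Carlo check of the martingale property: E[bar X(T) | X(0)] = bar X(0)
X0 = rng.normal(size=N)
runs = 20000
Xs = np.tile(X0, (runs, 1))
for _ in range(T):
    e = rng.choice(N, size=runs, p=p)                     # broadcaster in each run
    Xs = Xs + r * (Xs[np.arange(runs), e][:, None] - Xs)  # update (2.1)
bar = Xs @ p
assert abs(bar.mean() - p @ X0) < 0.05
```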

Here we give a definition of obtaining the consensus.

Definition 2.1. If a random variable $X_c$ exists and satisfies

\begin{align*} \lim_{t\to\infty} {\boldsymbol{X}}(t)=(X_c,\ldots,X_c)\quad \text{with probability one},\end{align*}

then we say that a consensus is obtained and that it is equal to $X_c$ .

If agents whose initial opinions are constant and finite interact on a general directed graph, then the irreducibility of Q is a sufficient condition for obtaining a consensus in the broadcast-based consensus algorithm [Reference Katoh and Shioda9]. The present paper assumes that the graph is complete, $r>0$, and $p_n>0$ for all $n\in {\mathcal N}$, under which conditions Q is irreducible. Since the initial opinions are assumed to be finite with probability one, a consensus is reached with probability one in the problem formulation of the present paper. Note that the consensus is obtained if, for any $\epsilon> 0$,

(2.2) \begin{align}\lim_{t\to \infty} \mathbb{P}(\mathrm{d} X(t) >\epsilon) = 0,\end{align}

where $\mathrm{d} X(t)$ denotes the difference between the largest and smallest opinions at time t:

\begin{align*} \mathrm{d} X(t)=\max_{n\in{\mathcal N}} X_n(t)-\min_{n\in{\mathcal N}} X_n(t).\end{align*}
To see that (2.2) holds in the formulation of the present paper, note that $\mathrm{d} X(t) = (1-r)^t \mathrm{d} X(0)$ , and thus

\begin{align*}\mathbb{P}(\mathrm{d} X(t) >\epsilon)&=\mathbb{P}\bigl(\mathrm{d} X(0) > \epsilon (1-r)^{-t}\bigr) \\[5pt] &\le \mathbb{P}\bigl(2\max\{|X_1(0)|,\ldots,|X_N(0)|\} > \epsilon (1-r)^{-t}\bigr) \\[5pt] &=1-\mathbb{P}\biggl(\max\{|X_1(0)|,\ldots,|X_N(0)|\}\le \dfrac{\epsilon}{2} (1-r)^{-t}\biggr) \\[5pt] &=1-\mathbb{P}\biggl({-}\dfrac{\epsilon}{2} (1-r)^{-t}\le X_1(0) \le \dfrac{\epsilon}{2} (1-r)^{-t}\biggr)^N\\[5pt] &\to 0,\quad \text{as $t\to\infty$},\end{align*}

where in the last line we use the fact that $X_1(0)$ is finite with probability one. A definition of consensus formation similar to that by (2.2) will be used in Section 5 (Definition 5.1).
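The geometric contraction $\mathrm{d} X(t) = (1-r)^t\,\mathrm{d} X(0)$ used above can be observed directly in simulation. A short illustrative Python sketch (our own, assuming uniform broadcast probabilities):

```python
import numpy as np

rng = np.random.default_rng(2)
N, r, T = 6, 0.25, 12
X = rng.normal(size=N)
d0 = X.max() - X.min()            # dX(0)

for _ in range(T):
    e = rng.integers(N)           # uniform broadcaster (p_n = 1/N)
    X = X + r * (X[e] - X)        # update (2.1) on the complete graph

# the opinion spread contracts by exactly (1 - r) per step
assert np.isclose(X.max() - X.min(), (1 - r) ** T * d0)
```

The spread contracts deterministically regardless of which agent broadcasts, because each step applies the same affine map to every opinion.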

The notations frequently used in the present paper are summarized in Table 1.

Table 1. Notation.

3. Time evolution of average opinion

In this and the following sections, we denote the average opinion at time t, $\bar X(t)$ , by $\bar X^{(N)}(t,r)$ and the consensus, $X_c$ , by $X_c^{(N)}(r)$ in order to recall the dependence of the random variable on the trust parameter r and the number of agents N. Note that the average opinion at time 0 is denoted by $\bar X^{(N)}(0)$ because this opinion does not depend on r.

Proposition 3.1. We have

(3.1) \begin{equation}\bar X^{(N)}(t,r)=(1-r)^t \bar X^{(N)}(0)+r\sum_{k=0}^{t-1}(1-r)^{k}X_{e_k}(0),\quad t=1,2,\ldots .\end{equation}

Proof. $\bar X^{(N)}(t,r)$ can be written in the following form:

(3.2) \begin{equation}\bar X^{(N)}(t,r)=\boldsymbol{\pi}\Biggl(\prod_{k=1}^{t-1} Q^{(e_k)}\Biggr)Q^{(e_0)}{\boldsymbol{X}}(0)^{\textsf{T}}.\end{equation}

Observe that

(3.3) \begin{equation}Q^{(e_0)}{\boldsymbol{X}}(0)^{\textsf{T}}=(1-r){\boldsymbol{X}}(0)^{\textsf{T}}+r X_{e_0}(0)\textbf{1}^{\textsf{T}}\end{equation}

and

(3.4) \begin{equation}\Biggl(\prod_{k=1}^{t-1} Q^{(e_k)}\Biggr)\textbf{1}^{\textsf{T}}=\textbf{1}^{\textsf{T}},\end{equation}

where $\textbf{1}$ is a row vector having all elements equal to 1. Substituting (3.3) and (3.4) into (3.2) yields

(3.5) \begin{align}\bar X^{(N)}(t,r)&=(1-r)\boldsymbol{\pi}\Biggl(\prod_{k=1}^{t-1} Q^{(e_k)}\Biggr){\boldsymbol{X}}(0)^{\textsf{T}}+r X_{e_0}(0).\end{align}

Substituting

\begin{align*} Q^{(e_1)}{\boldsymbol{X}}(0)^{\textsf{T}}=(1-r){\boldsymbol{X}}(0)^{\textsf{T}}+r X_{e_1}(0)\textbf{1}^{\textsf{T}},\quad \Biggl(\prod_{k=2}^{t-1} Q^{(e_k)}\Biggr)\textbf{1}^{\textsf{T}}=\textbf{1}^{\textsf{T}},\end{align*}

into (3.5) yields

\begin{align*}\bar X^{(N)}(t,r)&=(1-r)^2\boldsymbol{\pi}\Biggl(\prod_{k=2}^{t-1} Q^{(e_k)}\Biggr){\boldsymbol{X}}(0)^{\textsf{T}}+(1-r)r X_{e_1}(0)+r X_{e_0}(0).\end{align*}

By iterating this procedure, we finally arrive at

\begin{align*}\bar X^{(N)}(t,r)&=(1-r)^{t-1} \boldsymbol{\pi}Q^{(e_{t-1})}{\boldsymbol{X}}(0)^{\textsf{T}}+r\sum_{k=0}^{t-2}(1-r)^{k}X_{e_k}(0) \\[5pt] &=(1-r)^t \bar X^{(N)}(0)+r\sum_{k=0}^{t-1}(1-r)^{k}X_{e_k}(0),\end{align*}

which completes the proof.

Proposition 3.1 implies that the consensus finally achieved is expressed as

\begin{align*} X_c^{(N)}(r)=r\sum_{k=0}^{\infty}(1-r)^{k}X_{e_k}(0).\end{align*}
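This geometric-series representation of the consensus can be checked against a direct simulation of (2.1). The following Python sketch (our illustration, truncating the infinite sum at a finite horizon) compares the two:

```python
import numpy as np

rng = np.random.default_rng(3)
N, r, T = 5, 0.3, 200            # T large enough that (1-r)^T is negligible
p = np.full(N, 1.0 / N)
X0 = rng.normal(size=N)
X = X0.copy()

history = []
for _ in range(T):
    e = rng.choice(N, p=p)
    history.append(e)
    X = X + r * (X[e] - X)

# all agents have (numerically) reached the consensus X_c
assert np.allclose(X, X[0])

# X_c = r * sum_k (1-r)^k X_{e_k}(0), as implied by Proposition 3.1
series = r * sum((1 - r) ** k * X0[e] for k, e in enumerate(history))
assert np.isclose(X[0], series, atol=1e-8)
```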

Equation (3.1) can be written in a different form, as shown in the following corollary.

Corollary 3.1. We have

(3.6) \begin{equation}\bar X^{(N)}(t,r)=\sum_{n\in {\mathcal N}} A_n(t,r) X_{n}(0),\quad A_n(t,r)= p_n +r\sum_{k=0}^{t-1}(1-r)^{k}\bigl(1_{(e_k=n)}-p_n\bigr),\end{equation}

where the random variables $A_1(t,r),\ldots,A_N(t,r)$ sum to 1. That is,

(3.7) \begin{equation}\sum_{n\in {\mathcal N}} A_n(t,r) =1.\end{equation}

Proof. Substituting $\bar X^{(N)}(0) = \sum_{n\in {\mathcal N}} p_n X_n(0)$ into (3.1) yields

\begin{align*}\bar X^{(N)}(t,r)&=(1-r)^t \bar X^{(N)}(0)+r\sum_{k=0}^{t-1}(1-r)^{k}X_{e_k}(0) \\[5pt] &=\sum_{n \in {\mathcal N}}\Biggl\{ p_n (1-r)^t+r\sum_{k=0}^{t-1}(1-r)^{k}1_{(e_k=n)}\Biggr\}X_{n}(0) \\[5pt] &=\sum_{n\in {\mathcal N}}\Biggl\{ p_n +r\sum_{k=0}^{t-1}(1-r)^{k}(1_{(e_k=n)}-p_n)\Biggr\}X_{n}(0),\end{align*}

where the last equality follows from $(1-r)^t=1-r\sum_{k=0}^{t-1}(1-r)^{k}$ . Equation (3.7) can be shown by direct calculation as follows:

\begin{align*}\sum_{n\in {\mathcal N}} A_n(t,r)&=\sum_{n\in {\mathcal N}} p_n + \sum_{n\in {\mathcal N}} r\sum_{k=0}^{t-1}(1-r)^k (1_{(e_k=n)}-p_n) \\[5pt] &=\sum_{n\in {\mathcal N}} p_n + r\sum_{k=0}^{t-1}(1-r)^k\Biggl(\sum_{n\in {\mathcal N}} 1_{(e_k=n)}-\sum_{n\in {\mathcal N}} p_n\Biggr) \\[5pt] &=1 + r\sum_{k=0}^{t-1}(1-r)^k(1-1)\\[5pt] &=1,\end{align*}

where the last equality follows from $\sum_{n\in {\mathcal N}} p_n=1$ and $\sum_{n\in {\mathcal N}} 1_{(e_k=n)}=1$ .

Note that $A_n(t,r)\in [0,1]$ for all $t\in \natural\cup\{\infty\}$ and $r\in (0,1]$, and, as shown in (3.7), $A_1(t,r),\ldots,A_N(t,r)$ are not mutually independent.
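A short numerical check of Corollary 3.1 (an illustrative Python sketch of ours; the vectorized computation of $A_n(t,r)$ is not from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)
N, r, T = 4, 0.35, 50
p = np.array([0.1, 0.2, 0.3, 0.4])
X0 = rng.normal(size=N)

e = rng.choice(N, size=T, p=p)   # broadcast sequence e_0, ..., e_{T-1}

# A_n(t,r) = p_n + r * sum_k (1-r)^k (1_{e_k = n} - p_n)
k = np.arange(T)
A = p + r * (((1 - r) ** k)[:, None] * ((e[:, None] == np.arange(N)) - p)).sum(axis=0)

assert np.isclose(A.sum(), 1.0)              # (3.7): the weights sum to 1
assert np.all(A >= 0) and np.all(A <= 1)     # each weight lies in [0, 1]

# bar X(T) computed by simulation matches sum_n A_n(T,r) X_n(0)
X = X0.copy()
for ek in e:
    X = X + r * (X[ek] - X)
assert np.isclose(p @ X, A @ X0)
```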

Corollary 3.1 implies that the consensus that is finally obtained is expressed as

(3.8) \begin{align}X_c^{(N)}(r)= \sum_{n\in {\mathcal N}} A_n(r) X_{n}(0), \end{align}
(3.9) \begin{align}A_n(r)&= p_n +r\sum_{k=0}^{\infty}(1-r)^{k}\bigl(1_{(e_k=n)}-p_n\bigr).\end{align}

Note that $\sum_{k=0}^K (1-r)^k (1_{(e_k=n)}-p_n)$ converges as K tends to infinity with probability one, and thus the right-hand side of (3.9) is well-defined, because

\begin{align*} \sum_{k=0}^\infty \mathrm{Var}\bigl[(1-r)^{k}\bigl(1_{(e_k=n)}-p_n\bigr)\bigr]=p_n(1-p_n)\sum_{k=0}^\infty (1-r)^{2k}<\infty\end{align*}

(for details see Theorem 2.5.6 of [Reference Durrett6]). Equations (3.6) and (3.8) imply that $\bar X^{(N)}(t,r)$ or $X_c^{(N)}(r)$ is a weighted average of the initial opinions of agents having weights that are random variables.

The expectation and variance of $A_n(t,r)$ are given as follows:

\begin{align*} \mathbb{E}[A_n(t,r)]=p_n,\quad \mathrm{Var}[A_n(t,r)]=\dfrac{r\bigl(1-(1-r)^{2t}\bigr)}{2-r}\,p_n(1-p_n).\end{align*}

As expected, if $X_1(0)$ is integrable, it follows from Corollary 3.1 that

\begin{align*} \mathbb{E}[\bar X^{(N)}(t,r)]=\sum_{n\in {\mathcal N}} \mathbb{E}[A_n(t,r)] \mathbb{E}[X_{n}(0)]=\mathbb{E}[X_{1}(0)]\sum_{n\in {\mathcal N}} p_n= \mathbb{E}[X_{1}(0)].\end{align*}

4. Consensus in the limit of no mutual trust

In the case of $r=0$ , the agents have no mutual trust, and thus consensus is not obtained because no agents change their opinions after time 0. The average opinion at $r=0$ ,

\begin{align*} \bar X^{(N)}(\infty,0)= \lim_{t\to\infty} \bar X^{(N)}(t,0)=\sum_{n\in {\mathcal N}} p_n X_n(0),\end{align*}

exists, but it is not the consensus at $r=0$ . Since the consensus is reached when $r>0$ and it is equal to $\bar X^{(N)}(\infty,r)$ , it is interesting to see whether $\bar X^{(N)}(\infty,r)$ converges to $\bar X^{(N)}(\infty,0)$ as $r\downarrow 0$ . In this section we investigate the convergence of $\bar X^{(N)}(\infty,r)$ in the limit $r\downarrow 0$ . In particular, we investigate whether $\lim_{r\downarrow 0} A_n(r)=p_n$ because $\lim_{r\downarrow 0} \bar X^{(N)}(\infty,r)= \bar X^{(N)}(\infty,0)$ if $\lim_{r\downarrow 0} A_n(r)=p_n$ for all $n\in {\mathcal N}$ . Note that the consensus in the limit $r\downarrow 0$ and that when $r=1$ are contrasting when $X_1(0)$ is integrable, which will be shown in this section.

In this section we use the notation $A_n(r,\omega)$ instead of $A_n(r)$ to denote explicitly that $A_n(r,\omega)$ is a mapping from the probability space $(\Omega,\mathbb{P})$ to $[0,1]$. Note that $\lim_{r\downarrow 0} A_n(r,\omega)$ is not necessarily equal to $p_n$ for all $\omega\in \Omega$ . For example, if agent n broadcasts its opinion at all $t=0,1,\ldots,$ then $\lim_{r\downarrow 0} A_n(r,\omega)=1$ , and if agent n never broadcasts its opinion, then $\lim_{r\downarrow 0} A_n(r,\omega)=0$ . However, as shown in the following lemma, $\lim_{r\downarrow 0} A_n(r,\omega)=p_n$ for almost all $\omega\in\Omega$ .

Lemma 4.1. We have

\begin{align*} \lim_{r\downarrow 0} A_n(r)=A_n(0)=p_n,\quad {with\ probability\ one}.\end{align*}

Proof. It suffices to show that $B_n(r)\to 0$ as $r\downarrow 0$ with probability one, where

\begin{align*} B_n(r)= r\sum_{k=0}^{\infty}(1-r)^{k}\bigl(1_{(e_k=n)}-p_n\bigr).\end{align*}

It follows from the Chebyshev inequality that

\begin{align*} \mathbb{P}(|B_n(r)|>\epsilon)\le \dfrac{\mathrm{Var}[B_n(r)]}{\epsilon^2}=\dfrac{r\,p_n(1-p_n)}{(2-r)\epsilon^2}<\dfrac{r\,p_n(1-p_n)}{\epsilon^2}.\end{align*}

Let

\begin{align*} r_i=\alpha^{-i},\quad i=0,1,\ldots,\end{align*}

for any $\alpha>1$ . The Borel–Cantelli lemma implies that $B_n(r_i)\to 0$ as $i\to\infty$ with probability one because

\begin{align*}\sum_{i=0}^\infty \mathbb{P}(|B_n(r_i)|>\epsilon)<\sum_{i=0}^\infty\dfrac{\alpha^{-i}p_n(1-p_n)}{\epsilon^2}=\dfrac{p_n(1-p_n)}{(1-\alpha^{-1})\epsilon^2}<\infty.\end{align*}

Let

\begin{align*} {\mathcal E}_n=\Bigl\{\omega\in\Omega\colon \lim_{i\to\infty} B_n(r_i,\omega)=0\Bigr\}.\end{align*}

As we proved above, $\mathbb{P}({\mathcal E}_n)=1$ . Let

\begin{align*} B^\downarrow(r)= B_i^\downarrow = r_{i}\sum_{k=0}^\infty (1-r_{i+1})^k \bigl(1_{(e_k=n)}-p_n\bigr),\quad B^\uparrow(r)= B_i^\uparrow = r_{i+1}\sum_{k=0}^\infty (1-r_{i})^k \bigl(1_{(e_k=n)}-p_n\bigr)\quad\text{for } r\in (r_{i+1},r_{i}].\end{align*}

Since $r_{i+1}(1-r_{i})^k < r(1-r)^k < r_{i}(1-r_{i+1})^k$ for $r\in (r_{i+1},r_{i}]$ , we have $B^\uparrow(r) < B_n(r)<B^\downarrow(r)$ , and thus

\begin{align*} \lim_{r\downarrow 0} B^\uparrow(r)\le \lim_{r\downarrow 0} B_n(r)\le \lim_{r\downarrow 0} B^\downarrow(r).\end{align*}

Observe that

\begin{align*} B_i^\downarrow&= r_{i}\sum_{k=0}^\infty (1-r_{i+1})^k (1_{(e_k=n)}-p_n)\\[5pt] &=\alpha^{-i}\sum_{k=0}^\infty (1-\alpha^{-(i+1)})^k (1_{(e_k=n)}-p_n) \\[5pt] &=\alpha\alpha^{-(i+1)}\sum_{k=0}^\infty (1-\alpha^{-(i+1)})^k (1_{(e_k=n)}-p_n)\\[5pt] &=\alpha B_n(r_{i+1})\end{align*}

and

\begin{align*} B_i^\uparrow&= r_{i+1}\sum_{k=0}^\infty (1-r_{i})^k (1_{(e_k=n)}-p_n)\\[5pt] &=\alpha^{-(i+1)}\sum_{k=0}^\infty (1-\alpha^{-{i}})^k (1_{(e_k=n)}-p_n) \\[5pt] &=\alpha^{-1}\alpha^{-{i}}\sum_{k=0}^\infty (1-\alpha^{-{i}})^k (1_{(e_k=n)}-p_n)\\[5pt] &=\alpha^{-1} B_n(r_{i}).\end{align*}

It follows from these observations that on ${\mathcal E}_n$

\begin{align*} \lim_{r\downarrow 0} B^\downarrow(r)=\lim_{i\to \infty}\alpha B_n(r_{i+1}) =0,\quad \lim_{r\downarrow 0} B^\uparrow(r)=\lim_{i\to \infty}\alpha^{-1} B_n(r_{i}) =0,\end{align*}

which completes the proof.

Lemma 4.1 readily yields the following result.

Theorem 4.1. $\bar X^{(N)}(\infty,r)$ is right continuous as a function of r at $r=0$ with probability one. That is,

\begin{align*}\lim_{r\downarrow 0} X_c^{(N)}(r) &= \lim_{r\downarrow 0} \bar X^{(N)}(\infty,r) = \bar X^{(N)}(\infty,0)=\sum_{n\in {\mathcal N}} p_n X_{n}(0)\quad {with\ probability\ one.}\end{align*}

In order to compare with the results for $r\downarrow 0$ , we also show the results for $r=1$ , in which the agents trust each other completely. It follows from $A_n(t,1) = 1_{(e_0=n)}$ that

(4.1) \begin{equation}\bar X^{(N)}(t,1)= \sum_{n\in {\mathcal N}} 1_{(e_0=n)} X_{n}(0) = X_{e_0}(0).\end{equation}

Since (4.1) holds for $t=1,2,\ldots,$ we have $\lim_{t\to\infty} \bar X^{(N)}(t,1) = X_{e_0}(0)$ ; that is, the consensus is reached immediately at $t=1$ when $r=1$ . Moreover, since $e_0$ is independent of the initial opinions and these are identically distributed, $X_{e_0}(0)\stackrel{d}{=} X_1(0)$ . As such, we finally obtain

(4.2) \begin{equation}X_c^{(N)}(1)\stackrel{d}{=} X_1(0),\end{equation}

where $\stackrel{d}{=}$ indicates equality in the distribution.

Theorem 4.1 and equation (4.2) provide very contrasting results. In the limit $r\downarrow 0$ , the consensus converges to the weighted average of the initial opinions of agents, whereas, when $r=1$ , the consensus is equal in distribution to the initial opinion of an arbitrarily chosen agent. It is rather ironic that the consensus converges to the average of the initial opinions of agents as $r\downarrow 0$ , because the agents do not trust each other at all in this limit. We also note that if $X_1(0)$ is integrable, then the consensus in the limit $r\downarrow 0$ approaches $\mathbb{E}[X_1(0)]$ as $N\to\infty$ by the law of large numbers, while the consensus at $r=1$ does not depend on N. However, this is not always the case. If $X_1(0)$ follows a Cauchy distribution (and thus is not integrable), the achieved consensus follows the same Cauchy distribution as $X_1(0)$ , independently of the trust parameter r and the number of agents N, as shown in Section 6.
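The contrast between the two regimes can be illustrated by simulation. In the following Python sketch (our own illustration, for one fixed realization of the initial opinions), a small r yields a consensus concentrated around the weighted initial average, while $r=1$ reproduces a single initial opinion:

```python
import numpy as np

rng = np.random.default_rng(5)
N = 8
X0 = rng.normal(size=N)              # one fixed realization of the initial opinions
p = np.full(N, 1.0 / N)

def consensus(r, T, rng):
    """Run the broadcast dynamics for T steps and return bar X(T)."""
    X = X0.copy()
    for e in rng.choice(N, size=T, p=p):
        X = X + r * (X[e] - X)
    return p @ X                     # close to X_c once (1-r)^T is negligible

runs = 300
small = np.array([consensus(0.05, 400, rng) for _ in range(runs)])
full = np.array([consensus(1.0, 1, rng) for _ in range(runs)])

# r -> 0: the consensus concentrates around the weighted initial average (Theorem 4.1)
assert abs(small.mean() - p @ X0) < 0.1
assert small.std() < 0.5

# r = 1: the consensus equals X_{e_0}(0), a single initial opinion
assert np.abs(full[:, None] - X0[None, :]).min(axis=1).max() < 1e-9
```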

5. Limit of infinite number of agents

Numerically we found that the consensus tends to approach a Gaussian distribution as the number of agents N tends to infinity while the trust parameter r tends to zero (an example of the numerical results is shown at the end of this section). This fact suggests that a central limit theorem for the consensus holds when N is increased to infinity at an appropriate speed while r is decreased towards 0. Thus, in this section, we first provide the formulation necessary to study the consensus in the limit $N\to\infty$ while letting $p_n=1/N$ for all $n\in {\mathcal N}$ . Under the formulation provided, we then derive several results concerning the consensus in the limit $N\to\infty$ , which will be used in Section 6 to study the consensus when the initial opinions of agents follow a stable distribution. At the end of this section we discuss a central limit theorem for the consensus.

Proposition 5.1. If $X_1(0)$ is integrable, then

(5.1) \begin{equation}\lim_{N\to\infty} \mathbb{E}\bigl[\mathrm{e}^{\mathrm{i} \theta\bar X^{(N)}(t,r)}\bigr]=\mathrm{e}^{\mathrm{i} (1-r)^t\theta \mathbb{E}[X_1(0)]}\prod_{k=0}^{t-1}\varphi_X(r(1-r)^k\theta),\end{equation}

where $\varphi_X(\theta)$ is a characteristic function of $X_1(0)$ .

Proof. It follows from Proposition 3.1 that

\begin{align*}\mathbb{E}\bigl[\mathrm{e}^{\mathrm{i} \theta\bar X^{(N)}(t,r)}\bigr] &=\mathbb{E}\bigl[\mathrm{e}^{\mathrm{i} (1-r)^t\theta \bar X^{(N)}(0)+ir\sum_{k=0}^{t-1}(1-r)^{k}\theta X_{(e_k)}(0)}\bigr].\end{align*}

Let

\begin{align*} \mathcal{A}=\Bigl\{\omega\in\Omega\colon \lim_{N\to\infty} \bar X^{(N)}(0)=\mathbb{E}[X_1(0)]\Bigr\}.\end{align*}

Since $X_1(0)$ is integrable, the strong law of large numbers implies that $\mathbb{P}(\mathcal{A})=1$ . Let ${\mathcal B}^{(N)}(t)$ denote the event whereby each agent broadcasts its opinion at most once during $[0,t-1]$ , defined as follows:

\begin{align*} {\mathcal B}^{(N)}(t)=\{e_j\neq e_k \text{ for all } 0\le j<k\le t-1\}.\end{align*}
Note that $\mathbb{P}({\mathcal B}^{(N)}(t))$ converges to 1 as $N\to\infty$ because

\begin{align*} \lim_{N\to\infty}\mathbb{P}({\mathcal B}^{(N)}(t))=\lim_{N\to\infty}\dfrac{N\times(N-1)\times\cdots\times (N-t+1)}{N^t}=1.\end{align*}

Thus it follows that

\begin{align*}&\lim_{N\to\infty}\mathbb{E}\bigl[\mathrm{e}^{\mathrm{i} (1-r)^t\theta \bar X^{(N)}(0)+ir\sum_{k=0}^{t-1}(1-r)^{k}\theta X_{(e_k)}(0)}\bigr] \\[5pt] &\quad =\lim_{N\to\infty}\mathbb{E}\bigl[\mathrm{e}^{\mathrm{i} (1-r)^t\theta \bar X^{(N)}(0)+ir\sum_{k=0}^{t-1}(1-r)^{k}\theta X_{(e_k)}(0)}1_{{\mathcal A}\cap{\mathcal B}^{(N)}(t)}\bigr] \\[5pt] &\qquad +\lim_{N\to\infty}\mathbb{E}\bigl[\mathrm{e}^{\mathrm{i} (1-r)^t\theta \bar X^{(N)}(0)+ir\sum_{k=0}^{t-1}(1-r)^{k}\theta X_{(e_k)}(0)}1_{({\mathcal A}\cap{\mathcal B}^{(N)}(t))^c}\bigr] \\[5pt] &\quad =\mathbb{E}\biggl[\lim_{N\to\infty}\mathrm{e}^{\mathrm{i} (1-r)^t\theta \bar X^{(N)}(0)+ir\sum_{k=0}^{t-1}(1-r)^{k}\theta X_{(e_k)}(0)}1_{{\mathcal A}\cap{\mathcal B}^{(N)}(t)}\biggr] \\[5pt] &\quad =\mathrm{e}^{\mathrm{i} (1-r)^t\theta \mathbb{E}[X_1(0)]} \mathbb{E}\biggl[\lim_{N\to\infty}\mathrm{e}^{\mathrm{i} r\sum_{k=0}^{t-1}(1-r)^{k}\theta X_{(e_k)}(0)}1_{{\mathcal B}^{(N)}(t)}\biggr].\end{align*}

Since $X_{(e_0)}(0),\ldots,X_{(e_{t-1})}(0)$ are mutually independent on ${\mathcal B}^{(N)}(t)$ , it follows that

\begin{align*}\mathbb{E}\biggl[\lim_{N\to\infty}\mathrm{e}^{\mathrm{i} r \sum_{k=0}^{t-1}(1-r)^{k}\theta X_{(e_k)}(0)}1_{{\mathcal B}^{(N)}(t)}\biggr]&=\prod_{k=0}^{t-1}\mathbb{E}\bigl[\mathrm{e}^{\mathrm{i} r(1-r)^{k}\theta X_{(e_k)}(0)}\bigr] \\[5pt] &=\prod_{k=0}^{t-1}\mathbb{E}\bigl[\mathrm{e}^{\mathrm{i} r(1-r)^{k}\theta X_1(0)}\bigr] \\[5pt] &=\prod_{k=0}^{t-1}\varphi_X(r(1-r)^k\theta),\end{align*}

which completes the proof.

Remark 5.1. The assumption that $p_n=1/N$ for all $n\in {\mathcal N}$ is only used to prove that $\mathbb{P}({\mathcal B}^{(N)}(t))$ converges to 1 as $N\to\infty$ . Thus this assumption can be replaced by a looser assumption, under which $\mathbb{P}({\mathcal B}^{(N)}(t))\to 1$ as $N\to\infty$ .

Since the right-hand side of (5.1) satisfies the properties of a characteristic function, Proposition 5.1 implies that, as $N\to\infty$ , $\bar X^{(N)}(t,r)$ weakly converges to a random variable, denoted by $\bar X^{(\infty)}(t,r)$ , the characteristic function of which is equal to the right-hand side of (5.1). Here, $\bar X^{(\infty)}(t,r)$ corresponds to the average opinion of the agents in the limit $N\to\infty$ . It follows from (5.1) that

(5.2) \begin{align}\bar X^{(\infty)}(t,r)&\stackrel{d}{=}(1-r)^t \mathbb{E}[X_1(0)]+r \sum_{k=0}^{t-1} (1-r)^k Z_k,\end{align}

where $Z_0,Z_1,\ldots$ are independent and identically distributed random variables with the same distribution as $X_1(0)$ .
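Representation (5.2) is easy to sample. The following Python sketch (our illustration, for Gaussian initial opinions with mean $\mu$ and variance $\sigma^2$) checks by Monte Carlo that the mean of $\bar X^{(\infty)}(t,r)$ is $\mu$ and its variance is $\sigma^2 r\bigl(1-(1-r)^{2t}\bigr)/(2-r)$, as follows from the independence of $Z_0,Z_1,\ldots$:

```python
import numpy as np

rng = np.random.default_rng(6)
r, t, mu, sigma = 0.3, 40, 2.0, 1.0
runs = 100_000

# sample bar X^{(infty)}(t, r) via the i.i.d. representation (5.2)
Z = rng.normal(mu, sigma, size=(runs, t))   # Z_0, ..., Z_{t-1}, distributed as X_1(0)
w = r * (1 - r) ** np.arange(t)             # weights r(1-r)^k
samples = (1 - r) ** t * mu + Z @ w

# theory: mean mu; variance sigma^2 r (1 - (1-r)^{2t}) / (2 - r)
var_theory = sigma**2 * r * (1 - (1 - r) ** (2 * t)) / (2 - r)
assert abs(samples.mean() - mu) < 0.02
assert abs(samples.var() - var_theory) < 0.02
```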

Equation (5.2) implies the following: for finite t, $X_{e_k}(0)$ on the right-hand side of (3.1) can be replaced by $Z_k$ in the limit $N\to\infty$ . Unlike the right-hand side of (3.1), the right-hand side of (5.2) is easy to handle because $\{Z_k\}_{k=0}^\infty$ is an independent and identically distributed sequence. If, as t goes to infinity, the left-hand side of (5.2) converges to the consensus in the limit $N\to\infty$ , that is,

(5.3) \begin{align}\lim_{N\to\infty} X_c^{(N)}(r) = \lim_{t\to \infty}\bar X^{(\infty)}(t,r) \stackrel{d}{=}r \sum_{k=0}^{\infty} (1-r)^k Z_k,\end{align}

then the calculation of the consensus seems easy. However, it is not ensured that (5.3) always holds even if $X_1(0)$ is integrable. This is because the difference between the largest and smallest opinions of agents,

\begin{align*} \mathrm{d} X^{(N)}(t,r)=\max_{n\in{\mathcal N}} X_n(t)-\min_{n\in{\mathcal N}} X_n(t),\end{align*}

can go to infinity as $N\to\infty$ , and, in such a case, the formation of a consensus is not mathematically clear, even in the limit $t\to \infty$ . In order to define the consensus in the limit of an infinite number of agents, it is necessary to increase the number of agents N together with the elapsed time t to infinity so that the discrepancy among opinions of agents is kept sufficiently small in order to ensure that a consensus is reached. Based on the consideration mentioned above, in the following we introduce the definition of the consensus formation in the limit $N\to\infty$ . In the definition, we assume that a sequence of trust parameters $\{r_N\}_{N=1}^\infty$ is given.

Definition 5.1. If there is a sequence of non-negative integers $\{t_N\}_{N=1}^\infty$ such that $t_N\to\infty$ as $N\to\infty$ , and $\mathrm{d} X^{(N)}(t_N,r_N)$ converges to 0 as $N\to\infty$ in probability, that is,

\begin{align*} \lim_{N\to\infty} \mathbb{P}(\mathrm{d} X^{(N)}(t_N,r_N)> \epsilon) = 0 \quad \text{{for all $\epsilon>0$,}}\end{align*}

then we say that the consensus in the limit $N\to\infty$ , $X_c^{(\infty)}$ , is obtained, and $X_c^{(\infty)}$ is given as $\lim_{N\to\infty}\bar X^{(N)}(t_N,r_N)$ .

In this definition, $\{t_N\}_{N=1}^\infty$ represents how the time required to build a consensus increases with N for a given $\{r_N\}_{N=1}^\infty$ . Note that $\{r_N\}_{N=1}^\infty$ does not necessarily decrease to 0, and $r_N$ can be equal to a constant value r for all N. To the best of our knowledge, a definition of consensus formation in this form has not appeared in the existing literature. Definition 5.1 is related to the definition of consensus formation by (2.2), in which only the time t is scaled to infinity.

In what follows, after deriving supplementary results (Lemma 5.2 and Corollary 5.1), we derive the main results (Theorem 5.1 and Corollary 5.2) on the consensus in the limit $N\to\infty$ when $r_N=r$ for all $N=1,2,\ldots,$ which will be used in Section 6. The following result is repeatedly used to derive the results in this section.

Lemma 5.1. (Lemma 3.1.1 of [6].) If $c_j\to 0$ , $a_j\to \infty$ , and $a_jc_j\to\lambda$ , then $(1+c_j)^{a_j}\to \mathrm{e}^{\lambda}$ .

The next lemma gives the condition for reaching the consensus in the limit $N\to\infty$ .

Lemma 5.2. Suppose that

(5.4) \begin{equation} \lim_{N\to\infty}N \mathbb{P}(|X_1(0)|>\epsilon(1-r_N)^{-t_N})=0 \quad\text{{for all $\epsilon>0$.}}\end{equation}

Then the consensus in the limit $N\to\infty$ is obtained.

Proof. Let

\begin{align*} \mathcal{A}(N,\epsilon) = \{2(1-r_N)^{t_N}\max\{|X_1(0)|,\ldots,|X_N(0)|\}\le\epsilon\}.\end{align*}

Since $\mathrm{d} X^{(N)}(t,r)=(1-r)^t \mathrm{d} X^{(N)}(0)$ and $\mathrm{d} X^{(N)}(0)\le 2\max\{|X_1(0)|,\ldots,|X_N(0)|\}$ , we see that on $\mathcal{A}(N,\epsilon)$

\begin{align*} \mathrm{d} X^{(N)}(t_N,r_N)\le 2(1-r_N)^{t_N}\max\{|X_1(0)|,\ldots,|X_N(0)|\}\le\epsilon.\end{align*}

This means that $\{\mathrm{d} X^{(N)}(t_N,r_N)\le \epsilon\}\supset \mathcal{A}(N,\epsilon)$ . To complete the proof, we observe that

\begin{align*}\mathbb{P}(\mathcal{A}(N,\epsilon))&=\mathbb{P}(2(1-r_N)^{t_N}\max\{|X_1(0)|,\ldots,|X_N(0)|\}\le\epsilon) \\[5pt] &=\mathbb{P}\biggl(\max\{|X_1(0)|,\ldots,|X_N(0)|\}\le \dfrac{\epsilon}{2} (1-r_N)^{-{t_N}}\biggr) \\[5pt] &=\mathbb{P}\biggl(|X_1(0)|\le \dfrac{\epsilon}{2} (1-r_N)^{-{t_N}}\biggr)^N \\[5pt] &=\biggl(1-\mathbb{P}\biggl(|X_1(0)|>\dfrac{\epsilon}{2} (1-r_N)^{-{t_N}}\biggr)\biggr)^N.\end{align*}

Thus, if (5.4) holds, then

(5.5) \begin{align}\lim_{N\to\infty} \mathbb{P}(\mathcal{A}(N,\epsilon))&= \lim_{N\to\infty} \biggl(1-\mathbb{P}\biggl(|X_1(0)|> \dfrac{\epsilon}{2} (1-r_N)^{-t_N}\biggr)\biggr)^N \notag \\[5pt] & = \lim_{N\to\infty} \exp\biggl(-N\mathbb{P}\biggl(|X_1(0)|> \dfrac{\epsilon}{2} (1-r_N)^{-t_N}\biggr)\biggr) \notag \\[5pt] &=1,\end{align}

where the second equality follows from Lemma 5.1. Since $\{\mathrm{d} X^{(N)}(t_N,r_N)\le \epsilon\}\supset \mathcal{A}(N,\epsilon)$ , it follows from (5.5) that

\begin{align*} \lim_{N\to\infty} \mathbb{P}\bigl(\mathrm{d} X^{(N)}(t_N,r_N)\le\epsilon\bigr)=1 \quad\text{{for all $\epsilon>0$,}}\end{align*}

which completes the proof.

Corollary 5.1. If $X_1(0)$ is integrable, then the consensus in the limit $N\to\infty$ is obtained.

Proof. Choose $\{t_N\}_{N=1}^\infty$ satisfying the conditions that $t_Nr_N/\log N\to\infty$ and $t_N\to \infty$ as $N\to\infty$ . (For example, letting $t_N=N/r_N$ meets these conditions.) It follows from Lemma 5.1 that for any $\epsilon > 0$

\begin{align*} \lim_{N\to\infty} \dfrac{\epsilon}{N}(1-r_N)^{-t_N} = \mathrm{e}^{\lim_{N\to\infty}t_Nr_N-\log N+\log \epsilon } = \infty.\end{align*}

The above result means that there exists $N_0$ such that $N\le \epsilon (1-r_N)^{-t_N}$ for all $N\ge N_0$ . Since $X_1(0)$ is integrable, it follows that, for all $N\ge N_0$ ,

\begin{align*} N \mathbb{P}(|X_1(0)|>\epsilon(1-r_N)^{-t_N}) &\le N \mathbb{P}(|X_1(0)|>N)\\[5pt] & = \mathbb{E}[ N 1_{|X_1(0)|>N}] \\[5pt] &\le \mathbb{E}[|X_1(0)| 1_{|X_1(0)|>N}]\\[5pt] &\to 0, \quad \text{as $N\to\infty$}.\end{align*}

Thus the desired conclusion follows from Lemma 5.2.

Remark 5.2. According to Corollary 5.1, in order to obtain a consensus we should increase $t_N$ with N faster than $\log N/r_N$ when $X_1(0)$ is integrable.
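As a rough numerical illustration of this remark (our own sketch, not part of the original analysis), the following Python snippet evaluates the bound $2(1-r)^{t_N}\max_n|X_n(0)|$ from the proof of Lemma 5.2, with exponentially distributed (hence integrable) initial opinions and $t_N = 2\log N/r$ , which grows faster than $\log N/r$ :

```python
import numpy as np

rng = np.random.default_rng(0)
r = 0.5                                   # trust parameter, fixed over N
bounds = []
for N in [10**2, 10**4, 10**6]:
    t_N = int(2 * np.log(N) / r)          # grows faster than log N / r
    X0 = rng.exponential(1.0, size=N)     # integrable initial opinions
    # d X^(N)(t_N, r) <= 2 (1-r)^{t_N} max_n |X_n(0)|  (proof of Lemma 5.2)
    bounds.append(2 * (1 - r) ** t_N * np.abs(X0).max())
print(bounds)
```

The bound shrinks rapidly with N even though the maximum initial opinion grows, consistent with the remark.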

The next theorem gives the condition for (5.3) being ensured.

Theorem 5.1. Suppose that $r_N=r$ for all $N=1,2,\ldots.$ If (5.4) holds with the choice of $\{t_N\}_{N=1}^\infty$ satisfying $t_N/\sqrt{N}\to 0$ as $N\to\infty$ , then $X_c^{(\infty)}$ , the consensus in the limit $N\to\infty$ , is obtained and

(5.6) \begin{align}X_c^{(\infty)}\stackrel{d}{=}r \sum_{k=0}^{\infty} (1-r)^k Z_k,\end{align}

where $Z_0,Z_1,\ldots$ are independent and identically distributed random variables with the same distribution as $X_1(0)$ .

Proof. Suppose that (5.4) holds with the choice of $\{t_N\}_{N=1}^\infty$ satisfying $t_N/\sqrt{N}\to 0$ as $N\to\infty$ . Since (5.4) holds, it follows from Lemma 5.2 that the consensus is obtained in the limit $N\to\infty$ and $\lim_{N\to\infty}\bar X^{(N)}(t_N,r)$ is equal to the consensus. According to Proposition 3.1,

\begin{align*}\bar X^{(N)}(t_N,r)&=(1-r)^{t_N}\bar X^{(N)}(0)+ r\sum_{k=0}^{t_N-1} (1-r)^k X_{e_k}(0).\end{align*}

Since for any $\epsilon > 0$

\begin{align*} \{(1-r)^{t_N}|\bar X^{(N)}(0)| >\epsilon\} \subset \{(1-r)^{t_N} |X_1(0)| >\epsilon\} \cup \cdots \cup \{(1-r)^{t_N} |X_N(0)| >\epsilon\},\end{align*}

it follows that

\begin{align*}\mathbb{P}((1-r)^{t_N}|\bar X^{(N)}(0)| >\epsilon) &\le N \mathbb{P}((1-r)^{t_N} |X_1(0)| >\epsilon) \\[5pt] &= N \mathbb{P} (|X_1(0)| >\epsilon (1-r)^{-t_N}),\end{align*}

where the last term, $N \mathbb{P}(|X_1(0)| >\epsilon (1-r)^{-t_N})$ , goes to 0 as $N\to \infty$ because of (5.4). This means that $(1-r)^{t_N}|\bar X^{(N)}(0)|\to 0$ as $N\to\infty$ in probability. Next, observe that on ${\mathcal B}^{(N)}(t_N)$ (defined in the proof of Proposition 5.1), $X_{e_0},\ldots,X_{e_{t_N-1}}$ are mutually independent. Thus, if $\mathbb{P}({\mathcal B}^{(N)}(t_N))\to 1$ as $N\to\infty$ , we obtain

\begin{align*} \lim_{N\to\infty}r \sum_{k=0}^{t_N-1} (1-r)^k X_{e_k}(0)\stackrel{d}{=}r \sum_{k=0}^{\infty} (1-r)^k Z_k.\end{align*}

The following observation proves $\lim_{N\to\infty}\mathbb{P}({\mathcal B}^{(N)}(t_N))=1$ :

\begin{align*}\lim_{N\to\infty}\mathbb{P}({\mathcal B}^{(N)}(t_N))&=\lim_{N\to\infty}\dfrac{N\times\cdots\times(N-t_N+1)}{N^{t_N}} \\[5pt] &=\lim_{N\to\infty}\prod_{k=0}^{t_N-1}\biggl(1-\dfrac{k}{N}\biggr)\\[5pt] &\ge \lim_{N\to\infty}\biggl(1-\dfrac{t_N-1}{N}\biggr)^{t_N}\\[5pt] &=1,\end{align*}

where the last equality follows from Lemma 5.1 and $(t_N-1)t_N/N\to 0$ as $N\to\infty$ because $t_N/\sqrt{N}\to 0$ as $N\to\infty$ . Thus the proof is completed.

Remark 5.3. The assumption $t_N/\sqrt{N}\to 0$ as $N\to\infty$ is needed not for consensus building but for the consensus to be given by the right-hand side of (5.6). Thus, even if $t_N/\sqrt{N}$ does not go to 0 as $N\to\infty$ , the consensus may still be obtained in the limit $N\to\infty$ , but it is not given by the right-hand side of (5.6). Note that the condition $t_N/\sqrt{N}\to 0$ as $N\to\infty$ may need to be replaced by another condition if the assumption $p_n=1/N$ for all $n\in {\mathcal N}$ used in this section is replaced by another assumption.

Corollary 5.2. Suppose that $r_N=r$ for all $N=1,2,\ldots.$ If $X_1(0)$ is integrable, then $X_c^{(\infty)}$ is obtained and is given as (5.6).

Proof. Choose $\{t_N\}_{N=1}^\infty$ such that $t_N$ tends to infinity faster than $\log N$ but slower than $\sqrt{N}$ as $N\to\infty$ . With this choice of $\{t_N\}_{N=1}^\infty$ , (5.4) holds, as shown in the proof of Corollary 5.1, and thus applying Theorem 5.1 completes the proof.
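The limiting consensus in (5.6) is easy to sample directly, since it is a geometrically weighted series of independent copies of $X_1(0)$ . The following Python sketch (our illustration, with Exponential(1) initial opinions as an assumed example) truncates the series and checks that the sample mean is close to $\mathbb{E}[X_1(0)]$ , as the weights $r(1-r)^k$ sum to 1:

```python
import numpy as np

rng = np.random.default_rng(1)
r = 0.5                                    # trust parameter
K = 100                                    # truncation level; (1-r)^K is negligible
M = 100_000                                # number of consensus samples
# Z_k are i.i.d. copies of X_1(0); Exponential(1) is an assumed example
Z = rng.exponential(1.0, size=(M, K))
weights = r * (1 - r) ** np.arange(K)      # geometric weights, summing to ~1
Xc = Z @ weights                           # samples of X_c^(infty) from (5.6)
print(Xc.mean(), Xc.var())
```

For an initial opinion with variance $\sigma^2$ , the variance of the series is $r\sigma^2/(2-r)$ , which the empirical variance should approach.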

In the remainder of this section, we assume $\{r_N\}_{N=1}^\infty$ meets the following conditions:

(5.7) \begin{align}\lim_{N\to\infty} r_N=0,\quad \lim_{N\to\infty} \log N/(r_N \sqrt{N})=0.\end{align}

Then we investigate the behavior of $D^{(N)}(t_N,r_N) = \bar X^{(N)}(t_N,r_N)-\mathbb{E}[X_1(0)]$ in the limit $N\to\infty$ , where $t_N$ increases faster than $\log N/r_N$ but slower than $\sqrt{N}$ .

Theorem 5.2. Suppose that $X_1(0)$ has a finite variance $\sigma^2$ . If $\{t_N\}_{N=1}^\infty$ satisfies $\log N/(t_Nr_N)\to 0$ and $t_N/\sqrt{N}\to 0$ as $N\to \infty$ , then

\begin{align*} \dfrac{1}{\sqrt{r_N}}D^{(N)}(t_N,r_N) \Rightarrow N(0,\sigma^2/2),\quad \text{as $N\to \infty$},\end{align*}

where $\Rightarrow$ denotes weak convergence and $N(0,\sigma^2/2)$ is a Gaussian distribution with mean 0 and variance ${\sigma^2}/{2}$ .

Proof. Observe that on ${\mathcal B}^{(N)}(t)$ ,

\begin{align*}D^{(N)}(t,r)&\stackrel{d}{=}(1-r)^t \bar X^{(N)}(0)+r\sum_{k=0}^{t-1} (1-r)^k Z_k-\mathbb{E}[X_1(0)] \\[5pt] &=(1-r)^t (\bar X^{(N)}(0)-\mathbb{E}[X_1(0)])+r\sum_{k=0}^{t-1} (1-r)^k (Z_k-\mathbb{E}[X_1(0)]) \\[5pt] &=(1-r)^t (\bar X^{(N)}(0)-\mathbb{E}[X_1(0)])+r\sum_{k=0}^{t-1} (1-r)^k (Z_k-\mathbb{E}[Z_k]),\end{align*}

where $Z_0,Z_1,\ldots$ are independent random variables having the same distribution as $X_1(0)$ . Thus, on ${\mathcal B}^{(N)}(t_N)$ , we have

(5.8) \begin{align}&\dfrac{1}{\sqrt{r_N}}D^{(N)}(t_N,r_N)\notag \\[5pt] &\quad =\dfrac{1}{\sqrt{r_N}}(1-r_N)^{t_N} (\bar X^{(N)}(0)-\mathbb{E}[X_1(0)]) +\sqrt{r_N}\sum_{k=0}^{t_N-1} (1-r_N)^k (Z_k-\mathbb{E}[Z_k]).\end{align}

Note that if $t_N/\sqrt{N}\to 0$ as $N\to \infty$ , then $\mathbb{P}({\mathcal B}^{(N)}(t_N))\to 1$ as $N\to\infty$ , as shown in the proof of Theorem 5.1. The first term on the right-hand side of (5.8) converges to 0 as $N\to\infty$ because

\begin{align*}&\dfrac{1}{\sqrt{r_N}}(1-r_N)^{t_N} |\bar X^{(N)}(0)-\mathbb{E}[X_1(0)]| \\[5pt] &\quad \le\dfrac{1}{\sqrt{r_N N}} |N^{1/2}(\bar X^{(N)}(0)-\mathbb{E}[X_1(0)])| \\[5pt] &\quad =\dfrac{\sqrt{r_N}}{r_N\sqrt{N}}|N^{1/2}(\bar X^{(N)}(0)-\mathbb{E}[X_1(0)])|\\[5pt] &\quad \to 0,\quad\text{as $N\to\infty$},\end{align*}

where $N^{1/2}(\bar X^{(N)}(0)-\mathbb{E}[X_1(0)])$ converges to a random variable following a Gaussian distribution, $r_N\sqrt{N}\to\infty$ , and $r_N\to 0$ as $N\to\infty$ . Next, let $Y_{N,k} = \sqrt{r_N}(1-r_N)^k (Z_k-\mathbb{E}[Z_k])$ for $k=0,\ldots,t_N-1$ . Observe that

\begin{align*}\lim_{N\to\infty}\sum_{k=0}^{t_N-1}\mathbb{E}[Y_{N,k}^2]&=\lim_{N\to\infty}r_N\sigma^2\sum_{k=0}^{t_N-1}(1-r_N)^{2k}\\[5pt] &=\lim_{N\to\infty}\dfrac{\sigma^2(1-(1-r_N)^{2t_N})}{2-r_N}\\[5pt] &=\dfrac{\sigma^2}{2},\end{align*}

where the last equality follows from $r_N\to 0$ and $\lim_{N\to\infty} (1-r_N)^{2t_N}= \mathrm{e}^{-\lim_{N\to\infty} 2t_Nr_N}=0$ because of the assumption $\log N/(r_N t_N)\to 0$ (and thus $r_N t_N\to \infty$ ) as $N\to\infty$ . We can also see that for all $\epsilon>0$

\begin{align*} &\sum_{k=0}^{t_N-1} \mathbb{E}[|Y_{N,k}|^2;\,|Y_{N,k}|\ge \epsilon]\\[5pt] &\quad \le \sum_{k=0}^{t_N-1} \mathbb{E}[|Y_{N,k}|^2;\,\sqrt{r_N}|Z_k-\mathbb{E}[Z_k]|\ge \epsilon] \\[5pt] &\quad\le r_N\sum_{k=0}^{t_N-1} (1-r_N)^{2k} \mathbb{E}[|Z_0-\mathbb{E}[Z_0]|^2;\,\sqrt{r_N}|Z_0-\mathbb{E}[Z_0]|\ge \epsilon] \\[5pt] &\quad\le \dfrac{1}{2-r_N}\mathbb{E}[|Z_0-\mathbb{E}[Z_0]|^2;\,\sqrt{r_N}|Z_0-\mathbb{E}[Z_0]|\ge \epsilon] \\[5pt] &\quad = \dfrac{1}{2-r_N}\mathbb{E}\biggl[|Z_0-\mathbb{E}[Z_0]|^2;\,|Z_0-\mathbb{E}[Z_0]|\ge \dfrac{\epsilon}{\sqrt{r_N}}\biggr]\\[5pt] &\quad \to 0,\quad \text{as $N\to\infty$}.\end{align*}

Thus $Y_{N,0},\ldots,Y_{N,t_N-1}$ satisfies Lindeberg’s condition, and applying the central limit theorem to $\sum_{k=0}^{t_N-1} Y_{N,k}$ completes the proof.

As in the proof of Corollary 5.1, if $\log N/(t_Nr_N)\to 0$ as $N\to\infty$ , then $\lim_{N\to\infty} \bar X^{(N)}(t_N,r_N)$ is the consensus in the limit $N\to\infty$ , and thus Theorem 5.2 can be considered as the central limit theorem for the consensus.

Figure 1 shows the results of simulation experiments concerning Theorem 5.2. In Figure 1(a) we assume that the initial opinions of agents follow a uniform distribution on the interval $[-1,1]$ . The dotted lines in the figure show the probability density function (PDF) for $X_c^{(N)}(r_N)/\sqrt{r_N}$ obtained from the results of $10^7$ simulations, and the solid black line shows the PDF for the Gaussian distribution $N(0,\sigma^2/2)$ . We set $r_N = N^{-0.4}$ , which satisfies (5.7). The figure confirms that the PDF of the consensus converges to the PDF of the Gaussian distribution as N increases. In Figure 1(b) we assume that the initial opinions of agents follow an exponential distribution with a mean of 1, and we also set $r_N = N^{-0.4}$ . The figure also confirms that the simulation results converge to the Gaussian distribution $N(0,\sigma^2/2)$ as N increases.
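The scaling in Theorem 5.2 can also be probed directly from the i.i.d. series used in its proof, which on ${\mathcal B}^{(N)}(t_N)$ has the same distribution as the centered, scaled average opinion. The Python sketch below (our own illustration, not the paper's simulation code) samples $\sqrt{r_N}\sum_{k}(1-r_N)^k(Z_k-\mathbb{E}[Z_k])$ for uniform initial opinions on $[-1,1]$ (so $\sigma^2=1/3$ ) and compares its empirical variance with $\sigma^2/2$ ; for finite N the variance is $\sigma^2(1-(1-r_N)^{2t_N})/(2-r_N)$ , already close to $\sigma^2/2$ :

```python
import numpy as np

rng = np.random.default_rng(3)
N = 10**4
r_N = N ** -0.4                            # satisfies (5.7) as N grows
K = int(30 / r_N)                          # effective truncation: (1-r_N)^K ~ e^{-30}
M = 50_000                                 # Monte Carlo samples
acc = np.zeros(M)
for k in range(K):
    # Z_k - E[Z_k] for Z_k uniform on [-1, 1], so sigma^2 = 1/3
    acc += (1 - r_N) ** k * rng.uniform(-1.0, 1.0, size=M)
scaled = np.sqrt(r_N) * acc                # sqrt(r_N) * sum_k (1-r_N)^k (Z_k - E[Z_k])
print(scaled.mean(), scaled.var())         # variance should be close to sigma^2/2 = 1/6
```

A histogram of `scaled` would reproduce the bell shape of Figure 1(a) without simulating the full agent dynamics.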

Figure 1. Convergence of distribution of consensus to Gaussian distribution. (a) Initial opinion follows uniform distribution. (b) Initial opinion follows exponential distribution.

6. Initial opinion follows stable distribution

As shown in (3.8), the consensus can be expressed as a weighted average of the initial opinions of agents. If the initial opinions of agents follow a common stable distribution, the consensus is expected to follow the same stable distribution, and an analytical expression for the distribution of the consensus might be available, because a linear combination of independent random variables following a common stable distribution also follows a stable distribution with the same stability parameter. In this section we consider the case in which the initial opinions of agents, $X_1(0), \ldots,X_N(0)$ , follow a common stable distribution represented by $\textsf{S}(\alpha,\beta,\gamma,\delta;\,0)$ [12], where $\alpha\in (0,2]$ is the stability parameter (characteristic exponent), $\beta\in [-1,1]$ is the skewness parameter, $\gamma\in(0,\infty)$ is the scale parameter, and $\delta\in(-\infty,\infty)$ is the location parameter. The integer '0' in $\textsf{S}(\alpha,\beta,\gamma,\delta;\,0)$ distinguishes between the different parametrizations [12].

Theorem 6.1. ([12].) If $\{S_n\}_{n=1}^N$ are mutually independent and $S_n$ $(n=1,\ldots,N)$ follows a stable distribution $\textsf{S}(\alpha,\beta_n,\gamma_n,\delta_n;\,0)$ , then $\sum_{n=1}^N a_nS_n$ follows a stable distribution $\textsf{S}(\alpha,\bar\beta,\bar\gamma,\bar\delta;\,0)$ , where $a_1,\ldots,a_N$ are arbitrary real numbers, and

\begin{align*}\bar\gamma &= \Biggl(\sum_{n=1}^N |a_n\gamma_n|^\alpha\Biggr)^{1/\alpha},\quad \bar\beta = \dfrac{\sum_{n=1}^N\beta_n(\operatorname{sign} a_n)|a_n\gamma_n|^\alpha}{\bar\gamma^\alpha}, \\[5pt] \bar\delta &=\begin{cases}\sum_{n=1}^N a_n\delta_n + \tan\dfrac{\pi\alpha}{2}\Bigl(\bar\beta\bar\gamma-\sum_{n=1}^N \beta_na_n\gamma_n\Bigr),&\alpha\neq 1,\\[5pt] \sum_{n=1}^N a_n\delta_n + \dfrac{2}{\pi}\Bigl(\bar\beta\bar\gamma\log\bar\gamma-\sum_{n=1}^N \beta_na_n\gamma_n\log|a_n\gamma_n|\Bigr),&\alpha= 1.\end{cases}\end{align*}

Applying Theorem 6.1 to a broadcast-based consensus-forming model yields the following result.

Theorem 6.2. If $X_1(0)$ follows a stable distribution $\textsf{S}(\alpha,\beta,\gamma,\delta;\,0)$ , then $\bar X^{(N)}(t,r)$ also follows a stable distribution $\textsf{S}(\alpha,\beta,\gamma_t,\delta_t;\,0)$ for a given outcome of $\{A_n(t,r)\}_{n\in {\mathscr N}}$ , where

\begin{align*} \gamma_t = \gamma\Biggl(\sum_{n\in {\mathscr N}} A_n(t,r)^\alpha\Biggr)^{1/\alpha},\quad \delta_t=\begin{cases}\delta,&\alpha\neq 1,\\[5pt] \delta - {\frac{2 \beta\gamma}{\pi}}\sum_{n\in {\mathscr N}} A_n(t,r)\log A_n(t,r),&\alpha= 1.\end{cases}\end{align*}

Proof. Applying Theorem 6.1 to Corollary 3.1 with $a_n=A_n(t,r), \beta_n=\beta, \gamma_n=\gamma$ , and $\delta_n=\delta$ readily gives the desired conclusion.

According to Theorem 6.2, $\bar X^{(N)}(t,r)$ follows a stable distribution when the outcome of $\{A_n(t,r)\}_{n\in {\mathscr N}}$ is given. Note that the scale parameter of $\bar X^{(N)}(t,r)$ , $\gamma_t$ , is a random variable because $\gamma_t$ depends on $\{A_n(t,r)\}_{n\in {\mathscr N}}$ . However, there is a deterministic relationship between $\gamma_t$ and the scale parameter of $X_1(0)$ , $\gamma$ , as shown in the following corollary.

Corollary 6.1. We have

\begin{align*} \gamma_t \begin{cases}\le \gamma,&\alpha > 1,\\[5pt] = \gamma,&\alpha = 1,\\[5pt] \ge \gamma,&\alpha < 1.\\[5pt] \end{cases}\end{align*}

Proof. First note that $\sum_{n\in {\mathscr N}} A_n(t,r)=1$ . If $\alpha > 1$ , then $A_n(t,r)^\alpha\le A_n(t,r)$ , and thus

\begin{align*} \gamma_t=\gamma\Biggl(\sum_{n\in {\mathscr N}} A_n(t,r)^\alpha\Biggr)^{1/\alpha}\le \gamma\Biggl(\sum_{n\in {\mathscr N}} A_n(t,r)\Biggr)^{1/\alpha}=\gamma\cdot 1^{1/\alpha}=\gamma.\end{align*}

Similar arguments yield $\gamma_t\ge \gamma$ when $\alpha < 1$ . If $\alpha=1$ , then

\begin{equation*}\gamma_t=\gamma\Biggl(\sum_{n\in {\mathscr N}} A_n(t,r)\Biggr)=\gamma.\end{equation*}

The consensus obtained by the consensus formation algorithm is expected to be around an intermediate value of the initial opinions of agents. At least, the statistical dispersion of the consensus obtained by the algorithm should be smaller than the dispersion of the initial opinions of agents. However, Corollary 6.1 shows that the scale parameter for the obtained consensus, representing the statistical dispersion, does not become smaller than the scale parameter for the initial opinions when $\alpha \le 1$ . This means that the consensus obtained by the broadcast-based algorithm is not a representative opinion of agents when $\alpha \le 1$ .

Corollary 6.1 readily yields the following interesting result.

Corollary 6.2. If $X_1(0)$ follows a Cauchy distribution with scale parameter $\gamma$ , then $\bar X^{(N)}(t,r)$ also follows a Cauchy distribution with scale parameter $\gamma$ .

Proof. A Cauchy distribution is a stable distribution with $\alpha=1$ and $\beta=0$ . Theorem 6.2 implies that $\bar X^{(N)}(t,r)$ also follows a stable distribution, and its location parameter is the same as that of $X_1(0)$ . In addition, Corollary 6.1 implies that the scale parameter of $\bar X^{(N)}(t,r)$ is the same as that of $X_1(0)$ . These observations complete the proof.

Since $\bar X^{(N)}(t,r)$ converges to the consensus as $t\to\infty$ , Corollary 6.2 implies that if the initial opinion of each agent follows a Cauchy distribution, the consensus has exactly the same distribution as the initial opinion. Although these findings were numerically demonstrated in previous studies [9, 15], Corollary 6.2 provides a mathematically rigorous proof.
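Corollary 6.2 is easy to probe numerically. The Python sketch below (an illustration under assumed weights, since the realized $\{A_n(t,r)\}$ depend on the broadcast history) forms a weighted average of i.i.d. Cauchy opinions with arbitrary non-negative weights summing to 1, and checks that the half interquartile range, which equals the scale parameter of a Cauchy distribution, is unchanged:

```python
import numpy as np

rng = np.random.default_rng(4)
gamma = 2.0                                  # scale of the initial Cauchy opinions
M = 200_000                                  # Monte Carlo samples
# Arbitrary non-negative weights summing to 1 stand in for {A_n(t, r)}
weights = rng.random(50)
weights /= weights.sum()
C = gamma * rng.standard_cauchy(size=(M, weights.size))
avg = C @ weights                            # weighted average of Cauchy opinions
# For Cauchy(0, gamma) the quartiles sit at -gamma and +gamma
half_iqr = (np.quantile(avg, 0.75) - np.quantile(avg, 0.25)) / 2
print(half_iqr)
```

The half interquartile range is used instead of the sample standard deviation because a Cauchy distribution has no finite variance.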

Even if the initial opinions follow a stable distribution, neither the consensus nor $\bar X^{(N)}(t,r)$ follows a stable distribution unless the initial opinions follow a Cauchy distribution, because the scale parameter is generally a random variable. However, in the limit $N\to\infty$ , while letting $p_n=1/N$ for all $n\in {\mathcal N}$ , the scale parameter becomes a constant, and the consensus, as well as $\bar X^{(N)}(t,r)$ , follows a stable distribution, as shown in the following theorem.

Theorem 6.3. Suppose that $X_1(0)$ follows a stable distribution $\textsf{S}(\alpha,\beta,\gamma,\delta;\,0)$ , that $r_N=r$ for all $N=1,2,\ldots,$ and that $p_n=1/N$ for all $n\in {\mathcal N}$ . If (5.4) holds with the choice of $\{t_N\}_{N=1}^\infty$ satisfying $t_N/\sqrt{N}\to 0$ as $N\to\infty$ , then $X_c^{(\infty)}$ , the consensus in the limit $N\to\infty$ , is obtained, and $X_c^{(\infty)}$ follows a stable distribution $\textsf{S}(\alpha,\beta,\gamma_\infty,\delta;\,0)$ , where

\begin{align*}\gamma_\infty &=\dfrac{\gamma r}{(1-(1-r)^\alpha)^{1/\alpha}}.\end{align*}

Proof. It follows from Theorem 5.1 that the consensus is obtained in the limit $N\to\infty$ , and $X_c^{(\infty)}$ is given as (5.6). Applying Theorem 6.1 to (5.6) proves the result.

Corollary 6.3. If $X_1(0)$ follows a stable distribution $\textsf{S}(\alpha,\beta,\gamma,\delta;\,0)$ with $\alpha > 1$ , $r_N=r$ for all $N=1,2,\ldots,$ and $p_n=1/N$ for all $n\in {\mathcal N}$ , then $X_c^{(\infty)}$ , the consensus in the limit $N\to\infty$ , is obtained and $X_c^{(\infty)}$ follows a stable distribution $\textsf{S}(\alpha,\beta,\gamma_\infty,\delta;\,0)$ , where

\begin{align*}\gamma_\infty &=\dfrac{\gamma r}{(1-(1-r)^\alpha)^{1/\alpha}}.\end{align*}

Proof. If $X_1(0)$ follows a stable distribution with $\alpha > 1$ , then $X_1(0)$ is integrable. Thus it follows from Corollary 5.2 that the consensus is obtained in the limit $N\to\infty$ and $X_c^{(\infty)}$ is given as (5.6). Applying Theorem 6.1 to (5.6) proves the result.

Suppose that $X_1(0)$ follows a Gaussian distribution with mean $\mu$ and variance $\sigma^2$ . This is a stable distribution with $\alpha=2$ , $\gamma=\sigma/\sqrt{2}$ , and $\delta=\mu$ . Since a Gaussian distribution has a finite mean, it follows from Corollary 6.3 that $X_c^{(\infty)}$ follows a Gaussian distribution with mean $\mu$ , and its variance $\sigma_\infty^2$ is given by

(6.1) \begin{equation}\sigma_\infty^2=\dfrac{r\sigma^2}{2-r}.\end{equation}
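For completeness, here is the short calculation behind (6.1), using the fact that in the 0-parametrization a Gaussian distribution $N(\mu,\sigma^2)$ is $\textsf{S}(2,0,\sigma/\sqrt{2},\mu;\,0)$ , so a stable law with $\alpha=2$ and scale $\gamma$ has variance $2\gamma^2$ :

\begin{align*}\gamma_\infty=\dfrac{\gamma r}{(1-(1-r)^2)^{1/2}}=\dfrac{\sigma}{\sqrt{2}}\cdot\dfrac{r}{\sqrt{r(2-r)}},\qquad \sigma_\infty^2=2\gamma_\infty^2=\dfrac{\sigma^2 r^2}{r(2-r)}=\dfrac{r\sigma^2}{2-r}.\end{align*}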

Figure 2(a) shows the results of simulation experiments when $X_1(0)$ follows a Gaussian distribution with mean 0 and variance 1. The dots in the figure show the PDF for the obtained consensus, which was obtained from the results of $10^7$ simulations, and the black line shows the PDF for the Gaussian distribution with mean 0 and variance $\sigma_\infty^2$ , which is given by (6.1). The trust parameter r was set at 0.5. The PDF for the obtained consensus of the simulation results converges to the black line as N becomes large, and the simulation results with $N=100$ are almost identical to the results indicated by the black line.

Figure 2. Convergence to distribution in limit $N\to\infty$ . (a) Initial opinion follows Gaussian distribution. (b) Initial opinion follows Lévy distribution.

Next, suppose that $X_1(0)$ follows a Lévy distribution with scale parameter $\gamma$ and location parameter $\delta$ , and choose $\{t_N\}_{N=1}^\infty$ such that $t_N$ tends to infinity faster than $\log N$ but slower than $\sqrt{N}$ as $N\to\infty$ . With this choice of $\{t_N\}_{N=1}^\infty$ , (5.4) holds because

\begin{align*} N \mathbb{P} (X_1(0) >\epsilon (1-r)^{-t_N})\sim \dfrac{N}{\sqrt{\epsilon}a^{t_N/2}}=\dfrac{N}{\sqrt{\epsilon}N^{\log a {t_N}/{(2\log N)}}}\to 0, \quad \text{as $N\to\infty$},\end{align*}

where $a = (1-r)^{-1} > 1$ , and we use the fact that the complementary cumulative distribution function of the Lévy distribution decays like a power law with exponent $1/2$ (its density has a power-law tail with exponent $3/2$ ). Thus it follows from Theorem 6.3 that the consensus in the limit $N\to\infty$ is obtained, and $X_c^{(\infty)}$ follows a Lévy distribution with location parameter $\delta$ and scale parameter $\gamma_\infty$ , where

(6.2) \begin{equation}\gamma_\infty=\dfrac{\gamma r}{2-r-2\sqrt{1-r}}.\end{equation}
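The scale parameter (6.2) follows from Theorem 6.3 with $\alpha=1/2$ , since a Lévy distribution is $\textsf{S}(1/2,1,\gamma,\delta;\,0)$ :

\begin{align*}\gamma_\infty=\dfrac{\gamma r}{\bigl(1-(1-r)^{1/2}\bigr)^{2}}=\dfrac{\gamma r}{1-2\sqrt{1-r}+(1-r)}=\dfrac{\gamma r}{2-r-2\sqrt{1-r}}.\end{align*}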

Figure 2(b) shows the results of simulation experiments when $X_1(0)$ follows a Lévy distribution with scale parameter 1 and location parameter 0. The dotted lines in the figure show the PDF for the obtained consensus, which was obtained from the results of $10^7$ simulations, and the solid black line shows the PDF for the Lévy distribution with location parameter 0 and scale parameter $\gamma_\infty$ , which is given by (6.2). The trust parameter r was again set at 0.5. The figure also shows that the PDF for the obtained consensus of the simulation results converges to the black line as N becomes large, and the simulation results with $N=100$ are almost identical to the results indicated by the black line.

7. Concluding remarks

In the present paper we studied the distribution of the consensus when the initial opinions of agents are random variables following a common distribution. Treating the initial opinions as random variables enriches the consensus formation problem. For example, as shown at the end of Section 4 and in some of the results in Section 6 (e.g. Corollary 6.1), the effectiveness of the consensus algorithm largely depends on the integrability of the initial opinions, and the consensus algorithm may not be useful when the initial opinions are not integrable. This finding arises precisely from treating the initial opinions as random variables. The consideration of the consensus formation problem in the limit in which the number of agents is infinite is also mathematically valuable, and we believe that further interesting results can be obtained.

Some of the results obtained in the present paper can be extended relatively easily to cases in which agents interact on a graph with more general topology. For example, we showed that the consensus is expressed as a weighted average of the initial opinions of agents (Corollary 3.1). We found that this weighted-average expression of the consensus is also available when agents interact on a non-complete graph. Since some of the results in Section 4 (Theorem 4.1) and Section 6 (Theorem 6.2, Corollary 6.1, and Corollary 6.2) are derived from this expression, these results can be extended to cases in which agents interact on a non-complete graph. We also found that the weighted-average expression of the consensus is available not only for a broadcast-based algorithm, but also for a wide range of consensus algorithms, including gossip-based algorithms. On the other hand, we derived (3.1) in Proposition 3.1 by fully using the completeness of the graph and the homogeneity of the trust parameter, so (3.1) might be extended to more general cases, but it would have a much more complicated expression. Since most of the results in Section 5 are derived from (3.1), extending them to more general cases involves some difficulties, although this is a very interesting research subject.

Recently, we showed that the time reversal process of the consensus formation process is ergodic when the initial opinions of agents are constant [16]. This allows us to numerically obtain the distribution of the consensus by observing the time reversal process of consensus formation for a fixed sample path. It would also be interesting and worthwhile to extend this result to cases in which the initial opinions of agents are random variables.

Acknowledgements

We wish to thank the referees for their insightful comments and suggestions which led to a substantial improvement on an earlier version of the manuscript.

Funding information

The present study was supported by the Japan Society for the Promotion of Science (JSPS) through KAKENHI grants JP19H02135, JP19H04096, and JP20K21783.

Competing interests

There are no competing interests to declare that arose during the preparation or publication process of this article.

References

[1] Acemoğlu, D., Como, G., Fagnani, F. and Ozdaglar, A. (2011). Opinion fluctuations and persistent disagreement in social networks. In 2011 50th IEEE Conference on Decision and Control and European Control Conference, pp. 2347–2352. IEEE.
[2] Acemoğlu, D., Como, G., Fagnani, F. and Ozdaglar, A. (2013). Opinion fluctuations and disagreement in social networks. Math. Operat. Res. 38, 1–27.
[3] Altafini, C. and Lini, G. (2014). Predictable dynamics of opinion forming for networks with antagonistic interactions. IEEE Trans. Automatic Control 60, 342–357.
[4] Aysal, T. C., Yildiz, M. E., Sarwate, A. D. and Scaglione, A. (2009). Broadcast gossip algorithms for consensus. IEEE Trans. Signal Process. 57, 2748–2761.
[5] DeGroot, M. H. (1974). Reaching a consensus. J. Amer. Statist. Assoc. 69, 118–121.
[6] Durrett, R. (2019). Probability: Theory and Examples, 5th edn. Cambridge University Press.
[7] Fagnani, F. and Zampieri, S. (2008). Randomized consensus algorithms over large scale networks. IEEE J. Sel. Areas Commun. 26, 634–649.
[8] Frasca, P., Ravazzi, C., Tempo, R. and Ishii, H. (2013). Gossips and prejudices: ergodic randomized dynamics in social networks. IFAC Proceedings Volumes 46, 212–219.
[9] Katoh, D. and Shioda, S. (2022). Probability laws of consensus in a broadcast-based consensus-forming algorithm. Stoch. Models 38, 91–115.
[10] Li, Q. and Rus, D. (2006). Global clock synchronization in sensor networks. IEEE Trans. Comput. 55, 214–226.
[11] Muthukrishnan, S., Ghosh, B. and Schultz, M. H. (1998). First- and second-order diffusive methods for rapid, coarse, distributed load balancing. Theory Comput. Syst. 31, 331–354.
[12] Nolan, J. P. (2020). Stable Distributions: Models for Heavy Tailed Data. Springer.
[13] Olfati-Saber, R. and Murray, R. M. (2004). Consensus problems in networks of agents with switching topology and time-delays. IEEE Trans. Automatic Control 49, 1520–1533.
[14] Rabani, Y., Sinclair, A. and Wanka, R. (1998). Local divergence of Markov chains and the analysis of iterative load-balancing schemes. In 39th Annual Symposium on Foundations of Computer Science, pp. 694–703. IEEE.
[15] Shioda, S. (2020). Distribution of consensus in a broadcasting-based consensus-forming algorithm. ACM SIGMETRICS Performance Evaluation Rev. 48, 91–96.
[16] Shioda, S. and Takehara, K. (2022). Ergodicity of time reversal process of stochastic consensus formation and its application. ACM SIGMETRICS Performance Evaluation Rev. 50, 12–14.
[17] Silvestre, D., Hespanha, J. P. and Silvestre, C. (2019). Broadcast and gossip stochastic average consensus algorithms in directed topologies. IEEE Trans. Control Network Syst. 6, 474–486.
[18] Tsitsiklis, J. N. (1984). Problems in decentralized decision making and computation. Tech. Rep., Laboratory for Information and Decision Systems, Massachusetts Institute of Technology.
[19] Xiao, L., Boyd, S. and Lall, S. (2005). A scheme for robust distributed sensor fusion based on average consensus. In Fourth International Symposium on Information Processing in Sensor Networks (IPSN 2005), pp. 63–70. IEEE.
[20] Yang, Y. and Blum, R. S. (2013). Broadcast-based consensus with non-zero-mean stochastic perturbations. IEEE Trans. Inform. Theory 59, 3971–3989.