Game-Theoretic Analysis of the Hegselmann-Krause Model for Opinion Dynamics in Finite Dimensions* Seyed Rasoul Etesami, Tamer Başar Coordinated Science Laboratory, University of Illinois at Urbana-Champaign, Urbana, IL 61801 Email: (etesami1, basar1)@illinois.edu

Abstract — We consider the Hegselmann-Krause model for opinion dynamics and study the evolution of the system under various settings. We first analyze the termination time of the synchronous Hegselmann-Krause dynamics in arbitrary finite dimensions and show that the termination time in general depends only on the number of agents involved in the dynamics. To the best of our knowledge, this is the sharpest bound for the termination time of such dynamics, and it removes the dependency of the termination time on the dimension of the ambient space. This answers an open question in [1] on how to obtain a tighter upper bound for the termination time. Furthermore, we study the asynchronous Hegselmann-Krause model from a novel game-theoretic approach and show that the evolution of an asynchronous Hegselmann-Krause model is equivalent to a sequence of best response updates in a well-designed potential game. We then provide a polynomial upper bound for the expected time and expected number of switching topologies until the dynamics reach an arbitrarily small neighborhood of their equilibrium points, provided that the agents update uniformly at random. This is a step toward analysis of heterogeneous Hegselmann-Krause dynamics. Finally, we consider the heterogeneous Hegselmann-Krause dynamics and provide a necessary condition for the finite termination time of such dynamics. In particular, we sketch some future directions toward a more detailed analysis of the heterogeneous Hegselmann-Krause model.

Index Terms — Multidimensional Hegselmann-Krause model; homogeneous, heterogeneous, synchronous, asynchronous opinion dynamics; potential game; strategic equivalence; best response dynamics.

I.
INTRODUCTION

Opinion formation in social networks is an important area of research that has attracted a lot of attention in recent years in a wide range of disciplines, such as psychology, economics, political science, and electrical and computer engineering. A natural question that commonly arises in all those areas is the extent to which one can predict the outcome of the opinion formation of entities under some complex interaction process running among these social actors. Consensus problems, in which a set of agents are trying to achieve the same goal, have been addressed by many researchers, such as [2], [3], [4], [5], [6], [7], [8], [9], [10], [11], [12]. In such problems, which are still an active area of research, the goal is to achieve a certain agreement among agents. (*Research supported in part by the "Cognitive & Algorithmic Decision Making" project grant through the College of Engineering of the University of Illinois, and in part by AFOSR MURI Grant FA 9550-10-1-0573 and NSF grant CCF 11-11342.) However, there are many situations in which there is neither a desire for consensus nor any tendency for the underlying process to approach a common outcome. In fact, such situations frequently emerge in the context of political elections and product marketing, when there are multiple candidates or product choices to be selected among. Those facts have motivated researchers to study disagreement along with consensus. One of the first studies that considers disagreement beside consensus was undertaken by Friedkin and Johnsen [13], whose model was later extended by Hegselmann and Krause in [14], in the sense that [14] relaxes the assumption of time-invariant influence weights among the agents. More precisely, the Hegselmann-Krause dynamics allow the influence weights to be a function of not only time, but also the states.
It is worth noting that although such extensions make the analysis of Hegselmann-Krause dynamics mathematically much more complicated but interesting, one may argue that the assumption of influence weights depending on the evolving opinion distance (which is the case in the Hegselmann-Krause dynamics) is questionable from a practical point of view, given the literature in experimental social psychology, e.g., see [15], [16], where social psychologists have long been intrigued by the hypothesis that opinion differences reliably predict direct relations of interpersonal influence. Still, a rigorous analysis of the Hegselmann-Krause dynamics is both theoretically and practically important. The theoretical aspect is that it allows us to develop novel tools useful for the study of more complex time- and state-dependent evolutionary dynamics and to elaborate on their connections with other fields. The practical aspect is that, other than applications in the modeling of opinion dynamics, the model has applications in the robotics rendezvous problem in the plane and in space [17]. Accordingly, we consider the Hegselmann-Krause model in R^d, where d ≥ 1. In the Hegselmann-Krause model, a finite number of agents repeatedly update their opinions based on the possible interactions among them. The opinion of each agent in this model is captured by a scalar quantity in one dimension or a vector in the Euclidean space R^d, d > 1, in higher dimensions. In fact, because of the conservative nature of social entities, each agent in this model communicates only with those whose opinions are closer to his and lie within a certain level of his confidence (bound of confidence), where the distance between agents' opinions is measured by the Euclidean norm in the ambient space. Depending on whether the bound of confidence is the same for all the agents or not, one can distinguish two different types of dynamics, known as homogeneous and heterogeneous, respectively.
Moreover, the updating process of the agents may be synchronous, meaning that all the agents update simultaneously, or asynchronous, where the agents update in turn. Although at first glance the differences among these four types of dynamics may seem negligible, in fact their outcomes are substantially different, such that most of the results from one cannot be carried over to the others [18], [19], [20]. In particular, because of the extra freedom for the agents' movements in higher dimensions, analyzing such dynamics for dimensions higher than one is considerably more complex than for one dimension [21], [1], [22]. It is known that the synchronous homogeneous Hegselmann-Krause dynamics will terminate after finitely many steps [14], [18]. The same model has also been used for distributed rendezvous in a robotic network [17], [23]. In the model, depending on the initial profile and the confidence bound, the final state may or may not be a consensus. Existing studies on the behavior of the Hegselmann-Krause model in one dimension, where the agents' opinions are scalars, can be found in [24]. It was shown in [17] that the termination time of the Hegselmann-Krause dynamics in one dimension is at least Ω(n), where n is the number of agents, and at most O(n^3) [1], [25]. Moreover, the stability and the termination time of such dynamics in higher dimensions were studied in [26], [21], and the work in [21] bounds the termination time of such dynamics using the number of isolated agents through the evolution of the dynamics. In a recent work of Bhattacharyya et al. [1], a polynomial upper bound of O(n^{10} d^2) was given for such dynamics in higher dimensions, leaving the dependency of such a bound on the dimension of the ambient space as an open problem. In this work, we improve the upper bound to O(n^8) and show that the termination time is, indeed, independent of the dimension of the ambient space.
The asynchronous homogeneous Hegselmann-Krause model was considered in [27], where the authors were able to establish stability of this model using a proper quadratic comparison function when the probability of updating for each agent is uniformly bounded from below by some positive constant p > 0. In this paper, we model the evolution of such dynamics as a sequence of best response updates in a potential game and provide a polynomial upper bound for the maximum expected number of switching topologies and the expected time it takes for the dynamics to reach an arbitrarily small neighborhood of their steady state, provided that the agents update uniformly at random. We refer readers to [28] and [29] for some of the possible connections between control of distributed systems and potential games. Furthermore, the synchronous heterogeneous Hegselmann-Krause model was studied in [19], and recently in [20], where the authors conjecture that the number of switching topologies throughout the dynamics must be finite. In fact, our analysis of the asynchronous homogeneous Hegselmann-Krause model here is a step toward a more detailed analysis of the heterogeneous model using an appropriate potential function over directed graphs [30], [4]. Furthermore, numerous simulation studies have been conducted to explore the evolutionary properties of the Hegselmann-Krause dynamics under various settings. For more information, we refer the reader to [18], [19], [14], [31].

This paper is organized as follows. In Section II, we review the Hegselmann-Krause dynamics under various settings. In Section III, we develop some preliminary results and mention some existing results for later use. In Section IV, we consider the synchronous Hegselmann-Krause model in arbitrary finite dimensions and provide a polynomial upper bound for the termination time, independent of the dimension of the opinion space.
That not only improves on the previous bounds, but also removes the dependency of the termination time on the dimension of the ambient space. In Section V, we model the asynchronous Hegselmann-Krause model as a potential game and provide its corresponding potential function. Using that function, we bound the expected number of switching topologies of the network when the agents update their opinions uniformly at random. Moreover, we provide an upper bound for the expected number of steps until the agents reach a δ-neighborhood of their steady state for some δ > 0. We also directly show strategic equivalence of the game to a team problem. In Section VI, we turn our attention to the heterogeneous Hegselmann-Krause model and provide a necessary condition for such dynamics to terminate in finite time. In Section VII, using the tools developed in this work, we discuss some of the possible future directions toward a more detailed analysis of heterogeneous Hegselmann-Krause dynamics. We conclude the paper with the final remarks of Section VIII.

Notations: For a positive integer n, we let [n] := {1, 2, ..., n}. For a vector v ∈ R^n, we let v_i be the i-th entry of v. We say that v is stochastic if v_i ≥ 0 for all i ∈ [n] and Σ_{i=1}^n v_i = 1. Similarly, for a matrix A, we let A_{ij} be the ij-th entry of A. We say that A is stochastic (or row-stochastic) if each of its rows is stochastic, and we let min^+ A = min_{i,j} {A_{ij} | A_{ij} > 0}. We use A_i to denote the i-th row of A. We use A′ to denote the transpose of a matrix A, and ‖v‖ to denote the Euclidean norm of a vector v. We let the consensus vector 1 be a vector of unit size (‖1‖ = 1) with equal entries. For a matrix A with real eigenvalues, we let λ_2(A) be its second smallest eigenvalue. A scrambling matrix is a stochastic matrix such that the inner product of each pair of its rows is positive.
For a vector y, we use conv(y) to denote the convex hull of its components and define diam(conv(y)) = max_{p,q ∈ conv(y)} ‖p − q‖. We define the distance between two sets P, Q ⊆ R^n to be dist(P, Q) = inf_{p ∈ P, q ∈ Q} ‖p − q‖. For a graph G, we let A_G be its adjacency matrix and D_G be a diagonal matrix whose diagonal entries are equal to the degrees of the nodes in the graph. Moreover, we use L_G = D_G − A_G to denote the Laplacian of that graph. Finally, we use |S| to denote the cardinality of a finite set S.

II. HEGSELMANN-KRAUSE DYNAMICS

In this section we describe the discrete-time Hegselmann-Krause opinion dynamics model as introduced in [14]. Let us assume that we have a set of n agents [n] = {1, ..., n} and we want to model the interactions among their opinions. It is assumed that at each time t = 0, 1, 2, ..., the opinion of agent i ∈ [n] can be represented by a vector x_i(t) ∈ R^d for some d ≥ 1. According to that model, the evolution of opinion vectors can be modeled by the following discrete-time dynamics:

x(t + 1) = A(t, x(t), ε) x(t),   (1)

where A(t, x(t), ε) is an n × n row-stochastic matrix and x(t) is the n × d matrix whose i-th row contains the opinion of the i-th agent at time t = 0, 1, 2, ..., i.e., it is equal to x_i(t). We refer to x(t) as the opinion profile at time t. The entries of A(t, x(t), ε) are functions of the time step t, the current profile x(t), the confidence vector ε = (ε_1, ε_2, ..., ε_n) > 0, and an updating scheme. The parameters ε_i, i ∈ [n], are referred to as the confidence bounds. In the homogeneous case of the dynamics, we assume that ε_i = ε, ∀i ∈ [n], for some ε > 0, while in the heterogeneous model, different agents may have different bounds of confidence. Our focus in this paper is mainly on the homogeneous model, but we also analyze the heterogeneous case toward the end, in Section VI.
For the sake of simplicity of notation and for a fixed x(0) ∈ R^{n×d}, we drop the dependency of A(t, x(t), ε) on x(t) and ε and simply write A(t). In what follows, we distinguish two different versions of the Hegselmann-Krause dynamics.

A. Synchronous Hegselmann-Krause Model

In the synchronous Hegselmann-Krause model, each agent i updates its value at time t = 0, 1, 2, ..., by averaging its own value and the values of all the other agents that are in its ε-neighborhood at time t. To be more specific, given a profile x(t) at time t, define the matrix A(t) in (1) by:

A_{ij}(t) = 1/|N_i(t)| if j ∈ N_i(t), and A_{ij}(t) = 0 otherwise,   (2)

where N_i(t) is the set of agents in the ε-neighborhood of agent i, i.e., N_i(t) = {j ∈ [n] | ‖x_i(t) − x_j(t)‖ ≤ ε}.

B. Asynchronous Hegselmann-Krause Model

In the asynchronous case, at each time instant t = 0, 1, 2, ..., only one agent, namely i*, updates its value to the average of its neighbors, while the others remain unchanged. Selection of such an agent may be at random or based on some predefined order. In this paper, we assume that the agents are chosen uniformly at random to update their opinions. In that case, the updating matrix A(t, x(t), ε) given in (1) can be written as

A_{ij}(t) = 1/|N_{i*}(t)| if i = i* and j ∈ N_{i*}(t); A_{ij}(t) = 1 if i = j ≠ i*; and A_{ij}(t) = 0 otherwise,   (3)

where we have assumed that agent i* updates its opinion at time t.

Remark 1: In the heterogeneous Hegselmann-Krause model, each agent i is able to observe only its ε_i-neighborhood, and we have N_i(t, ε_i) = {j ∈ [n] | ‖x_i(t) − x_j(t)‖ ≤ ε_i}.

Remark 2: There are other types of Hegselmann-Krause dynamics where the evolution of the dynamics is subject to noise or perturbation in the system, or where the agents are truth seekers in the sense that they are attracted to the truth by a positive amount [32], [33].
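The update rules (2) and (3) translate directly into code. The following is a minimal sketch (not from the paper; the function names and the NumPy representation of the profile as an n × d array are our own choices), where `sync_step` applies eq. (2) to all agents at once and `async_step` applies eq. (3) to a single agent `i_star`.

```python
import numpy as np

def neighbors(x, i, eps):
    """Indices j with ||x_i - x_j|| <= eps; always contains i itself."""
    return np.where(np.linalg.norm(x - x[i], axis=1) <= eps)[0]

def sync_step(x, eps):
    """One synchronous update, eq. (2): every agent simultaneously moves
    to the average opinion of its eps-neighborhood."""
    return np.array([x[neighbors(x, i, eps)].mean(axis=0)
                     for i in range(len(x))])

def async_step(x, i_star, eps):
    """One asynchronous update, eq. (3): only agent i_star moves,
    everyone else keeps its current opinion."""
    y = x.copy()
    y[i_star] = x[neighbors(x, i_star, eps)].mean(axis=0)
    return y
```

For instance, with three scalar opinions 0, 0.5, 2 and ε = 1, the first two agents see each other but not the third, so a synchronous step sends both to 0.25 while the isolated agent stays at 2.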
Moreover, the continuous version of the Hegselmann-Krause model, in which a continuum of opinions is involved in the dynamics, has been considered in [34], [35], [36].

Remark 3: As can be seen from the above formulations, the Hegselmann-Krause dynamics do not preserve the opinion average of the agents, and the evolution of the system strongly depends on the history and the states, which may switch between different topologies. In fact, it is not possible to determine the topology of the network at the current time unless one can observe the state of the system in the previous time step. Those facts make the analysis of such dynamics much more complicated than the analysis of average-preserving dynamics with fixed topology.

III. PRELIMINARY RESULTS

In this section, we briefly discuss some preliminary results and provide some definitions that will be used to prove our main results.

Lemma 1 (Perron-Frobenius for Laplacians [37]): Let L be a matrix with non-positive off-diagonal entries such that the graph of the non-zero off-diagonal entries is connected. Then the smallest eigenvalue has multiplicity 1, and the corresponding eigenvector is strictly positive.

Next, we state Cheeger's inequality, which relates the spectral gap of the Laplacian matrix to the expansion of its corresponding graph.

Lemma 2 (Cheeger's Inequality [38]): Let G = (V, E) be an undirected graph with Laplacian matrix L. Moreover, for a subset of vertices S ⊆ V, let e(S, S^c) denote the number of edges with one vertex in S and one vertex in its complement S^c. Defining the cut ratio Φ(S) = e(S, S^c)/(|S||S^c|) and the isoperimetric number of G by Φ = min_{S⊂V} Φ(S), we have Φ²/(2 d_max) ≤ λ_2(L) ≤ 2Φ, where d_max denotes the maximum degree of the graph G and λ_2(L) is the second smallest eigenvalue of the Laplacian L.

Lemma 3 (Courant-Fischer Formula [39]): Let A be an n × n symmetric matrix with eigenvalues λ_1 ≤ λ_2 ≤ ...
≤ λ_n and corresponding eigenvectors v_1, ..., v_n. Moreover, for 1 ≤ k ≤ n, let S_k denote the span of v_1, ..., v_k (with S_0 = {0}), and let S_k^⊥ denote the orthogonal complement of S_k, i.e., S_k^⊥ = {v ∈ R^n | v′u = 0, ∀u ∈ S_k}. Then λ_k = min_{‖x‖=1, x ∈ S_{k−1}^⊥} x′Ax.

Corollary 1 (Rayleigh Quotient [39]): Let G = (V, E) be a graph and L be the Laplacian of G. We already know from the Perron-Frobenius lemma (Lemma 1) that the smallest eigenvalue is λ_1 = 0 with eigenvector v_1 = 1. By the Courant-Fischer formula, we get λ_2(L) = min_{‖x‖=1, x⊥1} x′Lx.

Lemma 4: Suppose C is a stochastic matrix and y = Cx; then diam(conv(y)) ≤ (1 − μ(C)) diam(conv(x)), where μ(C) = min_{i≠j} (Σ_{k=1}^n min(c_ik, c_jk)). In particular, when C is a scrambling matrix with min^+ C ≥ δ, then μ(C) ≥ δ, and hence diam(conv(y)) ≤ (1 − δ) diam(conv(x)).

Proof: A short proof of the above lemma can be found in [40].

In fact, one of the fundamental properties of the synchronous Hegselmann-Krause dynamics that will be used extensively throughout this paper is that the dynamics admit a quadratic Lyapunov function [35], [41], [42].

IV. SYNCHRONOUS MULTIDIMENSIONAL HEGSELMANN-KRAUSE DYNAMICS

In this section we consider the homogeneous synchronous Hegselmann-Krause model as introduced in (2). We start our discussion by introducing some notation that will be used throughout this section.

Definition 1: We say that a time instance t is a merging time for the dynamics if two agents with different opinions move to the same place.

Based on that definition, we can see that if two agents i and j merge at time instant t, then they will have the same opinion at time t + 1 and onward, while their common opinion may vary with time. Moreover, prior to the termination time of the dynamics, we cannot have more than n merging times, since there are n agents in the model.
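Lemma 4 is straightforward to sanity-check numerically. The snippet below (an illustrative check of ours, not part of the paper) draws a random row-stochastic matrix and a vector of scalar opinions, so that the diameter of the convex hull of the components is simply max − min, and verifies the contraction bound.

```python
import numpy as np

def mu(C):
    """Scrambling coefficient mu(C) = min_{i != j} sum_k min(c_ik, c_jk)."""
    n = C.shape[0]
    return min(np.minimum(C[i], C[j]).sum()
               for i in range(n) for j in range(n) if i != j)

rng = np.random.default_rng(0)
C = rng.random((5, 5))
C /= C.sum(axis=1, keepdims=True)   # make C row-stochastic (here also scrambling)
x = rng.random(5)                   # scalar opinions: diam(conv(x)) = max - min
y = C @ x
# Lemma 4: diam(conv(y)) <= (1 - mu(C)) * diam(conv(x))
assert y.max() - y.min() <= (1 - mu(C)) * (x.max() - x.min()) + 1e-12
```

Since every entry of this random C is positive, C is scrambling and μ(C) > 0, so the diameter strictly contracts.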
In what follows, we define the notions of termination time and communication graphs.

Definition 2: For every set of n ≥ 1 agents, we define the termination time T_n of the synchronous Hegselmann-Krause dynamics to be the maximum number of iterations before the steady state is reached, over all initial profiles.

Definition 3: Given an opinion profile at time t, we associate with that opinion profile an undirected graph G(t) = ([n], E(t)), where the edge (i, j) ∈ E(t) if and only if i ∈ N_j(t). We refer to such a graph as the communication graph or communication topology of the dynamics at time step t. Furthermore, a connected component of the communication graph is called δ-trivial for some δ > 0 if all the agents in that component lie within a distance of at most δ from each other.

Remark 4: From Definition 3, it is not hard to see that for any δ < ε, a δ-trivial component forms a complete component (clique) in the communication topology of the dynamics. In particular, if there is such a δ-trivial component at some time t, then in the next time step, all the agents in that component will merge to the same opinion.

In our earlier work [21], we were able to analyze the termination time of the Hegselmann-Krause dynamics based on the number of isolated agents throughout the dynamics.

Theorem 1: For the termination time T_n of the synchronous Hegselmann-Krause dynamics in R^d, we have Σ_{t=0}^{T_n} (1/2)^{|S_0(t)|} < 8n^6, where S_0(t) denotes the set of agents (singletons) at time t who do not observe any opinions other than their own inside their neighborhood, i.e., i ∈ S_0(t) if and only if N_i(t) = {i}.

Proof: A proof can be found in [21].

As a particular consequence of Theorem 1, if for a particular instance of the dynamics the agents maintain connectivity throughout the dynamics, we conclude that T_n = O(n^6).
In fact, Theorem 1 suggests that the termination time of the dynamics greatly depends on the connectivity of the underlying graph and should be independent of the dimension of the opinions (d). In this paper, we resolve that problem and show that, indeed, the termination time is independent of the dimension. That answers one of the open questions raised in [1] related to the existence of a tighter polynomial upper bound independent of the dimension of the opinion space. For that purpose, we utilize a quadratic Lyapunov function that was introduced earlier in [42].

Lemma 5: Let V(t) = Σ_{i,j ∈ [n]} min{‖x_i(t) − x_j(t)‖², ε²}. Then V is non-increasing along the trajectory of the synchronous Hegselmann-Krause dynamics. In particular, we have V(t) − V(t+1) ≥ 4 Σ_{ℓ=1}^{n} ‖x_ℓ(t+1) − x_ℓ(t)‖².

Proof: A proof can be found in [42].

In the following theorem, we provide a lower bound for the amount of decrease of the above Lyapunov function as long as there exists one non-ε-trivial component in the dynamics.

Theorem 2: The termination time of the synchronous Hegselmann-Krause dynamics in arbitrary finite dimensions is independent of the dimension and is bounded from above by T_n ≤ n^8 + n.

Proof: Let us assume that the opinion profile x(t) = (x_1(t), x_2(t), ..., x_n(t))′ is not an equilibrium point of the dynamics and that time t is not a merging time. Therefore, without loss of generality, we may assume that the communication graph at time t is connected with a non-ε-trivial component; otherwise, we can restrict ourselves to one of the non-ε-trivial components. (Note that such a non-ε-trivial component exists because of Remark 4 and the fact that t is not a merging time.) By projecting each individual column of x(t) onto the consensus vector 1, we can write

x(t) = [c_1 1 | c_2 1 | ... | c_d 1] + [c̄_1 1^{⊥(1)} | c̄_2 1^{⊥(2)} | ... | c̄_d 1^{⊥(d)}],   (4)

where 1^{⊥(k)}, k = 1, ...
, d, are column vectors of unit size that are orthogonal to the consensus vector, i.e., 1′1^{⊥(k)} = 0, and c_k, c̄_k, k = 1, ..., d, are the coefficients of the projection of the k-th column of x(t) on 1 and 1^{⊥(k)}, respectively. Now we claim that Σ_{k=1}^d c̄_k² > ε²/4. Otherwise, we show that every two agents x_i(t) and x_j(t) must lie within a distance of at most ε from each other, which contradicts the assumption that the component is non-ε-trivial. In fact, if Σ_{k=1}^d c̄_k² ≤ ε²/4, we can write

‖x_i(t) − x_j(t)‖² = Σ_{k=1}^d c̄_k² (1^{⊥(k)}_i − 1^{⊥(k)}_j)² ≤ 2 Σ_{k=1}^d c̄_k² ((1^{⊥(k)}_i)² + (1^{⊥(k)}_j)²) ≤ 2 Σ_{k=1}^d c̄_k² (‖1^{⊥(k)}‖² + ‖1^{⊥(k)}‖²) = 4 Σ_{k=1}^d c̄_k² ≤ ε²,   (5)

where the first equality is due to the decomposition given in (4), and the second equality is valid since the vectors 1^{⊥(k)}, k = 1, ..., d, are of unit size. The contradiction shows that Σ_{k=1}^d c̄_k² > ε²/4. Next, we notice that x(t+1) = A(t)x(t), where A(t) is the stochastic matrix defined in (2). Using (4) we can write

x(t) − x(t+1) = (I − A(t))x(t) = [c̄_1 (I − A(t))1^{⊥(1)} | ... | c̄_d (I − A(t))1^{⊥(d)}],   (6)

where the equality holds since 1 belongs to the null space of I − A(t). In particular, we have

Σ_{ℓ=1}^n ‖x_ℓ(t) − x_ℓ(t+1)‖² = Σ_{ℓ=1}^n Σ_{k=1}^d (x_{ℓk}(t) − x_{ℓk}(t+1))² = Σ_{k=1}^d (Σ_{ℓ=1}^n (x_{ℓk}(t) − x_{ℓk}(t+1))²) = Σ_{k=1}^d c̄_k² ‖(I − A(t))1^{⊥(k)}‖²,   (7)

where in the last equality we have used (6). Let us define Q(t) = (I − A(t))′(I − A(t)). It is not hard to see that Q(t) is a positive semidefinite matrix. Moreover, 0 is an eigenvalue of Q(t) with multiplicity one, corresponding to the eigenvector 1. To see that, let us assume that there exists another vector v such that Q(t)v = 0.
Multiplying that equality from the left by v′, we get ‖(I − A(t))v‖² = 0, and hence (I − A(t))v = 0. Since, by the Perron-Frobenius lemma (Lemma 1), 1 is the only unit eigenvector of I − A(t) corresponding to the eigenvalue 0, we conclude that v = α1 for some α ∈ R. In other words, 1 is the only unit eigenvector of Q(t) corresponding to the eigenvalue 0. Moreover, Q(t) is a symmetric real-valued matrix and, hence, diagonalizable, where 1 is its only eigenvector corresponding to the eigenvalue 0. That shows that the multiplicity of the eigenvalue 0 in Q(t) is exactly one. Let us use λ_2(Q(t)) to denote the second smallest eigenvalue of Q(t). By the above argument, it must be strictly positive. Using the Courant-Fischer lemma (Lemma 3), we get λ_2(Q(t)) = min_{‖y‖=1, y⊥1} y′Q(t)y. Now for every k = 1, ..., d, we can write

‖(I − A(t))1^{⊥(k)}‖² = (1^{⊥(k)})′(I − A(t))′(I − A(t))1^{⊥(k)} = (1^{⊥(k)})′Q(t)1^{⊥(k)} ≥ min_{‖y‖=1, y⊥1} y′Q(t)y = λ_2(Q(t)),   (8)

where the inequality holds since 1′1^{⊥(k)} = 0 and ‖1^{⊥(k)}‖ = 1. Substituting (8) in (7), we get

Σ_{ℓ=1}^n ‖x_ℓ(t) − x_ℓ(t+1)‖² ≥ Σ_{k=1}^d λ_2(Q(t)) c̄_k² ≥ λ_2(Q(t)) ε²/4.   (9)

Henceforth, we bound λ_2(Q(t)) from below by a function of n. For that purpose, let D(t) = diag(1 + d_1(t), 1 + d_2(t), ..., 1 + d_n(t)), i.e., D(t) is a diagonal matrix with D_{kk}(t) = 1 + d_k(t), k ∈ [n], where d_k(t) is the degree of agent k in the communication graph. Moreover, let L(t) denote the Laplacian matrix of the communication graph at time step t. By entrywise comparison of both sides, it is not hard to see that I − A(t) = D(t)^{-1} L(t). Now we can write

λ_2(Q(t)) = λ_2((D(t)^{-1}L(t))′(D(t)^{-1}L(t))) = λ_2(L(t)D(t)^{-2}L(t)),   (10)

where the last equality is due to the fact that L(t) and D(t) are both symmetric matrices.
Next, using the same argument as above, we notice that since L(t)D(t)^{-2}L(t) is a symmetric, real-valued matrix, it is diagonalizable, and its zero eigenvalue, corresponding to the eigenvector 1, has multiplicity one. To see that, let us assume that there is another vector u such that L(t)D(t)^{-2}L(t)u = 0; then we must have

0 = u′L(t)D(t)^{-2}L(t)u = Σ_{i=1}^n (1/(1 + d_i(t)))² (L(t)u)_i²,

which results in L(t)u = 0, or, equivalently, u being a scalar multiple of the consensus vector 1. Now, using the Courant-Fischer lemma, we can write

λ_2(L(t)D(t)^{-2}L(t)) = min_{‖y‖=1, y⊥1} y′L(t)D(t)^{-2}L(t)y ≥ min_{‖y‖=1, y⊥1} y′L(t)((1/n²)I)L(t)y = λ_2(L(t)((1/n²)I)L(t)) = (1/n²) λ_2(L²(t)) = (1/n²) λ_2²(L(t)),   (11)

where the last equality is due to the fact that L(t) is diagonalizable (it is a symmetric, real-valued matrix) with an eigenvalue 0 of multiplicity 1. Substituting (11) in (10), we get λ_2(Q(t)) ≥ (1/n²) λ_2²(L(t)). Now, using Cheeger's inequality (Lemma 2), and since L(t) is the Laplacian of a connected graph, we can bound λ_2(L(t)) from below by 2/n², which follows from the isoperimetric number of the communication graph for the minimum cut set. Putting it all together, we have

λ_2(Q(t)) ≥ (1/n²) λ_2²(L(t)) ≥ 4/n^6.   (12)

Finally, combining (12) with (9), we conclude that the amount of decrease in the quadratic Lyapunov function when there is a non-ε-trivial component is at least ε²/n^6. In other words, if t is not a merging time, we have V(t) − V(t+1) ≥ ε²/n^6. Since, by definition, V(·) is always a nonnegative quantity with V(0) ≤ ε²n², and the number of merging times can be at most n, we conclude that the termination time is bounded from above by n^8 + n.
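Both ingredients of the argument above can be observed numerically: Lemma 5's per-step decrease of V, and the fact that actual termination times are far below the worst-case bound n^8 + n. The sketch below (our own illustrative experiment, with a 1e-9 numerical tolerance standing in for exact termination) runs the synchronous dynamics on a random profile and checks both.

```python
import numpy as np

def V(x, eps):
    """Quadratic Lyapunov function of Lemma 5 (summed over ordered pairs)."""
    d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(axis=2)
    return np.minimum(d2, eps ** 2).sum()

def sync_step(x, eps):
    """One synchronous Hegselmann-Krause update, eq. (2)."""
    return np.array([x[np.linalg.norm(x - x[i], axis=1) <= eps].mean(axis=0)
                     for i in range(len(x))])

rng = np.random.default_rng(2)
n, eps = 20, 0.25
x = rng.random((n, 2))                     # 20 agents in R^2
steps = 0
while steps < 10_000:
    y = sync_step(x, eps)
    # Lemma 5: decrease of V dominates 4 * (sum of squared displacements)
    assert V(x, eps) - V(y, eps) >= 4 * ((y - x) ** 2).sum() - 1e-9
    if np.max(np.abs(y - x)) < 1e-9:       # numerically at a steady state
        break
    x, steps = y, steps + 1
assert steps < n ** 8 + n                  # consistent with Theorem 2's bound
```

In such experiments the dynamics typically settle within tens of iterations, suggesting the n^8 + n bound is far from tight in practice.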
V. ASYNCHRONOUS HEGSELMANN-KRAUSE DYNAMICS

In this section, we consider the asynchronous Hegselmann-Krause dynamics as introduced in Section II. We first notice that such dynamics do not necessarily reach their steady state in finite time. The simplest case one can consider is when there are only two agents on the real line, separated by a distance less than the confidence bound ε. In such a case, no matter what the order of the updating process is, the agents will never arrive at the same opinion or disappear from each other's neighborhood. The two agents will get closer and closer and asymptotically converge to some steady state. That justifies the asymptotic analysis of the asynchronous Hegselmann-Krause dynamics, which we will consider in this section. In fact, one can easily show that unless the dynamics start from a steady state, they will never reach a steady state in finite time under any asynchronous updating scheme. The reason is that unless the dynamics start from a steady state, at any time instant t there are at least two agents i and j who are connected (j ∈ N_i(t)), and updating either of them does not bring them to the same opinion. Furthermore, unlike the synchronous case in one dimension, where the order of the agents' opinions is preserved throughout the dynamics, in the asynchronous case the order of the agents' opinions may or may not change, depending on the updating scheme. In this section, we consider a uniformly randomized updating scheme for the agents and analyze the asymptotic convergence of such dynamics to their steady state. But before we start, we need the following two definitions.

Definition 4: We call an updating process a uniform updating scheme for the asynchronous Hegselmann-Krause model if at each time instant t = 0, 1, ..., only one agent is chosen, independently and with probability 1/n, from the set of all agents [n], and updates its opinion.
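The two-agent example above, run under Definition 4's uniform scheme, can be sketched in a few lines (our own illustration; the variable names are ours). Whichever agent is drawn, it moves to the midpoint, so the gap halves at every step but never vanishes after finitely many updates.

```python
import numpy as np

rng = np.random.default_rng(4)
x, eps = np.array([[0.0], [0.5]]), 1.0     # two agents within the confidence bound
for t in range(20):
    i = int(rng.integers(2))               # uniform updating scheme (Definition 4)
    nbrs = np.linalg.norm(x - x[i], axis=1) <= eps
    x[i] = x[nbrs].mean(axis=0)            # asynchronous update, eq. (3)
gap = abs(x[0, 0] - x[1, 0])
assert 0.0 < gap < 1e-6                    # shrinks geometrically, never exactly zero
```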
Definition 5: Given δ > 0, we say that an opinion profile x(t) is a δ-equilibrium if the set of agents partitions into different sets (clusters) {C_1, C_2, ..., C_m} for some m ∈ N such that dist(conv(C_i), conv(C_j)) > ε, ∀i ≠ j, and diam(conv(C_k)) < δ, ∀k = 1, ..., m.

In fact, Definition 5 simply states that a profile x(t) is a δ-equilibrium if the opinions of the agents at time t form some small groups of diameter at most δ that are separated from each other by a distance of at least ε. Next, we introduce a network formation game that can explain the behavior of the agents in the asynchronous Hegselmann-Krause dynamics.

A. Network Formation Game

Let us consider a set of n road constructors (players) in R^d who are funded by the government to construct roads. The budget that the government allocates to each player at the beginning is a fixed amount equal to $(n−1)ε² ($ε² of support for each possible road that one player can construct). Ideally, the government would like all n(n−1)/2 possible roads to be constructed by the players. To that end, and in order to create an incentive for the players to build as many roads as they can, the government will punish each player by $ε² if he or she decides not to construct a road (i.e., the government will take that player's supporting $ε² back). On the other hand, each player has the ability to construct roads only within an ε-neighborhood of himself or herself. (One can assume that the players do not take risks and do not want to spend money beyond the support they received from the government per road.) In such a game, players act myopically, trying to build roads with those who are most beneficial to them. If two players who are located at x, y ∈ R^d build a road together, the cost to each of them is naturally proportional to their distance from each other and is equal to ‖x − y‖². (The farther the players are from each other, the more costly it is to make a road.)
Therefore, in that setting, the payoff of the ith player, i ∈ [n], at location x_i can be formulated as

U_i(x_i, x_{−i}) = (n−1)ε² − Σ_{j=1}^{n} min{‖x_i − x_j‖², ε²},   (13)

where x_{−i} denotes the actions of all players except the ith one. In such a game, we assume that agents act rationally and are able to compute and play their best response at time steps t = 0, 1, 2, .... Based on the above scenario, we have the following lemma.

Lemma 6: The sequence of the players' best responses in the network formation game under some specific updating scheme is equivalent to the evolution of the asynchronous Hegselmann-Krause dynamics under the same updating scheme.

Proof: Let us assume that at time step t the ith agent updates his location in order to increase his payoff. If the current locations of the players are denoted by x_1(t), x_2(t), ..., x_n(t), the position of agent i at the next time step will be

x_i(t+1) = argmin_x Σ_{j=1}^{n} min{‖x − x_j(t)‖², ε²} = argmin_x Σ_{j∈N_i(t)} ‖x − x_j(t)‖² = (1/|N_i(t)|) Σ_{j∈N_i(t)} x_j(t).

That establishes the equivalence between the best response dynamics and the updating process in the asynchronous Hegselmann-Krause model.

Proposition 3: An action profile (x*_1, x*_2, ..., x*_n) is a Nash equilibrium of the network formation game if and only if it is a steady state of the asynchronous Hegselmann-Krause dynamics.

Proof: Given an arbitrary Nash equilibrium (x*_1, x*_2, ..., x*_n), we show that it is a steady state of the asynchronous Hegselmann-Krause dynamics by showing that for all i, j ∈ [n] we have either x*_i = x*_j or ‖x*_i − x*_j‖ > ε. To show this by contradiction, let us assume that there are two players at locations x*_p ≠ x*_q such that ‖x*_p − x*_q‖ ≤ ε. Let L = {x*_p, x*_q, x*_{ℓ_1}, . . .
, x*_{ℓ_s}} denote the set of all the players' actions at this equilibrium that lie in the same connected component of the communication graph as x*_p and x*_q. Denoting one of the extreme points of conv(L) by x*_ℓ and using Lemma 6, it is not hard to see that player ℓ's action is not his best response, i.e., (1/|N*_ℓ|) Σ_{j∈N*_ℓ} x*_j ≠ x*_ℓ, where N*_ℓ = {j : ‖x*_j − x*_ℓ‖ ≤ ε}. This contradicts the assumption that (x*_1, x*_2, ..., x*_n) is a Nash equilibrium. Showing that every steady state of the asynchronous Hegselmann-Krause dynamics is a Nash equilibrium of the network formation game is quite straightforward.

Next we show that the above network formation game is, indeed, a potential game, with the sum of the utilities as a potential function. A further result (Corollary 2) then shows directly that it is strategically equivalent to a team problem.

Theorem 4: The network formation game is a potential game with potential function U(x_1, x_2, ..., x_n) = Σ_{i=1}^{n} U_i(x_i, x_{−i}). In particular, we have

U(x_i, x_{−i}) − U(x'_i, x_{−i}) ≤ −2|N_i| ‖x_i − x'_i‖²,

where x'_i = (1/|N_i|) Σ_{j∈N_i} x_j denotes the deviation of the ith player from action x_i to his best response, and x_{−i} denotes the actions of all players except the ith one.

Proof: Let N_i and N'_i denote the set of neighbors of player i before and after deviating, respectively.
By the definition of the players' payoff functions (13), we can write

U(x_i, x_{−i}) − U(x'_i, x_{−i}) = Σ_{j∈N_i∪N'_i} (U_j(x_i, x_{−i}) − U_j(x'_i, x_{−i}))
= U_i(x_i, x_{−i}) − U_i(x'_i, x_{−i}) + Σ_{j∈N_i∩N'_i} (U_j(x_i, x_{−i}) − U_j(x'_i, x_{−i})) + Σ_{j∈N_i\(N_i∩N'_i)} (U_j(x_i, x_{−i}) − U_j(x'_i, x_{−i})) + Σ_{j∈N'_i\(N_i∩N'_i)} (U_j(x_i, x_{−i}) − U_j(x'_i, x_{−i})),   (14)

where the first equality is due to the fact that the utility of the players who observe neither x_i nor x'_i does not change. Next, we compute each of the summands in the above expression. Note that only the action of player i changes, from x_i to x'_i, while all other actions remain unchanged (Figure 1). We can write

U_j(x_i, x_{−i}) − U_j(x'_i, x_{−i}) = ‖x_j − x'_i‖² − ‖x_j − x_i‖²,  j ∈ N_i∩N'_i,
U_j(x_i, x_{−i}) − U_j(x'_i, x_{−i}) = ε² − ‖x_j − x_i‖²,  j ∈ N_i\(N_i∩N'_i),
U_j(x_i, x_{−i}) − U_j(x'_i, x_{−i}) = ‖x_j − x'_i‖² − ε²,  j ∈ N'_i\(N_i∩N'_i).   (15)

Fig. 1. Deviation of the ith player by updating to his best response x'_i.

The reason for the first equality in (15) is that after the ith player deviates, every agent j ∈ N_i∩N'_i still holds his connection with i, and hence, by the definition of the payoff function (13), his payoff changes by ‖x_j − x'_i‖² − ‖x_j − x_i‖². (Note that all players except the ith one remain fixed.) Similarly, every player j ∈ N_i\(N_i∩N'_i) was connected to the ith player before the deviation but loses that link afterwards (since agent i moves away from him in going from x_i to x'_i, they both prefer to stop building the road and each pay $ε² to the government). Therefore, the change in the jth player's payoff is equal to ε² − ‖x_j − x_i‖². In a similar way, one can observe that the third equality in (15) holds.
By the same line of argument and because of symmetry, the change in the ith player's payoff equals the sum of all the terms in (15) over j ∈ N_i∪N'_i. In fact, we can write

U_i(x_i, x_{−i}) − U_i(x'_i, x_{−i})
= Σ_{j∈N_i∩N'_i} (‖x_j − x'_i‖² − ‖x_j − x_i‖²) + Σ_{j∈N_i\(N_i∩N'_i)} (ε² − ‖x_j − x_i‖²) + Σ_{j∈N'_i\(N_i∩N'_i)} (‖x_j − x'_i‖² − ε²)
= (|N_i| − |N_i∩N'_i|)ε² − (|N'_i| − |N_i∩N'_i|)ε² + Σ_{j∈N'_i} ‖x_j − x'_i‖² − Σ_{j∈N_i} ‖x_j − x_i‖²
≤ Σ_{j∈N_i\(N_i∩N'_i)} ‖x_j − x'_i‖² − Σ_{j∈N'_i\(N_i∩N'_i)} ‖x_j − x'_i‖² + Σ_{j∈N'_i} ‖x_j − x'_i‖² − Σ_{j∈N_i} ‖x_j − x_i‖²
= Σ_{j∈N_i} ‖x_j − x'_i‖² − Σ_{j∈N_i} ‖x_j − x_i‖²,   (16)

where in the inequality we have used the facts that

(|N_i| − |N_i∩N'_i|)ε² ≤ Σ_{j∈N_i\(N_i∩N'_i)} ‖x_j − x'_i‖²,
(|N'_i| − |N_i∩N'_i|)ε² ≥ Σ_{j∈N'_i\(N_i∩N'_i)} ‖x_j − x'_i‖².

(Note that ‖x_j − x'_i‖² ≥ ε² if j ∈ N_i\(N_i∩N'_i), and ‖x_j − x'_i‖² ≤ ε² if j ∈ N'_i\(N_i∩N'_i).) Substituting (15) and (16) in (14), we get

U(x_i, x_{−i}) − U(x'_i, x_{−i}) ≤ 2[Σ_{j∈N_i} ‖x_j − x'_i‖² − Σ_{j∈N_i} ‖x_j − x_i‖²] = −2|N_i| ‖x_i − x'_i‖²,

where the last equality follows by substituting x'_i = (1/|N_i|) Σ_{j∈N_i} x_j, since player i deviates to his best response (Lemma 6).

Corollary 2: The network formation game is strategically equivalent to a team problem.

Proof: For any arbitrary player i ∈ [n], let us define

β(x_{−i}) = (n−1)(n−2)ε² − Σ_{r,s∈[n]\{i}, r≠s} min{‖x_r − x_s‖², ε²}.

Note that β(x_{−i}) depends on the actions of all the players except the ith one. By the definition of U(x_1, ..., x_n) = Σ_{k=1}^{n} U_k(x_k, x_{−k}), we can write

2U_i(x_i, x_{−i}) + β(x_{−i}) = U(x_1, x_2, ..., x_n).
This shows that the network formation game is essentially a team problem, in the sense that every Nash equilibrium of the game is a person-by-person optimal solution for the team, and vice versa. More details on such strategic equivalence can be found in [43].

Now we are ready to provide an upper bound on the expected number of steps until the asynchronous Hegselmann-Krause dynamics with a uniform updating scheme reach a δ-equilibrium.

Theorem 5: The expected number of steps until the agents in the asynchronous Hegselmann-Krause dynamics with a uniform updating scheme reach a δ-equilibrium is bounded from above by 2n⁹(ε/δ)².

Proof: We evaluate the expected increase of the potential function given in Theorem 4. Since each player is chosen independently and with probability 1/n, we have

E[U(t+1) − U(t)] = Σ_{i=1}^{n} (1/n) E[U(t+1) − U(t) | i updates] ≥ 2 Σ_{i=1}^{n} (|N_i(t)|/n) ‖x_i(t) − x_i(t+1)‖² ≥ (2/n) Σ_{i=1}^{n} ‖x_i(t) − x_i(t+1)‖²,   (17)

where in the first inequality we have used the result of Theorem 4. Now, using the result of Theorem 1 and the same argument as in the derivation of (5), we know that as long as there is a non-δ-trivial component, we must have Σ_{k=1}^{d} c̄_k ≥ δ²/4, and therefore Σ_{i=1}^{n} ‖x_i(t) − x_i(t+1)‖² ≥ δ²/(4n⁶). Moreover, since U(τ) < n²ε², we conclude that the expected number of times that nontrivial components of diameter larger than δ > 0 can emerge is bounded from above by 2n⁹(ε/δ)².

In fact, in the case of the scalar asynchronous Hegselmann-Krause dynamics, one can obtain a sharper bound, which we state in the following lemma.

Lemma 7: The expected number of steps until the scalar asynchronous Hegselmann-Krause dynamics reach an (ε/n)-equilibrium is bounded from above by n^{5+2 log_n(n+1)} + n.

Proof: Consider a particular time instant t, and let x_1(t) = min_{k∈[n]} x_k(t) and x_m(t) = max_{k∈N_1(t)} x_k(t).
Also, without loss of generality, let us assume that x_1(t) = 0. It is clear that if x_m(t) > ε/n^α and agent 1 updates, then we will have x_1(t+1) > ε/n^{1+α}, where α is a number to be determined later. In this case, the expected potential function increases by at least (2/n) ‖x_1(t) − x_1(t+1)‖² ≥ 2ε²/n^{3+2α}. Otherwise, there is no other agent in the interval [ε/n^α, ε]. Now we consider two cases (Figure 2):

• Agent m has a neighbor in the interval (ε, x_m(t)+ε]. If agent m updates, we will have x_m(t+1) ≥ (x_m(t)+ε)/n, and hence ‖x_m(t+1) − x_m(t)‖² ≥ ‖ε/n − x_m(t)‖² ≥ (ε/n − ε/n^α)². Therefore, in this case and using (17), the increase in the expected potential function is at least (2ε²/n³)(1 − 1/n^{α−1})².

• Agent m does not have any neighbor in the interval (ε, x_m(t)+ε]. We note that all the agents in the interval [0, x_m(t)] form a cluster that is separated from the other agents by a distance of at least ε. Since two separate clusters of nodes on the real line stay apart from each other for the rest of the dynamics, we can decompose the original dynamics into two groups and analyze each of them separately.

Fig. 2. Illustration of the two cases in the proof of Lemma 7.

By choosing α = log_n(n+1), we get 2ε²/n^{3+2α} = (2ε²/n³)(1 − 1/n^{α−1})², and we see that either the expected potential function increases by 2ε²/n^{3+2 log_n(n+1)}, or the dynamics decompose into a cluster of diameter at most ε/n^α < ε/n and a remaining part. Since the expected potential function cannot increase for more than n^{5+2 log_n(n+1)} steps (as U(·) ≤ n²ε²), and since we cannot have more than n clustering decompositions, the expected number of steps until the dynamics decompose into clusters of diameter at most ε/n is bounded from above by n^{5+2 log_n(n+1)} + n.
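Theorem 5 and Lemma 7 can be sanity-checked by simulation. The sketch below (scalar opinions, our own helper functions, and a safety cap on the step count) runs the uniform updating scheme until a δ-equilibrium is reached, verifying along the way that every best-response update raises the potential by at least the 2|N_i|‖x_i − x'_i‖² of Theorem 4; the observed step count sits far below the 2n⁹(ε/δ)² bound, which Theorem 5 only guarantees in expectation and very loosely:

```python
import random

EPS, DELTA, N = 1.0, 0.1, 6

def potential(x):
    """U = sum of the utilities in (13); the potential function of Theorem 4."""
    return sum((N - 1) * EPS ** 2
               - sum(min((xi - xj) ** 2, EPS ** 2) for xj in x)
               for xi in x)

def delta_eq(x):
    """Scalar delta-equilibrium (Definition 5): split the sorted profile at gaps
    larger than EPS; every resulting cluster must have diameter below DELTA."""
    xs = sorted(x)
    start = xs[0]
    for a, b in zip(xs, xs[1:]):
        if b - a > EPS:
            if a - start >= DELTA:
                return False
            start = b
    return xs[-1] - start < DELTA

random.seed(2)
x = [random.uniform(0.0, 3.0) for _ in range(N)]
steps, monotone = 0, True
while not delta_eq(x) and steps < 10 ** 6:          # cap only as a safety net
    i = random.randrange(N)                          # uniform updating scheme
    nbrs = [xj for xj in x if abs(xj - x[i]) <= EPS]
    new_xi = sum(nbrs) / len(nbrs)                   # best response (Lemma 6)
    y = list(x)
    y[i] = new_xi
    gain = potential(y) - potential(x)               # Theorem 4's guaranteed gain
    monotone &= gain >= 2 * len(nbrs) * (x[i] - new_xi) ** 2 - 1e-9
    x = y
    steps += 1

bound = 2 * N ** 9 * (EPS / DELTA) ** 2
print(steps, monotone, steps <= bound)
```

Here N_i includes agent i itself, matching the averaging rule of the dynamics; the 1e-9 slack only absorbs floating-point error.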
Remark 5: By the above lemma, after an expected number of n^{5+2 log_n(n+1)} + n ≈ n⁷ steps, every agent lies within a cluster of diameter at most ε/n, and these clusters are always separated from each other by a distance of at least ε. Therefore, each agent in a cluster can observe all the others, and henceforth the diameter of the convex hull of each cluster shrinks very fast.

In the following, we provide a bound on the expected number of switching topologies during the evolution of the asynchronous Hegselmann-Krause process.

Theorem 6: The expected number of switching topologies of the asynchronous Hegselmann-Krause dynamics with a uniform updating scheme is bounded from above by 16n⁹.

Proof: We show that switching topologies substantially increase the expected value of the potential function. To see this, first assume that the opinion profile at time t−1, i.e., x(t−1), is (ε/2)-trivial, and that updating some agent i at this time causes a switch in the topology of the network. We claim that the next profile, i.e., x(t), is not (ε/2)-trivial. Note that since there is a switch at time t, and since within each (ε/2)-trivial component each agent is able to observe all the others, the convex hull of such a component shrinks even further after any agent in the component updates. Therefore, the switches must occur between the (ε/2)-trivial components and not within them. Now, let us assume that i ∈ C_p (where C_p denotes an (ε/2)-trivial component) and that updating agent i at time t−1 makes him visible to another agent j in a different (ε/2)-trivial component C_q (Figure 3). Since C_p is an (ε/2)-trivial component and the agents in C_p are all the agents visible to agent i at time t−1, the movement of agent i from x_i(t−1) to x_i(t) can be at most ε/2. Moreover, since agents i and j belong to different (ε/2)-trivial components, their distance at time t−1 was larger than ε.
That means such a switch causes i and j to form a link of length at least ε/2 in the profile x(t).

Fig. 3. Switching topology at time t from an (ε/2)-trivial profile x(t−1).

Now we partition all the possible switching times based on the profile at the previous time instant:

• Time t is a switching time, and x(t−1) is an (ε/2)-trivial profile. In this case, based on the above argument, x(t) is not an (ε/2)-trivial profile, and using the same argument as in relation (5) and in view of (9) and (12), we get Σ_{k=1}^{n} ‖x_k(t) − x_k(t+1)‖² ≥ ε²/(16n⁶).

• Time t is a switching time, and x(t−1) is not an (ε/2)-trivial profile. In this case, within a non-(ε/2)-trivial component and using the same argument as in the first case, we get Σ_{k=1}^{n} ‖x_k(t−1) − x_k(t)‖² ≥ ε²/(16n⁶).

Therefore, if t is a switching time, using (17) we conclude that the expected potential function increases by at least ε²/(16n⁶) · (2/n) = ε²/(8n⁷) at either time t−1 or time t. In other words, if t is a switching time, using (17) we can write

E[U(t+1) − U(t−1)] = E[U(t+1) − U(t)] + E[U(t) − U(t−1)] ≥ (2/n) · ε²/(16n⁶) = ε²/(8n⁷).

Now, given an arbitrary initial profile x(0), let p_t denote the probability that a switch occurs at time t = 1, 2, .... The total increase of the expected potential function is then at least Σ_{t=0}^{∞} p_t ε²/(16n⁷) (since each time instant may be counted twice). On the other hand, since U(τ) ≤ n²ε², ∀ τ = 1, 2, ..., we conclude that Σ_{t=0}^{∞} p_t ≤ 16n⁹. But Σ_{t=0}^{∞} p_t is exactly the expected number of switching topologies. Therefore, the expected number of switching topologies is bounded from above by 16n⁹.

VI. HETEROGENEOUS HEGSELMANN-KRAUSE DYNAMICS

Once again we consider the Hegselmann-Krause model (2), but this time we assume that each agent i has his or her own confidence bound ε_i, which could differ from agent to agent.
Therefore, N_i(x(t)) = {1 ≤ j ≤ n : ‖x_i(t) − x_j(t)‖ ≤ ε_i}, and A(t), t ≥ 0, changes correspondingly. This introduces an asymmetry into the interactions among the agents: it is possible that agent i observes agent j but not vice versa. We are interested in the convergence behavior of such dynamics. In contrast with the homogeneous Hegselmann-Krause model, which reaches its steady state in finite time, the following example shows that in the heterogeneous case a steady state may not be reached in finite time.

Example 1: Consider three agents x_1, x_2, x_3 located at −1, 1/3, 1, respectively, at the initial time t = 0, and let ε_1 = 1/2, ε_2 = 2, ε_3 = 1/2. Agent x_2 is able to see all the agents at each time step; therefore, after the first iteration, x_2(1) = (−1 + 1/3 + 1)/3 = 1/3². Since the confidence bounds of x_1 and x_3 are small, they see no one except themselves, and hence they remain at their own locations. Therefore, at time t = 1 we have x_1(1) = −1, x_2(1) = 1/3², x_3(1) = 1. By the same line of argument, it is not hard to see that at any time instant t = 1, 2, ..., the positions of the agents are x_1(t) = −1, x_2(t) = 1/3^{t+1}, x_3(t) = 1. This shows that the dynamics converge to the steady state (−1, 0, 1), but not in finite time.

In the above example, one of the main reasons that convergence was not achieved in finite time is that two agents had no interaction with the others and remained fixed, without any movement, forever. We refer to such agents as silent agents. In the next theorem, we show that if the amount of time any agent sleeps (is inactive) is finite, then the dynamics converge to their steady state in finite time.
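Example 1 can be replayed directly. A short sketch (the helper `hetero_hk_step` is ours) reproduces x_2(t) = 1/3^{t+1} while agents 1 and 3 never move:

```python
def hetero_hk_step(x, eps):
    """Synchronous heterogeneous HK update: agent i averages over its own eps_i-neighborhood."""
    new_x = []
    for xi, ei in zip(x, eps):
        nbrs = [xj for xj in x if abs(xj - xi) <= ei]
        new_x.append(sum(nbrs) / len(nbrs))
    return new_x

x = [-1.0, 1.0 / 3.0, 1.0]
eps = [0.5, 2.0, 0.5]           # heterogeneous confidence bounds of Example 1
for t in range(1, 6):
    x = hetero_hk_step(x, eps)
    print(t, x)                  # x[1] equals 1/3**(t+1); x[0], x[2] stay at -1 and 1
```

Agent 2 sees everyone (ε_2 = 2), so its update is always (−1 + x_2 + 1)/3 = x_2/3, while the outer agents, with ε = 1/2, never see anyone but themselves: a one-sided interaction of exactly the kind described above.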
We note that similar types of asynchronous analysis under different scenarios and settings can be found in [44], [45], [46].

Theorem 7: Consider the heterogeneous Hegselmann-Krause model, where the ith agent, i ∈ [n], has a confidence bound ε_i > 0. Also, assume that there is an integer T* such that no agent is silent for a period of time longer than T*. Then the dynamics converge to their steady state in finite time.

Proof: We prove the theorem by induction on the number of agents. For n = 1 the result is obvious, and the initial time is the termination time. Let us assume that the result holds for every k ≤ n, and suppose that we have n+1 agents with different confidence bounds. We show that there is a finite time T such that the left product of every T consecutive matrices A(t), t ≥ 0, of the dynamics generates a matrix with at least one positive column. Starting from agent 1, let us define S(t) = {i ∈ [n+1] | (A(t)A(t−1)···A(0))_{i1} > 0}, and let S^c(t) = [n+1] \ S(t) be its complement. Since each agent can see itself at each time instant, if i ∈ S(t) for some time t, then i ∈ S(t') for all t' ≥ t. In other words, we have S(0) ⊆ S(1) ⊆ S(2) ⊆ .... Now we claim that there must be a finite time T such that S(T) = [n+1]. Otherwise, there exists a time instant t_0 such that S(t_0) = S(t), ∀ t > t_0. By the definition of S(t), this means that for t > t_0, none of the agents in S^c(t) can see any agent in S(t) (although some agents in S(t) may still be able to see some of the agents in S^c(t)). That means that the agents in S^c(t_0) constitute a group whose opinions for the remainder of the dynamics, t ≥ t_0, are not influenced by any agent in S(t_0).
On the other hand, since |S^c(t_0)| ≤ n (note that S(0) = {1}), by the induction assumption the agents in S^c(t_0) reach their steady state after some finite time T_n, where T_n denotes the maximum number of steps for n agents to reach their steady state, which, by the induction assumption, is finite. However, under the hypothesis of the theorem, after reaching this steady state these agents cannot remain silent for more than T* additional steps. Therefore, after a finite time T* + T_n, at least one more agent joins the set S(·), and its cardinality increases by at least 1. Since the total number of agents is n+1, T := (n+1)(T* + T_n) steps are enough to guarantee S(T) = [n+1]. That shows that A(T)···A(1)A(0) is a matrix whose first column is strictly positive. On the other hand, since all the positive entries of those matrices are bounded from below by min⁺(A(t)) ≥ 1/(n+1), the minimum positive entry of the left product of every T consecutive such matrices is larger than (1/(n+1))^T. Using Lemma 4, we see that after every T steps, the diameter of the convex hull of the agents' opinions shrinks by a factor of at least 1 − (1/(n+1))^T. Therefore, there exists a finite time T_{n+1} < ∞ such that the diameter of the convex hull of the agents' opinions at time T_{n+1} is smaller than min_{i∈[n+1]} ε_i. That means that after T_{n+1} steps, every agent is able to observe all the others in his or her own neighborhood, and in the next step the dynamics reach a steady state.

In fact, the above theorem asserts that if there exists an external input which creates an incentive for the agents to interact with someone else after some period of time, then the circulation of information in the society is sufficient to guarantee finite-time formation of the opinions. VII.
DISCUSSION

Inspired by the results of Section V, we now discuss some possible directions for analyzing the asynchronous heterogeneous Hegselmann-Krause model in more detail. Because of the different confidence bounds, the symmetry from which we benefit in the homogeneous case no longer holds. Therefore, the communication topology in this case must be interpreted as a digraph (directed graph) rather than an undirected graph. One way of showing the asymptotic convergence of the heterogeneous Hegselmann-Krause dynamics to a steady state is to design a proper utility function for each player such that the resulting network formation game becomes a team problem, in which each player's update contributes an increase (or decrease) to a global function moving toward an equilibrium. A natural idea here is to define the utility of the players in terms of their own confidence bounds and their relative distances from the others, so that their best response dynamics coincide with the evolution of the asynchronous heterogeneous Hegselmann-Krause dynamics. For example, one may define the utility of the ith player to be U_i(t) = (n−1)ε_i² − Σ_{j=1}^{n} min{(x_i(t) − x_j(t))², ε_i²}, where ε_i denotes the confidence bound of the ith agent and x(t) = (x_1(t), x_2(t), ..., x_n(t)) denotes the opinion profile at time instant t. It turns out that such utility functions neither make the network formation game a potential game nor lead to a strategically equivalent team problem. However, one can consider 15 different possibilities for the creation or breaking of edges among agents, assuming that only one agent updates (deviates) to a new position. In that case, one can think of a proper weighting on the edges in order to distinguish one-sided edges from symmetric (two-sided) edges.
For example, if there is a one-sided edge from player i to player j, one can rescale the utility of agent i by a fraction of his own confidence bound and his neighbors' in order to adjust the influence of the other players' actions on his own utility function. At this point, we are not aware of any such utility functions, and we leave the full analysis of the heterogeneous Hegselmann-Krause dynamics as a future direction of research.

VIII. CONCLUSION

In this paper, we studied the termination time of the Hegselmann-Krause dynamics in finite dimensions under various settings: synchronous, asynchronous, homogeneous, and heterogeneous. We provided a polynomial upper bound for the termination time of the synchronous homogeneous model that is independent of the dimension of the ambient space. We showed that the asynchronous Hegselmann-Krause model can be formulated as a sequence of best response dynamics of a potential game. Furthermore, we provided an upper bound on the expected number of steps until the dynamics reach a δ-equilibrium. In particular, we bounded the expected number of switches in the topology of the network during the evolution of the system. We also considered the heterogeneous Hegselmann-Krause dynamics and obtained a necessary condition for finite-time convergence of such dynamics. Finally, we discussed some possible future directions toward a more detailed analysis of heterogeneous Hegselmann-Krause dynamics. As a future direction of research, one may think of how to enrich the Hegselmann-Krause model in order to remove some of its current limitations. For example, one could modify the model by allowing agents with the same opinion to be related to each other by some constraints, meaning that having the same opinion at some time instant does not necessarily lead to having the same opinion at all future time instants.

REFERENCES

[1] A. Bhattacharyya, M. Braverman, B. Chazelle, and H. L.
Nguyen, “On the convergence of the Hegselmann-Krause system,” in Proceedings of the 4th Conference on Innovations in Theoretical Computer Science. ACM, 2013, pp. 61–66. [2] T. C. Aysal, A. D. Sarwate, and A. G. Dimakis, “Reaching consensus in wireless networks with probabilistic broadcast,” Allerton'09: Proceedings of the 47th Annual Allerton Conference on Communication, Control, and Computing, pp. 732–739, 2009. [3] A. Jadbabaie, J. Lin, and A. S. Morse, “Coordination of groups of mobile autonomous agents using nearest neighbor rules,” IEEE Transactions on Automatic Control, vol. 48, no. 6, pp. 988–1001, 2003. [4] R. Olfati-Saber and R. M. Murray, “Consensus problems in networks of agents with switching topology and time-delays,” IEEE Transactions on Automatic Control, vol. 49, no. 9, pp. 1520–1533, 2004. [5] A. Kashyap, T. Başar, and R. Srikant, “Quantized consensus,” Automatica, vol. 43, no. 7, pp. 1192–1203, 2007. [6] S. Sundaram and C. N. Hadjicostis, “Finite-time distributed consensus in graphs with time-invariant topologies,” Proc. American Control Conference, pp. 711–716, 2007. [7] M. H. DeGroot, “Reaching a consensus,” Journal of the American Statistical Association, vol. 69, no. 345, pp. 118–121, 1974. [8] A. Olshevsky and J. Tsitsiklis, “On the nonexistence of quadratic Lyapunov functions for consensus algorithms,” IEEE Transactions on Automatic Control, vol. 53, pp. 2642–2645, 2008. [9] P. Frasca, R. Carli, F. Fagnani, and S. Zampieri, “Gossip consensus algorithms via quantized communication,” Automatica, vol. 46, pp. 70–80, 2010. [10] J. C. Delvenne, R. Carli, and S. Zampieri, “Optimal strategies in the average consensus problem,” Wavelet Analysis and Multiresolution Methods, vol. 56, pp. 759–765, 2009. [11] L. Wang and F. Xiao, “Finite-time consensus problems for networks of dynamic agents,” IEEE Transactions on Automatic Control, vol. 55, no. 4, pp. 950–955, 2010. [12] S. Etesami and T.
Başar, “Convergence time for unbiased quantized consensus,” Proc. 52nd IEEE Conference on Decision and Control (CDC), pp. 6190–6195, 2013. [13] N. E. Friedkin and E. C. Johnsen, “Social influence networks and opinion change,” Advances in Group Processes, vol. 16, no. 1, pp. 1–29, 1999. [14] R. Hegselmann and U. Krause, “Opinion dynamics and bounded confidence models, analysis, and simulation,” Artificial Societies and Social Simulation, vol. 5, pp. 1–33, 2002. [15] J. B. Stiff and P. A. Mongeau, Persuasive Communication. Guilford Press, 2003. [16] N. E. Friedkin and E. C. Johnsen, Social Influence Network Theory. Cambridge University Press, New York, 2011. [17] F. Bullo, J. Cortes, and S. Martinez, Distributed Control of Robotic Networks. Princeton University Press, 2009. [18] J. Lorenz, “Repeated averaging and bounded-confidence, modeling, analysis and simulation of continuous opinion dynamics,” Ph.D. dissertation, University of Bremen, 2007. [19] ——, “Heterogeneous bounds of confidence: Meet, discuss and find consensus!” Complexity, vol. 15, no. 4, pp. 43–52, 2010. [20] A. Mirtabatabaei and F. Bullo, “Opinion dynamics in heterogeneous networks: Convergence conjectures and theorems,” SIAM Journal on Control and Optimization, vol. 50, no. 5, pp. 2763–2785, 2012. [21] S. R. Etesami, T. Başar, A. Nedić, and B. Touri, “Termination time of multidimensional Hegselmann-Krause opinion dynamics,” in Proc. American Control Conference (ACC), 2013. IEEE, 2013, pp. 1255–1260. [22] B. Chazelle, “The total s-energy of a multiagent system,” SIAM Journal on Control and Optimization, vol. 49, no. 4, pp. 1680–1706, 2011. [23] M. Zhu and S. Martinez, “On the convergence time of asynchronous distributed quantized averaging algorithms,” IEEE Transactions on Automatic Control, vol. 56, pp. 386–390, 2011. [24] S. Boyd, A. Ghosh, B. Prabhakar, and D. Shah, “Gossip algorithms: Design, analysis, and applications,” Proc. IEEE Infocom, vol. 3, pp. 1653–1664, 2005.
[25] S. Mohajer and B. Touri, “On convergence rate of scalar Hegselmann-Krause dynamics,” in Proc. American Control Conference (ACC), 2013. IEEE, 2013, pp. 206–210. [26] A. Nedić and B. Touri, “Multi-dimensional Hegselmann-Krause dynamics,” in Proc. 2012 IEEE 51st Annual Conference on Decision and Control (CDC). IEEE, 2012, pp. 68–73. [27] B. Touri and C. Langbort, “On endogenous random consensus and averaging dynamics,” Proc. 52nd IEEE Conference on Decision and Control (CDC), pp. 6208–6212, 2013. [28] J. R. Marden, G. Arslan, and J. S. Shamma, “Cooperative control and potential games,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 39, no. 6, pp. 1393–1407, 2009. [29] A. Rantzer, “Using game theory for distributed control engineering,” Language, vol. 280, no. 53, p. 16, 2008. [30] H. Zhang, F. L. Lewis, and Z. Qu, “Lyapunov, adaptive, and optimal design techniques for cooperative systems on directed communication graphs,” IEEE Transactions on Industrial Electronics, vol. 59, no. 7, pp. 3026–3041, 2012. [31] J. M. Hendrickx, “Graphs and networks for the analysis of autonomous agent systems,” Ph.D. dissertation, Université Catholique de Louvain, 2011. [32] M. Pineda, R. Toral, and E. Hernández-García, “The noisy Hegselmann-Krause model for opinion dynamics,” The European Physical Journal B, vol. 86, no. 12, pp. 1–10, 2013. [33] S. Kurz and J. Rambau, “On the Hegselmann-Krause conjecture in opinion dynamics,” Journal of Difference Equations and Applications, vol. 17, no. 6, pp. 859–876, 2011. [34] J. M. Hendrickx and A. Olshevsky, “On symmetric continuum opinion dynamics,” Proc. 52nd IEEE Conference on Decision and Control (CDC), pp. 1989–1994, 2013. [35] V. D. Blondel, J. M. Hendrickx, and J. N. Tsitsiklis, “On the 2R conjecture for multi-agent systems,” in Proceedings of the European Control Conference 2007 (ECC 2007). Citeseer, 2007, pp. 874–881. [36] J.
Lorenz, “Continuous opinion dynamics under bounded confidence: A survey,” International Journal of Modern Physics C, vol. 18, no. 12, pp. 1819–1838, 2007. [37] L. Saloff-Coste, “Lectures on finite Markov chains,” in Lectures on Probability Theory and Statistics. Springer, 1997, pp. 301–413. [38] C. D. Godsil and G. Royle, Algebraic Graph Theory. Springer, New York, 2001. [39] R. A. Horn and C. R. Johnson, Matrix Analysis. Cambridge University Press, 2012. [40] J. Shen, “A geometric approach to ergodic non-homogeneous Markov chains,” Lecture Notes in Pure and Applied Mathematics, pp. 341–366, 2000. [41] B. Touri and A. Nedić, “On existence of a quadratic comparison function for random weighted averaging dynamics and its implications,” in Proc. 50th IEEE Conference on Decision and Control and European Control Conference (CDC-ECC). IEEE, 2011, pp. 3806–3811. [42] M. Roozbehani, A. Megretski, and E. Frazzoli, “Lyapunov analysis of quadratically symmetric neighborhood consensus algorithms,” in Proc. 47th IEEE Conference on Decision and Control (CDC). IEEE, 2008, pp. 2252–2257. [43] T. Başar and G. J. Olsder, Dynamic Noncooperative Game Theory. SIAM, 1999, vol. 23. [44] S. Li and T. Başar, “Asymptotic agreement and convergence of asynchronous stochastic algorithms,” IEEE Transactions on Automatic Control, vol. 32, no. 7, pp. 612–618, 1987. [45] D. P. Bertsekas, “Distributed asynchronous computation of fixed points,” Mathematical Programming, vol. 27, no. 1, pp. 107–120, 1983. [46] J. N. Tsitsiklis and M. Athans, “Convergence and asymptotic agreement in distributed decision problems,” IEEE Transactions on Automatic Control, vol. 29, no. 1, pp. 42–50, 1984.