Distributed and Proximity-Constrained C-Means for Discrete Coverage Control

Gabriele Oliva 1,∗, Andrea Gasparri 2, Adriano Fagiolini 3, and Christoforos N. Hadjicostis 4

Abstract — In this paper we present a novel distributed coverage control framework for a network of mobile agents in charge of covering a finite set of points of interest (PoIs), such as people in danger, geographically dispersed equipment, or environmental landmarks. The proposed algorithm is inspired by C-means, an unsupervised learning algorithm originally proposed for non-exclusive clustering and for the identification of cluster centroids from a set of observations. To cope with the agents' limited sensing range and avoid infeasible coverage solutions, traditional C-means needs to be enhanced with proximity constraints, ensuring that each agent takes into account only neighboring PoIs. The proposed coverage control framework provides the agents with useful information concerning the ranking or importance of the different PoIs, which can be exploited in further application-dependent data fusion processes, patrolling, or disaster relief applications.

I. INTRODUCTION

Coverage control has been gaining particular attention from the scientific community in recent years. Typical approaches follow the path of the seminal works in [1], [2], where agents interact in order to achieve two main objectives: (1) to partition the area of interest into zones, each of which a single agent is responsible for covering (e.g., Voronoi regions); (2) to reach locations that minimize a function of the distance from each point in the region assigned to them (e.g., to navigate to a weighted centroid of the assigned Voronoi region). Over the years, the importance of taking into account agents' limited sensing and communication capabilities has become paramount [3], [4].
Recent innovations in the field include, among others, time-varying methodologies [5], dynamic approaches [6], and adaptive management of agents' trust [7]. Note that most of the existing literature focuses on continuous-space coverage, while in several practical situations it might be required to cover just a discrete set of points of interest (PoIs). In fact, a discrete-space approach has lower computational demands than continuous-space ones and allows the modeling of specific targets of interest. To the best of our knowledge, work at the state of the art focused on the coverage of discrete points of interest is quite sparse. In a recent work [8], the authors use k-means to solve a discrete coverage problem where multiple agents are in charge of a set of PoIs. A related work, even though not specifically focused on coverage, is [9], where roles are assigned to the agents via constrained discrete optimization. In both cases, however, the agents are not constrained to have a limited sensing radius.

1 Unit of Automatic Control, Department of Engineering, Università Campus Bio-Medico di Roma, via Álvaro del Portillo 21, 00128, Rome, Italy.
2 University Roma Tre, Department of Engineering, Via della Vasca Navale 79, 00146, Rome, Italy.
3 Dipartimento di Energia, Ingegneria dell'Informazione e Modelli Matematici (DEIM), University of Palermo, Viale delle Scienze, Edificio 10, 90128, Palermo, Italy.
4 Department of Electrical and Computer Engineering, University of Cyprus, 75 Kallipoleos Avenue, P.O. Box 20537, 1678 Nicosia, Cyprus.
∗ Corresponding author, email: g.oliva@unicampus.it
In this paper, we consider a distributed problem where there is a finite and discrete set of $n$ PoIs in fixed locations in space, and a set of $r < n$ mobile agents aims at achieving the following two objectives: (1) to partition the PoIs into $r$ groups such that PoIs in the same group are spatially close, while PoIs in different groups are far apart; (2) to associate each agent to a group and assign a location to that agent in order to maximize the coverage. Our distributed approach is inspired by C-means [10], a non-exclusive and unsupervised data clustering algorithm in which a set of points (e.g., the PoIs) is grouped so that each point can be associated, at the same time, to different clusters (i.e., to different agents) with different intensities. From a mathematical standpoint, we extend the classical C-means by introducing proximity constraints, which account for the limited sensing range of the agents. As a matter of fact, results dealing with the distribution of clustering algorithms, including C-means, can be found in the literature [11]–[14]. Compared to these works, where the sensors collaborate in order to compute the centroids and their associations in a distributed fashion, in our work we reverse the perspective: the agents play the role of the centroids and collaborate in order to compute their locations and the associations to the PoIs, while the latter play the role of the sensors.

The outline of the paper is as follows: Section II provides some preliminary definitions, while Section III introduces the problem addressed in this paper; Sections IV and V solve two sub-problems that are the basis for our algorithm, while Section VI provides the proposed algorithm along with simulation results; the final Section VII collects some concluding remarks and future work directions.

II. PRELIMINARIES

In the following, we consider the Euclidean space $\mathbb{R}^d$, equipped with the usual norm.
Let us define the metric projection onto a convex set as follows.

Definition 1 (Metric Projection): Let $X \subset \mathbb{R}^d$ be a convex set. The metric projection onto $X$ is the function $\mathrm{Pr}_X : \mathbb{R}^d \to X$ such that $\mathrm{Pr}_X(v) = \arg\min_{z \in X} \|v - z\|$.

In the following, we refer to a metric projection simply as a projection. To serve the scope of this paper, we now specialize the projection onto a closed ball.

arXiv:1612.03849v4 [cs.RO] 16 Sep 2017

Definition 2 (Closed Ball): A closed ball $B_{i,\rho} \subset \mathbb{R}^d$ with radius $\rho$ and center $p_i \in \mathbb{R}^d$ is given by $B_{i,\rho} = \{ z \in \mathbb{R}^d : \|p_i - z\| \leq \rho \}$.

Definition 3 (Projection onto a Closed Ball): Let $B_{i,\rho} \subset \mathbb{R}^d$ be the closed ball of radius $\rho > 0$ centered at $p_i \in \mathbb{R}^d$. The projection onto $B_{i,\rho}$ is the function $\mathrm{Pr}_{B_{i,\rho}}(v) = \alpha v + (1 - \alpha) p_i$, where $\alpha = \rho / \|v - p_i\|$ if $\|v - p_i\| > \rho$ and $\alpha = 1$ otherwise.

We now review the basics of convex constrained optimization. The reader is referred to [15] for a comprehensive overview of the topic.

Definition 4 (Convex Constrained Optimization Problem): A convex constrained optimization problem with $s$ constraints has the following structure:
$$\min_{x \in \mathbb{R}^d} f(x) \quad \text{subject to} \quad \begin{cases} g_i(x) \leq 0, & i = 1, \dots, q \\ h_i(x) = 0, & i = q+1, \dots, s, \end{cases} \tag{1}$$
where $f$ and all $g_i$ and $h_i$ are convex functions. The set of feasible solutions for the above problem is given by $\Omega = \{ x \in \mathbb{R}^d \mid g_i(x) \leq 0,\ 1 \leq i \leq q;\ h_i(x) = 0,\ q+1 \leq i \leq s \}$, which is a convex set since all $g_i$ and $h_i$ are convex.

We now review conditions that are fundamental in order to solve convex constrained optimization problems.

Definition 5 (Slater's Condition): The convex constrained optimization problem in Eq. (1) satisfies Slater's Condition if there is an $x \in \Omega$ such that $g_i(x) < 0$ for all $i = 1, \dots, q$ and $h_i(x) = 0$ for all $i = q+1, \dots, s$.
Moreover, if there is an $x \in \Omega$ that satisfies Slater's Condition and is such that all nonlinear $g_i(x)$ are negative, while all linear $g_i(x)$ are non-positive, then the problem is said to satisfy the restricted Slater's Condition.

We now review the Karush-Kuhn-Tucker (KKT) Theorem (see [15] and references therein).

Theorem 1 (KKT Theorem): Consider a convex constrained optimization problem as in Eq. (1) that satisfies the restricted Slater's Condition, and let the Lagrangian function be $L(x, \lambda) = f(x) + \sum_{i=1}^{q} \lambda_i g_i(x) + \sum_{i=q+1}^{s} \lambda_i h_i(x)$, where $\lambda = [\lambda_1, \dots, \lambda_s]^T$. The point $x^* \in \mathbb{R}^d$ is a global minimizer for the problem if and only if there is a $\lambda^*$ such that the following conditions hold:
1) $\partial L(x, \lambda) / \partial x \,|_{x^*, \lambda^*} = 0$;
2) $\lambda_i^* g_i(x^*) = 0$, for all $i = 1, \dots, q$;
3) $g_i(x^*) \leq 0$, $1 \leq i \leq q$, and $h_i(x^*) = 0$, $q+1 \leq i \leq s$;
4) $\lambda_i^* \geq 0$, for all $i = 1, \dots, s$.

III. PROBLEM STATEMENT

Let us consider a set of $n$ PoIs and $r < n$ agents deployed in a bounded region of $\mathbb{R}^d$, with $d \in \{2, 3\}$; we assume that each PoI $i$ is in a fixed location $p_i \in \mathbb{R}^d$, while the $j$-th agent is initially at location $x_j(0) \in \mathbb{R}^d$. The following set of assumptions is now in order:
a) an agent $j$ is able to sense all PoIs whose Euclidean distance from $j$ is less than or equal to $\rho$;
b) an agent $j$ is able to communicate with all other agents whose Euclidean distance from $j$ is less than or equal to a threshold $\theta \geq 2\rho$;
c) the agents are initially deployed in such a way that each PoI is sensed by at least one agent, and each agent senses at least one PoI;
d) the set $\{p_1, \dots, p_n\}$ contains at least $n > r$ distinct points.

Note that the last two assumptions, i.e., c) and d), are inherited from the standard C-means [10]. In particular, Assumption c) ensures that no PoI or agent is neglected by the optimization process.

Let $x = [x_1^T, \dots, x_r^T]^T \in \mathbb{R}^{rd}$ collect the locations of all agents, and let $U$ be an $n \times r$ matrix encoding the associations among the agents and the PoIs, having $u_{ij}$ as its $(i,j)$-th entry. Moreover, let $\delta_{ij} = \|p_i - x_j\|$. The problem can be stated as follows.

Problem 1: Find $x^* \in \mathbb{R}^{rd}$ and $U^* \in \mathbb{R}^{n \times r}$ such that
$$J(x^*, U^*) = \min_{x, U} \sum_{i=1}^{n} \sum_{j=1}^{r} u_{ij}^m \delta_{ij}^2 \quad \text{subject to} \quad \begin{cases} \sum_{j=1}^{r} u_{ij} = 1, & \forall i \quad (\mathrm{I}) \\ \sum_{i=1}^{n} u_{ij} > 0, & \forall j \quad (\mathrm{II}) \\ u_{ij} \in [0, 1], & \forall i, j \quad (\mathrm{III}) \\ u_{ij} (\delta_{ij}^2 - \rho^2) \leq 0, & \forall i, j \quad (\mathrm{IV}) \end{cases}$$
where $m$ is a finite positive integer greater than one.

Problem 1 extends the standard C-means problem [10] through the proximity constraints (IV). Note that constraint (I) ensures the complete coverage of each PoI by the agents, whereas constraint (II) ensures that each agent covers at least one PoI. We point out that the larger the parameter $m$ is, the less the associations $u_{ij}$ influence the objective function; as in the case of C-means, this parameter accounts for the degree of overlap among the different clusters.¹ Note also that constraint (III) requires $u_{ij} \in [0, 1]$; as a consequence, the combination of constraint (III) and the proximity constraints (IV) implies that $u_{ij} = 0$ whenever $\delta_{ij} > \rho$. Therefore, these constraints ensure that an agent $j$ does not cover those PoIs that are out of its reach.

In the following, we develop a distributed and iterative algorithm to solve Problem 1; specifically, at each iteration $k$, our algorithm alternates between an assignment phase, where we find the optimal associations $u_{ij}(k)$ given the current locations $x(k)$, and a refinement phase, where we assume the associations $u_{ij}(k)$ are fixed and find the optimal locations $x(k+1)$ for the agents.² The two steps are iterated until a stopping condition is met. We next characterize (in Sections IV and V) the optimality conditions for each sub-problem.
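Although the paper gives no code, the objective and feasibility set of Problem 1 are easy to state programmatically. The following minimal Python sketch uses our own conventions (PoIs and agent locations as lists of coordinate tuples, $U$ as a nested list; function names are illustrative, not from the paper): it evaluates the cost $J(x, U)$ and checks constraints (I)–(IV).

```python
def objective(P, X, U, m=2):
    """Objective of Problem 1: J = sum_i sum_j u_ij^m * ||p_i - x_j||^2."""
    return sum(
        (U[i][j] ** m) * sum((pi - xj) ** 2 for pi, xj in zip(P[i], X[j]))
        for i in range(len(P)) for j in range(len(X))
    )

def is_feasible(P, X, U, rho, tol=1e-9):
    """Check constraints (I)-(IV) of Problem 1 for given locations X and associations U."""
    n, r = len(P), len(X)
    for i in range(n):
        if abs(sum(U[i]) - 1.0) > tol:           # (I) PoI i is fully covered
            return False
    for j in range(r):
        if sum(U[i][j] for i in range(n)) <= 0:  # (II) agent j covers at least one PoI
            return False
    for i in range(n):
        for j in range(r):
            u = U[i][j]
            if u < -tol or u > 1 + tol:          # (III) u_ij in [0, 1]
                return False
            d2 = sum((pi - xj) ** 2 for pi, xj in zip(P[i], X[j]))
            if u * (d2 - rho ** 2) > tol:        # (IV) proximity constraint
                return False
    return True
```

For example, with PoIs at $(0,0)$ and $(1,0)$ and a single agent at $(0.5, 0)$ with full memberships, the solution is feasible for $\rho = 0.6$, while for $\rho = 0.4$ the proximity constraints (IV) are violated.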
Future work will be focused on studying the optimality of the overall algorithm.

IV. ASSIGNMENT PHASE

In this section we discuss how, given fixed agents' locations, the optimal associations $U$ can be computed.

¹ When there is no a priori knowledge, a popular choice is $m = 2$ (see [16] and references therein).
² We abstract away from the actual kinematics of the agents and we neglect the actual planning problem, assuming that a control loop to guide each agent toward the desired point exists.
³ When two or more agents are at the same position as a PoI $p_i$, there are infinitely many choices for the associations $u_{ij}$ that satisfy Eq. (2).

Specifically, the optimal choice for the $i$-th row of $U$ (i.e., the associations of the $i$-th PoI to all the agents) is as follows. If there is at least one agent $j$ such that $x_j = p_i$ (i.e., the $j$-th agent lies at the same location as the $i$-th PoI), the terms $u_{i1}, \dots, u_{ir}$ must satisfy³
$$\sum_{j=1}^{r} u_{ij}(k) = 1, \qquad u_{ij}(k) = 0 \ \text{if}\ \delta_{ij} \neq 0; \tag{2}$$
if, conversely, $x_j \neq p_i$ for all agents $j$ (i.e., no agent is at the same location as the $i$-th PoI), the terms $u_{ij}$ are as follows:
$$u_{ij} = \begin{cases} \left( \sum_{h \mid \delta_{ih} \leq \rho} \left( \dfrac{\delta_{ij}}{\delta_{ih}} \right)^{\frac{2}{m-1}} \right)^{-1}, & \text{if } \delta_{ij} \leq \rho, \\ 0, & \text{otherwise}. \end{cases} \tag{3}$$

We now prove the optimality of this strategy.

Theorem 2: For given agents' locations $x$ that satisfy Assumption c), a solution $\{x, U\}$ where $U$ satisfies Eqs. (2) and (3) is a global minimizer for Problem 1.

Proof: Consider a given $x$ that satisfies Assumption c). Problem 1 reduces to finding $U$ that minimizes $J(x, U) = J(U)$ subject to constraints (I)–(III); as for the proximity constraints (IV), if $\delta_{ij} \leq \rho$ then the corresponding proximity constraint is not active, while for $\delta_{ij} > \rho$, since $u_{ij} \in [0, 1]$, the constraint reduces to $u_{ij} = 0$. As a consequence, the objective function becomes
$$J(U) = \sum_{i=1}^{n} \sum_{j \mid \delta_{ij} \leq \rho} u_{ij}^m \delta_{ij}^2.$$
Let $I_1$ be the index set of the PoIs that are not overlapped by any agent and $I_2$ be the complementary set, i.e., $I_1 = \{ i \in \{1, \dots, n\} \mid \delta_{ij} > 0 \ \text{for all}\ j \in \{1, \dots, r\} \}$ and $I_2 = \{1, \dots, n\} \setminus I_1$. We can split $J(U)$ into two terms
$$J(U) = \underbrace{\sum_{i \in I_1} \sum_{j \mid \delta_{ij} \leq \rho} u_{ij}^m \delta_{ij}^2}_{J_1(U)} + \underbrace{\sum_{i \in I_2} \sum_{j \mid \delta_{ij} \leq \rho} u_{ij}^m \delta_{ij}^2}_{J_2(U)}. \tag{4}$$
From Eq. (2), it follows that every term $u_{ij}^m \delta_{ij}^2$ appearing in $J_2(U)$ is null (either $\delta_{ij} = 0$ or $u_{ij} = 0$), hence $J_2(U) = 0$. This implies that the optimality of $U$ only depends on the entries $u_{ij}$ appearing in $J_1(U)$. Since $x$ satisfies Assumption c) by construction, using Eq. (3) it follows that constraints (II) and (III) are always satisfied. Furthermore, it is convenient to rewrite the variables $u_{ij}$ as squares of new variables $w_{ij}$, i.e., $u_{ij} = w_{ij}^2$. By doing this, and since constraint (IV) implies that $u_{ij} = 0$ when $\delta_{ij} > \rho$, we can rewrite constraint (I) as $\sum_{j \mid \delta_{ij} \leq \rho} w_{ij}^2 = 1$. As a result, our problem becomes that of finding $W^* \in \mathbb{R}^{n \times r}$ that solves
$$J(W^*) = \min_{W} \sum_{i \in I_1} \sum_{j \mid \delta_{ij} \leq \rho} w_{ij}^{2m} \delta_{ij}^2 \quad \text{subject to} \quad \sum_{j \mid \delta_{ij} \leq \rho} w_{ij}^2 = 1, \ \forall i \in I_1.$$
Note that the above problem is convex and has only equality constraints, hence Slater's Condition is trivially satisfied and Theorem 1 applies.

Let $\zeta$ be the vector stacking the Lagrange multipliers $\zeta_i$, $i \in I_1$, associated with the constraints; the corresponding Lagrangian function is
$$J(W, \zeta) = \sum_{i \in I_1} \sum_{j \mid \delta_{ij} \leq \rho} w_{ij}^{2m} \delta_{ij}^2 + \sum_{i \in I_1} \zeta_i \left( \sum_{j \mid \delta_{ij} \leq \rho} w_{ij}^2 - 1 \right).$$
By Theorem 1, $\{W^*, \zeta^*\}$ is a global optimal solution to the above problem if and only if it satisfies
$$\frac{\partial J(W, \zeta)}{\partial w_{ij}} \bigg|_{W^*, \zeta^*} = 2m (w_{ij}^*)^{2m-1} \delta_{ij}^2 + 2 w_{ij}^* \zeta_i^* = 0, \tag{5}$$
for all $i \in I_1$ and $j \in \{1, \dots, r\}$, and
$$\sum_{j \mid \delta_{ij} \leq \rho} (w_{ij}^*)^2 = 1, \quad \forall i \in I_1. \tag{6}$$
From Eq. (5) it follows that
$$(w_{ij}^*)^2 = u_{ij}^* = \left( \frac{-\zeta_i^*}{m \delta_{ij}^2} \right)^{\frac{1}{m-1}}. \tag{7}$$
Summing over all $h$ such that $\delta_{ih} \leq \rho$ and applying Eq.
(6) we get
$$(-\zeta_i^*)^{\frac{1}{m-1}} = \frac{1}{\sum_{h \mid \delta_{ih} \leq \rho} \left( 1 / (m \delta_{ih}^2) \right)^{\frac{1}{m-1}}}. \tag{8}$$
By plugging Eq. (8) into Eq. (7), it follows that
$$(w_{ij}^*)^2 = \frac{1}{(m \delta_{ij}^2)^{\frac{1}{m-1}} \sum_{h \mid \delta_{ih} \leq \rho} \left( \frac{1}{m \delta_{ih}^2} \right)^{\frac{1}{m-1}}} = \frac{1}{\sum_{h \mid \delta_{ih} \leq \rho} \left( \frac{\delta_{ij}}{\delta_{ih}} \right)^{\frac{2}{m-1}}}.$$
Therefore, since $u_{ij}^* = (w_{ij}^*)^2$, we conclude that Eq. (3) corresponds to the global optimal solution. ∎

V. REFINEMENT PHASE

Let us now discuss the refinement phase within the proposed algorithm, i.e., a way to find the optimal locations of the agents for fixed associations. Note that, when the fixed associations are admissible for Problem 1, our problem is equivalent to solving a collection of independent sub-problems (one for each agent) having the following structure.

Problem 2: Find $x_j^* \in \mathbb{R}^d$ that solves
$$J(x_j^*) = \min_{x_j \in \mathbb{R}^d} \sum_{i \mid \delta_{ij} \leq \rho} u_{ij}^m \delta_{ij}^2 \quad \text{subject to} \quad \delta_{ij}^2 \leq \rho^2, \ \forall i \ \text{s.t.} \ u_{ij} > 0. \tag{9}$$

We now characterize the optimal solution of Problem 2. To this end, we first define the set of admissible solutions.

Definition 6 (Admissible Solutions to Problem 2): The set of admissible solutions to Problem 2 is $B_j^* = \cap_{i \mid u_{ij} > 0}\, B_{i,\rho}$, where $B_{i,\rho}$ is the ball of radius $\rho$ centered at the location $p_i$ of the $i$-th PoI.

Remark 1: Problem 2 is convex, since for fixed terms $u_{ij}$ the objective function is a linear combination of convex functions with non-negative coefficients, and similarly the constraints are convex functions.

We now establish a condition for Problem 2 to satisfy Slater's Condition.

Proposition 1: Problem 2 satisfies Slater's Condition if and only if $B_j^*$ is not a singleton.

Proof: If $B_j^*$ is a singleton, at least two constraints are satisfied with equality (i.e., at least two balls are tangent), which prevents Slater's Condition from being satisfied; conversely, if $B_j^*$ is not a singleton, it has nonempty interior and any interior point satisfies all constraints strictly. ∎

We now provide an optimality condition to move an agent $j$ when $U$ is fixed.

Theorem 3: Let $U$ be admissible for Problem 1, and suppose $B_j^*$ is not a singleton.
Then
$$x_j^* = \frac{\sum_{i \mid \delta_{ij} \leq \rho} (u_{ij}^m + \lambda_i^*) \, p_i}{\sum_{i \mid \delta_{ij} \leq \rho} (u_{ij}^m + \lambda_i^*)} \tag{10}$$
is a global minimizer for Problem 2 if and only if there exists $\lambda^* = [\lambda_1^*, \dots, \lambda_n^*]^T \in \mathbb{R}^n$ such that: (a) $x_j^* \in B_j^*$; (b) $\lambda_i^* \geq 0$ for all $i$; (c) $\lambda_i^* (\|p_i - x_j^*\|^2 - \rho^2) = 0$ for all $i$.

Proof: As discussed in Remark 1, Problem 2 is convex. Moreover, as shown in Proposition 1, it satisfies Slater's Condition and therefore also the restricted Slater's Condition. Therefore, the KKT Theorem (Theorem 1) applies to Problem 2. Let us consider the Lagrangian function associated with Problem 2, i.e.,
$$J(x_j, \lambda) = \sum_{i \mid \delta_{ij} \leq \rho} u_{ij}^m \delta_{ij}^2 + \sum_{i \mid \delta_{ij} \leq \rho} \lambda_i (\delta_{ij}^2 - \rho^2). \tag{11}$$
By Theorem 1, a point $x_j^*$ is a global minimizer for Problem 2 if and only if:
1) $\partial J(x_j, \lambda) / \partial x_j \,|_{x_j^*, \lambda^*} = 0$;
2) $\delta_{ij} \leq \rho$, for all $i$ such that $u_{ij} > 0$;
3) $\lambda_i^* \geq 0$, for all $i \in \{1, \dots, n\}$;
4) $\lambda_i^* (\|p_i - x_j^*\|^2 - \rho^2) = 0$, for all $i \in \{1, \dots, n\}$.
Clearly, conditions (a)–(c) are equivalent to conditions 2)–4) above. To prove Theorem 3, therefore, we must show that any solution satisfying condition 1) has the structure of Eq. (10); to establish this, we show that, for any $\lambda$, $\partial J(x_j, \lambda) / \partial x_j$ vanishes at $x_j^*$ along any arbitrary direction $y \in \mathbb{R}^d$ with $\|y\| \neq 0$. Let $t \in \mathbb{R}$, let $x_j^*$ be the optimal solution for the problem in Eq. (9), and define $J(t) = J(x_j^* + t y, \lambda)$. It holds that
$$J(t) = \sum_{i \mid \delta_{ij} \leq \rho} (u_{ij}^m + \lambda_i) \|p_i - (x_j^* + t y)\|^2 - \rho^2 \sum_{i \mid \delta_{ij} \leq \rho} \lambda_i.$$
We can rearrange $J(t)$ as
$$J(t) = \sum_{i \mid \delta_{ij} \leq \rho} (u_{ij}^m + \lambda_i) (p_i - x_j^* - t y)^T (p_i - x_j^* - t y) - \rho^2 \sum_{i \mid \delta_{ij} \leq \rho} \lambda_i,$$
so that $dJ(t)/dt = -2 \sum_{i \mid \delta_{ij} \leq \rho} (u_{ij}^m + \lambda_i) \, y^T (p_i - x_j^* - t y)$, and the KKT condition 1) is satisfied if
$$\frac{dJ(0)}{dt} = -2 \sum_{i \mid \delta_{ij} \leq \rho} (u_{ij}^m + \lambda_i) \, y^T (p_i - x_j^*) = -2 \, y^T \sum_{i \mid \delta_{ij} \leq \rho} (u_{ij}^m + \lambda_i) (p_i - x_j^*) = 0. \tag{12}$$
For arbitrary nonzero $y$, Eq. (12) holds if and only if $\sum_{i \mid \delta_{ij} \leq \rho} (u_{ij}^m + \lambda_i)(p_i - x_j^*) = 0$, which, considering $\lambda = \lambda^*$, is equivalent to Eq. (10). This concludes the proof. ∎

We now present some technical results that are fundamental in order to construct a global minimizer for Problem 2.

Lemma 1: Given a nonempty, non-singleton set $B^*$, obtained as the intersection of a collection of $n$ balls $\{B_{1,\rho}, \dots, B_{n,\rho}\}$, each centered at $v_i \in \mathbb{R}^d$ and with radius $\rho$, i.e., $B^* = \cap_{i=1}^{n} B_{i,\rho}$, for every point $v \in \mathbb{R}^d$ there exist non-negative $\mu_1, \dots, \mu_n$ and a positive $\mu$ with $\mu + \sum_{i=1}^{n} \mu_i = 1$, such that the projection of $v$ onto $B^*$ is given by $\mathrm{Pr}_{B^*}(v) = \mu v + \sum_{i=1}^{n} \mu_i v_i$.

Proof: To prove our statement, we show that $\mathrm{Pr}_{B^*}(v)$ is the solution of a convex optimization problem, which can be solved by resorting to the KKT Theorem. Recall that the projection $\mathrm{Pr}_{B^*}(v)$ is the point of $B^*$ at minimum distance from $v$; in other words, $\mathrm{Pr}_{B^*}(v)$ is the solution of the following problem:
$$\mathrm{Pr}_{B^*}(v) = \arg\min_{z \in \mathbb{R}^d} \|z - v\|^2 \quad \text{subject to} \quad \|z - v_i\|^2 \leq \rho^2, \ \forall i = 1, \dots, n.$$
The above problem is convex, since $\|z - v\|^2$ and $\|z - v_i\|^2$ are convex functions. Moreover, since we assumed $B^*$ is neither empty nor a singleton, the above problem satisfies Slater's Condition, so the KKT Theorem (Theorem 1) applies. Let $\gamma_i$ be the Lagrange multiplier associated with the $i$-th constraint, and let $\gamma$ be the stack vector of all $\gamma_i$.
The Lagrangian function $L(z, \gamma)$ for the above problem is $L(z, \gamma) = \|z - v\|^2 + \sum_{i=1}^{n} \gamma_i (\|z - v_i\|^2 - \rho^2)$; hence, $\partial L(z, \gamma) / \partial z \,|_{z^*, \gamma^*} = 2 z^* - 2 v + 2 \sum_{i=1}^{n} \gamma_i^* (z^* - v_i) = 0$, which implies that the optimal solution has the form
$$z^* = \frac{v + \sum_{i=1}^{n} \gamma_i^* v_i}{1 + \sum_{i=1}^{n} \gamma_i^*}, \tag{13}$$
where the Lagrange multipliers $\gamma_i^*$ must satisfy $\gamma_i^* \geq 0$ and $\gamma_i^* (\|z^* - v_i\|^2 - \rho^2) = 0$ for all $i = 1, \dots, n$. Under the above constraints on $\gamma_i^*$, our statement is verified for $\mu = 1 / (1 + \sum_{i=1}^{n} \gamma_i^*) > 0$ and $\mu_i = \gamma_i^* / (1 + \sum_{i=1}^{n} \gamma_i^*) \geq 0$, for all $i = 1, \dots, n$. This completes our proof. ∎

Corollary 1: If $v \notin B^*$ then $\mathrm{Pr}_{B^*}(v)$ lies on the intersection of the boundaries of the balls corresponding to positive coefficients $\gamma_i$.

Proof: From Lemma 1, the coefficients $\gamma_i$ are non-negative and must satisfy $\gamma_i (\|z - v_i\|^2 - \rho^2) = 0$, for all $i = 1, \dots, n$. Hence, $\mathrm{Pr}_{B^*}(v)$ must belong to the intersection of the boundaries of the balls associated with positive $\gamma_i$. The proof is complete by noting that, when $v \notin B^*$, Eq. (13) implies that there must be at least one positive $\gamma_i$. ∎

We now provide a technical result which will be used later in the main theorem of this section.

Lemma 2: Let $V = \{v_1, \dots, v_n\} \subset \mathbb{R}^d$ with $n > 1$, and consider given $\mu > 0$ and $\mu_i \in (0, 1)$ for all $i \in \{1, \dots, n\}$ such that $\mu + \sum_{i=1}^{n} \mu_i = 1$. For any choice of $\alpha > 0$ there is a choice of $\lambda_i$ for all $i \in \{1, \dots, n\}$ satisfying
$$\mu_i = \frac{\lambda_i}{\alpha + \sum_{h=1}^{n} \lambda_h}, \qquad \lambda_i \geq 0. \tag{14}$$

Proof: For the sake of the proof, it is convenient to rearrange Eq. (14) as $\lambda_i = \mu_i (\alpha + \sum_{h=1}^{n} \lambda_h)$, $\lambda_i \geq 0$. Stacking over all $\lambda_i$ we get
$$\lambda = \alpha \boldsymbol{\mu} + \boldsymbol{\mu} \mathbf{1}^T \lambda, \tag{15}$$
where $\lambda$ and $\boldsymbol{\mu}$ are the stack vectors collecting all $\lambda_i$ and $\mu_i$, respectively.
Since by assumption $\mu > 0$, it holds that $1 - \mathbf{1}^T \boldsymbol{\mu} = \mu > 0$; from the Sherman-Morrison Theorem [17] it follows that the matrix $(I - \boldsymbol{\mu} \mathbf{1}^T)^{-1}$ exists and has the following structure:
$$(I - \boldsymbol{\mu} \mathbf{1}^T)^{-1} = I + \boldsymbol{\mu} \mathbf{1}^T / \mu, \tag{16}$$
where we notice that all entries are non-negative by construction. At this point, by plugging Eq. (16) into Eq. (15) we obtain $\lambda = (I - \boldsymbol{\mu} \mathbf{1}^T)^{-1} \alpha \boldsymbol{\mu} = (I + \boldsymbol{\mu} \mathbf{1}^T / \mu) \alpha \boldsymbol{\mu}$. Since $\alpha > 0$, $\mu_i \geq 0$, and the matrix in Eq. (16) has non-negative entries, we conclude that $\lambda \geq 0$. ∎

We now provide a constructive method to obtain a solution to Problem 2.

Theorem 4: Let $U$ be admissible for Problem 1 and assume that $x \in \mathbb{R}^{rd}$ satisfies Assumption c). Then
$$x_j^* = \mathrm{Pr}_{B_j^*} \left( \frac{\sum_{i \mid \delta_{ij} \leq \rho} u_{ij}^m \, p_i}{\sum_{i \mid \delta_{ij} \leq \rho} u_{ij}^m} \right) \tag{17}$$
is a global minimizer for Problem 2.

Proof: Let us define the set $I_j = \{ i \in \{1, \dots, n\} \mid u_{ij} > 0 \}$ and let
$$\hat{x}_j = \frac{\sum_{i=1}^{n} u_{ij}^m \, p_i}{\sum_{i=1}^{n} u_{ij}^m}, \qquad \hat{u} = \sum_{i=1}^{n} u_{ij}^m. \tag{18}$$
We first handle two special cases. If $\hat{x}_j \in B_j^*$ then $\mathrm{Pr}_{B_j^*}(\hat{x}_j) = \hat{x}_j$, and $\hat{x}_j$ itself satisfies Theorem 3 with all $\lambda_i^* = 0$. Moreover, if $B_j^*$ is a singleton, since $x$ satisfies Assumption c) it must hold that $B_j^* = \{x_j\}$. In this case, Problem 2 fails to satisfy Slater's Condition and Theorem 3 does not apply; however, Theorem 3 is no longer required if $B_j^* = \{x_j\}$, as $\mathrm{Pr}_{B_j^*}(\hat{x}_j) = x_j$ by construction. In other words, in this case agent $j$ does not move.

We now focus on the case $\hat{x}_j \notin B_j^*$ and $B_j^* \neq \{x_j\}$, that is, $\hat{x}_j$ does not belong to $B_j^*$ and $B_j^*$ is not a singleton. In this setting, our goal is to show that $\mathrm{Pr}_{B_j^*}(\hat{x}_j)$ is a solution having the form of Eq. (10), for which a closed form of the Lagrange multipliers $\lambda_i^*$ that satisfies conditions (a)–(c) in Theorem 3 is given. If $\hat{x}_j \notin B_j^*$, then $\mathrm{Pr}_{B_j^*}(\hat{x}_j) \in B_j^*$ lies on the boundary of $B_j^*$, hence condition (a) in Theorem 3 is satisfied.
By Lemma 1 and Corollary 1, there is an $I_j^* \subseteq I_j$ such that
$$\mathrm{Pr}_{B_j^*}(\hat{x}_j) = \hat{\mu} \hat{x}_j + \sum_{i \in I_j^*} \mu_i p_i, \tag{19}$$
with $\hat{\mu} \in (0, 1)$, $\mu_i \in (0, 1)$ for all $i \in I_j^*$, and $\hat{\mu} + \sum_{i \in I_j^*} \mu_i = 1$. In other words, the projection is a convex combination of $\hat{x}_j$ and the locations of the PoIs $p_i$ for $i \in I_j^*$, where the coefficient $\hat{\mu}$ of $\hat{x}_j$ is strictly positive by construction. Let $\lambda^*$ and $\boldsymbol{\mu}$ be the stack vectors of the Lagrange multipliers $\lambda_i^*$ and the coefficients $\mu_i$ for all $i \in I_j^*$, respectively. By Lemma 2, choosing $\lambda^* = \hat{u} (I + \boldsymbol{\mu} \mathbf{1}^T / \hat{\mu}) \boldsymbol{\mu} \geq 0$ implies
$$\mu_i = \frac{\lambda_i^*}{\hat{u} + \sum_{h \in I_j^*} \lambda_h^*}, \quad \forall i \in I_j^*, \tag{20}$$
and therefore
$$\hat{\mu} = \frac{\hat{u}}{\hat{u} + \sum_{h \in I_j^*} \lambda_h^*}. \tag{21}$$
Plugging Eqs. (20) and (21) into Eq. (19), and choosing $\lambda_i^* = 0$ for all $i \in \{1, \dots, n\} \setminus I_j^*$, we get
$$\mathrm{Pr}_{B_j^*}(\hat{x}_j) = \frac{\hat{u}}{\hat{u} + \sum_{h \in I_j^*} \lambda_h^*} \, \hat{x}_j + \sum_{i \in I_j^*} \frac{\lambda_i^*}{\hat{u} + \sum_{h \in I_j^*} \lambda_h^*} \, p_i,$$
which has the same structure as $x_j^*$ in Eq. (10). Note that all $\lambda_i^* \geq 0$, hence condition (b) in Theorem 3 is satisfied. Moreover, as discussed above, Corollary 1 guarantees that $\|\mathrm{Pr}_{B_j^*}(\hat{x}_j) - p_i\| = \rho$ for all $i \in I_j^*$, and since $\lambda_i^* = 0$ for $i \notin I_j^*$, condition (c) in Theorem 3 is also satisfied. This completes the proof. ∎

VI. PROPOSED ALGORITHM AND SIMULATIONS

We now discuss our distributed algorithm, based on the repeated alternated solution of the two sub-problems solved in the previous sections. Specifically, each agent alternates between the assignment phase, where it computes the associations with the sensed PoIs based on its current location, and the refinement phase, where it selects a new location based on the sensed PoIs and the associations. We point out that, once the associations are known, the refinement phase is executed locally by each agent, with no need for coordination among neighboring agents.
Conversely, communication among neighboring agents is required during the assignment phase, in order to collect all the distances $\|p_i - x_h(k)\|$ involving the $i$-th PoI and the agents able to sense it. Note that communication is ensured by Assumption b). This allows each agent to compute the associations via Eqs. (2) and (3). Note also that the algorithm execution requires an implicit coordination among the agents, which can be achieved by resorting to well-known protocols such as consensus algorithms [18]. Details are omitted for the sake of brevity. Finally, a stopping criterion is required: a simple choice is to let each agent stop when its new location is closer than $\varepsilon$ to the current one.

Let us now numerically demonstrate the effectiveness of the proposed algorithm. In Figure 1, we show an example where PoIs and agents are embedded in the unit square $[0,1]^2 \subset \mathbb{R}^2$. Specifically, we consider $n = 140$ PoIs (circles) and $r = 4$ agents (triangles). The figure compares the result of the proposed algorithm for $\rho = 0.35$ with the result of the standard C-means. In Figures 1a and 1b, we assign the same color to those PoIs having their largest association with the same agent. Moreover, we report the initial and final positions of the agents within the proposed algorithm in white and blue, respectively, and, for comparison, we report in red the final positions found via the standard C-means. According to the plots, in spite of the sparsity, quite similar results are achieved. The plots in Figures 1c and 1d show the associations obtained for one cluster within the proposed algorithm (for $\rho = 0.35$) and within the standard C-means. This gives an intuitive explanation of the effect of the proximity constraints on the associations, i.e., memberships with a very low degree are truncated in the proposed algorithm.
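To make the overall alternating scheme concrete, the following is a minimal, centralized Python sketch of the loop (function names and data layout are our own illustrative choices). It assumes generic positions, so the degenerate assignment rule of Eq. (2) never triggers, and it approximates the exact projection onto the intersection $B_j^*$ used in Eq. (17) by cyclic projections onto the individual balls, rather than the paper's closed-form multiplier construction; this is an assumption of the sketch, not the paper's method.

```python
def dist(p, q):
    """Euclidean distance between two points given as tuples."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def assign(P, X, rho, m=2):
    """Assignment phase, Eq. (3): memberships of each PoI to the agents
    within range rho. Assumes no agent coincides exactly with a PoI."""
    U = [[0.0] * len(X) for _ in P]
    for i, p in enumerate(P):
        near = [j for j in range(len(X)) if dist(p, X[j]) <= rho]
        for j in near:
            dij = dist(p, X[j])
            U[i][j] = 1.0 / sum((dij / dist(p, X[h])) ** (2 / (m - 1)) for h in near)
    return U

def project_ball(v, c, rho):
    """Projection onto the closed ball B(c, rho), as in Definition 3."""
    d = dist(v, c)
    if d <= rho:
        return v
    a = rho / d
    return tuple(a * vi + (1 - a) * ci for vi, ci in zip(v, c))

def refine(P, X, U, rho, m=2, sweeps=200):
    """Refinement phase, Eq. (17): move each agent to the weighted centroid
    of its associated PoIs, then push it into B*_j via cyclic projections
    onto the individual balls (approximation of the exact projection)."""
    newX = []
    for j in range(len(X)):
        idx = [i for i in range(len(P)) if U[i][j] > 0]
        w = [U[i][j] ** m for i in idx]
        z = tuple(sum(w[k] * P[i][d] for k, i in enumerate(idx)) / sum(w)
                  for d in range(len(X[j])))
        for _ in range(sweeps):
            for i in idx:
                z = project_ball(z, P[i], rho)
        newX.append(z)
    return newX

def coverage(P, X, rho, m=2, eps=1e-6, max_iter=100):
    """Alternate assignment and refinement until agents stop moving."""
    for _ in range(max_iter):
        U = assign(P, X, rho, m)
        newX = refine(P, X, U, rho, m)
        if all(dist(x, y) < eps for x, y in zip(X, newX)):
            return newX, U
        X = newX
    return X, assign(P, X, rho, m)
```

In a genuinely distributed implementation, `assign` would be computed per PoI from the distances exchanged among neighboring agents (Assumption b), while `refine` is already local to each agent.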
Figure 2 reports the value of the objective function with respect to the iterations for the solution found by the C-means and by the proposed algorithm for different values of $\rho$. Overall, the figure numerically demonstrates that the behavior of the proposed algorithm tends to coincide with that of the standard C-means as $\rho$ grows, i.e., as $\rho$ increases, the agents become aware of more and more PoIs, until the proximity constraints are no longer active. Notably, this numerical simulation suggests that our algorithm can be thought of as a generalization of the standard C-means.

VII. CONCLUSIONS

In this paper we provide a novel distributed coverage algorithm, inspired by the standard C-means algorithm, for a set of agents aiming at covering in a non-exclusive way a discrete and finite set of points of interest. Several directions are envisioned for future work: (1) consider an asynchronous setting and moving PoIs; (2) introduce the possibility for an agent to leave some of the PoIs uncovered if they are already covered by other agents; (3) provide a formal characterization of the convergence properties of the overall proposed algorithm.

Fig. 1. Plots 1a and 1b: comparison of the proposed algorithm for $\rho = 0.35$ (plot 1a) with the standard C-means (plot 1b), considering a particular instance. Plots 1c and 1d: comparison of the associations between a particular agent and all the PoIs within the proposed algorithm for $\rho = 0.35$ (plot 1c) and within the standard C-means (plot 1d). The color of a PoI tends to red or blue as its association approaches one or zero, respectively. Black crosses indicate associations that are exactly zero.

REFERENCES

[1] J. Cortés and F. Bullo, "From geometric optimization and nonsmooth analysis to distributed coordination algorithms," in Proceedings of the IEEE Conference on Decision and Control, pp. 3274–3280, 2003.
Fig. 2. Comparison of the value of the objective function at each step of the proposed algorithm, for different choices of $\rho$ (0.35, 0.5, 0.65, 0.8, 0.95), with the one obtained by the standard C-means, considering the example of Figure 1.

[2] ——, "Coordination and geometric optimization via distributed dynamical systems," SIAM Journal on Control and Optimization, vol. 44, no. 5, pp. 1543–1574, 2005.
[3] K. Laventall and J. Cortés, "Coverage control by multi-robot networks with limited-range anisotropic sensory," International Journal of Control, vol. 82, no. 6, pp. 1113–1121, 2009.
[4] Y. Kantaros and M. M. Zavlanos, "Distributed communication-aware coverage control by mobile sensor networks," Automatica, vol. 63, pp. 209–220, 2016.
[5] S. G. Lee, Y. Diaz-Mercado, and M. Egerstedt, "Multirobot control using time-varying density functions," IEEE Transactions on Robotics, vol. 31, no. 2, pp. 489–493, 2015.
[6] J. M. Palacios-Gasós, E. Montijano, C. Sagüés, and S. Llorente, "Distributed coverage estimation and control for multirobot persistent tasks," IEEE Transactions on Robotics, vol. 32, no. 6, pp. 1444–1460, 2016.
[7] A. Pierson and M. Schwager, "Adaptive inter-robot trust for robust multi-robot sensor coverage," in Robotics Research, pp. 167–183, Springer, 2016.
[8] B. Jiang, Z. Sun, B. D. O. Anderson, and C. Lageman, "Higher order mobile coverage control with application to localization," arXiv preprint arXiv:1703.02424v1, 2017.
[9] E. Montijano and A. R. Mosteo, "Efficient multi-robot formations using distributed optimization," in Proceedings of the IEEE Conference on Decision and Control, pp. 6167–6172, 2014.
[10] J. C. Bezdek, Pattern Recognition with Fuzzy Objective Function Algorithms, Springer Science & Business Media, 2013.
[11] S. Patel, V. Patel, and D. Jinwala, "Privacy preserving distributed k-means clustering in malicious model using zero knowledge proof," in Proceedings of the International Conference on Distributed Computing and Internet Technology, pp. 420–431, 2013.
[12] G. Oliva, R. Setola, and C. N. Hadjicostis, "Distributed k-means algorithm," arXiv preprint arXiv:1312.4176, 2013.
[13] J. Qin, W. Fu, H. Gao, and W. X. Zheng, "Distributed K-means algorithm and fuzzy c-means algorithm for sensor networks based on multiagent consensus theory," IEEE Transactions on Cybernetics, vol. 47, no. 3, pp. 772–783, 2016.
[14] G. Oliva, R. Setola, and C. N. Hadjicostis, "Distributed C-means data clustering algorithm," in Proceedings of the IEEE Conference on Decision and Control, pp. 4396–4401, 2016.
[15] W. I. Zangwill, Nonlinear Programming: A Unified Approach, Prentice-Hall, 1969.
[16] J. Nayak, B. Naik, and H. Behera, "Fuzzy C-means (FCM) clustering algorithm: a decade review from 2000 to 2014," in Computational Intelligence in Data Mining – Volume 2, pp. 133–149, Springer, 2015.
[17] J. Sherman and W. J. Morrison, "Adjustment of an inverse matrix corresponding to a change in one element of a given matrix," The Annals of Mathematical Statistics, vol. 21, no. 1, pp. 124–127, 1950.
[18] R. Olfati-Saber and R. M. Murray, "Consensus problems in networks of agents with switching topology and time-delays," IEEE Transactions on Automatic Control, vol. 49, no. 9, pp. 1520–1533, 2004.