Distributed and Proximity-Constrained
C-Means for Discrete Coverage Control
Gabriele Oliva1,∗, Andrea Gasparri2, Adriano Fagiolini3, and Christoforos N. Hadjicostis4
Abstract— In this paper we present a novel distributed
coverage control framework for a network of mobile agents, in
charge of covering a finite set of points of interest (PoI), such
as people in danger, geographically dispersed equipment or
environmental landmarks. The proposed algorithm is inspired
by C-Means, an unsupervised learning algorithm originally
proposed for non-exclusive clustering and for identification
of cluster centroids from a set of observations. To cope
with the agents’ limited sensing range and avoid infeasible
coverage solutions, traditional C-Means needs to be enhanced
with proximity constraints, ensuring that each agent takes into
account only neighboring PoIs. The proposed coverage control
framework provides useful information concerning the ranking
or importance of the different PoIs to the agents, which
can be exploited in further application-dependent data fusion
processes, patrolling, or disaster relief applications.
I. INTRODUCTION
Coverage control has been gaining particular attention from the scientific community in recent years. Typical approaches follow the path of the seminal works in [1], [2], where agents interact in order to achieve two
main objectives: (1) to partition the area of interest into
zones for which a single agent is responsible for the cov-
ering (e.g., Voronoi regions); (2) to reach locations that
minimize a function of the distance from each point in
the region assigned to them (e.g., navigate to a weighted
centroid of the assigned Voronoi region). Over the years, the
importance of taking into account agents’ limited sensing
and communication capabilities has become paramount [3],
[4]. Recent innovations in the field include, among others,
time varying methodologies [5], dynamic approaches [6],
and adaptive management of agents’ trust [7]. Note that
most of the existing literature focuses on continuous space
coverage, while in several practical situations it might be
required to cover just a discrete set of points of interest (PoI).
In fact, a discrete-space approach has lower computational demands than continuous-space ones and allows the modeling of specific targets of interest. To the best of
our knowledge, work at the state of the art focused on the
coverage of discrete points of interest is quite sparse. In a recent work [8], the authors use k-means to solve a discrete coverage problem where multiple agents are in charge of a set of PoIs. A related work, even though not specifically focused on coverage, is [9], where roles are assigned to the agents via constrained discrete optimization. In both cases, however, the agents are not constrained to have a limited sensing radius.
1 Unit of Automatic Control, Department of Engineering, Università Campus Bio-Medico di Roma, via Álvaro del Portillo 21, 00128, Rome, Italy.
2 University Roma Tre, Department of Engineering, Via della Vasca Navale 79, 00146, Rome, Italy.
3 Dipartimento di Energia, Ingegneria dell'Informazione e Modelli Matematici (DEIM), University of Palermo, Viale delle Scienze, Edificio 10, 90128, Palermo, Italy.
4 Department of Electrical and Computer Engineering, University of Cyprus, 75 Kallipoleos Avenue, P.O. Box 20537, 1678 Nicosia, Cyprus.
∗ Corresponding author, email: g.oliva@unicampus.it
In this paper, we consider a distributed problem where
there is a finite and discrete set of n PoIs in fixed locations
in space, and a set of r < n mobile agents aims at achieving
the following two objectives: (1) to partition the PoIs in r
groups such that PoIs in the same group are spatially close
while PoIs in different groups are far apart; (2) to associate
each agent to a group and assign a location to such agent in
order to maximize the coverage. Our distributed approach is
inspired by the C-means [10], a non-exclusive and unsuper-
vised data clustering algorithm, where a set of points (e.g.,
the PoIs) are grouped so that each point can be associated, at
the same time, to different clusters (i.e., to different agents)
with different intensities. From a mathematical standpoint,
we extend the classical C-means by introducing proximity
constraints, which account for the limited sensing range of
the agents. Distributed implementations of clustering algorithms, including the C-means, can be found in the literature [11]–[14]. Compared to these works, where the sensors collaborate in order to compute the centroids and their associations in a distributed fashion, our work reverses the perspective: the agents play the role of the centroids and collaborate in order to compute their locations and their associations to the PoIs, while the latter play the role of the sensors.
The outline of the paper is as follows: Section II provides
some preliminary definitions, while Section III introduces
the problem addressed in this paper; Sections IV and V
solve two sub-problems that are the basis for our algorithm,
while Section VI provides the proposed algorithm along
with simulation results; the final Section VII collects some
conclusive remarks and future work directions.
II. PRELIMINARIES
In the following, we consider the Euclidean space Rd,
equipped with the usual norm. Let us define the metric
projection onto a convex set as follows:
Definition 1 (Metric Projection): Let X ⊂Rd be a convex
set. The metric projection onto X is a function PrX : Rd →X
such that PrX(v) = argminz∈X ∥v−z∥.
arXiv:1612.03849v4 [cs.RO] 16 Sep 2017
In the following, we refer to a metric projection simply as projection. To serve the scope of this paper, we now specialize the projection onto a closed ball.
Definition 2 (Closed Ball): A closed ball Bi,ρ ⊂ Rd with radius ρ and center pi ∈ Rd is given by Bi,ρ = {z ∈ Rd : ∥pi − z∥ ≤ ρ}.
Definition 3 (Projection onto a Closed Ball): Let Bi,ρ ⊂ Rd be the closed ball of radius ρ > 0 centered at pi ∈ Rd. The projection onto Bi,ρ is the function PrBi,ρ(v) = αv + (1 − α)pi, where α = ρ/∥v − pi∥ if ∥v − pi∥ > ρ and α = 1 otherwise.
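The projection of Definition 3 admits a direct implementation; the following sketch (function and variable names are ours) mirrors the closed form above.

```python
import numpy as np

def project_onto_ball(v, p_i, rho):
    """Metric projection of v onto the closed ball of radius rho centered at p_i
    (Definition 3): points inside the ball are left untouched, points outside
    are pulled back onto the boundary along the segment towards p_i."""
    dist = np.linalg.norm(v - p_i)
    alpha = rho / dist if dist > rho else 1.0
    return alpha * v + (1.0 - alpha) * p_i
```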
We now review the basics of convex constrained opti-
mization. The reader is referred to [15] for a comprehensive
overview of the topic.
Definition 4 (Convex Constrained Optimization Problem): A convex constrained optimization problem with s constraints has the following structure:

min_{x∈Rd} f(x)
subject to gi(x) ≤ 0, i = 1,…,q
hi(x) = 0, i = q+1,…,s,   (1)
where f and all gi and hi are convex functions. The set of
feasible solutions for the above problem is given by
Ω= {x ∈Rd |gi(x) ≤0, 1 ≤i ≤q; hi(x) = 0, q+1 ≤i ≤s},
which is a convex set since all gi and hi are convex.
We now review conditions that are fundamental in order
to solve convex constrained optimization problems.
Definition 5 (Slater's Conditions): The convex constrained optimization problem in Eq. (1) satisfies Slater's Condition if there is an x ∈ Ω such that gi(x) < 0 for all i = 1,…,q and hi(x) = 0 for all i = q+1,…,s. Moreover, if there is an x ∈ Ω that satisfies Slater's Condition and is such that all nonlinear gi(x) are negative, while all linear gi(x) are non-positive, then the problem is said to satisfy the restricted Slater's Condition.
We now review Karush-Kuhn-Tucker (KKT) Theorem (see
[15] and references therein).
Theorem 1 (KKT Theorem): Consider a convex constrained optimization problem as in Eq. (1) that satisfies the restricted Slater's Condition, and let the Lagrangian function be

f(x, λ) = f(x) + ∑_{i=1}^{q} λi gi(x) + ∑_{i=q+1}^{s} λi hi(x),

where λ = [λ1,…,λs]^T. The point x∗ ∈ Rd is a global minimizer for the problem if and only if there exists λ∗ such that the following conditions hold:
1) ∂f(x, λ)/∂x|_{x∗,λ∗} = 0; 2) λi∗ gi(x∗) = 0 for all i = 1,…,q; 3) gi(x∗) ≤ 0 for 1 ≤ i ≤ q and hi(x∗) = 0 for q+1 ≤ i ≤ s; 4) λi∗ ≥ 0 for all i = 1,…,s.
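As a one-line illustration of how Theorem 1 is used (a toy example of ours, not from the paper), consider minimizing f(x) = x^2 over x ∈ R subject to g(x) = 1 − x ≤ 0:

```latex
\min_{x\in\mathbb{R}} x^2 \quad \text{s.t. } g(x)=1-x\le 0, \qquad
f(x,\lambda)=x^2+\lambda(1-x).
% 1) stationarity:       2x^\ast-\lambda^\ast=0
% 2) complementarity:    \lambda^\ast(1-x^\ast)=0
% 3) primal feasibility: 1-x^\ast\le 0
% 4) dual feasibility:   \lambda^\ast\ge 0
% All four conditions hold at x^\ast=1, \lambda^\ast=2,
% which is indeed the constrained minimizer.
```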
III. PROBLEM STATEMENT
Let us consider a set of n PoIs and r < n agents deployed
in a bounded region in Rd, with d ∈{2, 3}; we assume that
each PoI i is in a fixed location pi ∈Rd, while the j-th
agent is initially in location xj(0) ∈Rd. The following set of
assumptions is now in order: a) an agent j is able to sense
all PoIs for which the Euclidean distance from j is less than
or equal to ρ; b) an agent j is able to communicate with
all other agents for which the Euclidean distance from j is
less than or equal to a threshold θ ≥2ρ; c) the agents are
deployed initially in a way such that each PoI is sensed by at
least one agent, and each agent senses at least one PoI; d) the set {p1,…,pn} contains more than r distinct points. Note that the last two assumptions, i.e., c) and d), are inherited from the standard C-means [10]. In particular, Assumption c) ensures that no PoI or agent is neglected by the optimization process.
Let x = [x1^T, …, xr^T]^T ∈ Rrd collect the locations of all agents and let U be an n × r matrix encoding the associations between the agents and the PoIs, having uij as its (i, j)-th entry. Moreover, let δij = ∥pi − xj∥. The problem can be stated as follows.
stated as follows.
Problem 1: Find x∗ ∈ Rrd and U∗ ∈ Rn×r such that

J(x∗, U∗) = min_{x,U} ∑_{i=1}^{n} ∑_{j=1}^{r} uij^m δij^2

subject to

∑_{j=1}^{r} uij = 1, ∀i   (I)
∑_{i=1}^{n} uij > 0, ∀j   (II)
uij ∈ [0, 1], ∀i, j   (III)
uij (δij^2 − ρ^2) ≤ 0, ∀i, j   (IV)

where m is a finite positive integer that is greater than one.
Problem 1 extends the standard C-means problem [10]
through the proximity constraints (IV). Note that constraint
(I) ensures the complete coverage of each PoI by the agents,
whereas constraint (II) ensures that each agent covers at least
one PoI. We point out that the larger the parameter m is, the
less the associations uij influence the objective function; as
in the case of C-means, such a parameter accounts for the
degree of overlapping among the different clusters1.
Note also that constraint (III) requires uij ∈[0,1]; as a
consequence, the combination of constraint (III) and the
proximity constraints (IV) implies that uij = 0 whenever
δij > ρ. Therefore, these constraints ensure that an agent j
does not cover those PoIs that are out-of-reach from it.
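To make the constraint structure concrete, the objective of Problem 1 and an admissibility check can be sketched as follows (a minimal illustration under our own naming: P stacks the PoI locations row-wise, X the agent locations, U the membership matrix):

```python
import numpy as np

def coverage_objective(P, X, U, m=2):
    """J(x, U) = sum_i sum_j uij^m * delta_ij^2, as in Problem 1."""
    D2 = ((P[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)  # n x r squared distances
    return (U ** m * D2).sum()

def is_admissible(P, X, U, rho, tol=1e-9):
    """Check constraints (I)-(IV) of Problem 1."""
    D2 = ((P[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    return bool(np.allclose(U.sum(axis=1), 1.0)           # (I) each PoI fully covered
                and (U.sum(axis=0) > 0).all()             # (II) each agent covers some PoI
                and ((U >= -tol) & (U <= 1 + tol)).all()  # (III) memberships in [0, 1]
                and (U * (D2 - rho ** 2) <= tol).all())   # (IV) proximity constraints
```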
In the following, we develop a distributed and iterative
algorithm to solve Problem 1; specifically, at each iteration k,
our algorithm alternates between an assignment phase where
we find the optimal associations uij(k) given the current
locations x(k) and a refinement phase where we assume the
associations uij(k) are fixed and we find the optimal location
x(k + 1) for the agents2. The two steps are iterated until a
stopping condition is met.
We next characterize (in Sections IV and V) the optimality
conditions for each sub-problem, respectively. Future work
will be focused on studying the optimality of the overall
algorithm.
IV. ASSIGNMENT PHASE
In this section we discuss how, given fixed agents’ loca-
tions, the optimal associations U can be computed.
1When there is no a priori knowledge, a popular choice is m = 2 (see
[16] and references therein).
2We abstract away from the actual kinematics of the agents and we neglect
the actual planning problem, assuming that a control loop to guide the agent
toward the desired point exists.
Specifically, the optimal choice for the i-th row of U (i.e.,
the associations of the i-th PoI to all the agents) is as follows.
If there is at least one agent j such that xj = pi (i.e., the j-th
agent lies in the same location as the i-th PoI), the terms
ui1,…,uir must satisfy3

∑_{j=1}^{r} uij(k) = 1,  with uij(k) = 0 if δij ≠ 0;   (2)

if, conversely, xj ≠ pi for all agents j (i.e., no agent is at the same location as the i-th PoI), the terms uij are as follows:

uij = ( ∑_{h|δih≤ρ} (δij/δih)^{2/(m−1)} )^{−1} if δij ≤ ρ, and uij = 0 otherwise.   (3)
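The assignment rule of Eqs. (2)–(3) can be implemented directly; the sketch below (our naming) returns the full membership matrix for fixed agent locations.

```python
import numpy as np

def assignment_step(P, X, rho, m=2):
    """Optimal memberships U for fixed agent locations, via Eqs. (2)-(3)."""
    n, r = P.shape[0], X.shape[0]
    U = np.zeros((n, r))
    for i in range(n):
        d = np.linalg.norm(P[i] - X, axis=1)   # distances delta_ij, j = 1..r
        colocated = np.isclose(d, 0.0)
        if colocated.any():
            # Eq. (2): membership concentrates on the co-located agent(s)
            U[i, colocated] = 1.0 / colocated.sum()
        else:
            near = np.flatnonzero(d <= rho)    # only agents within sensing range
            for j in near:
                # Eq. (3): inverse-distance-ratio memberships
                U[i, j] = 1.0 / np.sum((d[j] / d[near]) ** (2.0 / (m - 1)))
    return U
```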
We now prove the optimality of this strategy.
Theorem 2: For given agents’ locations x that satisfy
Assumption c), a solution {x,U} where U satisfies Equations
(2) and (3) is a global minimizer for Problem 1.
Proof: Consider a given x that satisfies Assumption c). Problem 1 reduces to finding U that minimizes J(x,U) = J(U) subject to constraints (I)–(III); as for the proximity constraints (IV), if δij ≤ ρ then the corresponding proximity constraint is not active, while for δij > ρ, since uij ∈ [0,1], the constraint reduces to uij = 0.
As a consequence, the objective function becomes

J(U) = ∑_{i=1}^{n} ∑_{j|δij≤ρ} uij^m δij^2.
Let I1 be the index set of the PoIs that are not overlapped by any agent and I2 be the complementary set, i.e., I1 = {i ∈ {1,…,n} | δij > 0 for all j ∈ {1,…,r}} and I2 = {1,…,n} \ I1. We can split J(U) into two terms:

J(U) = ∑_{i∈I1} ∑_{j|δij≤ρ} uij^m δij^2 + ∑_{i∈I2} ∑_{j|δij≤ρ} uij^m δij^2 = J1(U) + J2(U).   (4)
From Eq. (2), it follows that every term uij^m δij^2 appearing in J2(U) is null (for i ∈ I2, either δij = 0 or uij = 0), hence J2(U) = 0. This implies that the optimality of U depends only on the entries uij appearing in J1(U). Since x satisfies Assumption c) by construction, using Eq. (3) it follows that constraints (II) and (III) are always satisfied.
Furthermore, it is convenient to rewrite the variables uij as the squares of new variables wij, i.e., uij = wij^2. By doing this, and since constraint (IV) implies that uij = 0 when δij > ρ, we can rewrite constraint (I) as ∑_{j|δij≤ρ} wij^2 = 1. As a result, our problem becomes that of finding W∗ ∈ Rn×r that solves

J(W∗) = min_W ∑_{i∈I1} ∑_{j|δij≤ρ} wij^{2m} δij^2
subject to ∑_{j|δij≤ρ} wij^2 = 1, ∀i ∈ I1.

Note that the above problem is convex and has just equality constraints, hence Slater's Condition is trivially satisfied and Theorem 1 applies.
3 When two or more agents are at the same position as a PoI pi, there are infinitely many choices for the associations uij that satisfy Eq. (2).
Let ζ = [ζ1, ζ2, …]^T collect the Lagrangian multipliers associated with the equality constraints (one ζi for each i ∈ I1); the corresponding Lagrangian function is

J(W, ζ) = ∑_{i∈I1} ∑_{j|δij≤ρ} wij^{2m} δij^2 + ∑_{i∈I1} ζi ( ∑_{j|δij≤ρ} wij^2 − 1).
By Theorem 1, {W∗, ζ∗} is a globally optimal solution to the above problem if and only if it satisfies

∂J(W, ζ)/∂wij |_{W∗,ζ∗} = 2m (wij∗)^{2m−1} δij^2 + 2 wij∗ ζi∗ = 0,   (5)

for all i ∈ I1 and j ∈ {1,…,r}, and

∑_{j|δij≤ρ} (wij∗)^2 = 1, ∀i ∈ I1.   (6)
From Eq. (5) it follows that

(wij∗)^2 = uij∗ = ( −ζi∗ / (m δij^2) )^{1/(m−1)}.   (7)

Summing over all h such that δih ≤ ρ and applying Eq. (6), we get

(−ζi∗)^{1/(m−1)} = 1 / ∑_{h|δih≤ρ} ( 1/(m δih^2) )^{1/(m−1)}.   (8)
By plugging Eq. (8) into Eq. (7), it follows that

(wij∗)^2 = 1 / [ (m δij^2)^{1/(m−1)} ∑_{h|δih≤ρ} ( 1/(m δih^2) )^{1/(m−1)} ] = 1 / ∑_{h|δih≤ρ} (δij/δih)^{2/(m−1)}.

Therefore, since uij∗ = (wij∗)^2, we conclude that Eq. (3) corresponds to the globally optimal solution.
V. REFINEMENT PHASE
Let us now discuss the refinement phase within the pro-
posed algorithm, i.e., a way to find the optimal location of
the agents, for fixed associations.
Note that, when the fixed associations are admissible for
Problem 1, our problem is equivalent to solving a collection
of independent sub-problems (i.e., one for each agent) having
the following structure.
Problem 2: Find xj∗ ∈ Rd that solves

J(xj∗) = min_{xj∈Rd} ∑_{i|δij≤ρ} uij^m δij^2
subject to δij^2 ≤ ρ^2, ∀i s.t. uij > 0.   (9)
We now characterize the optimal solution of Problem 2.
To this end, we first define the set of admissible solutions.
Definition 6 (Admissible Solutions to Problem 2): The set of admissible solutions to Problem 2 is B∗j = ∩_{i|uij>0} Bi,ρ, where Bi,ρ is the ball of radius ρ centered at the location pi of the i-th PoI.
Remark 1: Problem 2 is convex, since for fixed terms uij
the objective function is a linear combination of convex
functions, and similarly the constraints are convex functions.
We now establish a condition for Problem 2 to satisfy
Slater’s Condition.
Proposition 1: Problem 2 satisfies Slater's Condition if and only if B∗j is not a singleton.
Proof: If B∗j is a singleton, at least two constraints are satisfied with equality at the unique feasible point (i.e., at least two balls are tangent), which prevents Slater's Condition from being satisfied. Conversely, if B∗j contains two distinct points, by the strict convexity of the balls their midpoint satisfies all constraints strictly.
We now provide an optimality condition to move an agent j when U is fixed.
Theorem 3: Let U be admissible to Problem 1, and suppose B∗j is not a singleton. Then

xj∗ = ∑_{i|δij≤ρ} (uij^m + λi∗) pi / ∑_{i|δij≤ρ} (uij^m + λi∗)   (10)

is a global minimizer for Problem 2 if and only if there exists λ∗ = [λ1∗,…,λn∗]^T ∈ Rn such that: (a) xj∗ ∈ B∗j; (b) λi∗ ≥ 0 for all i; (c) λi∗ (∥pi − xj∗∥^2 − ρ^2) = 0 for all i.
Proof: As discussed in Remark 1, Problem 2 is convex. Moreover, as shown in Proposition 1, it satisfies Slater's condition and therefore also the restricted Slater's condition. Hence, the KKT Theorem (Theorem 1) applies to Problem 2. Let us consider the Lagrangian function associated with Problem 2, i.e.,

J(xj, λ) = ∑_{i|δij≤ρ} uij^m δij^2 + ∑_{i|δij≤ρ} λi (δij^2 − ρ^2).   (11)

By Theorem 1, a point xj∗ is a global minimizer for Problem 2 if and only if: 1) ∂J(xj, λ)/∂xj|_{xj∗,λ∗} = 0; 2) δij ≤ ρ for all i ∈ {1,…,n}; 3) λi∗ ≥ 0 for all i ∈ {1,…,n}; 4) λi∗ (∥pi − xj∗∥^2 − ρ^2) = 0 for all i ∈ {1,…,n}. Clearly, conditions (a)–(c) are equivalent to the above conditions (2)–(4). To prove Theorem 3, therefore, we must show that any solution satisfying condition (1) has the structure of Eq. (10); to establish this, we show that, for any λ, ∂J(xj, λ)/∂xj vanishes at xj∗ along any arbitrary direction y ∈ Rd with ∥y∥ ≠ 0. Let t ∈ R, let xj∗ be the optimal solution of the problem in Eq. (9), and define J(t) = J(xj∗ + ty, λ). It holds

J(t) = ∑_{i|δij≤ρ} (uij^m + λi) ∥pi − (xj∗ + ty)∥^2 − ρ^2 ∑_{i|δij≤ρ} λi.

We can rearrange J(t) as

J(t) = ∑_{i|δij≤ρ} (uij^m + λi) (pi − xj∗ − ty)^T (pi − xj∗ − ty) − ρ^2 ∑_{i|δij≤ρ} λi,

so that dJ(t)/dt = −2 ∑_{i|δij≤ρ} (uij^m + λi) y^T (pi − xj∗ − ty), and the KKT condition (1) is satisfied if

dJ(0)/dt = −2 ∑_{i|δij≤ρ} (uij^m + λi) y^T (pi − xj∗) = −2 y^T ∑_{i|δij≤ρ} (uij^m + λi) (pi − xj∗) = 0.   (12)

For arbitrary nonzero y, Eq. (12) holds if and only if

∑_{i|δij≤ρ} (uij^m + λi) (pi − xj∗) = 0,

which, considering λ = λ∗, is equivalent to Eq. (10). This concludes the proof.
We now present some technical results that are fundamen-
tal in order to construct a global minimizer for Problem 2.
Lemma 1: Given a nonempty, non-singleton set B∗ obtained as the intersection of a collection of n balls {B1,ρ,…,Bn,ρ}, each centered at vi ∈ Rd and with radius ρ, i.e., B∗ = ∩_{i=1}^{n} Bi,ρ, for every point v ∈ Rd there exist non-negative µ1,…,µn and a positive µ with µ + ∑_{i=1}^{n} µi = 1 such that the projection of v onto B∗ is given by

PrB∗(v) = µ v + ∑_{i=1}^{n} µi vi.

Proof: To prove our statement, we show that PrB∗(v) is the solution of a convex optimization problem, which can be solved by resorting to the KKT Theorem. Let us recall that the projection PrB∗(v) is the point of minimum distance from v among those in B∗. In other words, PrB∗(v) is the solution of the following problem:

PrB∗(v) = argmin_{z∈Rd} ∥z − v∥^2
subject to ∥z − vi∥^2 ≤ ρ^2, ∀i = 1,…,n.

The above problem is convex, since ∥z − v∥^2 and the ∥z − vi∥^2 are convex functions. Moreover, since we assumed B∗ is neither empty nor a singleton, the above problem satisfies Slater's Condition, so the KKT Theorem (Theorem 1) applies. Let γi be the Lagrangian multiplier associated with the i-th constraint, and let γ be the stack vector of all γi. The Lagrangian function for the above problem is

L(z, γ) = ∥z − v∥^2 + ∑_{i=1}^{n} γi (∥z − vi∥^2 − ρ^2);

hence ∂L(z, γ)/∂z|_{z∗,γ∗} = 2z∗ − 2v + 2 ∑_{i=1}^{n} γi∗ (z∗ − vi) = 0, which implies that the optimal solution has the form

z∗ = ( v + ∑_{i=1}^{n} γi∗ vi ) / ( 1 + ∑_{i=1}^{n} γi∗ ),   (13)

where the Lagrangian multipliers γi∗ must satisfy γi∗ ≥ 0 and γi∗ (∥z∗ − vi∥^2 − ρ^2) = 0 for all i = 1,…,n. Under these constraints on the γi∗, our statement is verified with µ = 1/(1 + ∑_{i=1}^{n} γi∗) > 0 and µi = γi∗/(1 + ∑_{i=1}^{n} γi∗) ≥ 0 for all i = 1,…,n. This completes our proof.
Corollary 1: If v ∉ B∗, then PrB∗(v) lies on the intersection of the boundaries of the balls corresponding to positive coefficients γi∗.
Proof: From Lemma 1, the coefficients γi∗ are non-negative and must satisfy γi∗ (∥z∗ − vi∥^2 − ρ^2) = 0 for all i = 1,…,n. Hence, PrB∗(v) must belong to the intersection of the boundaries of the balls associated with positive γi∗. The proof is complete upon noting that, when v ∉ B∗, Eq. (13) implies that at least one γi∗ must be positive.
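Lemma 1 and Corollary 1 characterize the projection onto an intersection of balls but do not prescribe how to compute it. One standard numerical option, which we sketch here as an assumption of ours rather than the paper's prescribed method, is Dykstra's alternating-projection algorithm:

```python
import numpy as np

def ball_projection(v, c, rho):
    """Metric projection onto a single closed ball (Definition 3)."""
    d = np.linalg.norm(v - c)
    return v if d <= rho else c + rho * (v - c) / d

def intersection_projection(v, centers, rho, iters=1000):
    """Metric projection onto the intersection of balls via Dykstra's algorithm.
    Each ball keeps a correction term so the iterates converge to the true
    metric projection (plain alternating projections would only reach some
    feasible point of the intersection)."""
    z = np.asarray(v, dtype=float).copy()
    corrections = [np.zeros_like(z) for _ in centers]
    for _ in range(iters):
        for i, c in enumerate(centers):
            y = z + corrections[i]
            z = ball_projection(y, c, rho)
            corrections[i] = y - z
    return z
```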
We now provide a technical result which will be used later
in the main theorem of this section.
Lemma 2: Let V = {v1,…,vn} ⊂ Rd with n > 1 and consider given µ > 0 and µi ∈ (0,1) for all i ∈ {1,…,n} such that µ + ∑_{i=1}^{n} µi = 1. For any choice of α > 0 there is a choice of λi for all i ∈ {1,…,n} satisfying

µi = λi / ( α + ∑_{h=1}^{n} λh ),  λi ≥ 0.   (14)

Proof: For the sake of the proof, it is convenient to rearrange Eq. (14) as λi = µi (α + ∑_{h=1}^{n} λh), λi ≥ 0. Stacking for all λi we get

λ = α µ + µ 1^T λ,   (15)

where λ and µ are the stack vectors collecting all λi and µi, respectively. Since by assumption the scalar µ > 0, it holds that 1 − 1^T µ = µ > 0; hence, from the Sherman–Morrison Theorem [17] it follows that the matrix (I − µ1^T)^{−1} exists and has the structure

(I − µ1^T)^{−1} = I + µ1^T/µ,   (16)

where all entries are non-negative by construction. At this point, by plugging Eq. (16) into Eq. (15), we obtain λ = (I − µ1^T)^{−1} α µ = (I + µ1^T/µ) α µ. Since α > 0, µi ≥ 0 and the matrix in Eq. (16) has non-negative entries, we conclude that λ ≥ 0.
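The closed-form construction in the proof of Lemma 2 is easy to verify numerically; in this sketch (the numerical values are arbitrary choices of ours) we build λ from given µ and µi and check Eq. (14):

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha = 4, 2.0
mu_i = rng.uniform(0.05, 0.2, size=n)  # coefficients with mu = 1 - sum(mu_i) > 0
mu = 1.0 - mu_i.sum()

# Closed form from the proof: lambda = (I + mu_i 1^T / mu) * (alpha * mu_i)
lam = (np.eye(n) + np.outer(mu_i, np.ones(n)) / mu) @ (alpha * mu_i)

# Eq. (14): mu_i = lambda_i / (alpha + sum_h lambda_h), with lambda_i >= 0
assert np.all(lam >= 0)
assert np.allclose(mu_i, lam / (alpha + lam.sum()))
```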
We now provide a constructive method to obtain a solution
to Problem 2.
Theorem 4: Let U be admissible to Problem 1 and assume that x ∈ Rrd satisfies Assumption c). Then

xj∗ = PrB∗j ( ∑_{i|δij≤ρ} uij^m pi / ∑_{i|δij≤ρ} uij^m )   (17)

is a global minimizer for Problem 2.
Proof: Let us define the set Ij = {i ∈ {1,…,n} | uij > 0} and let

x̂j = ∑_{i=1}^{n} uij^m pi / ∑_{i=1}^{n} uij^m,   û = ∑_{i=1}^{n} uij^m.   (18)
We first handle two special cases. If x̂j ∈ B∗j, then PrB∗j(x̂j) = x̂j, and x̂j itself satisfies Theorem 3 with all λi∗ = 0. Moreover, if B∗j is a singleton, since x satisfies Assumption c) it must hold that B∗j = {xj}. In this case, Problem 2 fails to satisfy Slater's Condition and Theorem 3 does not apply; however, Theorem 3 is no longer required when B∗j = {xj}, as PrB∗j(x̂j) = xj by construction. In other words, in this case agent j does not move. We now focus on the case x̂j ∉ B∗j and B∗j ≠ {xj}, that is, x̂j does not belong to B∗j and B∗j is not a singleton. In this setting, our goal is to show that PrB∗j(x̂j) is a solution having the form of Eq. (10), for which a closed form of the Lagrangian multipliers λi∗ satisfying conditions (a)–(c) in Theorem 3 is given. If x̂j ∉ B∗j, then PrB∗j(x̂j) ∈ B∗j lies on the boundary of B∗j, hence condition (a) in Theorem 3 is satisfied. By Lemma 1 and Corollary 1, there is an I∗j ⊆ Ij such that

PrB∗j(x̂j) = µ̂ x̂j + ∑_{i∈I∗j} µi pi,   (19)

with µ̂ ∈ (0,1), µi ∈ (0,1) for all i ∈ I∗j, and µ̂ + ∑_{i∈I∗j} µi = 1. In other words, the projection is a convex combination of x̂j and the locations pi of the PoIs with i ∈ I∗j, where the coefficient µ̂ of x̂j is strictly positive by construction.
Let λ∗ and µ be the stack vectors of the Lagrangian multipliers λi∗ and of the coefficients µi for all i ∈ I∗j, respectively. By Lemma 2 (with α = û), choosing λ∗ = û (I + µ1^T/µ̂) µ ≥ 0 implies

µi = λi∗ / ( û + ∑_{h∈I∗j} λh∗ ),  ∀i ∈ I∗j,   (20)

and therefore

µ̂ = û / ( û + ∑_{h∈I∗j} λh∗ ).   (21)

Plugging Eq. (20) and Eq. (21) into Eq. (19), and choosing λi∗ = 0 for all i ∈ {1,…,n} \ I∗j, we get

PrB∗j(x̂j) = û/(û + ∑_{h∈I∗j} λh∗) x̂j + ∑_{i∈I∗j} λi∗/(û + ∑_{h∈I∗j} λh∗) pi,

which has the same structure as xj∗ in Eq. (10). Note that all λi∗ ≥ 0, hence condition (b) in Theorem 3 is satisfied. Moreover, as discussed above, Corollary 1 guarantees that ∥PrB∗j(x̂j) − pi∥ = ρ for all i ∈ I∗j, and since λi∗ = 0 for i ∉ I∗j, condition (c) in Theorem 3 is also satisfied. This completes the proof.
VI. PROPOSED ALGORITHM AND SIMULATIONS
We now discuss our distributed algorithm, based on the repeated, alternating solution of the two sub-problems addressed in the previous sections. Specifically, each agent alternates
between the assignment phase, where it computes the asso-
ciations with the sensed PoIs based on its current location,
and the refinement phase, where it selects a new location
based on the sensed PoIs and the associations. We point
out that, knowing the associations, the refinement phase is
executed locally by each agent, with no need for coordina-
tion among neighboring agents. Conversely, communication
among neighboring agents is required during the assignment
phase, to collect all the distances ∥pi −xh(k)∥involving the
i-th PoI and the agents able to sense it. Note that communi-
cation is ensured by Assumption b). This allows each agent
to compute the associations via Eqs. (2) and (3). Note also
that the algorithm execution requires an implicit coordination
among the agents, which can be achieved by resorting to
well known protocols such as consensus algorithms [18].
Details are omitted for the sake of brevity. Finally, a stopping
criterion is required: a simple choice is to let each agent stop
when its new location is closer than ε to the current one.
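Putting the two phases together, a centralized simulation of the scheme just described can be sketched as follows (our illustrative implementation: the membership update follows Eqs. (2)–(3), the location update follows Eq. (17), and the projection onto B∗j is computed with Dykstra's algorithm, a numerical choice of ours rather than the paper's):

```python
import numpy as np

def ball_projection(v, c, rho):
    d = np.linalg.norm(v - c)
    return v if d <= rho else c + rho * (v - c) / d

def intersection_projection(v, centers, rho, iters=300):
    """Projection onto the intersection of balls (Dykstra's algorithm)."""
    z = v.copy()
    corr = [np.zeros_like(z) for _ in centers]
    for _ in range(iters):
        for i, c in enumerate(centers):
            y = z + corr[i]
            z = ball_projection(y, c, rho)
            corr[i] = y - z
    return z

def proximity_cmeans(P, X0, rho, m=2, eps=1e-4, max_iter=100):
    """Alternate the assignment phase (Eqs. (2)-(3)) and the refinement
    phase (Eq. (17)) until no agent moves more than eps."""
    X = X0.copy()
    n, r = P.shape[0], X.shape[0]
    for _ in range(max_iter):
        # Assignment phase: memberships for the current locations.
        U = np.zeros((n, r))
        for i in range(n):
            d = np.linalg.norm(P[i] - X, axis=1)
            colocated = np.isclose(d, 0.0)
            if colocated.any():
                U[i, colocated] = 1.0 / colocated.sum()
            else:
                near = np.flatnonzero(d <= rho)
                for j in near:
                    U[i, j] = 1.0 / np.sum((d[j] / d[near]) ** (2.0 / (m - 1)))
        # Refinement phase: weighted centroid projected onto B*_j.
        X_new = X.copy()
        for j in range(r):
            sensed = np.linalg.norm(P - X[j], axis=1) <= rho
            w = U[sensed, j] ** m
            if w.sum() > 0:
                centroid = (w[:, None] * P[sensed]).sum(axis=0) / w.sum()
                X_new[j] = intersection_projection(centroid, list(P[U[:, j] > 0]), rho)
        if np.linalg.norm(X_new - X, axis=1).max() < eps:
            return X_new, U
        X = X_new
    return X, U
```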
Let us now numerically demonstrate the effectiveness of
the proposed algorithm. In Figure 1, we show an example
where PoIs and agents are embedded in the unit square
[0,1]2 ⊂R2. Specifically, we consider n = 140 PoIs (circles)
and r = 4 agents (triangles). The figure compares the result
of the proposed algorithm for ρ = 0.35 with the result of the
standard C-means. In Figures 1a and 1b, we assign the same color to those PoIs having their largest association with the same agent. Moreover, we report the initial and final positions of the agents under the proposed algorithm in white and blue, respectively, and, for comparison, we report in red the final positions found via the standard C-means. According to the plots, in spite of the limited sensing range, quite similar results are achieved. The plots in Figures 1c and 1d show the associations obtained for one cluster by the proposed algorithm (for ρ = 0.35) and by the standard C-means. This gives an intuitive explanation of the effect of the proximity constraints on the associations: memberships with a very low degree are truncated to zero in the proposed algorithm. Figure 2 reports the value of the objective function over the iterations for the solutions found by the standard C-means and by the proposed algorithm for different values of ρ. Overall, the figure numerically demonstrates that the behavior of the proposed algorithm tends to coincide with that of the standard C-means as ρ grows, i.e., as ρ increases, the agents become aware of more and more PoIs, until the proximity constraints are no longer active. Notably, this numerical evidence suggests that our algorithm can be thought of as a generalization of the standard C-means.
VII. CONCLUSIONS
In this paper we provided a novel distributed coverage algorithm, inspired by the standard C-means, for a set of agents aiming at covering a discrete and finite set of points of interest in a non-exclusive way.
Several directions are envisioned for future work: (1) consider an asynchronous setting and moving PoIs; (2) introduce the possibility for an agent to leave some of the PoIs uncovered if they are already covered by other agents; (3) provide a formal characterization of the convergence properties of the overall proposed algorithm.
Fig. 1. Plots 1a and 1b: comparison of the proposed algorithm for ρ = 0.35 (plot 1a) with the standard C-means (plot 1b), considering a particular instance. Plots 1c and 1d: comparison of the associations between a particular agent and all the PoIs within the proposed algorithm when ρ = 0.35 (plot 1c) and within the standard C-means (plot 1d). The color of a PoI tends to red or blue as its association approaches one or zero, respectively. Black crosses indicate associations that are exactly zero.
REFERENCES
[1] J. Cortés and F. Bullo, “From geometric optimization and nonsmooth
analysis to distributed coordination algorithms,” in Proceedings of the
IEEE Conference on Decision and Control, pp. 3274–3280, 2003.
Fig. 2. Comparison of the value of the objective function at each step of the proposed algorithm, for ρ ∈ {0.35, 0.5, 0.65, 0.8, 0.95}, with the one obtained by the standard C-means, considering the example of Figure 1.
[2] ——, “Coordination and geometric optimization via distributed dy-
namical systems,” SIAM Journal on Control and Optimization, vol. 44,
no. 5, pp. 1543–1574, 2005.
[3] K. Laventall and J. Cortés, “Coverage control by multi-robot networks
with limited-range anisotropic sensory,” International Journal of Con-
trol, vol. 82, no. 6, pp. 1113–1121, 2009.
[4] Y. Kantaros and M. M. Zavlanos, “Distributed communication-aware
coverage control by mobile sensor networks,” Automatica, vol. 63, pp.
209–220, 2016.
[5] S. G. Lee, Y. Diaz-Mercado and M. Egerstedt, “Multirobot control
using time-varying density functions,” IEEE Transactions on Robotics,
vol. 31, no. 2, pp. 489-493, 2015.
[6] J. M. Palacios-Gasós, E. Montijano, C. Sagüés, and S. Llorente,
“Distributed coverage estimation and control for multirobot persistent
tasks,” IEEE Transactions on Robotics, vol. 32, no. 6, pp. 1444–1460,
2016.
[7] A. Pierson and M. Schwager, “Adaptive inter-robot trust for robust
multi-robot sensor coverage,” in Robotics Research, pp. 167–183,
Springer, 2016.
[8] B. Jiang, Z. Sun, B. D. O. Anderson and C. Lageman, “Higher
order mobile coverage control with application to localization,” arXiv
preprint arXiv:1703.02424v1, 2017.
[9] E. Montijano and A. R. Mosteo, “Efficient multi-robot formations
using distributed optimization,” in Proceedings of the IEEE Conference
on Decision and Control, pp. 6167–6172, 2014.
[10] J. C. Bezdek, Pattern Recognition with Fuzzy Objective Function
Algorithms, Springer Science & Business Media, 2013.
[11] S. Patel, V. Patel, and D. Jinwala, “Privacy preserving distributed k-
means clustering in malicious model using zero knowledge proof,” in
Proceedings of the International Conference on Distributed Computing
and Internet Technology, pp. 420–431, 2013.
[12] G. Oliva, R. Setola, and C. N. Hadjicostis, “Distributed k-means
algorithm,” arXiv preprint arXiv:1312.4176, 2013.
[13] J. Qin, W. Fu, H. Gao, and W. X. Zheng, “Distributed K-means algorithm and fuzzy c-means algorithm for sensor networks based on multiagent consensus theory,” IEEE Transactions on Cybernetics, vol. 47, no. 3, pp. 772–783, 2016.
[14] G. Oliva, R. Setola, and C. N. Hadjicostis, “Distributed C-means
data clustering algorithm,” in Proceedings of the IEEE Conference
on Decision and Control, pp. 4396–4401, 2016.
[15] W. I. Zangwill, Nonlinear Programming: a Unified Approach,
Prentice-Hall, 1969.
[16] J. Nayak, B. Naik, and H. Behera, “Fuzzy C-means (FCM) clustering
algorithm: a decade review from 2000 to 2014,” in Computational
Intelligence in Data Mining-Volume 2, pp. 133–149, Springer, 2015.
[17] J. Sherman and W. J. Morrison, “Adjustment of an inverse matrix
corresponding to a change in one element of a given matrix,” The
Annals of Mathematical Statistics, vol. 21, no. 1, pp. 124–127, 1950.
[18] R. Olfati-Saber and R. M. Murray, “Consensus problems in networks
of agents with switching topology and time-delays,” IEEE Transac-
tions on Automatic Control, vol. 49, no. 9, pp. 1520–1533, 2004.