arXiv:1006.5877v2 [cs.DC] 23 Sep 2010

RoboCast: Asynchronous Communication in Robot Networks

Zohir Bouzid 1⋆, Shlomi Dolev 2⋆⋆, Maria Potop-Butucaru 1, and Sébastien Tixeuil 1

1 Université Pierre et Marie Curie - Paris 6, France
2 Ben Gurion University of the Negev, Israel

⋆ Supported by DIGITEO project PACTOLE and the ANR projects R-DISCOVER and SHAMAN.
⋆⋆ Part of the research was done during a supported visit of Shlomi Dolev at LIP6, Université Pierre et Marie Curie - Paris 6. Partially supported by the Rita Altura trust chair in computer sciences, the ICT Programme of the European Union under contract number FP7-215270 (FRONTS), and the US Air Force European Office of Aerospace Research and Development, grant number FA8655-09-1-3016.

Abstract. This paper introduces the RoboCast communication abstraction. The RoboCast allows a swarm of non-oblivious, anonymous robots that are only endowed with visibility sensors and do not share a common coordinate system to asynchronously exchange information. We propose a generic framework that covers a large class of asynchronous communication algorithms and show how our framework can be used to implement fundamental building blocks in robot networks such as gathering or stigmergy. In more detail, we propose a RoboCast algorithm that allows robots to broadcast their local coordinate systems to each other. Our algorithm is further refined with a local collision avoidance scheme. Then, using the RoboCast primitive, we propose algorithms for deterministic asynchronous gathering and binary information exchange.

1 Introduction

Existing studies in robot networks focus on characterizing the computational power of these systems when robots are endowed with visibility sensors and communicate using only their movements, without relying on any sort of agreement on a global coordinate system. Most of these studies [1, 5, 4] assume oblivious robots (i.e. robots have no persistent memory of their past actions), so the “memory” of the network is implicit and generally deduced from the current positions of the robots. Two computation models are commonly used in robot networks: ATOM [9] and CORDA [7]. In both models robots operate in Look-Compute-Move cycles. The main difference is that these cycles are executed in a fully asynchronous manner in the CORDA model, while each phase of the Look-Compute-Move cycle is executed in a lock-step fashion in the ATOM model. These computation models have already proved their limitations: the deterministic implementations of many fundamental abstractions, such as gathering or leader election, are proved impossible in these settings without additional assumptions ([8, 3]).

The purpose of this paper is to study how the addition of bounded memory to each individual robot can increase the computational power of an asynchronous swarm of robots. We focus on an all-to-all communication primitive, called RoboCast, which is a basic building block for the design of any distributed system. A positive answer to this problem opens the way to solving fundamental problems in robot networks such as gathering, scattering, election or exploration. In robot networks, using motion to transmit information is not new [9, 10, 6]. In [9], Suzuki and Yamashita present an algorithm for broadcasting the local coordinate system of each robot (and thus building a common coordinate system) under the ATOM model. The algorithm heavily relies on the phase atomicity in each Look-Compute-Move cycle.
In particular, a robot a that observes another robot b in four distinct positions has the certitude that b has in turn already seen a in at least two different positions. The situation becomes more intricate in the asynchronous CORDA model. Indeed, the number of different positions observed for a given robot is not an indicator on the number of complete cycles executed by that robot since cycles are completely uncorrelated. By contrast, our implementation of RoboCast is designed for the more general CORDA model and uses a novel strategy: the focus moves from observing robots in different positions to observing robots moving in different directions . That is, each robot changes its direction of movement when a particular stage of the algorithm is completed; this change allows the other robots to infer information about the observed robot. Another non trivial issue that needs to be taken care of without explicit communication is collisions avoidance , since colliding robots could be confused due to indistinguishability. Moreover, robots may physically collide during their Move phase. One of the techniques commonly used to avoid collisions consists in computing a Voronoi diagram [2] and allowing robots to move only inside their Voronoi cells [5]. Since the Voronoi cells do not overlap with one another, robots are guaranteed to not collide. This simple technique works well in the ATOM model but heavily relies on the computation of the same Voronoi diagram by the robots that are activated concurrently, and thus does not extend to the CORDA model where different Voronoi diagrams may be computed by different robots, inducing possible collisions. Our approach defines a collision-free zone of movement that is compatible with the CORDA model constraints. Applications of our RoboCast communication primitive include fundamental services in robot networks such as gathering and stigmergy. Deterministic gath- ering of two stateless robots has already been proved impossible when robots have no common orientation [9]. In [9], the authors also propose non-oblivious solutions for deterministic gathering in the ATOM model. Our RoboCast per- mits to extend this result to the CORDA model, using bounded memory and a limited number of movements. Recently, in [6], the authors extend the work of [9] to efficiently implement stigmergy in robot networks in the ATOM model. 3 Stigmergy is the ability for robots to exchange binary information that is en- coded in the way they move. This scheme is particularly appealing for secure communication in robot networks, since e.g. jamming has no impact on robot communication capability. The RoboCast primitive allows to extend this mech- anism to the CORDA model, with a collision-free stigmergy scheme. Our contribution We formally specify a robot network communication prim- itive, called RoboCast, and propose implementation variants for this primitive, that permit anonymous robots not agreeing on a common coordinate system, to exchange various information ( e.g. their local coordinate axes, unity of mea- sure, rendez-vous points, or binary information) using only motion in a two dimensional space. Contrary to previous solutions, our protocols all perform in the fully asynchronous CORDA model, use constant memory and a bounded number of movements. Then, we use the RoboCast primitive to efficiently solve some fundamental open problems in robot networks. We present a fully asyn- chronous deterministic gathering and a fully asynchronous stimergic communi- cation scheme. 
Our algorithms differ from previous works by several key features: they are totally asynchronous (in particular they do not rely on the atomicity of cycles executed by robots), they make no assumption on a common chirality or knowledge of the initial positions of robots, and finally, each algorithm uses only a bounded number of movements. Also, for the first time in these settings, our protocols use CORDA-compliant collision avoidance schemes. Roadmap The paper is made up of six sections. Section 2 describes the com- puting model and presents the formal specification of the RoboCast problem. Section 3 presents our protocol and its complexity. The algorithm is enhanced in Section 4 with a collision-avoidance scheme. Using the Robocast primitive, Section 5 proposes algorithms for deterministic asynchronous gathering and bi- nary information exchange. Finally, Section 6 provides concluding remarks. Some proofs are relegated to the appendix. 2 Model We consider a network that consists of a finite set of n robots arbitrarily deployed in a two dimensional space, with no two robots located at the same position. Robots are devices with sensing, computing and moving capabilities. They can observe (sense) the positions of other robots in the space and based on these observations, they perform some local computations that can drive them to other locations. In the context of this paper, the robots are anonymous , in the sense that they can not be distinguished using their appearance and they do not have any kind of identifiers that can be used during the computation. In addition, there is no direct mean of communication between them. Hence, the only way for robots to acquire information is by observing their positions. Robots have unlimited visibil- ity , i.e. they are able to sense the entire set of robots. We assume that robots are 4 non-oblivious , i.e. they can remember observations, computations and motions performed in previous steps. Each robot is endowed with a local coordinate sys- tem and a local unit measure which may be different from those of other robots. This local coordinate system is assumed to be fixed during a run unless it is explicitly modified by the corresponding robot as a result of a computation. We say in this case that robots remember their own coordinate systems. This is a common assumption when studying non-oblivious robot networks [9, 6]. A protocol is a collection of n programs , one operating on each robot. The program of a robot consists in executing Look-Compute-Move cycles infinitely many times. That is, the robot first observes its environment (Look phase). An observation returns a snapshot of the positions of all robots within the visibility range. In our case, this observation returns a snapshot of the positions of all robots. The observed positions are relative to the observing robot, that is, they use the coordinate system of the observing robot. Based on its observation, a robot then decides — according to its program — to move or to stay idle (Compute phase). When a robot decides a move, it moves to its destination during the Move phase. The local state of a robot is defined by the content of its memory and its position. A configuration of the system is the union of the local states of all the robots in the system. An execution e = ( c 0 , . . . , c t , . . . 
) of the system is an infinite sequence of configurations, where c 0 is the initial configuration of the system, and every transition c i → c i +1 is associated to the execution of a non empty subset of actions . The granularity (or atomicity) of those actions is model-dependent and is defined in the sequel of this section. A scheduler is a predicate on computations, that is, a scheduler defines a set of admissible computations, such that every computation in this set satisfies the scheduler predicate. A scheduler can be seen as an entity that is external to the system and selects robots for execution. As more power is given to the scheduler for robot scheduling, more different executions are possible and more difficult it becomes to design robot algorithms. In the remainder of the paper, we consider that the scheduler is fair and fully asynchronous , that is, in any infinite execution, every robot is activated infinitely often, but there is no bound on the ratio between the most activated robot and the least activated one. In each cycle, the scheduler determines the distance to which each robot can move in this cycle, that is, it can stop a robot before it reaches its computed destination. However, a robot r i is guaranteed to be able to move a distance of at least δ i towards its destination before it can be stopped by the scheduler. We now review the main differences between the ATOM [9] and CORDA [7] models. In the ATOM model, whenever a robot is activated by the scheduler, it performs a full computation cycle. Thus, the execution of the system can be viewed as an infinite sequence of rounds. In a round one or more robots are acti- vated by the scheduler and perform a computation cycle. The fully-synchronous ATOM model refers to the fact that the scheduler activates all robots in each round, while the regular ATOM model enables the scheduler to activate only a subset of the robots. In the CORDA model, robots may be interrupted by 5 the scheduler after performing only a portion of a computation cycle. In par- ticular, phases (Look, Compute, Move) of different robots may be interleaved. For example, a robot a may perform a Look phase, then a robot b performs a Look-Compute-Move complete cycle, then a computes and moves based on its previous observation (that does not correspond to the current configuration anymore). As a result, the set of executions that are possible in the CORDA model are a strict superset of those that are possible in the ATOM model. So, an impossibility result that holds in the ATOM model also holds in the CORDA model, while an algorithm that performs in the CORDA model is also correct in the ATOM model. Note that the converse is not necessarily true. The RoboCast Problem The RoboCast communication abstraction provides a set of robots located at arbitrary positions in a two-dimensional space the possi- bility to broadcast their local information to each other. The RoboCast abstrac- tion offers robots two communication primitives: RoboCast(M) sends Message M to all other robots, and Deliver(M) delivers Message M to the local robot. The message may consists in the local coordinate system, the robot chirality, the unit of measure, or any binary coded information. Consider a run at which each robot r i in the system invokes RoboCast( m i ) at some time t i for some message m i . Let t be equal to max { t 1 , . . . , t n } . 
Any protocol solving the RoboCast Problem has to satisfy the following two proper- ties: Validity : For each message m i , there exists a time t ′ i > t after which every robot in the system has performed Deliver( m i ) . Termination : There exists a time t T ≥ max { t ′ 1 , . . . , t ′ n } after which no robot performs a movement that causally depends on the invocations of RoboCast( m i ) . 3 Local Coordinate System RoboCast In this section we present algorithms for robocasting the local coordinate system. For ease of presentation we first propose an algorithm for two-robots then the general version for systems with n robots. The local coordinate system is defined by two axes (abscissa and ordinate), their positive directions and the unity of measure. In order to robocast this in- formation we use a modular approach. That is, robots invoke first the robocast primitive ( LineRbcast 1 hereafter) to broadcast a line representing their abscissa. Then, using a parametrized module ( LineRbcast 2), they robocast three succes- sive lines encoding respectively their ordinate, unit of measure and the positive direction of axes. This invocation chain is motivated by the dependence between the transmitted lines. When a node broadcasts a line, without any additional knowledge, two different points have to be sent in order to uniquely identify the line at the destination. However, in the case of a coordinate system, only for the first transmitted axis nodes need to identify the two points. The transmission of the subsequent axes needs the knowledge of a unique additional point. 6 3.1 Line RoboCast In robot networks the broadcast of axes is not a new issue. Starting with their seminal paper [9], Suzuki and Yamashita presented an algorithm for broadcast- ing the axes via motion that works in the ATOM model. Their algorithm heavily relies on the atomicity of cycles and the observation focus on the different posi- tions of the other robots during their Move phase. This type of observation is totally useless in asynchronous CORDA model. In this model, when a robot r moves towards its destination, another robot r ′ can be activated k > 1 times with k arbitrarily large, and thus observe r in k different positions without having any clue on the number of complete cycles executed by r . In other words, the number of different positions observed for a given robot is not an indicator on the number of complete executed cycles since in CORDA cycles are completely uncorrelated. Our solution uses a novel strategy. That is, the focus moves from observing robots in different positions to observing their change of direction: each robot changes its direction of movement when a particular stage of the algorithm is completed; this change allows the other robots to infer information about the observed robot. Line RoboCast Detailed Description Let r 0 and r 1 be the two robots in the system. In the sequel, when we refer to one of these robots without specifying which, we denote it by r i and its peer by r 1 − i . In this case, the operations on the indices of robots are performed modulo 2. For ease of presentation we assume that initially each robot r i translates and rotates its local coordinate system such that its x-axis and origin coincide with the line to be broacast and its current location respectively. We assume also that each robot is initially located in the origin of its local coordinate system. At the end of the execution, each robot must have broadcast its own line and have received the line of its peer. 
A robot ”receives” the line broadcast by its peer when it knows at least two distinct positions of this line. Thus, to send its line, each robot must move along it (following a scheme that will be specified later) until it is sure that it has been observed by the other robot. The algorithm idea is simple: each robot broadcasts its line by moving along it in a certain direction (considered to be positive). Simultaneously, it observes the different positions occupied by its peer r 1 − i . Once r i has observed r 1 − i in two distinct positions, it informs it that it has received its line by changing its direction of movement, that is, by moving along its line in the reverse direction (the negative direction if the first movement have been performed in the positive direction of the line). This change of direction is an acknowledgement for the reception of the peer line. A robot finishes the algorithm once it changed its direction and observed that the other robot also changed its direction. This means that both robots have sent their line and received the other’s line. The algorithm is described in detail as Algorithm 1. Due to space limitations, its proof is given in the Appendix. Each robot performs four stages referred in Algorithm 1 as states: 7 – state S 1 : This is the initial state of the algorithm. At this state, the robot r i stores the position of its peer in the variable pos 1 and heads towards the position (1 . 0) of its local coordinate system. That is, it moves along its line in the positive direction. Note that r i stays only one cycle in this state and then goes to state S 2 . – state S 2 : A this point, r i knows only one point of its peer line (recorded in pos 1 ). To be able to compute the whole peer line, r i must observe r 1 − i in another (distinct) position of this line. Hence, each time it is activated, r i checks if r 1 − i is still located in pos 1 or if it has already changed its position. In the first case (line 2 .a of the code), it makes no movement by selecting its current position as its destination. Otherwise (line 2 .b ), it saves the new position of r 1 − i in pos 2 and delivers the line formed by pos 1 and pos 2 . Then, it initiates a change of direction by moving towards the point ( − 1 . 0) of its local coordinate system, and moves to state S 3 . – state S 3 : at this point r i knows the line of its peer locally derived from pos 1 and pos 2 . Before finishing the algorithm, r i must be sure that also r 1 − i knows its line. Therefore, it observes r 1 − i until it detects a change of direction (the condition of line 3 .a ). If this is not the case and if r i is still in the positive part of its x-axis, then it goes to the position ( − 1 , 0) of its local coordinate system (line 3 .b ). Otherwise (if r i is already in the negative part of its x- axis), it performs a null movement (line 3 .c ). When r i is in state S 3 one is sure, as we shall show later, that r 1 − i knows at least one position of l i , say p . Recall that l i corresponds to the x-axis of r i . It turns out that p is located in the positive part of this axis. In moving towards the negative part of its x-axis, r i is sure that it will eventually be observed by r 1 − i in a position distinct from p which allows r 1 − i to compute l i . – state S 4 : At this stage, both r i and r 1 − i received the line sent by each others. That is, r i has already changed its own direction of movement, and observed that r 1 − i also changed its direction. 
But nothing guarantees that at this step r 1 − i knows that r i changed its direction of movement. If r i stops now, r 1 − i may remain stuck forever (in state S 3 ). To announce the end of the algorithm to its peer, r i heads towards a position located outside l i , That is, it will move on a line nextl i (distinct from l i ) which is given as parameter to the algorithm. During the move from l i to nextl i , r i should avoid points outside these lines. To this end, r i must first pass through myIntersect - which is the intersection of l i and nextl i - before moving to a point located in nextl i but not on l i (refer to lines 3 .a. 2, 3 .a. 3 and 4 .a of the code). Note that the robocast of a line is usually followed by the robocast of other information (e.g. other lines that encode the local coordinate system). To helps this process the end of the robocast of l i should mark the beginning of the next line, nextl i , robocast. Therefore, once r i reaches myIntersect , r i rotates its local coordinate system such that its x-axis matches now with nextl i , and then it moves toward the point of (1,0) of its (new) local coordi- nate system. When r 1 − i observes r i in a position that is not on l i , it learns that r i knows that r 1 − i learned l 1 − i , and so it can go to state S 4 (lines 3 .a. ∗ ) and finish the algorithm. 8 Algorithm 1 Line RoboCast LineRbcast1 for two robots: Algorithm for robot r i Variables : state : initially S 1 pos 1 , pos 2: initially ⊥ destination, myIntersect : initially ⊥ Actions : 1. State [ S 1 ] : %Robot ri starts the algorithm% a. pos 1 ← observe (1 − i ) b. destination ← (1 , 0) i c. state ← S 2 d. Move to destination 2. State [ S 2 ] : % ri knows one position of l 1 − i % a. if ( pos 1 = observe (1 − i )) then destination ← observe ( i ) b. else 1. pos 2 ← observe (1 − i ) 2. l 1 − i ← line ( pos 1 , pos 2) 3. Deliver ( l 1 − i ) 4. destination ← ( − 1 , 0) i 5. state ← S 3 endif c. Move to destination 3. State [ S 3 ] : % ri knows the line robocast by robot r 1 − i % a. if ( pos 2 is not inside the line segment [ pos 1 , observe (1 − i )]) then 1. state ← S 4 2. myIntersect ← intersection ( li , nextli ) 3. destination ← myIntersect b. else if ( observe ( i ) ≥ (0 , 0) i ) then destination ← (0 , − 1) i c. else destination ← observe ( i ) endif endif d. Move to destination 4. State [ S 4 ] : % ri knows that robot r 1 − i knows its line li % a. if ( observe ( i ) 6 = myIntersect ) then destination ← myIntersect b. else 1. ri rotates its coordinate system such that its x-axis and the origin match with nextli and myIntersect respectively. 2. destination ← (1 , 0) i ; return endif c. Move to destination 3.2 Line RoboCast: a Composable Version Line RoboCast primitive is usually used as a building block for achieving more complex tasks. For example, the RoboCast of the local coordinate system re- quires the transmission of four successive lines representing respectively the ab- scissa, the ordinate, the value of the unit measure and a forth line to determine the positive direction of axes. In stigmergic communication a robot has to trans- mit at least a line for each binary information it wants to send. In all these examples, the transmitted lines are dependent one of each other and therefore their successive transmission can be accelerated by directly exploiting this de- pendence. Indeed, the knowledge of a unique point (instead of two) is sufficient for the receiver to infer the sent line. 
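As an illustration of this dependence, the following minimal sketch (Python) shows how a receiver can reconstruct a line from a single observed point when a predicate such as isPerpendicular or passThrOrigin is known to hold. The helper names and the representation of a line as a pair of distinct points are assumptions made for this sketch, not notation from the paper.

def line_perpendicular_through(point, known_line):
    # isPerpendicular: the new line is perpendicular to a line already delivered
    # (e.g. the peer's abscissa), so a single observed point suffices.
    (px, py), (qx, qy) = known_line
    dx, dy = qx - px, qy - py                        # direction of the known line
    return (point, (point[0] - dy, point[1] + dx))   # rotate the direction by 90 degrees

def line_through_origin(point, peer_origin):
    # passThrOrigin: the new line passes through the peer's origin, already known
    # from the coordinate-system exchange, so again one point suffices.
    return (peer_origin, point)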
In the following we propose modifications of the Line RoboCast primitive in order to exploit contextual information that are encoded in a set of predicates that will be detailed in the sequel. In the case of the local coordinate system, the additional information the transmission can exploit is the fact that the abscissa is perpendicular to the ordinate. Once the abscissa is transmitted, it suffices for a robot to simply send a single position of its ordinate, say pos 1. The other robots can then calculate the ordinate by finding the line that passes through pos 1 and which is perpendicular 9 to the previously received abscissa. In the modified version of the Line RoboCast algorithm the predicate isP erpendicular encodes this condition. For the case of stigmergy, a robot transmits a binary information by robo- casting a line whose angle to the abscissa encodes this information. The lines transmitted successively by a single robot are not perpendicular to each others. However, all these lines pass through the origin of the coordinate system of the sending robot. In this case, it suffices to transmit only one position located on this line as long as it is distinct from the origin. We say in this case that the line satisfies the predicate passT hrOrigin . A second change we propose relates to the asynchrony of the algorithm. In fact, even if robots execute in unison, they are not guaranteed to finish the execution of LineRbcast 1 at the same time (by reaching S 4 ). A robot r i can begin transmitting its k -th line l i when its peer r 1 − i is still located in its ( k − 1)- th line ancientl 1 − i that r i has already received. r i should ignore the positions transmitted by r 1 − i until it leaves ancientl 1 − i for a new line. It follows that to make the module composable, the old line that the peer has already received from its peer should be supplied as an argument ( ancientl 1 − i ) to the function. Thus, it will not consider the positions occupied by r 1 − i until the latter leaves ancientl 1 − i . In the following, we present the code of the new Line RoboCast function that we denote by LineRbcast 2. Its description and its formal proof are omitted since they follow the same lines as those of LineRbcast 1. 3.3 RoboCast of the Local Coordinate System To robocast their two axes (abscissa and ordinate), robots call LineRbcast1 to robocast the abscissa, then LineRbcast2 to robocast the ordinate. The parameter 6 = myOrdinate of LineRbcast 2 stands for the next line to be robocast and it can be set to any line different from myOrdinate . The next line to robocast ( unitLine ) is a line whose angle with the x-axis encodes the unit of measure. This angle will be determined during the execution LineRbcast 2. 1. peerAbscissa ← LineRbcast 1( myAbscissa, myOrdinate ) 2. peerOrdinate ← LineRbcast 2( myOrdinate, 6 = myOrdinate, peerAbscissa, isP erpendicular ) After executing the above code, each robot knows the two axes of its peer coordinate system but not their positive directions neither their unit of measure. To robocast the unit of measure we use a technique similar to that used by [9]. The idea is simple: each robot measures the distance d i between its origin and the peer’s origin in terms of its local coordinate system. 
To announce the value of d i to its peer, each robot robocast via LineRbcast2 a line, unitLine , which passes through its origin and whose angle with its abscissa is equal to f ( d i ) where for x > 0 , f ( x ) = (1 / 2 x ) × 90 ◦ is a monotonically increasing function with range (0 ◦ , 90 ◦ ). The receiving robot r 1 − i can then infer d i from f ( d i ) and compute the unit measure of r i which is equal to d 1 − i /d i . The choice of (0 ◦ , 90 ◦ ) as a 10 Algorithm 2 Line RoboCast LineRbcast2 for two robots: Algorithm for robot r i Inputs: li : the line to robocast nextli : the next line to robocast after li precedentl 1 − i : the line robocast precedently by r 1 − i predicate : a predicate on the output l 1 − i , for example isP erpendicular and passT hrOrigin . Outputs: l 1 − i : the line robocast by r 1 − i Variables : state : initially S 1 pos 1: initially ⊥ destination, myIntersect, peerIntersect : initially ⊥ Actions : 1. State [ S 2 ] : % ri starts robocasting its line li % a. if ( observe (1 − i ) ∈ precedentl 1 − i ) then destination ← observe ( i ) b. else 1. pos 3 ← observe (1 − i ) 2. l 1 − i ← the line that passes through pos 3 and satisfies predicate . 3. Deliver ( l 1 − i ) 4. peerIntersect ← intersection between l 1 − i and precedentl 1 − i 5. destination ← (0 , − 1) i 6. state ← S 3 endif c. Move to destination 2. State [ S 3 ] : % ri knows the line robocast by robot r 1 − i % a. if ( pos 3 is not inside the line segment [ peerIntersect, observe (1 − i )]) then 1. state ← S 4 2. myIntersect ← intersection ( li , nextli ) 3. destination ← myIntersect b. else if ( observe ( i ) ≥ (0 , 0) i ) then destination ← (0 , − 1) i c. else destination ← observe ( i ) endif endif d. Move to destination 3. State [ S 4 ] : similar to state S 4 of the lineRbcast 1 function. range for f ( x ) (instead of (0 ◦ , 360 ◦ )) is motivated by the fact that the positive directions of the two axes are not yet known to the robots. It is thus impossible to distinguish between an angle α with α ∈ (0 ◦ , 90 ◦ ) and the angles Π − α , − α , and Π + α . To overcome the ambiguity and to make f ( x ) injective, we restrict the range to (0 ◦ , 90 ◦ ). In contrast, Suzuki and Yamashita [9] use a function f ′ ( x ) slightly different from ours: (1 / 2 x ) × 360 ◦ . That is, its range is equal to (0 ◦ , 360 ◦ ). This is because in ATOM, robots can robocast at the same time the two axis and their positive directions, for example by restricting the movement of robots to only the positive part of their axes. Since the positive directions of the two axes are known, unitLine can be an oriented line whose angle f ′ ( x ) can take any value in (0 ◦ , 360 ◦ ) without any possible ambiguity. Positive directions of axes Once the two axes are known, determining their positive directions amounts to selecting the upper right quarter of the coordinate system that is positive for both x and y . Since the line used to robocast the unit of distance passes through two quarters (the upper right and the lower left), it remains to choose among these two travelled quarters which one corresponds to the upper right one. To do this, each robot robocast just after the line encod- ing the unit distance another line which is perpendicular to it such that their intersection lays inside the upper right quarter. 
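To make the unit-of-measure exchange described above concrete, here is a small numeric sketch (Python). It reads the encoding function as f(x) = 90°/2^x, which is one way to match the stated range (0°, 90°); the only property actually needed is that f is injective on x > 0 and known to both robots, so treat the exact expression, and the helper names, as assumptions of the sketch.

import math

def encode_unit_angle(d):
    # Angle (in degrees) between unitLine and the robot's own abscissa.
    return 90.0 / (2.0 ** d)

def decode_unit_angle(alpha):
    # Inverse of the encoding: recover the distance announced by the peer.
    return math.log2(90.0 / alpha)

def peer_unit_measure(d_own, alpha):
    # d_own: origin-to-origin distance measured in the local unit;
    # alpha: observed angle of the peer's unitLine with the peer's abscissa.
    # Returns the peer's unit of measure expressed in the local unit (d_{1-i}/d_i).
    return d_own / decode_unit_angle(alpha)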
11 Generalization to n robots The generalization of the solution to the case of n > 2 robots has to use an additional mechanism to allow robots to ”rec- ognize” other robots and distinguish them from each others despite anonymity. Let us consider the case of three robots r 1 , r 2 , r 3 . When r 1 looks the second time, r 2 and r 3 could have moved (or be moving), each according to its local coordinate system and unit measure. At this point, even with memory of past observations, r 1 may be not able to distinguish between r 2 and r 3 in their new positions given the fact that robots are anonymous. Moreover, r 2 and r 3 could even switch places and appear not to have moved. Hence, the implementation of the primitive observe ( i ) is not trivial. For this, we use the collision avoidance techniques presented in the next section to instruct each robot to move only in the vicinity of its initial position. This way, other robots are able to recognize it by using its past positions. The technical details of this mechanism are given at the end of the next section. Apart from this, the generalization of the protocol with n robots is trivial. We present its detailed description in the Appendix. 3.4 Motion Complexity Analysis Now we show that the total number of robot moves in the coordinate system RoboCast is upper bounded. For the sake of presentation, we assume for now that the scheduler does not interrupt robots execution before they reach their planned destination. Each robot is initially located at the origin of its local coordinate system. To robocast each axis, a robot must visit two distinct positions: one located in the positive part of this axis and the other one located in its negative part. For example, to robocast its x -axis, a robot has first to move from its origin to the position (1 . 0) i , then from (1 . 0) i to the ( − 1 , 0) i . Then, before initiating a robocast for the other axis, the robot must first return back to its origin. Hence, at most 3 movements are needed to robocast each axis. This implies that to robocast the whole local coordinate system, at most 12 movements have to be performed by a particular robot. In the general CORDA model, the scheduler is allowed to stop robots be- fore they reach their destination, as long as a minimal distance of δ i has been traversed. In this case, the number of necessary movements is equal to at most 8 ∗ (1 + 1 /δ i ). This worst case is obtained when a robot is not stopped by the scheduler when moving from its origin towards another position (thus letting it go the farthest possible), but stopped whenever possible when returning back from this (far) position to the origin. This contrasts with [9] and [6] where the number of positions visited by each robot to robocast a line is unbounded (but finite). This is due to the fact that in both approaches, robots are required to make a non null movement when- ever activated until they know that their line has been received. Managing an arbitrary large number of movements in a restricted space to prevent collisions yields severe requirements in [6]: either robots are allowed to perform infinitely small movements (and such movements can be seen by other robots with infinite precision), or the scheduler is restricted in its choices for activating robots (no 12 robot can be activated more than k times, for a given k , between any two acti- vations of another robot) and yields to a setting that is not fully asynchronous. Our solution does not require any such hypothesis. 
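As a quick check of the movement bound stated in this section, the following snippet evaluates it for a few illustrative values of the minimal guaranteed distance δ_i; the numbers are examples only.

def corda_move_bound(delta_i):
    # Worst-case number of movements stated in Section 3.4 for robocasting the
    # local coordinate system when the scheduler may stop a robot after delta_i.
    return 8 * (1 + 1 / delta_i)

corda_move_bound(1.0)    # 16 movements
corda_move_bound(0.25)   # 40 movements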
4 Collision-free RoboCast In this section we enhance the algorithms proposed in Section 3 with the collision- free feature. In this section we propose novel techniques for collision avoidance that cope with the system asynchrony. Our solution is based on the same principle of locality as the Voronoi Diagram based schemes. However, acceptable moves for a robot use a different geometric area. This area is defined for each robot r i as a local zone of movement and is denoted by ZoM i . We require that each robot r i moves only inside ZoM i . The intersection of different ZoM i must remain empty at all times to ensure collision avoidance. We now present three possible definitions for the zone of movement: ZoM 1 i , ZoM 2 i and ZoM 3 i . All three ensure collision avoidance in CORDA, but only the third one can be computed in a model where robots do not know the initial position of their peers. Let P ( t ) = { p 1 ( t ) , p 2 ( t ) . . . , p n ( t ) } be the configuration of the network at time t , such that p i ( t ) denotes the position of robot r i at time t expressed in a global coordinate system. This global coordinate system is unknown to individual robots and is only used to ease the presentation and the proofs. Note that P ( t 0 ) describes the initial configuration of the network. Definition 1. ( Voronoi Diagram )[2] The Voronoi diagram of a set of points P = { p 1 , p 2 , . . . , p n } is a subdivision of the plane into n cells, one for each point in P . The cells have the property that a point q belongs to the Voronoi cell of point p i iff for any other point p j ∈ P , dist ( q, p i ) < dist ( q, p j ) where dist ( p, q ) is the Euclidean distance between p and q . In particular, the strict inequality means that points located on the boundary of the Voronoi diagram do not belong to any Voronoi cell. Definition 2. ( ZoM 1 i ) Let DV ( t 0 ) be the Voronoi diagram of the initial config- uration P ( t 0 ) . For each robot r i , the zone of movement of r i at time t , ZoM 1 i ( t ) , is the Voronoi cell of point p i ( t 0 ) in DV ( t 0 ) . Definition 3. ( ZoM 2 i ) For each robot r i , define the distance d i = min { dist ( p i ( t 0 ) , p j ( t 0 )) with r j 6 = r i } . The zone of movement of r i at time t , ZoM 2 i ( t ) , is the circle cen- tered in p i (0) and whose diameter is equal to d i / 2 . A point q belongs to ZoM 2 i ( t ) iff dist ( q, p i ( t 0 )) < d i / 2 . Definition 4. ( ZoM 3 i ) For each robot r i , define the distance d i ( t ) = min { dist ( p i ( t 0 ) , p j ( t )) with r j 6 = r i } at time t . The zone of r i at time t , ZoM 3 i ( t ) , is the circle centered in p i ( t 0 ) and whose diameter is equal to d i ( t ) / 3 . A point q belongs to ZoM 3 i ( t ) iff dist ( q, p i ( t 0 )) < d i ( t ) / 3 . 13 p current position initial position zone of movement d d/2 p q q (a) ZoM 2 p p q current position initial position zone of movement d' d'/3 p q (b) ZoM 3 p Fig. 1. Example zones of movement: The network is formed of two robots: p and q . d is the distance between the initial positions of p and q (dashed circles), d ′ is the distance between the initial position of p and the current position of q . The diameter of ZoM 2 p (blue) is d/ 2 and that of ZoM 3 p (yellow) is d ′ / 3. Note that ZoM 1 and ZoM 2 are defined using information about the initial configuration P ( t 0 ), and thus cannot be used with the hypotheses of Algorithm 2. In contrast, robot r i only needs to know its own initial position and the current positions of other robots to compute ZoM 3 i . 
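A minimal sketch of the two circle-based zones, following the membership conditions of Definitions 3 and 4 (Python; positions are 2-D tuples, an assumption of the sketch). Since membership is a strict bound on the distance to the robot's initial position, the thresholds d_i/2 and d_i(t)/3 appear here as radii.

import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def zom2_radius(p0, other_initial):
    # ZoM2_i: fixed once and for all from the initial configuration only.
    return min(dist(p0, q) for q in other_initial) / 2

def zom3_radius(p0, other_current):
    # ZoM3_i(t): recomputed at every Look from r_i's own initial position p0
    # and the other robots' positions in the current snapshot.
    return min(dist(p0, q) for q in other_current) / 3

def in_zom3(q, p0, other_current):
    return dist(q, p0) < zom3_radius(p0, other_current)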
As there is no need for r_i to know the initial positions of other robots, ZoM3_i can be used with Algorithm 2. It remains to prove that ZoM3_i guarantees collision avoidance. We first prove that ZoM1_i does, which is almost trivial because its definition does not depend on time. Then, it suffices to prove that ZoM3_i ⊆ ZoM2_i ⊆ ZoM1_i. Besides helping us in the proof, ZoM2_i can be interesting in its own right as a cheap collision avoidance scheme in the ATOM model, as computing a circle of radius half the distance to the nearest neighbor is much easier than computing a full-blown Voronoi diagram.

Lemma 4.1 If, for every time t and each robot r_i, the destination point computed by r_i at t remains inside ZoM1_i(t), then collisions are avoided.

Proof. By definition of the Voronoi diagram, different Voronoi cells do not overlap. Moreover, for a given i, ZoM1_i is static and does not change over time. Hence, for all robots i ≠ j in Π and all times t, t′, ZoM1_i(t) ∩ ZoM1_j(t′) = ∅.

Clearly, ZoM2_i ⊆ ZoM1_i, which means that ZoM2_i also ensures collision avoidance.

Lemma 4.2 If, for every time t and each robot r_i, the destination point computed by r_i at t always remains inside ZoM2_i(t), then collisions are avoided.

The proof of the above lemma follows directly from the fact that ∀t, ZoM2_i(t) ⊆ ZoM1_i(t), and from Lemma 4.1.

Lemma 4.3 ∀t, ZoM3_i(t) ⊆ ZoM2_i(t).

Proof. Fix some robot r_i and let r_j be the robot closest to r_i at time t_0. Let d_0 denote the initial distance between r_i and r_j, that is, d_0 = dist(p_i(t_0), p_j(t_0)). We assume that all robots move only inside their ZoM3, computed as explained in Definition 4. Let t_1 ≥ t_0 be the first time at which a robot in {r_i, r_j}, say r_i, finishes a Look phase after t_0. The destination computed by r_i in this cycle is located inside ZoM3_i(t_1), which is a circle centered at p_i(t_0) and whose diameter is at most d_0/3. Hence, the destination computed by r_i is distant from p_i(t_0) by at most d_0/3. Let t_2 ≥ t_1 be the first time after t_1 at which a robot, say r_j, finishes a Look phase. Between t_1 and t_2, r_i may or may not have finished its Move phase. In any case, in the configuration observed by r_j at t_2, r_i is distant from p_j(t_0) by at most d_0 + d_0/3. This implies that ZoM3_j(t_2) has a diameter of at most (d_0 + d_0/3)/3, which implies that the destination point computed by r_j in this cycle is distant from p_j(t_0) by at most d_0/3 + d_0/9. Repeating the argument, we get that for every t, ZoM3_i(t) has a diameter of at most ∑_{k=1}^{∞} d_0/3^k = d_0/2. Hence the diameter of ZoM3_i(t) never exceeds d_0/2, which implies that ZoM3_i(t) ⊆ ZoM2_i(t).

Ensuring Collision-freedom in Line RoboCast Algorithms To make LineRbcast1 and LineRbcast2 collision-free, we require that any destination computed by a robot r_i at time t be located within its ZoM3_i(t). The computation of destinations is modified as follows. Let dest_i(t) be the destination computed by a robot r_i at time t. Based on dest_i(t), r_i computes a new destination dest′_i(t) that ensures collision avoidance. dest′_i(t) can be set to any point located in [p_i(t_0), dest_i(t)] ∩ ZoM3_i(t). For example, we can take dest′_i(t) to be the point of the line segment [p_i(t_0), dest_i(t)] located at distance d_i(t)/2 from p_i(t_0), with d_i(t) computed as explained in Definition 4.
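A possible implementation of this clipping step, again as a hedged sketch (Python): it keeps the computed destination when it already lies in ZoM3_i(t) and otherwise stops strictly inside the zone along the segment from p_i(t_0) to dest_i(t). The shrink factor is an arbitrary illustrative choice, not a value taken from the paper.

import math

def clipped_destination(dest, p0, other_current, shrink=0.9):
    # p0: r_i's initial position; other_current: the other robots' positions
    # observed in the current Look. Returns a point of the segment [p0, dest]
    # that lies inside ZoM3_i(t) as given by Definition 4.
    radius = min(math.hypot(q[0] - p0[0], q[1] - p0[1]) for q in other_current) / 3
    d = math.hypot(dest[0] - p0[0], dest[1] - p0[1])
    if d < radius:
        return dest
    t = shrink * radius / d
    return (p0[0] + t * (dest[0] - p0[0]), p0[1] + t * (dest[1] - p0[1]))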
This modification of the destination computation method does not impact algorithms correctness since it does not depend on the exact value of computed destinations, but on the relationship between the successive positions occupied by each robot. The algorithms remain correct as long as robots keep the capabil- ity to freely change their direction of movement and to move in both the positive and the negative part of each such direction. This capability is not altered by the collision avoidance scheme since the origin of the coordinate system of each robot - corresponding to its original position - is strictly included in its zone of movement, be it defined by ZoM 1 , ZoM 2 or ZoM 3 . Generalisation of the Protocols to n Robots As explained at the end of Section 3, the generalisation of our algorithms to the case of n robots has to deal with the issue of distinguishing robots from each others despite their anonymity. The solution we use is to instruct each robot to move in the close 15 neighbourhood of its original position. Thus, other robots can recognize it by comparing its current position with past ones. For this solution to work, it is necessary that each robot always remains the closest one to all the positions it has previously occupied. Formally speaking, we define the zone of movement ZoM 4 in a similar way as ZoM 3 except that the diameter is this time equal to d i ( t ) / 6 (vs. d i ( t ) / 3). We now show that ZoM 4 provides the required properties. Let r i and r j be an arbitrary pair of robots and Let d ij denotes the distance between their initial positions. It can easily shown, using the same arguments as the proof of Lemma 43, that: 1. Neither of the two robots moves away from its initial position by a distance greater than d ij / 4. This implies that each robot remains always at a distance strictly smaller than d ij / 2 from all the positions it has previously held. 2. The distance between r i (resp. r j ) and all the positions held by r j ( r i ) is strictly greater than d ij / 2. Hence, r i can never be closer than r j to a position that was occupied by r j , and vice versa. This implies that it is always possible to recognize a robot by associating it with the position which is closest to it in some previously observed configuration. 5 RoboCast Applications 5.1 Asynchronous Deterministic 2-Gathering Given a set of n robots with arbitrary initial locations and no agreement on a global coordinate system, n -Gathering requires that all robots eventually reach the same unknown beforehand location. n -Gathering was already solved when n > 2 in both ATOM [9] and CORDA [4] oblivious models. The problem is impossible to solve for n = 2 even in ATOM, except if robots are endowed with persistent memory [9]. In this section we present an algorithm that uses our RoboCast primitive to solve 2-Gathering in the non-oblivious CORDA model. A first ”naive” solution is for each robot to robocast its abscissa and ordinate axes and to meet the other robot at the midpoint m of their initial positions. RoboCasting the two axes is done using our Line RoboCast function described above in conjunction with the ZoM 3 − based collision avoidance scheme. A second possible solution is to refine Algorithm ψ f − point (2) of [9, 10] by using our Line RoboCast function to ”send” lines instead of the one used by the authors. 
The idea of this algorithm is that each robot, when activated for the first time, translates and rotates its coordinate system such that the other robot is on its positive y-axis, and then robocasts its (new) x-axis to the other robot using our Line RoboCast function. In [9], the authors give a method that allows each robot to compute the initial position of its peer by comparing the two robocast x-axes defined above. Then each robot moves toward the midpoint of their initial positions. Our Line RoboCast routine combined with the above idea achieves gathering in asynchronous systems within a bounded (vs. finite in [9]) number of movements and using only two (vs. four) variables in persistent memory.

Theorem 5.1 There is an algorithm for solving deterministic gathering for two robots in non-oblivious asynchronous networks (CORDA).

5.2 Asynchronous Stigmergy

Stigmergy [6] is the ability of a group of robots that communicate only through vision to exchange binary information. Stigmergy amounts to encoding bits in the movements of robots. Solving this problem becomes straightforward with our RoboCast primitive. First, robots exchange their local coordinate systems as explained in Section 3. Then, each robot that has a binary packet to transmit robocasts a line to its peers whose angle with respect to its abscissa encodes the binary information. Theoretically, as the precision of visual sensors is assumed to be infinite, robots are able to observe the exact angle of this transmitted line, hence the size of exchanged messages may be unbounded as well. However, in a more realistic environment in which sensor readings and calculations have a margin of error, it is wiser to discretize the measuring space. To this end, we divide the space around the robot into several sectors such that all the points located in the same sector encode the same binary information (to tolerate coding errors). For instance, to send binary packets of 8 bits, each sector should have an angle equal to u = 360°/2^8. Hence, when a robot moves through a line whose angle with respect to the abscissa is equal to α, the corresponding binary information is equal to ⌊α/u⌋. Thus, our solution works in asynchronous networks, uses a bounded number of movements, and also allows robots to send binary packets and not only single bits as in [6].

6 Conclusion and Perspectives

We presented a new communication primitive for robot networks that can be used in fully asynchronous CORDA networks. Our scheme has the additional properties of being motion, memory, and computation efficient. We would like to raise some open questions:

1. The solution we presented for collision avoidance in CORDA can be used for protocols where robots remain in their initial vicinity during the whole protocol execution. A collision-avoidance scheme that could be used with all classes of protocols is a challenging issue.

2. Our protocol assumes that a constant number of positions is stored by each robot. Investigating the minimal number of stored positions for solving a particular problem would lead to interesting new insights about the computing power that can be gained by adding memory to robots.

References

1. H. Ando, Y. Oasa, I. Suzuki, and M. Yamashita. Distributed memoryless point convergence algorithm for mobile robots with limited visibility. Robotics and Automation, IEEE Transactions on, 15(5):818–828, 1999.
2. F. Aurenhammer. Voronoi diagrams: a survey of a fundamental geometric data structure.
ACM Computing Surveys (CSUR) , 23(3):405, 1991. 3. D. Canepa and M. G. Potop-Butucaru. Stabilizing flocking via leader election in robot networks. In SSS , pages 52–66, 2007. 4. M. Cieliebak, P. Flocchini, G. Prencipe, and N. Santoro. Solving the robots gath- ering problem. Automata, Languages and Programming , pages 192–192, 2003. 5. X. D ́ efago and A. Konagaya. Circle formation for oblivious anonymous mobile robots with no common sense of orientation. In POMC , pages 97–104, 2002. 6. Y. Dieudonn ́ e, S. Dolev, F. Petit, and M. Segal. Deaf, dumb, and chatting asyn- chronous robots. In OPODIS , pages 71–85, 2009. 7. P. Flocchini, G. Prencipe, N. Santoro, and P. Widmayer. Hard tasks for weak robots: The role of common knowledge in pattern formation by autonomous mobile robots. Algorithms and Computation , pages 93–102, 1999. 8. G. Prencipe. On the feasibility of gathering by autonomous mobile robots. In SIROCCO 2005 , volume 3499 of LNCS , pages 246–261. Springer, May 2005. 9. I. Suzuki and M. Yamashita. Distributed anonymous mobile robots: Formation of geometric patterns. SIAM Journal of Computing , 28(4):1347–1363, 1999. 10. I. Suzuki and M. Yamashita. Erratum: Distributed anonymous mobile robots: Formation of geometric patterns. SIAM Journal of Computing , 36(1):279–280, 2006. 18 Appendix A Correctness Analysis of LineRbcast1 In the following we prove that Algorithm 1 satisfies the specification of a Robo- Cast, namely validity and termination. First we introduce some notations that will be further used in the proofs of the algorithm. For each variable v , and each robot r i , we denote r i .v ( t ) the value of the variable v in the local memory of r i at time t . When the time information can be derived from the context, we use simply r i .v . Proof of the Validity property. We start by the validity property. For this, we first prove a series of technical lemmas. The following two lemmas state the existence of a time instant at which both robots have reached S 2 and a following time instant at which at least one of them have reached S 3 . Lemma A1 Eventually, both robots reach state S 2 . Proof. Thanks to the fairness assumption of the scheduler, every robot is acti- vated infinitely often. The first time each robot is activated, it executes the lines (1.a, 1.b, 1.c) of Algorithm 1 and it reaches the state S 2 . Lemma A2 Eventually, at least one robot reaches state S 3 . Proof. Let r 1 and r 2 be two robots executing Algorithm 1, and assume towards contradiction that neither of them reaches state S 3 . But according to Lemma A1, they both eventually reach S 2 . Consider for each robot r i the cycle in which it reaches state S 2 , and define t i to be the time of the end of the Look phase of this cycle. Without loss of generality, assume t 1 ≤ t 2 (the other case is symmetric). Hence, the variable r 1 .pos 1 describes the position of robot r 2 at t 1 expressed in the local coordinate system of robot r 1 . Let t 3 > t 2 be the time at which robot r 2 finishes its cycle that leads it to state S 2 . Between t 2 and t 3 , robot r 2 performed a non null movement because it moved towards the point (1 . 0) of its local coordinate system (line 1 .b of the code). Hence, the position of r 2 at t 3 is different from its position at t 1 ≤ t 2 which was recorded in the variable r 1 .pos 1 . By assumption, r 2 never reaches state S 3 , so each time it is activated after t 3 it keeps executing the lines 2 .a and 2 .c of the code and never moves from its current position (reached at t 3 ). 
By fairness, there is a time t ≥ t 3 at which robot r 1 is activated again. At this time, it observes r 2 in a position different from r 1 .pos 1 . This means that for r 1 the condition of line 2 .a is false. Hence, r 1 executes the else block of the condition (2 .b. ∗ ) and reaches state S 3 which contradicts the assumption and proves the lemma. The next lemma expresses the fact that our algorithm exhibits some kind of synchrony in the sense that robots advance in the execution of the algorithm through the different states in unisson. That is, neither of them surpasses the other by more than one stage (state). 19 Lemma A3 If at some time robot r i is in state S j and robot r 1 − i is in state S k then | j − k | ≤ 1 . Proof. The proof of the lemma is divided into two parts: – If robot r i is in state S 3 and robot r 1 − i is in state S j then j ≥ 2. proof: Since robot r i is in state S 3 , it has necessarily executed the lines (1 .a . . . 1 .d ) and (2 .b. ∗ ) of the code. Hence, the value of variable r i .pos 1 is different from that of r i .pos 2 . This means that r i has seen r 1 − i in at least two different positions which implies that r 1 − i has been activated at least once. Hence, r 1 − i has necessarily executed the lines (1 .a . . . 1 .c ) and has reached S 2 . – If robot r i is in state S 4 and robot r 1 − i is in state S j then j ≥ 3. proof: For a robot r 1 − i to reach state S 4 , it must execute the line 3 .a of the code and detect that the other robot r 1 − i has changed its direction of movement (it moved toward the negative part of its x-axis). Thus, robot r 1 − i has necessarily executed lines (2 .b. 2 . . . 2 .c ) of the code which means that r 1 − i is in state S 3 . Before this, robot r 1 − i moved only in one direction, that is, in the positive direction of its x-axis. This proves the lemma. The following lemma states that each robot r i is guaranteed to be observed by its peer at least once in a position located in the positive part of its local x-axis. The observed position is stored in r 1 − i .pos 1 . Expressed otherwise, this means that each robot ”send” a position located in its positive x-axis to its peer. This property is important for proving validity which correspond to both robots eventually reaching S 3 (Lemmas A5 and A6). Indeed, since each robot r i is guaranteed that a point located in its local x-axis was received by its peer r 1 − i , it suffices for r i to send its line to head toward the negative part of its x-axis and to stay there until it is observed by r 1 − i . That is, until a position located in its negative x-axis (and thus distinct from r 1 − i .pos 1 ) is received by r 1 − i . Lemma A4 For each robot r i , the variable r i .pos 1 describes a position located in the positive axis of the other robot r 1 − i . Proof. The value of the variable r i .pos 1 is assigned for the first time in line 1 .a when r i is still in state S 1 . At this time, according to Lemma A3, r 1 − i is necessarily in state S 1 or S 2 (Otherwise, this would contradict Lemma A3 since we would have a time at which a robot ( r 1 − i ) is in a state S j with j ≥ 3 concurrently with another robot ( r i ) that is in state S 1 ). This means that the variable r i .pos 1 describes a position held by robot r 1 − i while it was in state S 1 or S 2 . But according to the algorithm, when a robot is in state S 1 or S 2 , it is still located in a position of its positive x-axis (or the origin). 
Hence, the variable r i .pos 1 describes a position located in the positive x-axis of robot r 1 − i which proves the lemma. 20 Lemma A5 Eventually, both robots reach state S 3 . Proof. According to Lemmas A2 and A3, there is a time at which some robot, say r i , reaches S 3 and the other one ( r 1 − i ) is at a state S j with j ≥ 2. If j ≥ 3 then the lemma holds and we are done. So we assume in what follows that r 1 − i is at S 2 and we prove that it eventually reaches S 3 . Assume for the sake of contradiction that this is not the case, that is, r 1 − i remains always stuck in S 2 . This implies, according to Lemma A3, that r i remains also stuck in state S 3 . When r i is in state S 3 , it keeps executing the line 3 .b of the code until it reaches a position located in the negative part of its x-axis. Denote by t 1 the first time at which r i reaches its negative x-axis. Each time r i is activated after t 1 , it executes the line 3 .c of the code corresponding to a null movement and it never moves from its current position (located in the negative x-axis). This is because we assumed that r i remains stuck in S 3 forever. By fairness, there is a time t 2 ≥ t 1 at which r 1 − i is activated. This time, the condition of line 2 .a does not hold for robot r 1 − i because the position returned by observe ( i ) is located in the negative x-axis of r i and is different from r 1 − i .pos 1 which is located in the positive part of the x-axis of r 1 (as stated in Lemma A4). Hence, r 1 − i executes the part 2 .b. ∗ of the code and changes its status to S 3 . Now we are ready to prove the validity property. Lemma A6 Algorithm 1 satisfies the validity property of the Line Robocast Problem for two robots. Proof. Eventually both robots reach state S 3 according to Lemma A5. Each robot r i in state S 3 has necessarily executed the blocks 1 . ∗ and 2 .b. ∗ of the algorithm and thus delivered the line defined by the positions r i .pos 1 and r i .pos 2 . Now we prove that this line is well defined and that it does correspond to l 1 − i , the line sent by r 1 − i . r i .pos 1 and r i .pos 2 are well defined since they are assigned a value in lines 1 .a and 1 .b. 1 respectively before r i delivers the line. The assigned values correspond to two positions of r 1 − i . Moreover, by the condition of line 2 .b. we have that these two positions are distinct. It remains to prove that they belong to l 1 − i . The values of variables r i .pos 1 and r i .pos 2 are assigned when r i is in state S 1 and S 2 respectively. Hence, according to Lemma A3, when r i .pos 1 and r i .pos 2 are defined, r 1 − i did not yet reach S 4 and moved only through its x-axis. This means that r i .pos 1 and r i .pos 2 correspond to two distinct positions of the x-axis of r 1 − i . Hence, r i delivered l 1 − i . Since both robots eventually reach S 3 , both lines l i and l 1 − i are eventually delivered. Proof of the Termination property. Now we prove that the algorithm actually terminates. Before terminating, each robot r i must be sure that its peer r 1 − i has received its sent line, that is, r 1 − i has reached the state S 3 . As already explained, r i can infer the transition of r 1 − i to S 3 by detecting a change of its direction of movement. Upon this, r i can go on to state S 4 and terminates safely. In the 21 following two lemmas, we prove that at least one robot reaches S 4 . 
Lemma A7 For at least one robot, say r_i, the two variables r_i.pos_1 and r_i.pos_2 describe two positions located on the positive x-axis of r_{1-i}, and such that r_i.pos_2 > r_i.pos_1 with respect to the local coordinate system of r_{1-i}.

Proof. Let r_i be the first robot to enter state S_3. The other robot, r_{1-i}, is then in state S_2 in accordance with Lemma A3. Hence, r_{1-i} has moved only in the positive direction of its x-axis, so the variables r_i.pos_1 and r_i.pos_2 correspond to two different positions on the positive x-axis of robot r_{1-i} (or at its origin). But since r_i.pos_2 was observed after r_i.pos_1 and r_{1-i} moves in the positive direction of its x-axis, r_i.pos_2 > r_i.pos_1 with respect to the local coordinate system of r_{1-i}.

Lemma A8 Eventually, at least one robot reaches state S_4.

Proof. We assume towards contradiction that no robot ever reaches S_4. According to Lemma A5, both robots eventually reach S_3. Hence we consider a configuration in which both robots are in S_3, and we derive a contradiction by proving that at least one of them does reach S_4. Let r_i be the robot given by Lemma A7. The variables r_i.pos_1 and r_i.pos_2 of r_i correspond to two different positions occupied by r_{1-i} while it was on the positive part of its x-axis. By Lemma A5, r_{1-i} eventually reaches state S_3. At the end of the corresponding cycle, r_{1-i} is either already located at a position on its negative x-axis, or it keeps executing lines 3.b.* each time it is activated until it reaches such a position; let us call this position p. In the next cycles in which it is activated, r_{1-i} executes line 3.c of the code, because we assumed that r_{1-i} never reaches S_4. It follows that r_{1-i} never leaves p. Hence, r_{1-i} is guaranteed to be eventually observed by r_i at a position that is smaller than r_i.pos_2 with respect to the local coordinate system of r_{1-i}. At this point, the condition of line 3.a becomes true for robot r_i, which executes the block of the code labelled 3.a.* and sets its state to S_4.
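Lemma A9 below reasons about the point myIntersect, the intersection of the robocast line l_i with the next line nextl_i that r_i will robocast. Assuming the two lines are represented by a point and a direction vector and are not parallel (e.g., when nextl_i is the robot's y-axis), this intersection can be computed as in the following illustrative sketch.

from typing import Tuple

Point = Tuple[float, float]

def intersect(p: Point, d: Point, q: Point, e: Point) -> Point:
    # Intersection of the line {p + t*d} with the line {q + s*e},
    # assuming the two lines are not parallel.
    cross = d[0] * e[1] - d[1] * e[0]
    if abs(cross) < 1e-12:
        raise ValueError("the lines are parallel or coincident")
    t = ((q[0] - p[0]) * e[1] - (q[1] - p[1]) * e[0]) / cross
    return (p[0] + t * d[0], p[1] + t * d[1])

# Example: l_i is the robot's x-axis and nextl_i its y-axis; they meet at the origin.
print(intersect((3.0, 0.0), (1.0, 0.0), (0.0, 2.0), (0.0, 1.0)))  # (0.0, 0.0)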
Lemma A9 Eventually, both robots reach state S_4.

Proof. According to Lemma A8, at least one robot, say r_i, eventually reaches S_4. When r_i reaches S_4, r_{1-i} is in a state S_j with j ≥ 3 according to Lemma A3. If j = 4 the lemma holds trivially, so we consider in the following a configuration in which r_{1-i} is in state S_3, and we prove that it eventually joins r_i in state S_4. The variables r_{1-i}.pos_1 and r_{1-i}.pos_2 describe two distinct positions located on the x-axis of robot r_i. Let p_i denote the position of r_i at the end of the cycle in which it reaches S_4. Once in state S_4, r_i moves towards the point myIntersect each time it is activated until it reaches it (lines 3.a.2 and 4.a of the code). myIntersect is the point located at the intersection of l_i and nextl_i, and its distance from p_i is finite. Since r_i is guaranteed to move a minimal distance of δ_i in each cycle in which it is activated, it reaches myIntersect after a finite number of cycles. In the next cycle, r_i chooses a destination located outside l_i (line 4.b.2) and moves towards it before finishing the algorithm. Let t_i be the time of the end of the Move phase of this cycle and let q_i be the position occupied by r_i at t_i. Since q_i ∉ l_i, we have q_i ∉ line(r_{1-i}.pos_1, r_{1-i}.pos_2). It follows that r_{1-i}.pos_2 ∉ line(r_{1-i}.pos_1, q_i). By fairness, there is a time t > t_i at which r_{1-i} is activated again and observes r_i at the position q_i. But we showed that q_i is such that r_{1-i}.pos_2 ∉ line(r_{1-i}.pos_1, q_i). Hence the condition of line 3.a is true for robot r_{1-i} at t, and it reaches state S_4 in this cycle.

Lemma A10 Algorithm 1 satisfies the termination property of the Line RoboCast Problem for two robots.

Proof. Eventually, both robots reach S_4, as proved by Lemma A9. Let r_i be a robot in state S_4, let p_i be its position at the end of the cycle in which it reaches S_4, and let d_i be the distance between p_i and myIntersect. Since the scheduler is fair and a robot is allowed to move in each cycle a minimal distance of δ_i before it can be stopped by the scheduler, it follows that r_i is guaranteed to cover the distance d_i and to reach myIntersect after at most d_i/δ_i cycles. In the next cycle, r_i moves outside l_i and terminates.

Theorem A11 Algorithm 1 solves the Line RoboCast Problem for two robots in non-oblivious CORDA systems.

Proof. Follows directly from Lemmas A6 and A10.

B Generic RoboCast

In this section, we describe the RoboCast algorithm for the general case of n robots and then give a formal proof of its correctness.

B.1 Description of the Algorithm

The Line RoboCast algorithm for the general case of n robots is a simple generalization of the algorithm for two robots. Its code is given in Algorithm 3 below. It consists of the following steps. The first time a robot r_i is activated in state S_1, it simply records the positions of all the other robots in the array pos_1[]. Then, it moves towards the point (1, 0) of its local coordinate system and goes to state S_2. When r_i is in state S_2, each time it observes some robot r_j in a position different from the one recorded in pos_1[j], it stores this new position in pos_2[j]. At this point, r_i can infer the line sent by r_j, which passes through both pos_1[j] and pos_2[j]. Hence, r_i delivers line(pos_1[j], pos_2[j]), which corresponds to l_j. r_i does not move from its current position until it has assigned a value to every cell of pos_2[] (apart from the one associated with itself, which is meaningless), that is, until it has delivered all the lines sent by its peers. Upon this, it transitions to S_3 and heads toward the point (−1, 0). In state S_3, r_i waits until it observes that all the other robots have changed the direction of their movement or have moved outside their sent lines. Then, it moves towards a position located outside its current line l_i; in particular, it goes to a position located on nextl_i, the next line it will robocast. Hence, r_i first passes through the intersection of l_i and nextl_i, then moves outside l_i and terminates the algorithm.

Algorithm 3 Line RoboCast LineRbcast1 for n robots: Algorithm for robot r_i

Variables:
  state: initially S_1
  pos_1[1..n]: initially ⊥
  pos_2[1..n]: initially ⊥
  destination, intersection: initially ⊥

Actions:
1. State [S_1]: % Robot r_i starts the algorithm %
   a. foreach 1 ≤ j ≤ n do pos_1[j] ← observe(j) enddo
   b. destination ← (1, 0)_i
   c. state ← S_2
   d. Move to destination
2. State [S_2]: % r_i knows at least one position of the lines of all other robots %
   a. if ∃ j ≠ i s.t. (pos_2[j] = ⊥) and (pos_1[j] ≠ observe(j)) then
        1. pos_2[j] ← observe(j)
        2. Deliver(line(pos_1[j], pos_2[j]))
      endif
   b. if ∃ j ≠ i s.t. (pos_2[j] = ⊥) then destination ← observe(i)
   c. else
        1. destination ← (−1, 0)_i
        2. state ← S_3
      endif
   d. Move to destination
3. State [S_3]: % r_i knows the lines of all other robots %
   a. if ∀ j ≠ i, pos_2[j] is outside the line segment [pos_1[j], observe(j)] then
        1. intersection ← l_i ∩ nextl_i
        2. destination ← intersection
        3. state ← S_4
   b. else if (observe(i) ≥ (0, 0)_i) then destination ← (−1, 0)_i
   c. else destination ← observe(i)
      endif endif
   d. Move to destination
4. State [S_4]: % r_i knows that all robots have learned its line l_i %
   a. if (observe(i) ≠ intersection) then destination ← intersection
   b. else
        1. r_i rotates its coordinate system such that its x-axis and its origin match with nextl_i and intersection respectively
        2. destination ← (1, 0)_i
        3. return
      endif
   c. Move to destination
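For concreteness, the sketch below transcribes the Compute phase of Algorithm 3 in Python. It is only an illustration of the control flow, under several simplifying assumptions that are ours, not the algorithm's: nextl_i is taken to be the local y-axis, exact floating-point comparisons stand in for the "different position" tests, the coordinate rotation of line 4.b.1 is omitted, and no collision-avoidance is applied to the chosen destinations.

from dataclasses import dataclass, field
from typing import Dict, Optional, Tuple

Point = Tuple[float, float]

def collinear(p: Point, q: Point, r: Point, eps: float = 1e-9) -> bool:
    # True if the three points are (approximately) on one line.
    return abs((q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])) <= eps

def on_segment(a: Point, b: Point, c: Point, eps: float = 1e-9) -> bool:
    # True if b lies on the segment [a, c].
    if not collinear(a, b, c, eps):
        return False
    return (min(a[0], c[0]) - eps <= b[0] <= max(a[0], c[0]) + eps
            and min(a[1], c[1]) - eps <= b[1] <= max(a[1], c[1]) + eps)

@dataclass
class LineRobocast:
    ident: int                                   # this robot's index i
    state: str = "S1"
    pos1: Dict[int, Point] = field(default_factory=dict)
    pos2: Dict[int, Point] = field(default_factory=dict)
    delivered: Dict[int, Tuple[Point, Point]] = field(default_factory=dict)
    intersection: Optional[Point] = None

    def compute(self, obs: Dict[int, Point]) -> Point:
        # One Compute phase. obs[j] is robot j's position in r_i's local,
        # fixed coordinate system; the returned point is the next destination.
        me = self.ident
        others = [j for j in obs if j != me]
        if self.state == "S1":                    # 1.a-1.d: record first positions
            self.pos1 = {j: obs[j] for j in others}
            self.state = "S2"
            return (1.0, 0.0)                     # head along the positive x-axis
        if self.state == "S2":                    # 2.a: collect second positions
            for j in others:                      # (all such j handled at once here)
                if j not in self.pos2 and obs[j] != self.pos1[j]:
                    self.pos2[j] = obs[j]
                    self.delivered[j] = (self.pos1[j], obs[j])   # deliver l_j
            if len(self.pos2) < len(others):      # 2.b: some line still missing
                return obs[me]                    # stay put
            self.state = "S3"                     # 2.c: reverse direction
            return (-1.0, 0.0)
        if self.state == "S3":                    # 3.a: everyone changed direction?
            if all(not on_segment(self.pos1[j], self.pos2[j], obs[j]) for j in others):
                self.intersection = (0.0, 0.0)    # l_i ∩ nextl_i (nextl_i assumed
                self.state = "S4"                 # to be the local y-axis)
                return self.intersection
            if obs[me][0] >= 0.0:                 # 3.b: keep heading to the negative x-axis
                return (-1.0, 0.0)
            return obs[me]                        # 3.c: already there, stay put
        if self.state == "S4":                    # 4.a: reach the intersection point
            if obs[me] != self.intersection:
                return self.intersection
            self.state = "DONE"                   # 4.b: leave l_i along nextl_i and stop
            return (0.0, 1.0)
        return obs[me]                            # robocast of l_i is finished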
B.2 A Correctness Argument

We prove the correctness of our algorithm by proving that it satisfies the validity and termination properties of the RoboCast problem specification. The general idea of the proof is similar to that of the two-robot algorithm, even if it is a little more involved.

Proof of the Validity property.

Lemma B1 Eventually, all robots reach state S_2.

Proof. Similar to the proof of Lemma A1.

Lemma B2 Eventually, at least one robot reaches state S_3.

Proof. Let R = {r_1, r_2, ..., r_n} be the set of n robots executing Algorithm 3. We assume towards contradiction that none of them ever reaches S_3. But according to Lemma B1, all robots eventually reach state S_2. Thus we proceed in the following way: we consider a configuration in which all robots are in S_2 and we prove that at least one of them eventually reaches S_3, which leads us to a contradiction. Consider, for each robot r_i ∈ R, the cycle in which it reaches state S_2, and define t_i and t'_i to be respectively the times of the end of the Look and of the Move phases of this cycle. Let t_k be equal to min{t_1, t_2, ..., t_n} and let r_k be the corresponding robot. That is, at t_k, robot r_k finishes executing a Look phase, and at the end of this cycle it reaches state S_2. This means that, for robot r_k, the array r_k.pos_1[] corresponds to the configuration of the network at time t_k. Between t_i ≥ t_k and t'_i, each robot executes complete Compute and Move phases. The movement performed in this phase cannot be null, because robots move from the point (0, 0) towards the point (1, 0) of their local coordinate system (line 1.b of the code). Moreover, the scheduler cannot stop a robot before it reaches the point (δ_i, 0). Hence, the position of each robot r_i at t'_i is different from its position at t_i. But the position of r_i at t_i is equal to its position at t_k. Thus, the position of each robot at t'_i is different from its position at t_k, which is stored in r_k.pos_1[i]. We have by assumption that no robot ever reaches S_3. So each time a robot r_i is activated after t'_i, it keeps executing lines 2.b and 2.d of the code and never moves from the position it reached at t'_i. Define t_end to be equal to max{t'_1, ..., t'_n}. It follows that, for all t ≥ t_end, the position of each robot r_i at t is different from r_k.pos_1[i].
By fairness, there is a time t_a ≥ t_end at which r_k is activated again. In this cycle, r_k observes that each robot r_i is located at a position different from r_k.pos_1[i]. Consequently, if there exists a robot r_i such that r_k.pos_2[i] was equal to ⊥ before this cycle, then r_k assigns the currently observed position of r_i to r_k.pos_2[i]. This implies that the condition of line 2.b is now false for r_k. Hence r_k executes the else block of the condition and reaches state S_3. This is the required contradiction that proves the lemma.

Lemma B3 For all robots r_i and r_j, if r_i.pos_1[j] ≠ ⊥, then r_i.pos_1[j] describes a position that is necessarily located on the positive x-axis of robot r_j.

Proof. The proof follows the same lines as that of Lemma A4.

Lemma B4 If at some time robot r_i is in state S_j and another robot r_{i'} is in state S_k, then |j − k| ≤ 1.

Proof. The lemma can be proved by generalising the proof of Lemma A3 to the case of n robots. We divide the analysis into two subcases:
– If robot r_i is in state S_3 and another robot r_{i'} is in state S_j, then j ≥ 2.
Proof: If r_i is in state S_3, it has observed all the other robots in at least two distinct positions. This means that all the other robots have started a Move phase, which implies that they have all finished a complete Compute phase in which they executed lines 1.a ... 1.c of the code and reached S_2.
– If robot r_i is in state S_4 and another robot r_{i'} is in state S_j, then j ≥ 3.
Proof: For a robot to reach S_4, it must detect a change of direction by all the other robots in the network, which is captured by the condition of line 3.a. We prove that this condition cannot be true unless all robots have reached S_3 and no robot in the network is still in state S_2. Indeed, robots in state S_1 move in the positive direction of their x-axis and those in state S_2 do not move. So, a robot cannot change its direction before reaching state S_3. This change of direction is reflected by the choice of the point (−1, 0) as a destination at line 2.c.1 of the code, before the transition to state S_3 at line 2.c.2.

Corollary B5 If at some time t, ∃ i, j such that robots r_i and r_j are respectively in states S_k and S_{k+1} at t, with k ∈ {1, 2, 3}, then all the robots of the network are either in state S_k or in state S_{k+1} at t.

Proof. Since r_i is in state S_k, no robot in the network can be in a state S_l with l ≥ k + 2, according to Lemma B4. Similarly, the fact that r_j is in state S_{k+1} implies that no robot in the network can be in a state S_l with l ≤ k − 1. By the conjunction of the two facts, we obtain that all robots are either in state S_k or in state S_{k+1}.

The following lemma proves that if at some time t_a some robots of the network are in state S_2 and others are in state S_3, then at least one robot that is in state S_2 at t_a eventually reaches S_3.

Lemma B6 Let G_2(t), G_3(t) be the groups of robots that are respectively in state S_2 and S_3 at time t. If at some time t_a, ‖G_2(t_a)‖ > 0 and ‖G_3(t_a)‖ > 0, then there exists a time t ≥ t_a at which ‖G_3(t)‖ ≥ ‖G_3(t_a)‖ + 1.

Proof. Since by assumption ‖G_2(t_a)‖ > 0 and ‖G_3(t_a)‖ > 0, it follows from Corollary B5 that R = G_2(t_a) ∪ G_3(t_a). We assume towards contradiction that ∀ t ≥ t_a, G_2(t) = G_2(t_a) = G_2, that is, that no robot that is in S_2 at t_a ever reaches S_3. But by assumption we have ‖G_2‖ > 0. Hence, ∀ t > t_a, ‖G_2(t)‖ = ‖G_2‖ > 0.
This implies, in accordance with Lemma B4, that no robot of the network can reach S_4 after t_a. Consequently, ∀ t ≥ t_a, G_3(t) = G_3(t_a) = G_3 = R \ G_2.
– As discussed above, all robots have reached S_2 by t_a. This means that all robots have executed line 1.a of the code by t_a. Consequently, at t_a, ∀ r_i ∈ R, ∀ r_j ∈ R with j ≠ i, r_i.pos_1[j] ≠ ⊥. Moreover, according to Lemma B3, all the positions stored in the arrays pos_1[] are located on the positive x-axes of the corresponding robots.
– By assumption, ∀ t > t_a, ‖G_3(t)‖ = ‖G_3‖. This means that no robot in G_3 ever reaches S_4. Hence the robots of G_3 never execute the block 3.a.* of the code, and they keep executing line 3.b each time they are activated until they reach a position on their negative x-axes. Then, once a robot of G_3 arrives at the negative part of its x-axis, it keeps executing line 3.c of the code each time it is activated. As shown above, the positions stored in the different arrays pos_1[] are different from ⊥ and correspond to points located on the positive x-axes of the corresponding robots. Since the robots of G_3 eventually reach positions on their negative x-axes and stay there, each of them is eventually observed by every robot in the network at a position different from the one stored in that robot's local array pos_1[], which corresponds to a positive x-axis position. Formally, there is a time t_v > t_a at which ∀ r_i ∈ G_3, ∀ r_j ∈ R with r_j ≠ r_i, r_j.pos_2[i] ≠ ⊥.
– Let r_1, ..., r_m be the robots of G_2. Consider, for each robot r_i ∈ G_2, the cycle in which it reaches state S_2, and define t_i and t'_i to be respectively the times of the end of the Look and of the Move phases of this cycle. Let t_k be equal to min{t_1, t_2, ..., t_m} and let r_k ∈ G_2 be the corresponding robot. That is, at t_k, robot r_k finishes executing a Look phase, and at the end of this cycle it reaches state S_2. This means that, for robot r_k, r_k.pos_1[] describes the configuration of the network at time t_k. Following the lines of the proof of Lemma B2, we obtain that there exists a time t_u > t_a at which ∀ r_j ∈ G_2 \ {r_k}, r_k.pos_2[j] ≠ ⊥.
Now, let t_x = max{t_v, t_u}. From the discussion above it follows that at time t_x, for robot r_k ∈ G_2, it holds that ∀ r_j ∈ G_3, r_k.pos_2[j] ≠ ⊥ and ∀ r_j ∈ G_2 \ {r_k}, r_k.pos_2[j] ≠ ⊥. Hence, at t_x, ∀ r_j ∈ R \ {r_k}, r_k.pos_2[j] ≠ ⊥. This means that the condition of line 2.b is false for r_k at t_x, so r_k executes the else block of this condition when activated after t_x and reaches state S_3, which contradicts the assumption that ∀ t > t_a, ‖G_2(t)‖ = ‖G_2‖.

Lemma B7 Eventually, all robots of the network reach state S_3.

Proof. Follows from Lemmas B1, B2 and B6.

Lemma B8 Algorithm 3 satisfies the validity property.

Proof. The idea of the proof is similar to that of Lemma A6. According to Lemma B7, all robots eventually reach state S_3. Each robot r_i that reaches state S_3 has necessarily executed the block 1.* and the line 2.c of Algorithm 3. Hence, this robot has its two arrays pos_1[] and pos_2[] well defined, and according to the way the elements of pos_2[] are defined (refer to line 2.a of the code), we conclude that for all j ≠ i, pos_2[j] ≠ pos_1[j]. Moreover, since robots move only along their x-axes, for all j ≠ i, pos_2[j] and pos_1[j] correspond to two distinct positions on the x-axis of robot r_j. Hence, each robot in state S_3 can infer the x-axes of its peers from pos_1[] and pos_2[], which proves the lemma.
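As the validity argument notes, pos_1[j] and pos_2[j] are two distinct points of r_j's x-axis observed while r_j was moving in its positive direction, so a receiver can reconstruct both r_j's line and the orientation of its axis (though not its unit of measure). A minimal sketch, with hypothetical names:

import math
from typing import Tuple

Point = Tuple[float, float]

def peer_axis(pos1: Point, pos2: Point) -> Tuple[Point, Point]:
    # Given two successive observations of a peer moving along the positive
    # direction of its x-axis, return a point of that axis together with the
    # unit vector of its positive direction, both in the observer's frame.
    dx, dy = pos2[0] - pos1[0], pos2[1] - pos1[1]
    norm = math.hypot(dx, dy)
    if norm == 0.0:
        raise ValueError("the two observed positions must be distinct")
    return pos1, (dx / norm, dy / norm)

# A peer observed at (2, 1) and later at (4, 3) robocasts the line through
# these two points; its positive x-axis points along the normalized difference.
print(peer_axis((2.0, 1.0), (4.0, 3.0)))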
Proof of the Termination property.

Lemma B9 Eventually, at least one robot reaches state S_4.

Proof. We assume, for the sake of contradiction, that no robot ever reaches S_4. However, according to Lemma B7, all robots eventually reach state S_3. Hence we consider a configuration in which all robots are in state S_3, and we prove that at least one of them eventually reaches S_4, which leads us to a contradiction. The idea of the proof is similar to that of Lemma B2: we consider the first robot r_k that executes the Look phase of a cycle leading it from S_2 to S_3. Let t_k be the time of the end of this Look phase. Clearly, ∀ r_i ∈ R \ {r_k}, r_k.pos_1[i] and r_k.pos_2[i] describe two positions of r_i located on its positive x-axis. This is because these two positions were observed by r_k before r_i reaches S_3 and changes its direction of movement towards its negative x-axis. Moreover, r_k.pos_2[i] > r_k.pos_1[i] with respect to the local coordinate system of r_i, since r_i was observed at r_k.pos_1[i] and then at r_k.pos_2[i] while it was moving along the positive direction of its x-axis. The claim can be proved formally as in Lemma A7. After t_k, all the other robots of the network perform the transition from S_2 to S_3. Then, they head towards the negative part of their local x-axes (lines 2.c.1 and 3.b of the code) and stay there (line 3.c), since they cannot reach S_4 by assumption. Each robot r_i that reaches the negative part of its x-axis is located at a position p_i such that r_k.pos_2[i] is outside the line segment [r_k.pos_1[i], p_i]. Hence the condition of line 3.a eventually becomes true for robot r_k, and it reaches S_4 after executing the block 3.a.* of the code. This is the required contradiction.

Lemma B10 Eventually, all robots of the network reach S_4.

Proof. The proof is similar to that of Lemma A9. The intuition behind it is as follows: we proved in Lemma B9 that at least one robot, say r_i, eventually reaches S_4. After reaching S_4, and after a finite number of executed cycles, r_i quits l_i (line 4.b.2). When they observe r_i outside l_i, the other robots transition to state S_4.

Lemma B11 Algorithm 3 satisfies the termination property.

Proof. The proof is similar to that of Lemma A10.

Theorem B12 Algorithm 3 solves the Line RoboCast Problem for n robots in non-oblivious CORDA systems.

Proof. Follows directly from Lemmas B8 and B11.