Sampling-Based Temporal Logic Path Planning

Cristian Ioan Vasile and Calin Belta

This work was partially supported by the ONR under grants MURI N00014-09-1051 and MURI N00014-10-10952 and by the NSF under grant NSF CNS-1035588. The authors are with the Division of Systems Engineering, Boston University, Boston, MA, {cvasile, cbelta}@bu.edu.

Abstract — In this paper, we propose a sampling-based motion planning algorithm that finds an infinite path satisfying a Linear Temporal Logic (LTL) formula over a set of properties satisfied by some regions in a given environment. The algorithm has three main features. First, it is incremental, in the sense that the procedure for finding a satisfying path at each iteration scales only with the number of new samples generated at that iteration. Second, the underlying graph is sparse, which guarantees the low complexity of the overall method. Third, it is probabilistically complete. Examples illustrating the usefulness and the performance of the method are included.

I. INTRODUCTION

Motion planning is a fundamental problem that has received a lot of attention from the robotics community [16]. The goal is to generate a feasible path for a robot to move from an initial to a final configuration while avoiding obstacles. Exact solutions to this problem are intractable, and relaxations using potential fields, navigation functions, and cell decompositions are commonly used [5]. These approaches, however, become prohibitively expensive in high-dimensional configuration spaces. Sampling-based methods were proposed to overcome this limitation. Examples include the probabilistic roadmap (PRM) algorithm proposed by Kavraki et al. in [13], which is very useful for multi-query problems, but is not well suited for the integration of differential constraints. In [17], Kuffner and LaValle proposed rapidly-exploring random trees (RRT). RRTs grow randomly, are biased to explore "new" space [17] (Voronoi bias), and find solutions quite fast. Moreover, PRM and RRT were shown to be probabilistically complete [13], [17], but not probabilistically optimal [12]. Karaman and Frazzoli proposed RRT* and PRM*, the probabilistically optimal counterparts of RRT and PRM, in [12].

Recently, there has been increasing interest in improving the expressivity of motion planning specifications from the classical scenario ("Move from A to B and avoid obstacles.") to richer languages that allow for Boolean and temporal requirements, e.g., "Visit A, then B, and then C, in this order infinitely often. Always avoid D unless E was visited." It has been shown that temporal logics, such as Linear Temporal Logic (LTL), Computation Tree Logic (CTL), μ-calculus, and their probabilistic versions (PLTL, PCTL) [1], can be used as formal and expressive specification languages for robot motion [15], [21], [3], [11], [7]. In the above works, adapted model checking algorithms and automata game techniques [15], [4] are used to generate motion plans and control policies for a finite model of robot motion, which is usually obtained through an abstraction process based on partitioning the configuration space [2]. The main limitation of these approaches is their high complexity, as both the synthesis and abstraction algorithms scale at least exponentially with the dimension of the configuration space. To address this issue, in [11], Karaman and Frazzoli proposed a sampling-based path planning algorithm from specifications given in deterministic μ-calculus.
However, deterministic μ-calculus formulae have an unnatural syntax based on fixed point operators, and are difficult to use by untrained human operators. In contrast, Linear Temporal Logic (LTL) has a very friendly syntax and semantics, which can be easily translated to natural language. One idea would be to translate LTL specifications to deterministic μ-calculus formulae and then proceed with the generation of motion plans as in [11]. However, there is no known procedure to transform an LTL formula φ into a μ-calculus formula Ψ such that the size of Ψ is polynomial in the size of φ (for details see [6]).

In this paper, we propose a sampling-based path planning algorithm that finds an infinite path satisfying an LTL formula over a set of properties that hold at some regions in the configuration space. The procedure is based on the incremental construction of a transition system followed by the search for one of its satisfying paths. One important feature of the algorithm is that, at a given iteration, it only scales with the number of samples and transitions added to the transition system at that iteration. This, together with a notion of "sparsity" that we define and enforce on the transition system, plays an important role in keeping the overall complexity at a manageable level. In fact, we show that, under some mild assumptions, our definition of sparsity leads to the best possible complexity bound for finding a satisfying path. Finally, as the number of samples increases, the probability that a satisfying path is found approaches 1, i.e., our algorithm is probabilistically complete.

Among the above mentioned papers, the closest to this work is [11]. As in this paper, the authors of [11] can guarantee probabilistic completeness and scalability with added samples only at each iteration of their algorithm. However, in [11], the authors employ the fixed point (Knaster-Tarski) theorem to find a satisfying path. Their method is based on maintaining a "product" graph between the transition system and every sub-formula of their deterministic μ-calculus specification and checking for reachability and the existence of a "type" of cycle on the graph. On the other hand, our algorithm maintains the product automaton between the transition system and a Büchi automaton corresponding to the given LTL specification. Note that, as opposed to LTL model checking [1], we use a modified version of the product automaton that ensures reachability of the final states. Moreover, we impose that the states of the transition system be bounded away from each other (by a given function decaying in terms of the size of the transition system). Sparseness is also explored by Dobson and Bekris in [8] for PRM using different techniques.

Our long term goal is to develop a computational framework for automatic deployment of autonomous vehicles from rich, high level specifications that combine static, a priori known information with dynamic, locally sensed events. An example is search and rescue in a disaster relief scenario: an unmanned aircraft is required to keep on photographing some known affected regions and uploading the photos at a known base region. While executing this (global) mission, the aircraft uses its sensors to (locally) identify survivors and fires, with the goal of immediately providing medical assistance to the survivors and extinguishing the fires.
The main challenge in this problem is to generate control strategies that guarantee the satisfaction of the global specification in the long term while at the same time correctly reacting to locally sensed events. The algorithm proposed in this paper solves the first part of this problem, i.e., the generation of a motion plan satisfying the global specification.

II. PRELIMINARIES

For a finite set Σ, we use |Σ| and 2^Σ to denote its cardinality and power set, respectively. ∅ denotes the empty set.

Definition 2.1 (Deterministic Transition System): A deterministic transition system (DTS) is a tuple T = (X, x0, ∆, Π, h), where:
• X is a finite set of states;
• x0 ∈ X is the initial state;
• ∆ ⊆ X × X is a set of transitions;
• Π is a set of properties (atomic propositions);
• h : X → 2^Π is a labeling function.

We denote a transition (x, x′) ∈ ∆ by x →_T x′. A trajectory (or run) of the system is an infinite sequence of states x = x0 x1 . . . such that xk →_T xk+1 for all k ≥ 0. A state trajectory x generates an output trajectory o = o0 o1 . . ., where ok = h(xk) for all k ≥ 0. The absence of inputs (control actions) in a DTS implicitly means that a transition (x, x′) ∈ ∆ can be chosen deterministically at every state x.

A Linear Temporal Logic (LTL) formula over a set of properties (atomic propositions) is defined using the standard Boolean operators ¬ (negation), ∧ (conjunction) and ∨ (disjunction), and the temporal operators X (next), U (until), F (eventually) and G (always). The semantics of LTL formulae over Π are given with respect to infinite words over 2^Π, such as the output trajectories of the DTS defined above. Any infinite word satisfying an LTL formula can be written in the form of a finite prefix followed by infinitely many repetitions of a suffix. Verifying whether all output trajectories of a DTS with set of propositions Π satisfy an LTL formula over Π is called LTL model checking. LTL formulae can be used to describe rich mission specifications. For example, the formula G(F(R1 ∧ F R2) ∧ ¬O1) specifies a persistent surveillance task: "visit regions R1 and R2 infinitely many times and always avoid obstacle O1" (see Figure 1). Formal definitions for the LTL syntax, semantics, and model checking can be found in [1].

Fig. 1. A simple map with three features: an obstacle O1 and two regions of interest R1 and R2. The mission specification is φ = G(F(R1 ∧ F R2) ∧ ¬O1). The initial position of the robot is marked by the blue disk. The graph (in black and red) represents the generated transition system T. The red arrows specify a satisfying trajectory composed of a prefix [0, 2, 3] and infinitely many repetitions of the suffix [4, 3, 2, 3].

Definition 2.2 (Büchi Automaton): A (nondeterministic) Büchi automaton is a tuple B = (S_B, S_B0, Σ, δ, F_B), where:
• S_B is a finite set of states;
• S_B0 ⊆ S_B is the set of initial states;
• Σ is the input alphabet;
• δ : S_B × Σ → 2^{S_B} is the transition function;
• F_B ⊆ S_B is the set of accepting states.

A transition s′ ∈ δ(s, σ) is denoted by s →_B s′ on input σ. A trajectory of the Büchi automaton s0 s1 . . . is generated by an infinite sequence of symbols σ0 σ1 . . . if s0 ∈ S_B0 and sk+1 ∈ δ(sk, σk) for all k ≥ 0. An infinite input sequence over Σ is said to be accepted by a Büchi automaton B if it generates at least one trajectory of B that intersects the set F_B of accepting states infinitely many times.
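For concreteness, the two definitions above can be encoded directly as small Python data structures. The following sketch is a minimal, hypothetical encoding (class layout, toy states, and labels are illustrative assumptions, not taken from the paper); the DTS instance loosely mirrors the surveillance example of Figure 1.

```python
# Minimal sketch of Definitions 2.1 and 2.2 as plain Python data structures.

class DTS:
    """Deterministic transition system T = (X, x0, Delta, Pi, h)."""
    def __init__(self, x0, props=()):
        self.x0 = x0
        self.states = {x0}                    # X
        self.transitions = set()              # Delta, pairs (x, x')
        self.label = {x0: frozenset(props)}   # h : X -> 2^Pi

    def add_state(self, x, props=()):
        self.states.add(x)
        self.label[x] = frozenset(props)

    def add_transition(self, x, xp):
        assert x in self.states and xp in self.states
        self.transitions.add((x, xp))

class BuchiAutomaton:
    """Nondeterministic Buchi automaton B = (S_B, S_B0, Sigma, delta, F_B).
    delta maps (state, input symbol) to a set of successor states, where an
    input symbol is a frozenset of atomic propositions (an element of 2^Pi)."""
    def __init__(self, initial, accepting, delta):
        self.initial = set(initial)           # S_B0
        self.accepting = set(accepting)       # F_B
        self.delta = delta

    def next_states(self, s, symbol):
        return self.delta.get((s, frozenset(symbol)), set())

# Toy DTS in the spirit of Figure 1: state 0 is the initial configuration,
# states 1 and 2 lie in regions R1 and R2, respectively.
T = DTS(0)
T.add_state(1, props=("R1",))
T.add_state(2, props=("R2",))
for edge in [(0, 1), (1, 2), (2, 1)]:
    T.add_transition(*edge)
```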
It is shown in [1] that for every LTL formula φ over Π there exists a B ̈ uchi automaton B over alphabet Σ = 2 Π such that B accepts all and only those infinite sequences over Π that satisfy φ . There exist efficient algorithms that translate LTL formulae into B ̈ uchi automata [9]. Note, that the converse is not true, there are some B ̈ uchi automata for which there is no corresponding LTL formulae. However, there are logics such as deterministic μ -calculus which are in 1-to-1 correspondence with the set of languages accepted by B ̈ uchi automata. Model checking a DTS against an LTL formula is based on the construction of the product automaton between the DTS and the B ̈ uchi automaton corresponding to the formula. In this paper, we used a modified definition of the product automaton that is optimized for incremental search of a satisfying run. Specifically, the product automaton is defined such that all its states are reachable from the set of initial states. Definition 2.3 (Product Automaton): Given a DTS T = ( X, x 0 , ∆ , Π , h ) and a B ̈ uchi automaton B = ( S B , S B 0 , 2 Π , δ B , F B ) , their product automaton, denoted by P = T × B , is a tuple P = ( S P , S P 0 , ∆ P , F P ) where: • S P 0 = { x 0 } × S B 0 is the set of initial states; • S P ⊆ X × S B is a finite set of states which are reachable from some initial state: for every ( x ∗ , s ∗ ) ∈ S P there exists a sequence of x = x 0 x 1 . . . x n x ∗ , with x k → T x k +1 for all 0 ≤ k < n and x n → T x ∗ , and a sequence s = s 0 s 1 . . . s n s ∗ such that s 0 ∈ S B 0 , s k h ( x k ) → B s k +1 for all 0 ≤ k < n and s n h ( x n ) → T s ∗ ; • ∆ P ⊆ S P × S P is the set of transitions, defined by: (( x, s ) , ( x ′ , s ′ )) ∈ ∆ P iff x → T x ′ and s h ( x ) → B s ′ ; • F P = ( X × F B ) ∩ S P is the set of accepting states. A transition in P is denoted by ( x, s ) → P ( x ′ , s ′ ) if (( x, s ) , ( x ′ , s ′ )) ∈ ∆ P . A trajectory p = ( x 0 , s 0 )( x 1 , s 1 ) . . . of P is an infinite sequence, where ( x 0 , s 0 ) ∈ S P 0 and ( x k , s k ) → P ( x k +1 , s k +1 ) for all k ≥ 0 . Such a trajectory is said to be accepting if and only if it intersects the set of final states F P infinitely many times. It follows by construction that a trajectory p = ( x 0 , s 0 )( x 1 , s 1 ) . . . of P is accepting if and only if the trajectory s 0 s 1 . . . is accepting in B . As a result, a trajectory of T obtained from an accepting trajectory of P satisfies the given specification encoded by B . For x ∈ X , we define β P ( x ) = { s ∈ S B : ( x, s ) ∈ S P } as the set of B ̈ uchi states that correspond to x in P . Also, we denote the projection of a trajectory p = ( x 0 , s 0 )( x 1 , s 1 ) . . . onto T by γ T ( p ) = x 0 x 1 . . . . A similar notation is used for projections of finite trajectories. For both DTS and automata, we use |·| to denote size, which is the cardinality of the corresponding set of states. A state of a DTS or an automaton is called non-blocking if it has at least one outgoing transition. III. PROBLEM FORMULATION AND APPROACH Let D ⊂ R n be a compact set denoting the configuration space of a robot. Let R be a set of disjoint regions in D and Π be a set of properties of interest corresponding to these regions. A map ∼ : R → 2 Π specifies how properties are associated to the regions. Throughout this paper, we will assume that R is composed of connected sets with non- empty interior, which implies they have non-zero Lebesgue measure (i.e. all regions of interest have full dimension). 
Also, all connected sets in D \ ⋃ R ∈R R have full dimension. Examples of properties include “obstacle”, “target”, “drop- off”, etc. (see Fig. 1). In practice, the regions and properties are defined in the workspace, and then mapped to the configuration space by using standard techniques [5]. Problem 3.1: Given an environment described by ( D , R , Π , ∼ ) , the initial configuration of the robot x 0 ∈ D and an LTL formula φ over the set of properties Π , find a satisfying (infinite) path for the robot originating at x 0 . A possible approach to Problem 3.1 is to construct a par- tition of the configuration space that contains the regions of interest as elements of the partition. By using input - output linearizations and vector field assignments in the regions of the partition, it was shown that “equivalent” abstractions in the form of finite (not necessarily deterministic) transition systems can be constructed for a large variety of robot dynamics that include car-like vehicles and quadrotors [2], [18], [19]. Model checking and automata game techniques can then be used to control the abstractions from the temporal logic specification [14]. The main limitation of this approach is its high complexity, as both the synthesis and abstraction algorithms scale at least exponentially with the dimension of the configuration space. In this paper, we propose a sampling-based approach that can be summarized as follows: (1) the LTL formula φ is translated to the B ̈ uchi automaton B ; (2) a transition system T is incrementally constructed from the initial position x 0 using an RRG-based algorithm; (3) concurrently with (2), the product automaton P = T × B is updated and used to check if there is a trajectory of T that satisfies φ . As it will become clear later, our proposed algorithm is probabilistically complete [16], [12] (i.e., it finds a solution with probability 1 if one exists and the number of samples approaches infinity) and the resulting transition system is sparse (i.e., its states are “far” away from each other). IV. PROBLEM SOLUTION The starting point for our solution to Problem 3.1 is the RRG algorithm, which is an extension of RRT [12] that maintains a digraph instead of a tree, and can therefore be used as a model for general ω -regular languages [11]. However, we modify the RRG to obtain a “sparse” tran- sition system that satisfies a given LTL formula. More precisely, a transition system T is “sparse” if the mini- mum distance between any two states of T is greater than a prescribed function dependent only on the size of T ( min x,x ′ ∈T ‖ x − x ′ ‖ 2 ≥ η ( |T | ) ). The distance used to define sparsity is inherited from the underlying configuration space and is not related to the graph theoretical distance between states in T . Throughout this paper, we will assume that this distance is Euclidean. As stated in Section I, sparsity of T is desired since the solution to Problem 3.1 will be the off-line part of a more general procedure that will combine global and local temporal logic specifications. Sparseness also plays an important role in establishing the complexity bounds for the incremental search algorithm (see Section IV-B). A. Sparse RRG We first briefly introduce the functions used by the algo- rithm. a) Sampling function: The algorithm has access to a sampling function Sample : N → D , which generates independent and identically distributed samples from a given distribution P . We assume that the support of P is the entire configuration space D . 
b) Steer function: The steer function Steer : D × D → D is defined based on the robot’s dynamics. 1 Given a configurations x and goal configuration x g , it returns a new configuration x n that can be reached from x by following the dynamics of the robot and that satisfies ‖ x n − x g ‖ 2 < ‖ x − x g ‖ 2 . c) Near function: N ear : D × R → 2 X is a function of a configuration x and a parameter η , which returns the set of states from the transition system T that are at most at η distance away from x . In other words, N ear returns all states in T that are inside the n-dimensional sphere of center x and radius η . d) Far function: F ar : D × R × R → 2 X is a function of a configuration x and two parameters η 1 and η 2 . It returns the set of states from the transition system T that are at most at η 2 distance away from x . However, the difference from the N ear function is that F ar returns an empty set if any state of T is closer to x than η 1 . Geometrically, this means that F ar returns a non-empty set for a given state x if there are states in T which are inside the n -dimensional sphere of center x and radius η 2 and all states of T are outside the sphere with the same center, but radius η 1 . Thus, x has to be “far” away from all states in its immediate neighborhood (see Figure 2). This function is used to achieve the “sparseness” of the resulting transition system. e) isSimpleSegment function: isSimpleSegment : D × D → { 0 , 1 } is a function that takes two configurations x 1 , x 2 in D and returns 1 if the line segment [ x 1 , x 2 ] ( { x ∈ R n : x = λx 1 + (1 − λ ) x 2 , λ ∈ [0 , 1] } ) is simple, otherwise it returns 0. A line segment [ x 1 , x 2 ] is simple if [ x 1 , x 2 ] ⊂ D and the number of times [ x 1 , x 2 ] crosses the boundary of any region R ∈ R is at most one. Therefore, isSimpleSegment returns 1 if either: (1) x 1 and x 2 belong to the same region R and [ x 1 , x 2 ] does not cross the boundary of R or (2) x 1 and x 2 belong to two regions R 1 and R 2 , respectively, and [ x 1 , x 2 ] crosses the common boundary of R 1 and R 2 once. R or at most one of R 1 and R 2 may be a free space region (a connected set in D \ ⋃ R ∈R R ). See Figure 2 for examples. In Algorithm 1, a transition is rejected if it corresponds to a non-simple line segment (i.e. isSimpleSegment function returns 0). Under this condition, the satisfaction of the mission specification can be checked by only looking at the properties corresponding to the states of the transition system. f) Bound functions: η 1 : Z + → R (lower bound) and η 2 : Z + → R (upper bound) are functions that define the bounds on the distance between a configuration in D and the states of the transition system T in terms of the size of T . These are used as parameters for functions F ar and N ear . We impose η 1 ( k ) < η 2 ( k ) for all k ≥ 1 . We also assume that c η 1 ( k ) > η 2 ( k ) , for some finite c > 1 and all k ≥ 0 . Also, η 1 tends to 0 as k tends to infinity. The rate of decay of 1 In this paper, we will assume that we have access to such a function. For more details about planning under differential constraints see [16]. η 1 ( · ) has to be fast enough such that a new sample may be generated. Specifically, the set of all configurations where the center of an n -sphere of radius η 1 / 2 may be placed such that it does not intersect any of the n -spheres corresponding to the states in T has to have non-zero measure with respect to the probability measure P used by the sampling function. 
One conservative upper bound is η1(k) < (1/√π) · (μ(D) Γ(n/2 + 1) / k)^(1/n) for all k ≥ 1, where μ(D) is the total measure (volume) of the configuration space, n is the dimension of D, and Γ is the gamma function. This bound corresponds to the case when there is enough space to insert an n-sphere of radius η1/2 between every two distinct states of T. To simplify the notation, we drop the parameter for these functions and assume that k is always given by the current size of the transition system, k = |T|.

Fig. 2. A simple map with three features: an obstacle O1 and two regions R1, R2. The robot is assumed to be a fully actuated point. At the current iteration the states of T are {0, 1, 2, 3}. The transitions of T are represented by the black arrows. The initial configuration is 0 and is marked by the blue disk. The radii of the dark gray (inner) disks and the light gray (outer) disks are η1 and η2, respectively. A new sample new1 ∈ D is generated, but it will not be considered as a potential new state of T, because it is within η1 distance from state 3 (Far(new1, η1, η2) = ∅). Another sample new2 ∈ D is generated, which is at least η1 distance away from all states in T. In this case, Far(new2, η1, η2) = {0, 1, 2, 3} and the algorithm attempts to create transitions to and from the new sample new2. The transitions {(new2, 0), (0, new2), (new2, 1), (1, new2), (new2, 2), (2, new2)} (marked by black dashed lines) are added to T, because all these transitions correspond to simple line segments (isSimpleSegment returns 1 for all of them). For example, isSimpleSegment(new2, 0) = 1, because new2 and 0 belong to the same region (the free space region) and [new2, 0] does not intersect any other region. isSimpleSegment(new2, 2) = 1, because [new2, 2] crosses the boundary between the free space region and region R1 once. On the other hand, the transitions {(new2, 3), (3, new2)} (marked by red dashed lines) are not added to T, since they pass over the obstacle O1. In this case, isSimpleSegment(3, new2) = 0, because 3 and new2 are in the same region, but [3, new2] crosses the boundary of O1 twice.

The goal of the modified RRG algorithm (see Algorithm 1) is to find a satisfying run, but such that the resulting transition system is "sparse", i.e., states are "sufficiently" far apart from each other. The algorithm iterates until a satisfying run originating in x0 is found. At each iteration, a new sample xr is generated (line 6 in Algorithm 1). For each state x in T which is "far" from the sample xr (x ∈ Far(xr, η1, η2)), a new configuration x′r is computed such that the robot can be steered from x to x′r and the distance to xr is decreased (line 10). The two loops of the algorithm (lines 7–13 and 16–21) are executed if and only if the Far function returns a non-empty set. Note that it is x′r, and not xr, that is regarded as a potential new state of T. Thus, the Steer function plays an important role in the "sparsity" of the final transition system. Next, it is checked whether the potential new transition (x, x′r) is a simple segment (line 9). It is also verified whether x′r may lead to a solution, which is equivalent to testing if x′r induces at least one non-blocking state in P (see Algorithm 2). A sketch of the geometric primitives used by these tests is given below.
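The following Python sketch shows one possible reading of the Near and Far primitives and an approximate isSimpleSegment test. It is an illustrative assumption rather than the authors' implementation: Euclidean distance is assumed, regions are given as point-membership predicates, and the exact boundary-crossing computation (and the containment check in D) is replaced by a sampled check along the segment.

```python
# Sketch of the Near / Far primitives and an approximate simple-segment test
# used by Algorithm 1 (illustrative assumptions, not the authors' code).
import numpy as np

def near(x, states, eta2):
    """States of T within distance eta2 of configuration x."""
    x = np.asarray(x, float)
    return [s for s in states if np.linalg.norm(np.asarray(s, float) - x) <= eta2]

def far(x, states, eta1, eta2):
    """States within eta2 of x, or the empty list if any state is closer than eta1."""
    x = np.asarray(x, float)
    dists = [np.linalg.norm(np.asarray(s, float) - x) for s in states]
    if any(d < eta1 for d in dists):
        return []                      # x is too close to an existing state of T
    return [s for s, d in zip(states, dists) if d <= eta2]

def is_simple_segment(x1, x2, regions, num_checks=100):
    """Approximate test that [x1, x2] crosses the boundary of each region at
    most once, by counting membership changes along sampled points.
    (Containment of the segment in D is assumed and not checked here.)"""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    pts = [x1 + t * (x2 - x1) for t in np.linspace(0.0, 1.0, num_checks)]
    for region in regions:             # region: callable, point -> bool
        inside = [region(p) for p in pts]
        crossings = sum(a != b for a, b in zip(inside, inside[1:]))
        if crossings > 1:
            return False
    return True
```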
If configuration x ′ r and the corresponding transition ( x, x ′ r ) pass all tests, then they are added to the list of new states and list of new transitions of T , respectively (lines 12–13). After all “far” neighbors of x r are processed, the transition system is updated. Note that at this point T was only extended with states that explore “new space”. However, in order to model ω -regular languages the algorithm must also close cycles. Therefore, the same procedure as before (lines 7–14) is also applied to the newly added states X ′ (lines 15–21 of Algorithm 1). The difference is that it is checked if states from X ′ can steer the robot back to states in T in order to close cycles. Also, because we know that the states in X ′ are “far” from their neighbors, the N ear function will be used instead of the F ar function. The algorithm returns a (prefix, suffix) pair in T obtained by projection from the corresponding path ( p 0 ∗ → p F ) and cycle ( p F + → p F ) in P , respectively. The ∗ above the transition symbol means that the length of the path can be 0, while + denotes that the length of the cycle must be at least 1. In the end, the result is a transition system T which captures the general topology of the environment. In the next section, we will show that T also yields a run that satisfies the given specification. B. Incremental search for a satisfying run The proposed approach of incrementally constructing a transition system raises the problem of how to efficiently check for a satisfying run at each iteration. As mentioned in the previous section, the search for satisfying runs is performed on the product automaton. Note that testing whether there exists a trajectory of T from the initial position x 0 that satisfies the given LTL formula φ is equivalent to searching for a path from an initial state p 0 to a final state p F in the product automaton P = T × B and for a cycle containing p F of length greater than 1, where B is the B ̈ uchi automaton corresponding to φ . If such a path and a cycle are found then their projection onto T represents a satisfying infinite trajectory (line 23 of Algorithm 1). Testing whether p F belongs to a non-degenerate cycle (length greater than 1) is equivalent to testing if p F belong to a non-trivial strongly connected component – SCC (the size of the SCC is greater than 1). Checking for a satisfying trajectory in P is performed incrementally as the transition system is modified. The reachability of the final states from initial ones in P is guaranteed by construction (see Definition 2.3). However, we need to define a procedure (see Algorithm 2) to incrementally Algorithm 1: Sparse RRG Input : B – B ̈ uchi automaton corresponding to φ Input : x 0 initial configuration of the robot Output : (prefix, suffix) in T 1 Construct T with x 0 as initial state 2 Construct P = T × B 3 Initialize scc ( · ) 4 while ¬ ( x 0 | = φ ) ( ≡ ¬ ( ∃ p ∈ F P s.t. 
| scc ( p ) | > 1)) do 5 X ′ ← ∅ , ∆ ′ ← ∅ , ∆ ′ P ← ∅ 6 x r ← Sample () 7 foreach x ∈ F ar ( x r , η 1 , η 2 ) do 8 x ′ r ← Steer ( x, x r ) 9 if isSimpleSegment ( x r , x ′ r ) then 10 added ← updateP A ( P , B , ( x, x ′ r )) 11 if added is True then 12 X ′ ← X ′ ∪ { x ′ r } 13 ∆ ′ ← ∆ ′ ∪ { ( x, x ′ r ) } 14 T ← T ∪ ( X ′ , ∆ ′ ) 15 ∆ ′ ← ∅ , ∆ ′ P ← ∅ 16 foreach x ′ r ∈ X ′ do 17 foreach x ∈ N ear ( x ′ r , η 2 ) do 18 if ( x = Steer ( x ′ r , x )) ∧ isSimpleSegment ( x ′ r , x ) then 19 added ← updateP A ( P , B , ( x, x ′ r )) 20 if added is True then 21 ∆ ′ ← ∆ ′ ∪ { ( x ′ r , x ) } 22 T ← T ∪ ( X ′ , ∆ ′ ) 23 return ( γ T ( p 0 ∗ → p F ) , γ T ( p F + → p F )) , where p F ∈ F P update P when a new transition is added to T . Consider the (non-incremental) case of constructing P = T × B . This is done by a traversal of ̄ P = ( X × S B , ̄ ∆ P ) from all initial states, where (( x, s ) , ( x ′ , s ′ )) ∈ ̄ ∆ P if p → p ′ and s h ( x ) → s ′ . ̄ P is a product automaton but without the reachability requirement. This suggests that the way to update P when a transition ( x, x ′ ) is added to T , is to do a traversal from all states p of P such that γ T ( p ) = x . Also, it is checked if x ′ induces any non-blocking states in P (lines 1-3 of Algorithm 2). The test is performed by computing the set S ′ P of non-blocking states of P (line 1) such that p ′ ∈ S ′ P has γ T ( p ′ ) = x ′ and p ′ is obtained by a transition from { ( x, s ) : s ∈ β P ( x ) } . If S ′ P is empty then the transition ( x, x ′ ) of T is discarded and the procedure stops (line 3). Otherwise, the product automaton P is updated recursively to add all states that become reachable because of the states in S ′ P . The recursive procedure is performed from each state in S ′ P as follows: if a state p (line 7) is not in P , then it is added to P together with all its outgoing transitions (line 10) and the recursive procedure continues from the outgoing states of p ; if p is in P then the traversal stops, but its outgoing transitions are still added to P (line 14). The incremental construction of P has the same overall complexity as constructing P from the final T and B , because the recursive procedure just performes traversals that do not visit states already in P . Thus, we focus our complexity analysis on the next step of the incremental search algorithm. The second part of the incremental search procedure is Algorithm 2: Incremental Search for a Satisfying Run Input : P – product automaton Input : B – B ̈ uchi automaton Input : ( x, x ′ ) – new transition in T Output : Boolean value – indicates if P was modified 1 S ′ P ← { ( x ′ , s ′ ) : s h ( x ) → B s ′ , s ∈ β P ( x ) , s ′ non-blocking } 2 ∆ ′ P ← { (( x, s ) , ( x ′ , s ′ )) : s ∈ β P ( x ) , s h ( x ) → B s ′ , ( x ′ , s ′ ) ∈ S ′ P } 3 if S ′ P 6 = ∅ then 4 P ← P ∪ ( S ′ P , ∆ ′ P ) 5 stack ← S ′ P 6 while stack 6 = ∅ do 7 p 1 = ( x 1 , s 1 ) ← stack.pop () 8 foreach p 2 ∈ { ( x 2 , s 2 ) : x 1 → T x 2 , s 1 h ( x 1 ) → B s 2 } do 9 if p 2 / ∈ S P then 10 P ← P ∪ ( { p 2 } , { ( p 1 , p 2 ) } ) 11 ∆ ′ P ← ∆ ′ P ∪ { ( p 1 , p 2 ) } 12 stack ← stack ∪ { p 2 } 13 else if ( p 1 , p 2 ) / ∈ ∆ P then 14 ∆ P ← ∆ P ∪ { ( p 1 , p 2 ) } 15 ∆ ′ P ← ∆ ′ P ∪ { ( p 1 , p 2 ) } 16 updateSCC( P , scc , ∆ ′ P ) 17 return True 18 return False concerned with maintaining the strongly connected compo- nents (SCCs) of P (line 14 of Algorithm 2) as new transitions are added (these are stored in ∆ ′ P in Algorithm 2). 
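The incremental product update of Algorithm 2 can be summarized in a few lines of Python. The sketch below assumes dictionary-based encodings (a dts with 'label' and 'adj' maps, a buchi with a 'delta' map and a precomputed 'nonblocking' set, and a product with 'states', 'transitions', and 'beta' maps); these names are illustrative assumptions, and the incremental SCC maintenance of [10] is only indicated by a comment.

```python
def buchi_next(buchi, s, symbol):
    """Successor Buchi states of s on input symbol (a frozenset of propositions)."""
    return buchi['delta'].get((s, frozenset(symbol)), set())

def update_product(product, buchi, dts, x, xp):
    """Incrementally update P = T x B after transition (x, xp) is added to T
    (in the spirit of Algorithm 2). Returns True iff P was modified."""
    label = dts['label']
    new_delta = []                               # Delta'_P, for the SCC update
    # 1) Product states (xp, s') reached from (x, s), s in beta_P(x), s' non-blocking.
    frontier = set()
    for s in product['beta'].get(x, set()):
        for sp in buchi_next(buchi, s, label[x]):
            if sp in buchi['nonblocking']:
                frontier.add((xp, sp))
                new_delta.append(((x, s), (xp, sp)))
    if not frontier:
        return False                             # discard (x, xp): no non-blocking successor
    for (xq, sq) in frontier:
        product['states'].add((xq, sq))
        product['beta'].setdefault(xq, set()).add(sq)
    product['transitions'].update(new_delta)
    # 2) Traverse from the new states to add everything that became reachable.
    stack = list(frontier)
    while stack:
        x1, s1 = stack.pop()
        for x2 in dts['adj'].get(x1, set()):
            for s2 in buchi_next(buchi, s1, label[x1]):
                edge = ((x1, s1), (x2, s2))
                if (x2, s2) not in product['states']:
                    product['states'].add((x2, s2))
                    product['beta'].setdefault(x2, set()).add(s2)
                    stack.append((x2, s2))
                if edge not in product['transitions']:
                    product['transitions'].add(edge)
                    new_delta.append(edge)
    # The edges collected in new_delta would then be passed to the incremental
    # SCC maintenance routine of [10] (not sketched here).
    return True
```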
To incrementally maintain the SCCs of the product automaton, we employ the soft-threshold-search algorithm presented in [10]. The algorithm maintains a topological order of the super-vertices corresponding to each SCC. When a new transition is added to P , the algorithm proceeds to re- establish a topological order and merges vertices if new SCCs are formed. The details of the algorithm are presented in [10]. The authors also offer insight about the complexity of the algorithm. They show that, under a mild assumption, the incremental algorithm has the best possible complexity bound. Incrementally maintaining P and its SCCs yields a quick way to check if a trajectory of T satisfies φ (line 4 of Algo- rithm 1). The next theorem establishes the overall complexity of Algorithm 2. Theorem 4.1: The execution time of the incremental search algorithm 2 is O ( m 3 2 ) , where m is the number of transitions added to T in Algorithm 1. Remark 4.2: First, note that the execution time of the incremental procedure is better by a polynomial factor than naively running a linear-time SCC algorithm at each step, since this will have complexity O ( m 2 ) . The algorithm pre- sented in [10] improves the previously best known bound by a logarithmic factor (for sparse graphs). The proof exploits the fact that the “sparseness” (metric) property we defined implies a topological sparseness, i.e., T is a sparse graph. Proof: The proof of the theorem is based on the analysis from [10] of incremental SCC algorithms. Haeupler et.al. show that any incremental algorithm that satisfies a “local” property must take at least Ω( n √ m ) time, where n is the number of nodes in the graph and m is the number of edges. The “local” property is a mild assumption that restricts the algorithm to reorder only vertices that are affected by the addition of an edge. This implies that the incremental SCC algorithm has the best possible complexity bound, in asymptotic sense, if and only if the graph is sparse. A graph is sparse if the the number of edges is asymptotically the same as the number of nodes, i.e. m = O ( n ) . What we need to show is that the transition system generated by Algorithm 1 is sparse. Note that although we run the SCC algorithm on the product automaton, the asymptotic execution time is not affected by analyzing the transition system instead of the product automaton, because the B ̈ uchi automaton is fixed. This follows from | S P | ≤ | S B | · | X | and | ∆ P | ≤ | δ B | · | ∆ | . Intuitively, the underlying graph of T is sparse, because the states were generated “far” from each other. When a new state is added to T , it will be connected to other states that are at least η 1 and at most η 2 distance away. Also, all states in T are at least η 1 distance away from each other. This implies that there is a bound on the density of states. This bound is related to the kissing number [20]. The kissing number is the maximum number of non-overlapping spheres that touch another given sphere. Using this intuition, the problem of estimating the maximum number of neighbors of a state can be restated as a sphere packing problem. Given a state x , each neighbor can be thought of as a sphere with radius η 1 / 2 and center belonging to the volume delimited by two spheres centered at x and with radii η 1 and η 2 , respectively. Since, η 1 < η 2 < cη 1 for some c > 1 it follows, that there will be only a finite number of spheres which can be placed inside the described volume. 
Thus there is a finite bound on the number of neighbors a state can have, which depends only on the dimension and shape of the configuration space and the volume between two concentric spheres of radii η 1 and η 2 , respectively. Remarks 4.3: The bound on the number of neighbors can become very large as the dimension of the configuration space increases. As we have seen in the proof, under the “local” property assumption, the incremental search algorithm has the best possible complexity bound. Because we do the search for a satisfying run using a B ̈ uchi automaton, through the product automaton, and not the LTL formula directly, the proposed method is general enough to be applied in conjunction with logics (such as μ -calculus), which are as expressive as the languages accepted by B ̈ uchi automata. Also, we do not expect to obtain search algorithms that are asymptotically faster then the proposed one (for sparse graphs), since this would violate the lower bound obtained in [10] (assuming the “local” property). C. Probabilistic completeness The presented RRG-based algorithm retains the proba- bilistic completeness of RRT, since the constructed transition system is composed of an RRT-like tree and some transitions which close cycles. Theorem 4.4: Algorithm 1 is probabilistically complete. Proof: (Sketch) First we start by noting that any word in a ω -regular language can be represented by a finite prefix and a finite suffix, which is repeated indefinitely [1]. This is important, since this shows that a solution, represented by a transition system, is completely characterized by a finite number of states. Let us denote by ̄ X the finite set of states that define a solution. It follows from the way regions are defined that we can choose a neighborhood around each state in ̄ X such that the system can be steered in one step from all points in one neighborhood to all points in the next neighborhood. Thus, we can use induction to show that [ ? ]: (1) there is a non-zero probability that a sample will be generated inside the neighborhood of the first state in the solution sequence; (2) if there is a state in T that is inside the neighborhood of the k -th state from the solution sequence, then there is a non-zero probability that a sample will be generated inside the k +1 -st state’s neighborhood. Therefore, as the number of samples goes to infinity, the probability that the transition system T has nodes belonging to all neighborhoods of states in ̄ X goes to 1. To finish the proof, note that we have to show that the algorithm is always able to generate samples with the desired “sparseness” property. However, recall that the bound functions must converge to 0 (as the number of states goes to infinity) fast enough such that the set of configurations for which “Far” function returns a non-empty list has non-zero measure with respect to the sampling distribution. This concludes the proof. V. IMPLEMENTATION AND CASE STUDIES We implemented the algorithms presented in this paper in Python2.7. In this section, we present some examples in configuration spaces of dimensions 2, 10 and 20. In all cases, we assume for simplicity that the Steer function is trivial, i.e., there are no actuation constraints at any given configuration. All examples were ran on an iMac system with a 3.4 GHz Intel Core i7 processor and 16GB of memory. Case Study 1: Consider the configuration space depicted in Figure 3. The initial configuration is at (0 . 3; 0 . 3) . 
The specification is to visit regions r1, r2, r3 and r4 infinitely many times while avoiding regions o1, o2, o3 and o4. The corresponding LTL formula for the given mission specification is

φ1 = G(F r1 ∧ (F r2 ∧ (F r3 ∧ (F r4))) ∧ ¬(o1 ∨ o2 ∨ o3 ∨ o4)).   (1)

A solution to this problem is shown in Figures 3 and 4. We ran the overall algorithm 20 times and obtained an average execution time of 6.954 sec, out of which the average of the incremental search algorithm was 6.438 sec. The resulting transition system had a mean size of 51 states and 277 transitions, while the corresponding product automaton had a mean size of 643 states and 7414 transitions. The Büchi automaton corresponding to φ1 had 20 states and 155 transitions.

Fig. 3. One of the solutions corresponding to Case Study 1: the specification is to visit all the colored regions labelled r1 (yellow), r2 (green), r3 (blue) and r4 (cyan) infinitely often, while avoiding the dark gray obstacles labelled o1, o2, o3, o4. The black dots represent the states of the transition system T (51 states and 264 transitions). The starting configuration of the robot (the initial state of T) is denoted by the blue circle. The red arrows represent the satisfying run (finite prefix, suffix pair) found by Algorithm 1, which is composed of 21 states from T. In this case, the prefix and suffix are [0, 1, 4, 3] and [7, 10, 16, 40, 50, 40, 32, 34, 35, 43, 47, 36, 37, 29, 11, 19, 11, 8, 5, 1, 4, 3], respectively.

Fig. 4. Transition systems obtained at earlier iterations corresponding to the solution shown in Figure 3 (to be read from left to right and top to bottom). The black dots and arrows represent the states and transitions of T, respectively.

Case Study 2: Consider a 10-dimensional unit hypercube configuration space. The specification is to visit regions r1, r2, r3 infinitely many times, while avoiding region o1. The LTL formula corresponding to this specification is

φ2 = G(F r1 ∧ (F r2 ∧ (F r3)) ∧ ¬o1).   (2)

The corresponding Büchi automaton has 9 states and 43 transitions. The regions r1 = [0; 0.4] × [0; 0.75]^9, r2 = [0.6; 1] × [0.25; 1]^9, r3 = [0.6; 1] × [0; 0.2] × ([0.2; 1] × [0; 0.8])^4 and o1 = [0.41; 0.59] × [0.3; 0.9] × [0.12; 0.88]^8 are hyperrectangles whose volumes are 0.03, 0.03, 0.013 and 0.012, respectively. r1, r2, r3 are positioned in the corners of the configuration space, while o1 is positioned in the center. In this case, the algorithm took 16.75 sec on average (20 experiments), while just the incremental search procedure for a satisfying run took 14.471 sec. The transition system had a mean size of 69 states and 1578 transitions, while the product automaton had a mean size of 439 states and 21300 transitions.

Case Study 3: We also considered a 20-dimensional unit hypercube configuration space. Two hypercube regions r1 and r2 were defined and the robot was required to visit both of them infinitely many times (φ3 = G(F(r1 ∧ F r2))).
The overall algorithm took 7.45 minutes, while the transition system grew to 414 states and 75584 transitions. The corre- sponding product automaton had a size of 1145 states and 425544 transitions. This example illustrates the fact that the bound on the number of neighbors of a state in the transition system grows at least exponentially in the dimension of the configuration space. The performed tests suggest some possible practical im- provements of the execution time of the proposed algorithms. One idea is to postpone the incremental maintenance of SCCs until there is at least one final state in P . We are interested in SCCs only for final states anyway, therefore there is no benefit to compute them before final states are found. A linear-time SCC algorithm can be used to initialize the corresponding incremental SCC structure when final states are found. Another important improvement may be obtained by processing transitions of P in batches. At each iteration, multiple transitions are added simultaneously, thus a batch version of the incremental SCC may greatly reduce the total execution time. Note that these heuristics will not improve the asymptotic bound of the algorithm. Also, the size of the B ̈ uchi automaton has a significant impact on execution time of the procedure, even though it is fixed and does not contribute to the overall asymptotic bound. VI. FUTURE WORK As already suggested, future work will include integrat- ing the presented algorithm with a local on-line sensing and planning procedure. The overall framework will ensure correctness with respect to both global and local temporal logic specifications. We will also incorporate realistic robot dynamics and environment topologies and we will perform experimental validations with air and ground vehicles in our lab. R EFERENCES [1] Christel Baier and Joost-Pieter Katoen. Principles of model checking . MIT Press, 2008. [2] C. Belta, V. Isler, and G. J. Pappas. Discrete abstractions for robot planning and control in polygonal environments. IEEE Trans. on Robotics , 21(5):864–874, 2005. [3] A. Bhatia, L.E. Kavraki, and M.Y. Vardi. Sampling-based motion planning with temporal goals. In Robotics and Automation (ICRA), IEEE International Conference on , pages 2689–2696. IEEE, 2010. [4] Yushan Chen, Jana Tumova, and Calin Belta. LTL Robot Motion Control based on Automata Learning of Environmental Dynamics. In IEEE International Conference on Robotics and Automation (ICRA) , Saint Paul, MN, USA, 2012. [5] H. Choset, K.M. Lynch, S. Hutchinson, G. Kantor, W. Burgard, L.E. Kavraki, and S. Thrun. Principles of Robot Motion: Theory, Algorithms, and Implementations . MIT Press, Boston, MA, 2005. [6] S. Cranen, J.F. Groote, and M.A. Reniers. A linear translation from LTL to the first-order modal μ -calculus. Technical Report 10-09, Computer Science Reports , 2010. [7] Xu Chu Ding, Marius Kloetzer, Yushan Chen, and Calin Belta. Formal Methods for Automatic Deployment of Robotic Teams. IEEE Robotics and Automation Magazine , 18:75–86, 2011. [8] A. Dobson and K. E. Bekris. Improving Sparse Roadmap Spanners. In IEEE International Conference on Robotics and Automation (ICRA) , March 2013. [9] Paul Gastin and Denis Oddoux. Fast LTL to B ̈ uchi automata trans- lation. In G ́ erard Berry, Hubert Comon, and Alain Finkel, editors, Proceedings of the 13th International Conference on Computer Aided Verification (CAV’01) , volume 2102 of Lecture Notes in Computer Science , pages 53–65, Paris, France, jul 2001. Springer. 
[10] Bernhard Haeupler, Telikepalli Kavitha, Rogers Mathew, Siddhartha Sen, and Robert E. Tarjan. Incremental Cycle Detection, Topological Ordering, and Strong Component Maintenance. ACM Trans. Algo- rithms , 8(1):3:1–3:33, January 2012. [11] S. Karaman and E. Frazzoli. Sampling-based Motion Planning with Deterministic μ -Calculus Specifications. In IEEE Conference on Decision and Control (CDC) , Shanghai, China, December 2009. [12] S. Karaman and E. Frazzoli. Sampling-based Algorithms for Opti- mal Motion Planning. International Journal of Robotics Research , 30(7):846–894, June 2011. [13] L.E. Kavraki, P. Svestka, J.C. Latombe, and M.H. Overmars. Proba- bilistic roadmaps for path planning in high-dimensional configuration spaces. IEEE Transactions on Robotics and Automation , 12(4):566– 580, 1996. [14] M. Kloetzer and C. Belta. A fully automated framework for control of linear systems from temporal logic specifications. IEEE Transactions on Automatic Control , 53(1):287–297, 2008. [15] H. Kress-Gazit, G. E. Fainekos, and G. J. Pappas. Where’s Waldo? Sensor-based temporal logic motion planning. In IEEE International Conference on Robotics and Automation , pages 3116–3121, 2007. [16] S. M. LaValle. Planning Algorithms . Cambridge University Press, Cambridge, U.K., 2006. Available at http://planning.cs.uiuc.edu/. [17] S. M. LaValle and J. J. Kuffner. Randomized kinodynamic planning. In IEEE International Conference on Robotics and Automation , pages 473–479, 1999. [18] S. R. Lindemann and S. M. LaValle. Simple and Efficient Algorithms for Computing Smooth, Collision-Free Feedback Laws Over Given Cell Decompositions. International Journal of Robotics Research , 28(5):600–621, 2009. [19] Alphan Ulusoy, Michael Marrazzo, Konstantinos Oikonomopoulos, Ryan Hunter, and Calin Belta. Temporal Logic Control for an Autonomous Quadrotor in a Nondeterministic Environment. In IEEE International Conference on Robotics and Automation (ICRA) , 2013. [20] Kissing Number Problem, February 2013. [21] T. Wongpiromsarn, U. Topcu, and R. M. Murray. Receding Horizon Temporal Logic Planning for Dynamical Systems. In Conference on Decision and Control (CDC) 2009 , pages 5997 –6004, 2009.