Efficient Sampling-Based Bottleneck Pathfinding over Cost Maps

Kiril Solovey* and Dan Halperin*

November 12, 2018

Abstract

We introduce a simple yet effective sampling-based planner that is tailored for bottleneck pathfinding: given an implicitly-defined cost map M : R^d → R, which assigns to every point in space a real value, we wish to find a path connecting two given points that minimizes the maximal value with respect to M. We demonstrate the capabilities of our algorithm, which we call bottleneck tree (BTT), on several challenging instances of the problem involving multiple agents, where it outperforms the state-of-the-art cost-map planning technique T-RRT*. In addition to its efficiency, BTT requires the tuning of only a single parameter: the number of samples. On the theoretical side, we study the asymptotic properties of our method and consider the special setting where the computed trajectories must be monotone in all coordinates. This constraint arises in cases where the problem involves the coordination of multiple agents that are restricted to forward motions along predefined paths.

1 Introduction

Motion planning is a widely studied problem in robotics. In its basic form it is concerned with moving a robot between two given configurations while avoiding collisions with obstacles. Typically, we are interested in paths of high quality, which lead to lower energy consumption, shorter execution time, or safer execution for the robot and its surroundings. One of the most studied quality measures is path length, which was first studied from the combinatorial and geometric perspective [28] and has recently gained popularity with the introduction of PRM* and RRT* [23] and subsequent work. Another popular measure is the clearance (see, e.g., [50]) of a path: the clearance of a particular robot configuration is the distance to the closest forbidden configuration.
The goal here is to find a path that maximizes the minimum clearance of the configurations along it. The latter example is a special case of the bottleneck pathfinding problem: we are given an implicitly-defined cost map M : C → R which assigns to every configuration c ∈ C of the robot a value; the goal is to find, given start and target configurations, a continuous path ν : [0,1] → C which minimizes the expression max_{τ ∈ [0,1]} M(ν(τ)).

Bottleneck pathfinding can also arise in settings involving multiple robots. However, in most cases multi-robot motion planning is computationally intractable (see, e.g., [20, 38]), even when one is only concerned with finding a solution. Thus, decoupled planners (see, e.g., [48, 37, 4]) usually construct a set of d paths (one for each robot) and attempt to coordinate the robots along their predefined paths in order to avoid collisions. Now suppose that we also wish to find the safest coordination, namely the one that keeps the robots at a maximal distance apart. This problem, again, can be reformulated as a bottleneck-pathfinding problem, where d is the dimension of the effective search space. In particular, denote by σ_i : [0,1] → C the path assigned to robot i, where 1 ≤ i ≤ d, and define for a given d-dimensional point x = (x_1, ..., x_d) ∈ [0,1]^d the cost map M(x) := 1 / min_{1 ≤ i < i' ≤ d} dist(σ_i(x_i), σ_{i'}(x_{i'})), so that minimizing the bottleneck cost over M maximizes the minimal pairwise distance between the robots.

*Kiril Solovey and Dan Halperin are with the Blavatnik School of Computer Science, Tel Aviv University, Israel. Email: {kirilsol,danha}@post.tau.ac.il. This work has been supported in part by the Israel Science Foundation (grant no. 1102/11), by the Blavatnik Computer Science Research Fund, and by the Hermann Minkowski–Minerva Center for Geometry at Tel Aviv University. Kiril Solovey is also supported by the Clore Israel Foundation.

arXiv:1608.00261v2 [cs.RO] 27 Sep 2016

2 Related work

Wein et al. [50] introduced a structure which, given a lower bound δ > 0 on the clearance, returns the shortest path with that clearance. Agarwal et al. [2] consider a particular cost function suggested by Wein et al.
[51] that combines path length and clearance, and develop an efficient approximation algorithm for this case. Perhaps the most popular example of bottleneck pathfinding in recent years is Fréchet matching, which is concerned with quantifying the similarity of two given curves: the Fréchet distance between two curves can be described as the shortest leash that allows a person to walk her dog, where the former is restricted to move along the first curve and the latter along the second. It is generally assumed that this quantity tends to be more informative when only forward motions along the curves are permitted, which translate into monotone paths in the search space. Fréchet matching has been extensively studied for the case of two curves: several efficient techniques that solve the problem exist (see, e.g., [3, 8]). The problem can be extended to multiple curves, although only algorithms that are exponential in the number of curves are known [18, 9]. Several papers consider extensions of the problem involving more complex input objects [6, 7] or cases where additional constraints, such as obstacles, are imposed on the "leash" [12, 11]. Finally, we mention our recent work [39], where we apply a sampling-based PRM-like technique for solving several Fréchet-type problems.

3 Preliminaries

We provide a formal definition of the bottleneck-pathfinding problem. For some fixed dimension d ≥ 2, let M : [0,1]^d → R be a cost map that assigns to every point in [0,1]^d a value in R. From now on we will refer to [0,1]^d as the parameter space, in order to distinguish it from the configuration space of the underlying problem. Notice that the definition of M does not imply that we restrict our discussion to the setting of a single robot whose configuration space is [0,1]^d.
For instance, [0,1]^d can represent the parameter space of d curves of the form σ_i : [0,1] → C_i, where for every 1 ≤ i ≤ d, σ_i describes the motion of robot i in the configuration space C_i. Typically, in such settings involving multiple robots with predefined paths, the robots are only permitted to move forward along their paths and never backtrack. This restriction induces monotone motions in [0,1]^d. Given two points x = (x_1, ..., x_d), y = (y_1, ..., y_d) ∈ R^d, the notation x ⪯ y implies that for every 1 ≤ i ≤ d it holds that x_i ≤ y_i. Additionally, we use the notation x ⪯_δ y for δ > 0 to indicate that x ⪯ y and for every 1 ≤ i ≤ d it holds that y_i − x_i ≥ δ. For simplicity we will assume that the goal consists of planning paths between the points 0, 1 ∈ [0,1]^d, where 0 = (0, ..., 0) and 1 = (1, ..., 1).

Definition 1. A plan ν : [0,1] → [0,1]^d is a continuous path in [0,1]^d that satisfies the following two constraints: (i) ν(0) = 0, ν(1) = 1; (ii) it is monotone in each of the d coordinates, i.e., for every 0 ≤ τ ≤ τ' ≤ 1, ν(τ) ⪯ ν(τ').

Given a path (or a plan) ν, its bottleneck cost is defined to be M(ν) = max_{τ ∈ [0,1]} M(ν(τ)). In Section 6 we will address specific examples of the following problem:

Definition 2. Given a cost map M : [0,1]^d → R, the (monotone) bottleneck-pathfinding problem consists of finding a plan ν which minimizes M(ν).

4 Bottleneck-tree planner

In Algorithm 1 we describe our sampling-based technique for bottleneck pathfinding, which we call bottleneck tree (BTT). The algorithm can be viewed as a lazy version of PRM which explores the underlying graph in a Dijkstra-like fashion, according to cost-to-come with respect to the bottleneck cost over M. Thus, it bears some resemblance to FMT* [22], since the two implicitly explore an underlying PRM. However, FMT* behaves very differently from BTT, since it minimizes the path-length cost.
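Before turning to the algorithm itself, the definitions above are easy to make concrete. The following sketch implements the dominance check x ⪯_δ y and approximates the bottleneck cost of a piecewise-linear plan by dense sampling along its edges; the cost map M used here (distance to the center of the unit square) is a hypothetical stand-in, not one of the cost maps studied in the paper.

```python
import numpy as np

def dominates(x, y, delta=0.0):
    # x dominated by y with margin delta: every coordinate of y
    # exceeds the corresponding coordinate of x by at least delta.
    return bool(np.all(np.asarray(y) - np.asarray(x) >= delta))

def bottleneck_cost(cost_map, waypoints, samples_per_edge=20):
    # Approximate max_{tau} M(nu(tau)) over a piecewise-linear plan
    # by sampling points densely along each straight-line segment.
    cost = -np.inf
    for a, b in zip(waypoints[:-1], waypoints[1:]):
        a, b = np.asarray(a), np.asarray(b)
        for t in np.linspace(0.0, 1.0, samples_per_edge):
            cost = max(cost, cost_map((1 - t) * a + t * b))
    return cost

# Hypothetical cost map: distance to the center of [0,1]^2.
M = lambda p: float(np.linalg.norm(p - 0.5))

plan = [(0.0, 0.0), (0.4, 0.6), (1.0, 1.0)]
assert dominates(plan[0], plan[1]) and dominates(plan[1], plan[2])
print(bottleneck_cost(M, plan))  # maximal cost is attained at the corners
```

For this plan the bottleneck is attained at the endpoints 0 and 1, whose distance to the center is √0.5 ≈ 0.707.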
Algorithm 1 bottleneck-tree(n, r_n)
1: V := {0, 1} ∪ sample(n)
2: for all x ∈ V do
3:     c(x) := ∞; p(x) := null
4: c(0) := M(0)
5: while ∃ x ∈ V such that c(x) < ∞ do
6:     z := get_min(V)
7:     if z == 1 then
8:         return path(T, 0, 1)
9:     V := V \ {z}
10:    N_z := near(z, V, r_n)
11:    for all x ∈ N_z do
12:        if z ⪯ x and c(z) < c(x) then
13:            c_new := max{c(z), M(z, x)}
14:            if c_new < c(x) then
15:                c(x) := c_new; p(x) := z
16: return ∅

BTT accepts the number of samples n and the connection radius r_n as parameters. It maintains a directed minimum spanning tree T of an underlying monotone PRM, or random geometric graph (RGG)^1 [41]:

Definition 3. Given n ∈ N^+, denote by X_n = {X_1, ..., X_n} a set of n points chosen independently and uniformly at random from [0,1]^d. For a connection radius r_n > 0, denote by G_n = G({0, 1} ∪ X_n; r_n) the monotone RGG defined over the vertex set {0, 1} ∪ X_n. G_n has a directed edge (x, y) for x, y ∈ {0, 1} ∪ X_n iff x ⪯ y and ‖x − y‖ ≤ r_n, where ‖·‖ represents the standard Euclidean distance.

The algorithm keeps track of the unvisited vertices of G_n using the set V. For each vertex x, the cost-to-come over the visited portion of G_n, and the parent of x along the corresponding path, are denoted by c(x) and p(x), respectively. In line 1 we initialize V with 0, 1, and a set of n uniform samples in [0,1]^d, which are generated using sample. In lines 2-3 the cost-to-come and the parent attributes are initialized. In each iteration of the while loop, which begins in line 5, the algorithm extracts the vertex z ∈ V which minimizes c(z), using get_min (line 6). If z turns out to be 1 (line 7), then a path from 0 to 1 is returned. This is achieved using path, which simply follows the ancestors of 1 stored in the p attribute until reaching 0.
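The pseudocode of Algorithm 1 can be turned into a compact executable sketch. The version below makes a few simplifying assumptions not present in the pseudocode: a binary heap with lazy deletion plays the role of get_min, near is realized by a brute-force linear scan over the remaining vertices, and the edge cost M(z, x) is approximated by sampling a few points along the edge.

```python
import heapq
import math
import random

def bottleneck_tree(n, r, M, d=2, edge_samples=5):
    # Vertices: 0, 1, and n uniform samples in [0,1]^d (line 1 of Algorithm 1).
    zero, one = (0.0,) * d, (1.0,) * d
    V = [zero, one] + [tuple(random.random() for _ in range(d)) for _ in range(n)]
    dominates = lambda x, y: all(a <= b for a, b in zip(x, y))  # x dominated by y

    def edge_cost(x, y):
        # Approximate the maximum of M along the edge by dense sampling.
        pts = (tuple(a + i / edge_samples * (b - a) for a, b in zip(x, y))
               for i in range(edge_samples + 1))
        return max(M(p) for p in pts)

    cost = {v: math.inf for v in V}
    parent = {v: None for v in V}
    cost[zero] = M(zero)
    heap, done = [(cost[zero], zero)], set()
    while heap:
        c, z = heapq.heappop(heap)        # get_min (line 6), lazy deletion
        if z in done:
            continue
        done.add(z)
        if z == one:                      # reached 1: follow parents (line 8)
            path = [one]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
        for x in V:                       # brute-force near(z, V, r) (line 10)
            if x not in done and dominates(z, x) and math.dist(z, x) <= r:
                c_new = max(c, edge_cost(z, x))   # bottleneck relaxation (line 13)
                if c_new < cost[x]:               # lines 14-15
                    cost[x], parent[x] = c_new, z
                    heapq.heappush(heap, (c_new, x))
    return None  # 0 and 1 are not connected in the monotone RGG
```

With any radius r ≥ √d the direct monotone edge from 0 to 1 exists, so the sketch always returns a monotone plan in that regime; for realistic radii its behavior depends on the sampled graph, as in the analysis of Section 5.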
Otherwise, z is removed from V (line 9), and the subset of its neighbors in G_n that are still members of V is obtained using near (line 10). For each such neighbor x (line 11) it is checked whether z ⪯ x and whether the cost-to-come of x can potentially improve by arriving at it from z (line 12). An alternative cost-to-come for x, induced by a path that reaches x through z, is calculated (line 13). In particular, this cost is the maximum between the cost-to-come of z and the cost of the edge^2 from z to x with respect to M. If the alternative option improves the current cost of x, then the parent and the cost of x are updated accordingly (line 15).

In preparation for the following section, where we study the asymptotic behavior of BTT, we mention here that, given that the underlying graph G_n contains a path from 0 to 1, BTT finds the minimum bottleneck path over G_n, namely the path whose maximal M value is minimized. We omit a formal proof of this fact here; it is rather straightforward and is very similar to the completeness proof of the standard Dijkstra algorithm, adapted to the bottleneck cost; see, e.g., [13, Theorem 25.10].

^1 PRMs and RGGs are equivalent in our context.
^2 The cost of an edge with respect to a given cost map can be approximated by dense sampling along the edge, as is customary in motion planning.

5 Asymptotic analysis

In the previous section we argued that BTT returns a solution that minimizes the bottleneck cost over a fixed graph G_n. A major question in the analysis of sampling-based motion-planning algorithms is under what conditions a discrete graph structure captures the underlying continuous space. For this purpose, we study the asymptotic properties of G_n = G({0, 1} ∪ X_n; r_n) (Definition 3). To the best of our knowledge, existing proofs concerning asymptotic optimality or completeness of sampling-based planners (see, e.g., [23, 22]) only apply to the non-monotone (and standard) setting.
Such results cannot be extended as-is to the analysis of G_n. The remainder of this section is dedicated to strengthening our analysis concerning the special, and harder, case that imposes the monotonicity requirement. Previously [39], we were able to give a convergence guarantee in the monotone case by introducing a connection radius r_n of the form f(n) (log n / n)^{1/d}, where f(n) is any function that tends to infinity with n. Unfortunately, this statement does not indicate which specific function should be used in practice, and a careless choice of f could lead to undesired behavior of BTT. In particular, if f grows too slowly with n, the graph becomes disconnected and BTT may not be able to find a solution at all. On the other hand, an f that grows too fast with n could result in a needlessly dense graph, which will take a very long time to traverse.

In this paper we replace this somewhat abstract definition with a connection radius of the form γ (log n / n)^{1/d}, where γ is a constant depending only on d. This is a more typical structure of bounds in the context of random geometric graphs and sampling-based planners. Furthermore, we show in Section 6 that this formulation of r_n leads to a quick convergence of BTT to the optimum.

For the purpose of the analysis, we define a supergraph of G_n, denoted by L_n = L({0, 1} ∪ X_n; r_n), which corresponds to the standard, non-monotone random geometric graph (see [41]). In particular, it connects every pair of vertices x, y ∈ {0, 1} ∪ X_n with an edge if and only if ‖x − y‖ ≤ r_n, regardless of whether the monotonicity constraint is satisfied.

In order to guarantee asymptotic optimality, one typically has to make some assumptions with respect to the solution one hopes to converge to. First, we introduce basic definitions. Denote by B_r(x) the d-dimensional Euclidean ball of radius r > 0 centered at x ∈ R^d, and let B_r(Γ) = ⋃_{x ∈ Γ} B_r(x) for any Γ ⊆ R^d.
Given a curve ν : [0,1] → R^d, define B_r(ν) = ⋃_{τ ∈ [0,1]} B_r(ν(τ)). Additionally, denote the image of a curve ν by Im(ν) = ⋃_{τ ∈ [0,1]} {ν(τ)}.

Definition 4. Given M : [0,1]^d → R, a plan ν : [0,1] → [0,1]^d is called robust if for every ε > 0 there exists δ > 0 such that for any plan ν' for which Im(ν') ⊂ B_δ(ν) it follows that M(ν') ≤ (1 + ε) M(ν). A plan that attains the infimum cost over all robust plans is termed robustly optimal and is denoted by ν*.

The following theorem is the main theoretical contribution of this paper. The constant γ, which is defined below, was first obtained in [22], for a different problem and involving non-monotone connections.

Theorem 1. Suppose that L_n = L(X_n ∪ {0, 1}; r_n) with r_n = γ (log n / n)^{1/d}, where γ = (1 + η) 2 (d θ_d)^{−1/d}, θ_d is the Lebesgue measure of the unit Euclidean ball, and η > 0 is any positive constant. Then L_n contains a path ν_n connecting 0 to 1 which has the following properties a.s.: (i) M(ν_n) = (1 + o(1)) M(ν*); (ii) at most an o(1) fraction of the edges along ν_n are non-monotone.

This theorem implies that L_n contains a path ν_n that is almost entirely monotone and whose cost converges to the optimum. Notice that in theory this does not necessarily mean that BTT is asymptotically optimal, as the formulation of the algorithm in Algorithm 1 is concerned with paths that are entirely monotone. However, our experiments (Section 6) suggest that this theoretical gap can be bridged.

5.1 Proof of Theorem 1

This subsection is devoted to the proof of Theorem 1. It relies on components that were previously described in the works of Janson et al. [22], Karaman and Frazzoli [23], and our own work [39]. However, we note that several ideas employed here are brand new, and so is the final result.
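The constants in Theorem 1 are straightforward to evaluate numerically. The following sketch computes θ_d via the standard closed form θ_d = π^{d/2} / Γ(d/2 + 1) and reports r_n for a few sample sizes; the choice η = 0.1 is an arbitrary illustrative value, not one prescribed by the theorem.

```python
import math

def connection_radius(n, d, eta=0.1):
    # theta_d: Lebesgue measure of the unit Euclidean ball in R^d.
    theta_d = math.pi ** (d / 2) / math.gamma(d / 2 + 1)
    # gamma from Theorem 1: (1 + eta) * 2 * (d * theta_d)^(-1/d).
    gamma = (1 + eta) * 2 * (d * theta_d) ** (-1 / d)
    # r_n = gamma * (log n / n)^(1/d).
    return gamma * (math.log(n) / n) ** (1 / d)

for n in (10 ** 3, 10 ** 4, 10 ** 5):
    print(n, round(connection_radius(n, d=2), 4))
```

In the plane the radius shrinks like (log n / n)^{1/2}, from roughly 0.073 at n = 10^3 to roughly 0.009 at n = 10^5, which gives a sense of how sparse the graph BTT searches becomes as n grows.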
Our proof follows the common "ball-covering" argument [23, 22, 24], which shows that there exists a collection of samples from X_n located in the vicinity of "landmark" points that lie on some plan of interest. This guarantees that L_n contains a path whose cost is similar to that of the robustly optimal plan ν*. Then we use a refined "(α, β) ball-covering" argument [22], which shows that many of the samples lie very close to the landmarks, so close that most of them form monotone subchains. This induces the existence of paths that are almost entirely monotone and have the desirable bottleneck cost.

We start by selecting a fixed ε > 0. As ν* is robustly optimal, there exists δ > 0 such that for every plan ν with Im(ν) ⊆ B_δ(ν*) it follows that M(ν) ≤ (1 + ε) M(ν*). Define ℓ = 2/δ. Similarly to the proof of Theorem 6 in [39], there exists a sequence of ℓ − 1 ≤ k ≤ ℓ points Q = {q_1, ..., q_k} ⊂ Im(ν*) where q_1 = 0, q_k = 1, and q_j ⪯_{δ/2} q_{j+1} for any 1 ≤ j < k. Note that ℓ is finite and independent of n. This follows from the fact that 0 ⪯_1 1 and every subpath of ν* must "make progress" in at least one of the coordinates. In particular, for every 1 ≤ j ≤ d define Q_j = {q^j_1, ..., q^j_ℓ} ⊂ Im(ν*) to be a collection of ℓ points along ν* with the following property: for every 1 ≤ i' ≤ ℓ, the j-th coordinate of q^j_{i'} is equal to i' · δ/2. Observe that for any 1 ≤ i' ≤ ℓ there exist 1 ≤ j ≤ d and 1 ≤ i'' ≤ ℓ such that 0 ⪯_{(i'−1)·δ/2} q^j_{i''} and 0 ⋠_{(i'−1)·δ/2+Δ} q^j_{i''} for any Δ > 0, i.e., one of the coordinates of q^j_{i''} is equal to (i'−1)·δ/2 (see Figure 1). Thus, it follows that Q ⊂ ⋃_{j=1}^d Q_j.

As ν* can be arbitrarily complex, we define a simplified plan, which is located in the vicinity of ν* and is consequently of low cost. First, for every 1 ≤ j < k denote by ν_{j,j+1} the straight-line path from q_j to q_{j+1} (see Figure 1).
Now define ν_{1,k} to be the concatenation of these k − 1 subpaths. Observe that ν_{1,k} is a plan. Fix ψ ∈ (0, 1) and define r'_n := r_n / (2(2 + ψ)). We generate a set of M_n "landmark" points P = {p_1, ..., p_{M_n}}, which are placed along ν_{1,k}, i.e., P ⊂ Im(ν_{1,k}), such that for every 1 ≤ i ≤ M_n − 1 it holds that ‖p_i − p_{i+1}‖ ≤ r'_n. Note that p_1 = 0. For simplicity we also assume that p_{M_n} = 1.

Now define a set of balls centered at the landmarks. In particular, for every 1 ≤ i ≤ M_n define the set B_{n,i} := B_{r'_n}(p_i). Additionally, denote by A_n the event that for every 1 ≤ i ≤ M_n it holds that X_n ∩ B_{n,i} ≠ ∅. We have the following lemma, which is proven in [22, Lemma 4.4]:

Lemma 1. The event A_n holds a.s.

Corollary 1. The graph L_n contains a path ν_n connecting 0 to 1 such that M(ν_n) ≤ (1 + ε) M(ν*) a.s.

Figure 1: Visualization of the proof of Theorem 1 for d = 2. The red curve represents ν*, on which lie the points Q_1 ∪ Q_2 (represented by red bullets). Observe that Q ⊂ Q_1 ∪ Q_2. The dashed blue straight lines represent ν_{1,2} and ν_{2,3}.

Proof. Suppose that A_n holds. Then for every 1 < i < M_n there exists x_i ∈ X_n such that x_i ∈ B_{n,i}. Observe that for any 1 ≤ i < M_n it holds that ‖x_i − x_{i+1}‖ ≤ r_n, and so L_n contains an edge from x_i to x_{i+1}. Thus, we can define ν_n to be the path consisting of the points 0, x_2, ..., x_{M_n−1}, 1. For any 1 < i < M_n denote by p'_i a point along ν* which is closest to p_i. Observe that ‖p_i − p'_i‖ ≤ δ/2. This, combined with the fact that ‖x_i − p_i‖ ≤ r'_n, implies that ‖x_i − p'_i‖ ≤ δ/2 + r'_n ≤ δ, which then guarantees that M(x_i) ≤ (1 + ε) M(ν*), by the robustness of ν*. Similar arguments can be applied to the edges of ν_n. Thus, M(ν_n) ≤ (1 + ε) M(ν*).
In order to reason about the monotonicity of ν_n, we make the following crucial observation:

Lemma 2. There exists a fixed constant β ∈ (0, 1) such that for any 1 ≤ i < M_n it holds that p_i ⪯_{β r_n} p_{i+1}.

Proof. First, assume that there exists 1 ≤ j < k for which p_i, p_{i+1} ∈ Im(ν_{j,j+1}), and note that by definition the straight line from p_i to p_{i+1} is a subsegment of ν_{j,j+1}. Now, recall that q_j ⪯_{δ/2} q_{j+1} and ‖p_i − p_{i+1}‖ = r'_n. Additionally, denote L_{j,j+1} = ‖q_j − q_{j+1}‖, and note that this value is a constant independent of n. Finally, let β_j be the maximal value for which p_i ⪯_{β_j r_n} p_{i+1}, and observe that L_{j,j+1} / r'_n = (δ/2) / (β_j r_n). Thus, β_j = δ / (4(2+ψ) L_{j,j+1}). Now consider the case where p_i ∈ Im(ν_{j,j+1}) and p_{i+1} ∈ Im(ν_{j+1,j+2}). Without loss of generality, assume that ‖p_i − q_{j+1}‖ ≥ ‖p_{i+1} − q_{j+1}‖. Thus, p_i ⪯_{β_j r_n / 2} q_{j+1}, which implies that p_i ⪯_{β_j r_n / 2} p_{i+1}. Finally, define β = min_{1 ≤ j