Efficient Sampling-Based Bottleneck Pathfinding over Cost Maps

Kiril Solovey∗ and Dan Halperin∗

November 12, 2018

Abstract

We introduce a simple yet effective sampling-based planner that is tailored for bottleneck pathfinding: given an implicitly-defined cost map M : R^d → R, which assigns to every point in space a real value, we wish to find a path connecting two given points that minimizes the maximal value with respect to M. We demonstrate the capabilities of our algorithm, which we call bottleneck tree (BTT), on several challenging instances of the problem involving multiple agents, where it outperforms the state-of-the-art cost-map planning technique T-RRT*. In addition to its efficiency, BTT requires the tuning of only a single parameter: the number of samples. On the theoretical side, we study the asymptotic properties of our method and consider the special setting where the computed trajectories must be monotone in all coordinates. This constraint arises in cases where the problem involves the coordination of multiple agents that are restricted to forward motions along predefined paths.

1 Introduction

Motion planning is a widely studied problem in robotics. In its basic form it is concerned with moving a robot between two given configurations while avoiding collisions with obstacles. Typically, we are interested in paths of high quality, which lead to lower energy consumption, shorter execution time, or safer execution for the robot and its surroundings. One of the most studied quality measures is path length, which was first studied from the combinatorial and geometric perspective [28] and has recently gained popularity with the introduction of PRM* and RRT* [23] and subsequent work. Another popular measure is the clearance (see, e.g., [50]) of a path: the clearance of a particular robot configuration is the distance to the closest forbidden configuration. The goal here is to find a path that maximizes the minimum clearance of the configurations along it.
The latter example is a special case of the bottleneck pathfinding problem: we are given an implicitly-defined cost map M : C → R which assigns to every configuration c ∈ C of the robot a value; the goal is to find, given start and target configurations, a continuous path ν : [0, 1] → C which minimizes the expression max_{τ∈[0,1]} {M(ν(τ))}. Bottleneck pathfinding can also arise in settings involving multiple robots. However, in most cases multi-robot motion planning is computationally intractable (see, e.g., [20, 38]), even when one is only concerned with finding a solution. Thus, decoupled planners (see, e.g., [48, 37, 4]) usually construct a set of d paths—one for each robot—and attempt to coordinate between the robots along their predefined paths in order to avoid collisions. Now suppose that we also wish to find the safest coordination—the one which keeps the robots at a maximal distance apart. This problem, again, can be reformulated as a bottleneck-pathfinding problem, where d is the dimension of the effective search space. In particular, denote by σi : [0, 1] → C the path assigned to robot i, where 1 ≤ i ≤ d, and define for a given d-dimensional point x = (x1, . . . , xd) ∈ [0, 1]^d the cost map M(x) := 1/ min_{1≤i<j≤d} ∥σi(xi) − σj(xj)∥, i.e., the reciprocal of the minimal distance between any pair of robots. Earlier work on the clearance measure includes an algorithm that, given a clearance value greater than 0, returns the shortest path with that clearance. Agarwal et al. [2] consider a particular cost function suggested by Wein et al. [51] that combines path length and clearance and develop an efficient approximation algorithm for this case.

∗Kiril Solovey and Dan Halperin are with the Blavatnik School of Computer Science, Tel Aviv University, Israel. Email: {kirilsol,danha}@post.tau.ac.il. This work has been supported in part by the Israel Science Foundation (grant no. 1102/11), by the Blavatnik Computer Science Research Fund, and by the Hermann Minkowski–Minerva Center for Geometry at Tel Aviv University. Kiril Solovey is also supported by the Clore Israel Foundation.

arXiv:1608.00261v2 [cs.RO] 27 Sep 2016
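To make the multi-robot cost map concrete, the following minimal Python sketch (our own illustration, not from the paper) evaluates M at a point of the parameter space as the reciprocal of the minimal pairwise distance between the robots; the function name `coordination_cost` and the toy straight-line paths are assumptions for the example.

```python
import math

def coordination_cost(x, paths):
    """Coordination cost map M(x) for d robots moving along predefined
    paths: the reciprocal of the minimal pairwise distance between robot
    positions at the parameter-space point x = (x1, ..., xd). Minimizing
    the bottleneck of M thus maximizes the robots' minimal separation."""
    pos = [sigma(xi) for sigma, xi in zip(paths, x)]
    return 1.0 / min(
        math.dist(pos[i], pos[j])
        for i in range(len(pos))
        for j in range(i + 1, len(pos))
    )

# Toy example: two robots on perpendicular straight-line paths.
paths = [lambda t: (t, 0.5), lambda t: (0.5, t)]
```

Note that M blows up as two robots approach the same position, which is exactly the behavior a bottleneck planner needs in order to steer the coordination away from collisions.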
Perhaps the most popular example of bottleneck pathfinding from recent years is Fréchet matching, which is concerned with quantifying the similarity of two given curves: the Fréchet distance between two given curves can be described as the shortest leash that allows a person to walk her dog, where the former is restricted to move along the first curve and the latter along the other. It is generally assumed that this quantity tends to be more informative when only forward motions along the curves are permitted, which translate into monotone paths in the search space. Fréchet matching has been extensively studied for the case of two curves: several efficient techniques that solve the problem exist (see, e.g., [3, 8]). The problem can be extended to multiple curves, although only algorithms that are exponential in the number of curves are known [18, 9]. Several papers consider extensions of the problem involving more complex input objects [6, 7] or cases where additional constraints such as obstacles are imposed on the “leash” [12, 11]. Finally, we mention our recent work [39], where we apply a sampling-based PRM-like technique for solving several Fréchet-type problems.

3 Preliminaries

We provide a formal definition of the bottleneck-pathfinding problem. For some fixed dimension d ≥ 2, let M : [0, 1]^d → R be a cost map that assigns to every point in [0, 1]^d a value in R. From now on we will refer to [0, 1]^d as the parameter space in order to distinguish it from the configuration space of the underlying problem. Notice that the definition of M does not imply that we restrict our discussion to the setting of a single robot whose configuration space is [0, 1]^d. For instance, [0, 1]^d can represent the parameter space of d curves of the form σi : [0, 1] → Ci, where for every 1 ≤ i ≤ d, σi describes the motion of robot i in the configuration space Ci.
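As a concrete instance of a cost map over such a parameter space, here is a small Python sketch (our illustration; the names `leash_cost`, `c1`, `c2` are assumptions) of the leash-length cost for Fréchet matching of two curves.

```python
import math

def leash_cost(x, curve1, curve2):
    """Cost map on [0,1]^2 for Frechet matching: the leash length when
    the person stands at curve1(x[0]) and the dog at curve2(x[1]).
    The bottleneck cost of a monotone plan is the longest leash needed
    along the walk; its infimum over all plans is the Frechet distance."""
    return math.dist(curve1(x[0]), curve2(x[1]))

# Two parallel horizontal segments at vertical distance 1; the diagonal
# plan x1 = x2 keeps the leash at length 1 throughout, which is optimal
# for this pair of curves.
c1 = lambda t: (t, 0.0)
c2 = lambda t: (t, 1.0)
```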
Typically, in such settings involving multiple robots with predefined paths, the robots are only permitted to move forward along their paths and never backtrack. This restriction induces monotone motions in [0, 1]^d. Given two points x = (x1, . . . , xd), y = (y1, . . . , yd) ∈ R^d, the notation x ⪯ y implies that for every 1 ≤ i ≤ d it holds that xi ≤ yi. Additionally, we use the notation x ⪯_δ y for δ > 0 to indicate that x ⪯ y and for every 1 ≤ i ≤ d it holds that yi − xi ≥ δ. For simplicity we will assume that the goal consists of planning paths between the points 0, 1 ∈ [0, 1]^d, where 0 = (0, . . . , 0), 1 = (1, . . . , 1).

Definition 1. A plan ν : [0, 1] → [0, 1]^d is a continuous path in [0, 1]^d that satisfies the following two constraints: (i) ν(0) = 0, ν(1) = 1; (ii) it is monotone in each of the d coordinates, i.e., for every 0 ≤ τ ≤ τ′ ≤ 1 it holds that ν(τ) ⪯ ν(τ′).

Given a path (or a plan) ν, its bottleneck cost is defined to be M(ν) = max_{τ∈[0,1]} M(ν(τ)). In Section 6 we will address specific examples of the following problem:

Definition 2. Given a cost map M : [0, 1]^d → R, the (monotone) bottleneck-pathfinding problem consists of finding a plan ν which minimizes M(ν).

4 Bottleneck-tree planner

In Algorithm 1 we describe our sampling-based technique for bottleneck pathfinding, which we call bottleneck tree (BTT). The algorithm can be viewed as a lazy version of PRM which explores the underlying graph in a Dijkstra-like fashion, according to cost-to-come with respect to the bottleneck cost over M. Thus, it bears some resemblance to FMT* [22], since the two implicitly explore an underlying PRM. However, FMT* behaves very differently from BTT, since it minimizes the path-length cost.
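Algorithm 1 below filters neighbors by the relation ⪯ introduced in Section 3; both ⪯ and ⪯_δ are simple coordinate-wise checks. A minimal sketch (the function names are ours):

```python
def dominates(x, y):
    """x ⪯ y: every coordinate of x is at most the matching one of y."""
    return all(xi <= yi for xi, yi in zip(x, y))

def dominates_by(x, y, delta):
    """x ⪯_delta y: x ⪯ y with a gain of at least delta per coordinate
    (for delta >= 0 the second condition already implies the first)."""
    return dominates(x, y) and all(yi - xi >= delta for xi, yi in zip(x, y))
```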
Algorithm 1 bottleneck-tree(n, rn)
 1: V := {0, 1} ∪ sample(n)
 2: for all x ∈ V do
 3:   c(x) := ∞; p(x) := null
 4: c(0) := M(0)
 5: while ∃x ∈ V such that c(x) < ∞ do
 6:   z := get_min(V)
 7:   if z == 1 then
 8:     return path(T, 0, 1)
 9:   V := V \ {z}
10:   Nz := near(z, V, rn)
11:   for all x ∈ Nz do
12:     if z ⪯ x and c(z) < c(x) then
13:       cnew := max{c(z), M(z, x)}
14:       if cnew < c(x) then
15:         c(x) := cnew; p(x) := z
16: return ∅

BTT accepts the number of samples n and the connection radius rn as parameters. It maintains a directed minimum spanning tree T of an underlying monotone PRM, or random geometric graph (RGG) [41]¹:

Definition 3. Given n ∈ N+, denote by Xn = {X1, . . . , Xn} n points chosen independently and uniformly at random from [0, 1]^d. For a connection radius rn > 0, denote by Gn = G({0, 1} ∪ Xn; rn) the monotone RGG defined over the vertex set {0, 1} ∪ Xn. Gn has a directed edge (x, y) for x, y ∈ {0, 1} ∪ Xn iff x ⪯ y and ∥x − y∥ ≤ rn, where ∥·∥ represents the standard Euclidean distance.

The algorithm keeps track of the unvisited vertices of Gn using the set V. For each vertex x, it maintains the cost-to-come over the visited portion of Gn and the parent of x along this specific path, which are denoted by c(x) and p(x), respectively. In line 1 we initialize V with 0, 1, and a set of n uniform samples in [0, 1]^d, which are generated using sample. In lines 2–3 the cost-to-come and the parent attributes are initialized. In each iteration of the while loop, which begins in line 5, the algorithm extracts the vertex z ∈ V which minimizes c(z), using get_min (line 6). If z turns out to be 1 (line 7), then a path from 0 to 1 is returned. This is achieved using path, which simply follows the ancestors of 1 stored in the p attribute until reaching 0. Otherwise, z is removed from V (line 9) and a subset of its neighbors in Gn that are also members of V are obtained using near (line 10).
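Algorithm 1 can be sketched in a few lines of Python. This is a minimal illustration under several assumptions of ours: `near` is a brute-force scan, `get_min` is a linear scan rather than a priority queue, the sample set is passed in explicitly, and the edge cost M(z, x) is approximated by dense sampling along the segment as the paper's footnote suggests.

```python
import math

def edge_cost(cost, z, x, k=10):
    """Approximate the bottleneck cost M(z, x) of the edge from z to x by
    evaluating the cost map at k + 1 evenly spaced points on the segment."""
    return max(
        cost(tuple(zi + (xi - zi) * t / k for zi, xi in zip(z, x)))
        for t in range(k + 1)
    )

def btt(cost, samples, r):
    """Bottleneck tree (Algorithm 1): Dijkstra-like search, ordered by
    cost-to-come, of the monotone RGG over {0, 1} ∪ samples with
    connection radius r. Returns (path, bottleneck cost) from 0 to 1,
    or (None, inf) when the two are not connected in the graph."""
    d = len(samples[0])
    start, goal = (0.0,) * d, (1.0,) * d
    V = [start, goal] + list(samples)
    c = {v: math.inf for v in V}          # cost-to-come
    p = {v: None for v in V}              # parent in the tree T
    c[start] = cost(start)
    unvisited = set(V)
    while unvisited:
        z = min(unvisited, key=c.get)     # get_min(V)
        if c[z] == math.inf:              # no reachable vertex remains
            break
        if z == goal:                     # path(T, 0, 1): follow parents
            path = [goal]
            while p[path[-1]] is not None:
                path.append(p[path[-1]])
            return path[::-1], c[goal]
        unvisited.remove(z)
        for x in unvisited:               # near(z, V, r), brute force
            if math.dist(z, x) <= r and all(zi <= xi for zi, xi in zip(z, x)):
                c_new = max(c[z], edge_cost(cost, z, x))
                if c_new < c[x]:
                    c[x], p[x] = c_new, z
    return None, math.inf
```

At scale, a k-d tree for `near` and a binary heap for `get_min` would be the idiomatic replacements for the two brute-force scans.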
For each such neighbor x (line 11) it is checked whether z ⪯ x and whether the cost-to-come of x can potentially improve by arriving at it from z (line 12). An alternative cost-to-come for x, which is induced by a path that reaches x from z, is calculated (line 13). In particular, this cost is the maximum between the cost-to-come of z and the cost of the edge² from z to x with respect to M. If the alternative option improves its cost, then the parent and the cost of x are updated accordingly (line 15).

In preparation for the following section, where we study the asymptotic behavior of BTT, we mention here that given that the underlying graph Gn contains a path from 0 to 1, BTT finds the minimum bottleneck path over Gn, namely the path whose maximal M value is minimized. We omit a formal proof of this fact here. It is rather straightforward and is very similar to the completeness proof of the standard Dijkstra algorithm, but for the bottleneck cost; see, e.g., [13, Theorem 25.10].

¹PRMs and RGGs are equivalent in our context.
²The cost of an edge with respect to a given cost map can be approximated by dense sampling along the edge, as is customary in motion planning.

5 Asymptotic analysis

In the previous section we have argued that BTT returns a solution that minimizes the bottleneck cost over the fixed graph Gn. A major question in the analysis of sampling-based motion-planning algorithms is under what conditions a discrete graph structure captures the underlying continuous space. For this purpose, we study the asymptotic properties of Gn = G({0, 1} ∪ Xn; rn) (Definition 3). To the best of our knowledge, existing proofs concerning asymptotic optimality or completeness of sampling-based planners (see, e.g., [23, 22]) only apply to the non-monotone (and standard) setting. Such results cannot be extended as they are to the analysis of Gn.
The remainder of this section is dedicated to strengthening our analysis concerning the special and harder case that imposes the monotonicity requirement. Previously [39], we were able to give a convergence guarantee in the monotone case by introducing a connection radius of the form rn = f(n) · (log n / n)^{1/d}, where f(n) is any function that diverges as n tends to ∞. Unfortunately, this statement does not indicate which specific function should be used in practice, and an uncareful selection of f could lead to undesired behavior of BTT. In particular, if f grows too slowly with n, the graph becomes disconnected and BTT may not be able to find a solution at all. On the other hand, an f that grows too fast with n could result in a needlessly dense graph, which will take a very long time to traverse. In this paper, we replace this somewhat abstract definition with a connection radius of the form rn = γ (log n / n)^{1/d}, where γ is a constant depending only on d. This is a more typical structure of bounds in the context of random geometric graphs and sampling-based planners. Furthermore, we show in Section 6 that this formulation of rn leads to quick convergence of BTT to the optimum.

For the purpose of the analysis, we define a supergraph of Gn, denoted by Ln = L({0, 1} ∪ Xn; rn), which corresponds to the standard and non-monotone random geometric graph (see [41]). In particular, it connects every pair of vertices x, y ∈ {0, 1} ∪ Xn with an edge if and only if ∥x − y∥ ≤ rn, regardless of whether the monotonicity constraint is satisfied.

In order to guarantee asymptotic optimality, one typically has to make some assumptions with respect to the solution one hopes to converge to. First, we introduce basic definitions. Denote by Br(x) the d-dimensional Euclidean ball of radius r > 0 centered at x ∈ R^d, and let Br(Γ) = ⋃_{x∈Γ} Br(x) for any Γ ⊆ R^d. Given a curve ν : [0, 1] → R^d, define Br(ν) = ⋃_{τ∈[0,1]} Br(ν(τ)). Additionally, denote the image of a curve ν by Im(ν) = ⋃_{τ∈[0,1]} {ν(τ)}.

Definition 4.
Given M : [0, 1]^d → R, a plan ν : [0, 1] → [0, 1]^d is called robust if for every ε > 0 there exists δ > 0 such that for any plan ν′ for which Im(ν′) ⊂ Bδ(ν) it follows that M(ν′) ≤ (1 + ε)M(ν). A plan that attains the infimum cost over all robust plans is termed robustly optimal and is denoted by ν∗.

The following theorem is the main theoretical contribution of this paper. The constant γ, which is defined below, was first obtained in [22], for a different problem and involving non-monotone connections.

Theorem 1. Suppose that Ln = L(Xn ∪ {0, 1}; rn) with rn = γ (log n / n)^{1/d}, where γ = (1 + η) · 2 (dθ_d)^{−1/d}, with θ_d representing the Lebesgue measure of the unit Euclidean ball and η > 0 any positive constant. Then Ln contains a path νn connecting 0 to 1 which has the following properties a.s.: (i) M(νn) = (1 + o(1))M(ν∗); (ii) at most an o(1) fraction of the edges along νn are non-monotone.

This theorem implies that Ln contains a path νn that is almost entirely monotone and whose cost converges to the optimum. Notice that in theory this does not necessarily mean that BTT is asymptotically optimal, as the formulation of the algorithm in Algorithm 1 is concerned with paths that are entirely monotone. However, our experiments (Section 6) suggest that this theoretical gap can be bridged.

5.1 Proof of Theorem 1

This subsection is devoted to the proof of Theorem 1. It relies on some components that were previously described in the works of Janson et al. [22], Karaman and Frazzoli [23], and our work [39]. However, we note that several ideas employed here are brand new, and so is the final result. Our proof follows the common “ball-covering” argument [23, 22, 24], which shows that there exists a collection of samples from Xn that are located in the vicinity of “landmark” points that lie on some plan of interest. This guarantees that Ln contains a path that has a cost similar to the robustly optimal plan ν∗.
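The constant γ of Theorem 1 is directly computable. The sketch below (our naming) uses the closed form θ_d = π^{d/2}/Γ(d/2 + 1) for the volume of the d-dimensional unit Euclidean ball; the default η = 0.1 is an arbitrary choice, since the theorem only requires η > 0.

```python
import math

def connection_radius(n, d, eta=0.1):
    """Connection radius r_n = gamma * (log n / n)^(1/d) from Theorem 1,
    with gamma = (1 + eta) * 2 * (d * theta_d)^(-1/d), where theta_d is
    the Lebesgue measure of the d-dimensional unit Euclidean ball."""
    theta_d = math.pi ** (d / 2) / math.gamma(d / 2 + 1)
    gamma = (1 + eta) * 2 * (d * theta_d) ** (-1 / d)
    return gamma * (math.log(n) / n) ** (1 / d)
```

For d = 2 and n = 1000 this gives rn ≈ 0.073, and the radius shrinks as n grows, which keeps the graph sparse while (per the theorem) retaining near-optimal, almost-monotone paths.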
Then we use a refined “(α, β) ball-covering” [22] argument, which shows that many of the samples lie very close to the landmarks—so close that most of them form monotone subchains. This induces the existence of paths that are almost entirely monotone and have desirable bottleneck cost.

We start by selecting a fixed ε > 0. As ν∗ is robustly optimal, there exists δ > 0 such that for every plan ν with Im(ν) ⊆ Bδ(ν∗) it follows that M(ν) ≤ (1 + ε)M(ν∗). Define ℓ = 2/δ. Similarly to the proof of Theorem 6 in [39], there exists a sequence of k points, ℓ − 1 ≤ k ≤ ℓ, Q = {q1, . . . , qk} ⊂ Im(ν∗), where q1 = 0, qk = 1, and qj ⪯_{δ/2} qj+1 for any 1 ≤ j < k. Note that ℓ is finite and independent of n. This follows from the fact that 0 ⪯_1 1 and every subpath of ν∗ must “make progress” in relation to at least one of the coordinates. In particular, for every 1 ≤ j ≤ d define Q^j = {q^j_1, . . . , q^j_ℓ} ⊂ Im(ν∗) to be a collection of ℓ points along ν∗ with the following property: for every 1 ≤ i′ ≤ ℓ, the jth coordinate of q^j_{i′} is equal to i′ · δ/2. Observe that for any 1 ≤ i′ ≤ ℓ there exists 1 ≤ j ≤ d and 1 ≤ i′′ ≤ ℓ such that 0 ⪯_{(j−1)·δ/2} q^j_{i′′} and 0 ⋠_{(j−1)·δ/2+∆} q^j_{i′′} for any ∆ > 0, i.e., one of the coordinates of q^j_{i′′} is equal to (j − 1) · δ/2 (see Figure 1). Thus, it follows that Q ⊂ ⋃_{j=1}^d Q^j.

As ν∗ can be arbitrarily complex, we define a simplified plan, which is located in the vicinity of ν∗ and is consequently of low cost. Firstly, for every 1 ≤ j < k denote by νj,j+1 the straight-line path from qj to qj+1 (see Figure 1). Now, define ν1,k to be the concatenation of these k − 1 subpaths. Observe that ν1,k is a plan. Fix ψ ∈ (0, 1) and define r′_n := rn/(2(2 + ψ)). We generate a set of Mn “landmark” points P = {p1, . . . , p_{Mn}}, which are placed along ν1,k, i.e., P ⊂ Im(ν1,k), such that for every 1 ≤ i ≤ Mn − 1 it holds that ∥pi − pi+1∥ ≤ r′_n. Note that p1 = 0. For simplicity we also assume that p_{Mn} = 1. Now define a set of balls centered at the landmarks. In particular, for every 1 ≤ i ≤ Mn define the set Bn,i := B_{r′_n}(pi).
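The landmark construction above—points along the polyline ν1,k spaced at most r′_n apart—can be sketched directly (names are ours; `waypoints` plays the role of q1, . . . , qk and `spacing` the role of r′_n):

```python
import math

def place_landmarks(waypoints, spacing):
    """Place 'landmark' points along the polyline through `waypoints` so
    that consecutive landmarks are at most `spacing` apart; the first and
    last waypoints are themselves landmarks, as required of p_1 and p_Mn."""
    landmarks = [waypoints[0]]
    for a, b in zip(waypoints, waypoints[1:]):
        steps = math.ceil(math.dist(a, b) / spacing)  # subdivisions of this segment
        for t in range(1, steps + 1):
            landmarks.append(
                tuple(ai + (bi - ai) * t / steps for ai, bi in zip(a, b))
            )
    return landmarks
```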
Additionally, denote by An the event that for every 1 ≤ i ≤ Mn it holds that Xn ∩ Bn,i ≠ ∅. We have the following lemma, which is proven in [22, Lemma 4.4]:

Lemma 1. The event An holds a.s.

Corollary 1. The graph Ln contains a path νn connecting 0 to 1 such that M(νn) ≤ (1 + ε)M(ν∗) a.s.

Figure 1: Visualization of the proof of Theorem 1 for d = 2. The red curve represents ν∗, on which lie the points Q^1 ∪ Q^2 (represented by red bullets). Observe that Q ⊂ Q^1 ∪ Q^2. The dashed blue straight lines represent ν1,2 and ν2,3.

Proof. Suppose that An holds. Then for every 1 < i < Mn there exists xi ∈ Xn such that xi ∈ Bn,i. Observe that for any 1 ≤ i < Mn it holds that ∥xi − xi+1∥ ≤ rn, and so Ln contains an edge from xi to xi+1. Thus, we can define νn to be the path consisting of the points 0, x2, . . . , x_{Mn−1}, 1. For any 1 < i < Mn denote by p′_i a point along ν∗ which is the closest to pi. Observe that ∥pi − p′_i∥ ≤ δ/2. This, combined with the fact that ∥xi − pi∥ ≤ r′_n, implies that ∥xi − p′_i∥ ≤ δ/2 + r′_n ≤ δ, which then guarantees that M(xi) ≤ (1 + ε)M(ν∗), by definition of ν∗. Similar arguments can be applied to the edges of νn. Thus, M(νn) ≤ (1 + ε)M(ν∗).

In order to reason about the monotonicity of νn we make the following crucial observation:

Lemma 2. There exists a fixed constant β ∈ (0, 1) such that for any 1 ≤ i < Mn it holds that pi ⪯_{βrn} pi+1.

Proof. First, assume that there exists 1 ≤ j < k for which pi, pi+1 ∈ Im(νj,j+1), and note that by definition the straight line from pi to pi+1 is a subsegment of νj,j+1. Now, recall that qj ⪯_{δ/2} qj+1 and ∥pi − pi+1∥ = r′_n. Additionally, denote Lj,j+1 = ∥qj − qj+1∥ and note that this value is a constant independent of n. Finally, let βj be the maximal value for which pi ⪯_{βj rn} pi+1, and observe that Lj,j+1/r′_n = (δ/2)/(βj rn). Thus, βj = ψδ/(2(2 + ψ)). Now consider the case where pi ∈ Im(νj,j+1), pi+1 ∈ Im(νj+1,j+2).
Without loss of generality, assume that ∥pi − qj+1∥ ≥ ∥pi+1 − qj+1∥. Thus, pi ⪯_{βj rn/2} qj+1, which implies that pi ⪯_{βj rn/2} pi+1. Finally, define β = min1≤j