The Critical Radius in Sampling-based Motion Planning

Kiril Solovey* and Michal Kleinbort†

December 27, 2018

arXiv:1709.06290v3 [cs.RO] 26 Dec 2018

Abstract

We develop a new analysis of sampling-based motion planning in Euclidean space with uniform random sampling, which significantly improves upon the celebrated result of Karaman and Frazzoli (2011) and subsequent work. Particularly, we prove the existence of a critical connection radius proportional to Θ(n^{−1/d}) for n samples and d dimensions: Below this value the planner is guaranteed to fail (similarly shown by the aforementioned work, ibid.). More importantly, for larger radius values the planner is asymptotically (near-)optimal. Furthermore, our analysis yields an explicit lower bound of 1 − O(n^{−1}) on the probability of success. A practical implication of our work is that asymptotic (near-)optimality is achieved when each sample is connected to only Θ(1) neighbors. This is in stark contrast to previous work, which requires Θ(log n) connections, induced by a radius of order (log n/n)^{1/d}. Our analysis is not restricted to PRM and applies to a variety of “PRM-based” planners, including RRG, FMT∗ and BTT. Continuum percolation plays an important role in our proofs. Lastly, we develop similar theory for all the aforementioned planners when constructed with deterministic samples, which are then sparsified in a randomized fashion. We believe that this new model, and its analysis, is interesting in its own right.

1 Introduction

Motion planning is a fundamental problem in robotics concerned with allowing autonomous robots to efficiently navigate in environments cluttered with obstacles. Although motion planning originated as a strictly theoretical problem in computer science [18], nowadays it is applied in various fields. Notably, motion planning arises in the coordination of multiple autonomous vehicles [12], steering surgical needles [5], and planning trajectories of spacecraft in orbit [47], to name just a few examples.

Motion planning is notoriously challenging from a computational perspective due to the continuous and high-dimensional search space it induces, which accounts for the structure of the robot, the physical constraints that it needs to satisfy, and the environment in which it operates. Low-dimensional instances of the problem can typically be solved by efficient and complete algorithms [17], and the same applies to several high-dimensional instances (see, e.g., [1, 44]). However, in general, motion planning is computationally intractable (see, e.g., [7, 35, 42]).

Nowadays the majority of practical approaches for motion planning capture the connectivity of the free space by sampling (typically in a randomized fashion) configurations and connecting nearby samples to form a graph data structure. Although such sampling-based planners are inherently incomplete, i.e., they cannot detect situations in which a solution (collision-free path) does not exist, most have the desired property of being able to find a solution eventually, if one exists. That is, a planner is probabilistically complete (PC) if the probability of finding a solution tends to 1 as the number of samples n tends to infinity.

*Department of Aeronautics and Astronautics, Stanford University, CA 94305, US. email: kirilsol@stanford.edu
†Blavatnik School of Computer Science, Tel Aviv University, Israel. email: balasmic@post.tau.ac.il
Moreover, some recent sampling-based techniques are also guaranteed to return high-quality solutions that tend to the optimum as n diverges—a property called asymptotic optimality (AO). Quality can be measured in terms of energy, length of the plan, clearance from obstacles, etc.

An important attribute of sampling-based planners, which dictates both the running time and the quality of the returned solution, is the number of neighbors considered for connection for each added sample. In many techniques this number is directly affected by a connection radius rn: Decreasing rn reduces the number of neighbors. This in turn reduces the running time of the planner for a given number of samples n, but may also reduce the quality of the solution or its availability altogether. Thus, it is desirable to come up with a radius rn that is small, but not to the extent that the planner loses its favorable properties of PC and AO.

1.1 Contribution

We develop a new analysis of PRM [24] for uniform random sampling in Euclidean space, which relies on a novel connection between sampling-based planners and continuum percolation (see, e.g., [11]). Our analysis is tight and proves the existence of a critical connection radius r∗_n = γ∗ n^{−1/d}, where γ∗ > 0 is a constant (in particular, 0.4 ⩽ γ∗ ⩽ 0.6 for all d ⩾ 2) and d ⩾ 2 is the dimension: If rn < r∗_n then PRM is guaranteed to fail. Above the threshold, i.e., when rn > r∗_n, PRM is AO for the bottleneck cost, and asymptotically near-optimal¹ (AnO) with respect to the path-length cost. Furthermore, our analysis yields concrete bounds on the probability of success, which is lower-bounded by 1 − O(n^{−1}). Notice that this bound is comparable to the one obtained in [48] (see Section 2), although we show this for a much smaller radius.

Our analysis is not restricted to PRM and applies to a variety of planners that maintain PRM-like roadmaps, explicitly or implicitly. For instance, when rn is above the threshold, FMT∗ [21] is AnO with respect to the path-length cost, while BTT [43] is AO with respect to the bottleneck cost. RRG [23] behaves similarly for the two cost functions. Our results are also applicable to multi-robot motion planners such as the recently introduced dRRT∗ [10], and M∗ [51] when applied to a continuous domain. See Figure 1 for additional PRM-based planners to which our analysis is applicable, and which are mentioned further on.

A practical implication of our work is that AO (or AnO), under the regime of uniform random sampling, can be achieved even when every sample is connected to Θ(1) neighbors (see the short calculation at the end of this subsection). This is in stark contrast with previous work, e.g., [21, 23, 41, 45], which provided a rough estimate of this number that is of order O(log n). Interestingly, our Θ(1) bound for uniform samples is comparable to the best known bound for deterministic samples [22].

Lastly, we also study the asymptotic properties of the aforementioned planners when constructed using a deterministic point process, rather than the uniform random sampling mentioned so far. We introduce an analysis which shows that the graphs constructed using such deterministic samples can be sparsified in a randomized fashion while maintaining AO or AnO of the planners. We term this regime semi-deterministic sampling.

¹AnO means that the cost of the solution tends to at most a constant factor times the optimum, compared with AO in which this constant is equal to one.
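To see why the two radius regimes translate into Θ(1) versus Θ(log n) neighbors, consider the expected degree of a sample drawn from a point process of density n in the unit d-dimensional cube. This back-of-the-envelope calculation is ours and ignores boundary effects; ζ_d denotes the volume of the unit d-dimensional Euclidean ball.

```latex
\[
\mathbb{E}[\deg] \approx n\,\zeta_d\, r_n^d,
\qquad
\begin{cases}
r_n = \gamma\, n^{-1/d}
  & \Longrightarrow\ \mathbb{E}[\deg] \approx \zeta_d\,\gamma^d = \Theta(1),\\[4pt]
r_n = \gamma\,\bigl(\tfrac{\log n}{n}\bigr)^{1/d}
  & \Longrightarrow\ \mathbb{E}[\deg] \approx \zeta_d\,\gamma^d \log n = \Theta(\log n).
\end{cases}
\]
```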
2 single-robot planners LBT-RRT MPLB FMT* RRG PRM BTT dRRT dRRT* M* multi-robot planners length cost bottleneck cost BFMT* SPARS2 RSEC Figure 1: The PRM dynasty. Single robot and multi-robot planners, are bounded into the blue and red frames, respectively. Planners inside the magenta frame aim to minimize the path-length cost, whereas BTT is designed for bottleneck cost, and RRG works for both costs (see more information below). Roughly speaking, arrow from “A” to “B” indicates that theoretical properties of “A” extend to “B”, or that the latter maintains “A” as a substructure. 1.2 Organization In Section 2 we discuss related work. Then we proceed to basic definitions and problem statement in Section 3. In that section we also include a precise description of the robotic system our theory is developed for. Our central contribution (Theorem 1), which states the existence of a critical radius r∗ n with respect to PRM, is presented in Section 4. This section also contains an outline of the proof. In Section 5 we lay the foundations of the proof and discuss important aspects of continuum percolation, that would later be employed in the main proof, which appears in Section 6. In Section 7 we extend the analysis to FMT∗, BTT, RRG and discuss the implications of our results to the multi-robot planners dRRT∗and M∗. In Section 8 we present experimental work, which validates our theory. We conclude this paper with directions for future work (Section 9). In the appendix we include proofs that were omitted from the main document and a table with values of γ∗. We also present in the appendix a new analysis for the aforementioned planners when constructed with deterministic samples, which are then sparsified in a randomized fashion. We believe that this new model, and its analysis, is interesting in its own right. 2 Related work This section is devoted to a literature review of sampling-based planners with emphasis on their theoretical analysis. Sampling-based techniques were first described in the mid 90’s and several prominent examples of that time, that are still used extensively today, include PRM [24], RRT [29], and EST [20]. Those planners are also known to be PC (see [25, 28, 27, 20]). More recent work has been concerned with the quality of the returned solution. [32] were one of the first to address this matter in a mathematically-rigorous manner. This work proved that RRT can produce arbitrarily-bad (long) paths with non-negligible probability. The influential work of [23] laid the theoretical foundations for analyzing quality in sampling- based planning. The authors introduced a set of techniques to prove AO. Using their frame- work, they showed that the following algorithms are AO: PRM∗, which is a special case of PRM (throughout this work we will refer to the more general algorithm PRM rather than PRM∗) with a specific value of the connection radius rn; an AO variant of RRT [29] termed RRT∗; RRG, which can be viewed as a combination between RRT and PRM. The analysis in [23] establishes that 3 rn = Θ (log n/n)1/d guarantees AO, where the configuration space of the robot is assumed to be [0, 1]d. This indicates that the expected number of neighbors used per vertex should be O(log n). The authors also proved that for sufficiently-small radii of order O(n−1/d) the planner is guaranteed to fail (asymptotically) in finding any solution. Following the breakthrough of [23], other AO planners have emerged (see e.g., [2, 4, 14]). 
[21] introduced FMT∗, which is a single-query planner that traverses an implicitly-represented PRM graph, and is comparable in performance to RRT∗. The authors refined the proof technique of [23], which allowed them to slightly reduce the connection radius rn necessary to FMT∗and PRM to achieve AO. We do mention that here again rn = Θ (log n/n)1/d . BFMT∗, which is a bidirectional version of FMT∗, was introduced by [48]. In this paper the authors also proved that the success rate of PRM, FMT∗, BFMT∗can be lower bounded by 1−O  n−η/d log−1/d n  , where η > 0 is a tuning parameter. In this context, we also mention the work of [9], which bounds the success rate with an expression that depends on the amount of deviation from the optimum. A recent work by [45] developed a different method for analyzing sampling-based planners. It exploits a connection with random geometric graphs (RGGs), which have been extensively studied (see, e.g., [33]). Their work shows that one can slightly reduce the PRM and FMT∗radius obtained in [21]. Furthermore, the connection with RGGs yields additional analyses of different extensions of PRM, which have not been analyzed before in a mathematically-rigorous setting. A number of methods have been developed to reduce the running time or space requirements of existing planners by relaxing AO constraints to AnO. For instance, LBT-RRT [37] interpo- lates between the quick RRT and the AO yet slower RRG, according to a predefined parameter. MPLB [36] can be viewed as a relaxation of FMT∗. SPARS2 [8] and RSEC [38] perform a spar- sification of PRM in an online or offline fashion, respectively, to reduce the space footprint of the produced roadmap. 2.1 Extensions The aforementioned papers deal mainly with the cost function of path length. Two recent works [41, 43] considered the bottleneck-pathfinding problem in a sampling-based setting and introduced the BTT algorithm, which traverses an implicitly-represented PRM graph. The bottleneck-cost func- tion, which arises for instance in high-clearance multi-robot motion [43] and Fr´echet matching between curves [19], is defined as follows: Every robot configuration x is paired with a value M(x), and the cost of a path is the maximum value of M along any configuration on the path. It was shown [41, 43] that BTT is AO, with respect to bottleneck cost, for the reduced connection radius that was obtained by [45]. The results reported until this point have dealt exclusively with holonomic robotic systems. Two recent papers by [39, 40] develop the theoretical foundations of PRM and FMT∗-flavored planners when applied to robots having differential constraints. [30] develop an AO algorithm that does not require a steering function, as PRM for instance does. Interestingly, the authors of the last paper also obtain bounds on the rate of convergence of their algorithm. 3 Preliminaries We provide several basic definitions that will be used throughout the paper. Given two points x, y ∈Rd, denote by ∥x−y∥the standard Euclidean distance. Denote by Br(x) the d-dimensional ball of radius r > 0 centered at x ∈Rd and Br(Γ) = S x∈Γ Br(x) for any Γ ⊆Rd. Similarly, given a curve π : [0, 1] →Rd define Br(π) = S τ∈[0,1] Br(π(τ)). Given a subset D ⊂Rd we denote by |D| its Lebesgue measure. All logarithms are at base e. 4 3.1 Motion planning Denote by C the configuration space of the robot, and by F ⊆C the free space, i.e., the set of all collision free configurations. 
Though our proofs may be extended to more complex robotic sys- tems (see discussion in Section 9), in this work we investigate the geometric (holonomic) setting of the problem in which no constraints are imposed on the motion of the robot. Additionally, we assume that C is some subset of the Euclidean space. In particular, C = [0, 1]d ⊂Rd for some fixed d ⩾2. We also assume that for any two configurations x, x′ ∈C the robot is capable of following precisely the straight-line path from x to x′. Given start and target configurations s, t ∈F, the problem consists of finding a continuous path (curve) π : [0, 1] →F such that π(0) = s, π(1) = t. That is, the robot starts its motion along π on s, and ends in t, while remaining collision free. An instance of the problem is defined for a given (F, s, t), where s, t ∈F. 3.2 Cost function It is usually desirable to obtain paths that minimize a given criterion. In this paper we consider the following two cost functions. Definition 1. Given a path σ, its length is cℓ(σ) = sup n∈N+,0=τ1⩽...⩽τn=1 n X i=2 ∥π(τi) −π(τi−1)∥. Definition 2. Given a path σ, and a cost map M : C →R, its bottleneck cost is cb(σ, M) = max τ∈[0,1] M(π(τ)). We proceed to describe the notion of robustness, which is essential when discussing properties of sampling-based planners. Given a subset Γ ⊂C and two configurations x, y ∈Γ, denote by ΠΓ x,y the set of all continuous paths, whose image is in Γ, that start in x and end in y, i.e., if π ∈ΠΓ x,y then π : [0, 1] →Γ and π(0) = x, π(1) = y. Definition 3. Let (F, s, t) be a motion-planning problem. A path π ∈ΠF s,t is robust if there exists δ > 0 such that Bδ(π) ⊂F. We also say that (F, s, t) is robustly feasible if there exists such a robust path. Definition 4. The robust optimum with respect to cℓis defined as c∗ ℓ= inf  cℓ(π) π ∈ΠF s,t is robust . The corresponding definition for the bottleneck cost is slightly more involved. Definition 5. Let M be a cost map. A path π ∈ΠF s,t is M-robust if it is robust and for every ε > 0 there exists δ > 0 such that for every x ∈Bδ(π), M(x) ⩽(1 + ε)cb(π, M). We also say that M is well behaved if there exists at least one M-robust path. Definition 6. The robust optimum with respect to cb is defined as c∗ b = inf  cb(π, M) π ∈ΠF s,t is M-robust . 5 3.3 Poisson point processes We draw our main analysis techniques from the literature of continuum percolation, where point samples are generated with the following distribution. Thus we will use this point distribution in PRM, which would be formally defined in the following section. Definition 7 [11]. A random set of points X ⊂Rd is a Poisson point process (PPP) of density λ > 0 if it satisfies the conditions: 1. For mutually disjoint domains D1, . . . , Dℓ⊂Rd, the random variables |D1∩X|, . . . , |Dℓ∩ X| are mutually independent. 2. For any bounded domain D ⊂Rd we have that for every k ⩾0, Pr[|X ∩D| = k] = e−λ|D| (λ|D|)k k! . Informally, property (1) states that the number of points from X in any two disjoint sets Di, Dj ⊂Rd is independent. Another useful property that follows from (2) is that the expected number of points in a certain region is known precisely and corresponds to the volume of this region. That is, for any D ⊂Rd it holds that E(|X ∩D|) = λ|D|. The following provides a simple recipe for generating PPP. Claim 1 [11]. Let N be a Poisson random variable with mean λ. For a given z ∈Zd draw a sample Nz ∈N and define Xz = {X1, . . . , XNz} to be Nz points chosen independently and uniformly at random from z + [0, 1]d. 
Then X = ⋃_{z∈Z^d} X_z is a PPP of density λ.

It will be convenient to think about PRM as a subset of the following random geometric graph (RGG). We will describe various properties of this graph in later sections.

Definition 8. [33] Let X ⊂ R^d be a PPP. Given r > 0, the random geometric graph G(X; r) is an undirected graph with vertex set X. Given two vertices x, y ∈ X, (x, y) ∈ G(X; r) if ∥x − y∥ ⩽ r.

4 Analysis of PRM

In this section we provide a mathematical description of PRM, which essentially maintains an underlying RGG with PPP samples. We proceed to describe our main contribution (Theorem 1), which is concerned with the conditions under which PRM converges to the (robust) optimum. We then provide an outline of the proof, in preparation for the following sections.

Recall that the configuration space of the robot is represented by C = [0, 1]^d, and the free space is denoted by F ⊆ C. The motion-planning problem (F, s, t) will remain fixed throughout this section. Recall that PRM accepts as parameters the number of samples n ∈ N+ and a connection radius rn. Denote by Xn a PPP with mean density n. In relation to the definitions of the previous section, the graph data structure obtained by the preprocessing stage of PRM can be viewed as an RGG. For instance, when F = C, the graph obtained by PRM is precisely G(Xn ∩ [0, 1]^d; rn). In the more general case, when F ⊂ C, PRM produces the graph G(Xn ∩ F; rn). As F can be non-convex, we emphasize that the latter notation describes the maximal subgraph of G(Xn; rn) such that its vertices and edges are contained in F.

In the query stage, recall that PRM accepts two configurations s, t ∈ F, which are then connected to the preprocessed graph. Here we slightly diverge from the standard definition of PRM in the literature. In particular, instead of using the same radius rn when connecting s, t, we use the (possibly larger) radius r^{st}_n. The graph obtained after the query is formally defined below:

Definition 9. The PRM graph Pn is the union of G(Xn ∩ F; rn) and the supplementary edges

⋃_{y∈{s,t}} { (x, y) : x ∈ Xn ∩ B_{r^{st}_n}(y) and xy ⊂ F }.

Remark. We emphasize that the larger radius r^{st}_n is only used when s, t are connected to the preprocessed RGG.

We reach our main contribution.

Theorem 1. Suppose that (F, s, t) is robustly feasible. Then there exists a critical radius r∗_n = γ∗ n^{−1/d}, where γ∗ is a constant (see supplementary material), such that the following holds:

i. If rn < r∗_n and r^{st}_n = ∞ then PRM fails (to find a solution) a.a.s.²

ii. Suppose that rn > r∗_n. There exists β0 > 0 such that for r^{st}_n = β log^{1/(d−1)}(n) / n^{1/d}, where β ⩾ β0, and any ε > 0 the following holds with probability 1 − O(n^{−1}):

1. Pn contains a path πn ∈ Π^F_{s,t} with cℓ(πn) ⩽ (1 + ε)ξ c∗_ℓ, where ξ is independent of n;

2. If M is well behaved then Pn contains a path π′_n ∈ Π^F_{s,t} with cb(π′_n, M) ⩽ (1 + ε)c∗_b.

Remark: Theorem 1 also holds, with a slight modification, when the PPP is replaced with the standard binomial point process (BPP), in which the number of sampled points is fixed a priori. The only difference is that the probability of success is reduced from 1 − O(n^{−1}) to 1 − O(n^{−1/2}). This follows from Lemma 1 in [13], which states that if a property holds for a PPP then it also holds for a BPP, albeit with slightly smaller probability.

4.1 Outline of proof

For the remainder of this section we briefly describe our technique for proving this theorem, in preparation for the full proof, which is given in Section 6.
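Before doing so, the following minimal sketch makes Claim 1 and Definition 9 concrete by sampling a PPP of density n in [0, 1]^d and building the PRM graph Pn in the obstacle-free case (F = C, so the collision checks xy ⊂ F are vacuous). The function names and the constants gamma and beta are illustrative choices of ours, not values prescribed by the analysis.

```python
import numpy as np
from scipy.spatial import cKDTree

def sample_ppp_unit_cube(n, d, rng):
    """Poisson point process of density n restricted to [0, 1]^d (Claim 1 with a single cell)."""
    m = rng.poisson(n)               # the number of points is Poisson(n)
    return rng.random((m, d))        # the points are i.i.d. uniform in the unit cube

def prm_graph(samples, s, t, r_n, r_st):
    """Edges of the PRM graph P_n (Definition 9) in an obstacle-free unit cube.

    Roadmap vertices are connected at radius r_n; the query configurations s, t
    are connected with the (possibly larger) radius r_st.
    """
    tree = cKDTree(samples)
    edges = list(tree.query_pairs(r_n))          # RGG edges among the samples
    for label, q in (("s", s), ("t", t)):
        for idx in tree.query_ball_point(q, r_st):
            edges.append((label, idx))           # supplementary edges to s and t
    return edges

# Usage: radii of the form appearing in Theorem 1.ii, with illustrative constants.
rng = np.random.default_rng(0)
n, d, gamma, beta = 20000, 2, 1.0, 2.0
r_n = gamma * n ** (-1.0 / d)
r_st = beta * np.log(n) ** (1.0 / (d - 1)) / n ** (1.0 / d)
X = sample_ppp_unit_cube(n, d, rng)
E = prm_graph(X, s=np.full(d, 0.1), t=np.full(d, 0.9), r_n=r_n, r_st=r_st)
print(len(X), "samples,", len(E), "edges")
```

With rn = γ n^{−1/d}, each sample gains only a constant number of neighbors in expectation, which is the source of the computational savings discussed above.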
The critical radius r∗ n defined above, coincides with the percolation threshold, which determines the emergence of a connected component of G that is of infinite size. In particular, if rn < r∗ n then G(Xn; rn) breaks into tiny connected components of size O(log n) each. Thus, unless s, t are infinitesimally close, no connected component can have both s and t simultaneously. More interestingly, the radius of rn > r∗ n leads to the emergence of a unique infinite compo- nent of G(Xn; rn). That is, in such a case one of the components of G(Xn; rn) must contain an infinite number of vertices (Section 5.1). Denote this component by C∞. In contrast to G(Xn; rn), which is defined for the unbounded space Rd, our motion-planning problem is bounded to C = [0, 1]d. Thus, the next step is to investigate the properties of G(Xn; rn) when restricted to [0, 1]d (Section 5.2). Denote by Cn the largest connected component of C∞∩ [0, 1]d. This structure plays a key role in our proof (see Section 6): With high probability (to be defined), there exist vertices of Cn that are sufficiently close to s and t, respectively, so that a connection between the two vertices can be made through Cn. Of course, this overlooks the fact that some portions of Cn lie in forbidden regions of C. Thus, we also have to take the structure of F into consideration. To do so, we rely on [34] to prove that any small subset of [0, 1]d must contain at least one point of Cn (see lemmata 2 and 3). This allows us to trace the robust optimum (and collision-free) path with points from Cn. The final ingredient, which allows to bound the path length along G(Xn; rn)∩[0, 1]d, is Theo- rem 4. It states that the distance over this graph is proportional to the Euclidean distance between 2Let A1, A2, . . . be random variables in some probability space and let B be an event depending on An. We say that B occurs asymptotically almost surely (a.a.s., in short) if limn→∞Pr[B(An)] = 1. 7 the end points. This also ensures that the trace points from Cn can be connected with collision- free paths over the graph. To conclude, in Section 5 we provide background on continuum percolation in unbounded and bounded domains, and prove two key lemmata (Lemma 2 and Lemma 3). In Section 6 we return to the setting of motion planning and utilize the aforementioned results in the proof of Theorem 1. 5 Elements of continuum percolation In this section we describe some of the properties of the unbounded graph G(Xn; rn) that will be employed in our analysis in the following section. 5.1 The basics A fundamental question is when G contains an infinite connected component around the origin. Definition 10. The percolation probability θ(n, r) is the probability that the origin o ∈Rd is contained in a connected component of G(Xn ∪{o}; r) of an infinite number of vertices. That is, if Co denotes the set of vertices connected to o in the graph, then θ(n, r) = Pr(|Co| = ∞). We say that a graph percolates iff θ(n, r) > 0. Note that the selection of the origin is arbitrary, and the following result can be obtained for any x ∈Rd alternative to o. Theorem 2. [16, Theorem 12.35] There exists a critical radius r∗ n = γ∗n−1/d, where γ∗is a constant, such that θ(n, rn) = 0 when rn < r∗ n, and θ(n, rn) > 0 when rn > r∗ n. The following lemma states that the infinite connected component exists with probability strictly 0 or 1. Lemma 1. Let ψ(n, r) be the probability that G(Xn; r) contains an infinite connected component, i.e., without conditioning on any specific additional vertex. 
Then ψ(n, r) = 0 when θ(n, r) = 0 and ψ(n, r) = 1 when θ(n, r) > 0. Proof. Suppose that θ(n, r) = 0, and for any x ∈Rd, denote by θx(n, r) the percolation proba- bility of G(Xn ∪{x}; r), and note that θx(n, r) = 0. Define Zr =  r · z|z ∈Zd , and observe that for any y ∈Rd there exists z ∈Zr such that y ∈Br(z). By definition, G(Xn ∪{x}; r) per- colates, i.e., the infinite component touches Br(x), iff there exists y ∈C∞such that ∥x−y∥⩽r. By definition of Zr, if the latter event occurs then there exists z ∈Zr such that ∥z −y∥⩽r. Thus, by applying the union bound, and noting that a sum of countable number of zeros is still zero, we establish that ψ(n, r) = Pr h ∃x ∈Rd, G(Xn ∪{x}; r) percolates i = Pr [∃z ∈Zr, G(Xn ∪{z}; r) percolates] ⩽ X z∈Zr θz(n, r) = 0. For the other direction we employ Kolmogorov’s zero-one law (see, e.g., [6, Theorem 1, p.36]). Informally, it states that an event, e.g., existence of an infinite connected component in G, that occurs independently of any finite subset of independent random variables, e.g., points from Xn, has a ability of either 0 or 1. Thus, as ψ(n, r) ⩾θ(n, r) it immedietly follows that ψ(n, r) = 1 when θ(n, r) > 0. 8 The following theorem establishes that the infinite connected component is unique. Theorem 3. [31, Theorem 2.3] With probability 1, G(Xn; r) contains at most one infinite con- nected component. 5.2 Bounded domains We study different properties of G(Xn; r) when it is restricted to the domain [0, 1]d. In case that θ(n, r) > 0, we use C∞to refer to the infinite connected component of the unbounded graph G(Xn; r). Note that C∞exists (Lemma 1) and is unique (Theorem 3) with probability 1. Denote by Cn the largest connected component of C∞∩[0, 1]d. By definition, Cn is also a subgraph of G(Xn ∩[0, 1]d; rn). The following lemma shows that with high probability all the points from C∞, that are sufficiently close to the center of [0, 1]d, are members of Cn. Lemma 2. Let rn > r∗ n. Define Hn = [0, 1]d \ B1/ log n  Rd \ [0, 1]d . Denote by E1 n the event that C∞∩Hn ⊂Cn. Then there exist n0 ∈N and α > 0 such that for any n > n0 it holds that Pr[E1 n] ⩾1 −exp  −αn1/d log−1 n  . Proof. This statement is an adaptation of Lemma 8 and Theorem 2 of [34]. We mention that [34] uses a slightly different, but nevertheless equivalent model. While we consider an RGG that is bounded to [0, 1]d and PPP of density n, they consider the domain [0, n]d with density 1. It is only a matter of rescaling and variable substitution to import their results to our domain. Let C′ 1, . . . , C′ k denote the connected components of C∞∩[0, 1]d, and set Cn to be the component C′ i with the largest number of vertices. Without loss of generality, Cn = C′ 1. See illustration in Figure 2. Observe that if C′ i ⊂C∞then it must have at least one vertex x′ i ∈C′ i that lies closely to the boundary of [0, 1]d, i.e., ∥x −∂([0, 1]d)∥⩽rn, as otherwise C′ i will not be able to connect to the rest of C∞. Thus, it must be the case that diam(C′ i) ⩽log−1 n −rn in order to be able to reach Hn, where diam(D) = supx,x′∈D ∥x −x′∥defines the diameter of a given D ⊂Rd. We shall show that this does not hold. Theorem 2 in [34] states that for any φn large enough there exist α′, n0 such that for any n > n0 it holds with probability at least 1 −exp −α′n1/dφn  that diam(C′ i) < φn for any 1 < i ⩽k. By setting α = α′/2, φn = 1/(2 log n) the conclusion immediately follows. Observe that for any fixed point x internal to [0, 1]d there exists n0 ∈(0, ∞) such that for any n > n0 it holds that x ∈Hn. 
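As an empirical companion to Theorem 2 and to the role of Cn, the following sketch estimates what fraction of the samples lies in the largest connected component of G(Xn ∩ [0, 1]^d; γ n^{−1/d}) for a few values of γ. This is an illustration of ours, not an experiment from the paper: the largest component of the bounded graph is used as a proxy for Cn, and the chosen values of n, d, and γ are arbitrary.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def largest_component_fraction(n, d, gamma, rng):
    """Fraction of PPP samples in [0,1]^d lying in the largest component of G(X_n; gamma * n^(-1/d))."""
    pts = rng.random((rng.poisson(n), d))
    r = gamma * n ** (-1.0 / d)
    pairs = np.array(list(cKDTree(pts).query_pairs(r)), dtype=int)
    m = len(pts)
    if len(pairs) == 0:
        return 1.0 / m                           # every vertex is isolated
    adj = coo_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])), shape=(m, m))
    _, labels = connected_components(adj, directed=False)
    return np.bincount(labels).max() / m

rng = np.random.default_rng(1)
for gamma in (0.4, 0.8, 1.2, 1.6):
    print(gamma, largest_component_fraction(n=50000, d=2, gamma=gamma, rng=rng))
```

For γ well below the critical constant the reported fraction should remain negligible, whereas above it a single component quickly captures a constant fraction of the samples, mirroring the behavior reported in Table 1 below.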
The following lemma bounds the probability of having at least one point from Cn in a (small) subregion of Hn. Note that the value β corresponds to the value used in Theorem 1. Lemma 3. Let rn > r∗ n. Define H′ n ⊂Hn to be a hypercube of side length h′ n = β log1/(d−1) n n1/d . Denote by E2 n the event that H′ n ∩Cn ̸= ∅. Then there exists n0 ∈N and β0 > 0 such that for any n > n0, β > β0 it holds that Pr[E2 n|E1 n] ⩾1 −n−1. 9 1 1 log n Hn [0, 1]d Cn = C′ 1 C′ 2 C′ 3 C′ 4 C′ 5 C∞ Figure 2: Illustration for Lemma 2. The outer cube represents [0, 1]d, whereas the inner cube depicted with dashed boundary is Hn. Observe that Hn has side length of 1−2/ log n. The purple, blue and red graphs combined describe C∞, whereas Cn is depicted in blue, and C′ 2, C′ 3, C′ 4, C′ 5 are in red. Proof. Define Gn = G(Xn, rn) ∩H′ n, n′ = E[|Xn ∩H′ n|] and observe that n′ = n · |H′ n| = βd logd/(d−1) n. We treat Gn as a subset of Rd in order to apply a rescaling argument. Observe that (scalar) multiplication of every point of H′ n with 1/h′ n yields a translation of [0, 1]d. We will use the superscript 1/h′ n to describe this rescaling to a given object. For instance, applying the same transformation on Gn yields the graph G1/h′ n n , which has the same topology as Gn. Denote by xℓ∈H′ n the (lexicographically) smallest point of H′ n, i.e., xℓ= (1/ log n, . . . , 1/ log n). Notice that G1/h′ n n −x1/h′ n ℓ = G(X 1/h′ n n ∩[0, 1]d, rn/h′ n) = G(Xn′ ∩[0, 1]d, rn′), where the minus sign in the left-hand side represents a translation by a vector. This implies that Gn, which is defined over H′ n behaves as G(Xn′ ∩[0, 1]d; rn′). This allows to leverage Theorem 1 from [34], which bounds the number of vertices from the unbounded component. In particular, there exists β0 > 0 such that Pr[C∞∩H′ n = Θ(n′)] ⩾1 −exp  −β−(d−1) 0 n′ d−1 d  = 1 −exp  −β−(d−1) 0 βd−1 log n  ⩾1 −exp(−log n) = 1 −n−1. While this is an overkill for our purpose, it does the job in proving that Pr[C∞∩H′ n ̸= ∅] ⩾ 1 −n−1. As we assume that E1 n holds (Lemma 2), it follows that Cn ∩H′ n ̸= ∅holds with probability at least 1 −n−1. The following statement allows to bound the graph distance between two connected vertices. We endow every edge of the graph with a length attribute that represents the Euclidean distance between the edges’ endpoints. For every two vertices x, x′ of G, dist(G, x, x′) denotes the length of the shortest (weighted) path on G between the two vertices. 10 Theorem 4. [13, Theorem 3] Let rnr∗ n. There exists a constant ξ ⩾1, independent of n, such that Pr[E3 n] = 1 −O(n−1), where the event E3 n is defined as follows: For any two vertices x, x′ in the same connected component of G(Xn ∩[0, 1]d; rn), with ∥x −x′∥= ω(rn), it holds that dist(Gn, x, x′) ⩽ξ∥x −x′∥. 6 Proof of Theorem 1 Proofs for all the three settings of the main theorem are given individually in the subsections below. 6.1 Case i Here we provide proof for the first (and easy) part of the theorem, which states that PRM fails when rn < γ∗n−1/d, even with rst n = ∞. We mention that our proof is a simpler and shorter version of a similar proof that was given in [23] for a slightly different setting. We show below that in the subcritical regime, i.e., rn < r∗ n, the graph breaks into many small connected components. In particular, the probability of having the largest connected component of size m decays exponentially in m. Denote by L(G) the size of the largest connected component in G. Proposition 1. Let rn < r∗ n. Then L(Gn) = O(log n) a.a.s. Proof. 
First, we start by stating that there exist a constant ζ > 0 and an integer m0 such that for all m ⩾m0 it holds that Pr[L(Gn) ⩾m] ⩽ne−mζ. This is a simplified version of Proposition 11.2 in [33], which is given for n uniformly sampled points, to the case of PPP, which we have here. In particular, the original proof relies on a relation between these two distributions, which transforms uniform sampling into a PPP and induces an additional factor that is not necessary in our setting. In particular, the factor “exp(−µn)” which appears in Equation 11.1 in [33] should be eliminated. To conclude the proof, we set m = α · log n, where α > 1/ζ, similarly to the proof of Theorem 11.1, Equation 11.4 in [33]. Observe that ne−mζ →0 as n tends to ∞. Now, consider the configuration space C = [0, 1]2 depicted in Figure 3 (similar examples can be devised for any d ⩾2). The white and blue regions represent the free space. Observe that any path connecting s to t must go through the blue region, whose width is greater than 1/2. Also note that any point contained in the blue region cannot be connected by an edge from s or t, which deems the large value of rst n as irrelevant. Suppose that Gn = G(Xn, rn) contains a path πn (black dashed curve) connecting s and t, and denote by π′ n ⊂πn the subpath that is contained in the blue region. Also, denote by s′ and t′ the first and last points along π′ n, respectively. Obviously 1/2 ⩽∥s′ −t′∥⩽cℓ(πn). Then, the number of edges of Gn, which induce π′ n, is at least ∥s′ −t′∥/rn = Ω(n1/d). However, this is in contradiction with the fact that every connected component of Gn is of logarithmic size in n (Proposition 1). 6.2 Case ii.1 We provide a full proof for the positive setting with length cost. The proof for the bottleneck case, which appears later on, is very similar to the length case. Suppose that rn > r∗ n, rst n = β log1/(d−1) n n1/d , β > β0. For simplicity, we set rn = γn−1/d, where γ > γ∗. By Lemma 1 and Theorem 3, G(Xn; rn) contains a unique infinite connected 11 s t πn s′ t′ > 1/2 Figure 3: Scenario for the proof of Theorem 1.i. component C∞. Recall that Cn denotes the largest connected component of C∞∩[0, 1]d. Also note that rst n = h′ n, where h′ n is defined in Lemma 3. Recall that c∗ ℓdenotes the robust optimum, with respect to path length (Definition 4). Fix ε > 0. By definition, there exists a robust path πε ∈ΠF s,t and δ > 0 such that cℓ(πε) ⩽(1 + ε)c∗ ℓ and Bδ(πε) ⊂F. See illustration in Figure 4. We now define a sequence of k points p1, . . . , pk along πε that are separated by exactly δ/2ξ units, where ξ is as defined in Theorem 4. In particular, define k = ⌈cℓ(πε) · 2ξ/δ⌉, set p1 = s, pk = t, and assign pi along πε, such that cℓ  πi−1,i ε  = δ/2ξ, where πi−1,i ε represents the subpath of πε starting at pi−1 and ending at pi. Notice that k is finite. Claim 2. Denote by E4 n the event that for all i ∈[k] there exists qi ∈Cn such that qi ∈Brst n (pi) and qi ∈Cn. Then Pr[E4 n|E1 n] ⩾1 −k Pr[E2n|E1 n] (see definition of E1 n, E2 n in Lemma 2 and Lemma 3, respectively). Proof. Define H′ n(x) ⊂Rd to represent a d-dimensional (axis-aligned) hypercube of side length h′ n that is centered in x ∈Rd. Formally, H′ n(x) = x + h′ n ·  −1 2, 1 2 d. Observe that H′ n(pi) ⊂Hn for n large enough. Also note that H′ n(pi) ⊂Brst n (pi). Thus, the result follows from the union bound. Suppose that E1 n, E4 n are satisfied. Let q1, . . . , qk ∈Cn be the points obtained from Claim 2. These points reside in a single connected component of Gn. 
Define the path πn := s →q1 ⇝q2 ⇝. . . ⇝qk →t, where s →q1 represents a straight-line path from s to q1, and qi ⇝qi+1 represents the shortest path from qi to qi+1 in Gn. The next claim states that if in addition E3 n is satisfied (Theorem 4), then πn is also collision free. Claim 3. Suppose that E1 n, E3 n, E4 n are satisfied. Then πn ∈ΠF s,t is a path in Pn (with probabil- ity 1). Proof. First, observe that the straight-line paths s →q1, qk →t are contained in Pn due to the definition of PRM and the fact that rst n < δ. Let us consider a specific subpath qi ⇝qi+1. Recall from Claim 2 and by definition of pi, pi+1 that dist(Gn, qi, qi+1) ⩽2rst n + ξ∥pi+1 − pi∥= o(1) + δ/2, which is bounded by δ. Thus, for any point q′ i along qi ⇝qi+1 it holds that ∥q′ i −pi∥< δ, ∥q′ i −pi+1∥< δ. Thus, Im(πn) ⊂Bδ(πε) ⊂F. 12 Hn C \ F π∗ πε s t pi qi Brst n (pi) δ Figure 4: Illustration for the proof of Theorem 1.ii. The outer cube represents the configuration space, whereas the dashed cube is Hn. The gray area represents the forbidden regions. The robust feasible path π∗and πε are depicted in green and red, respectively. Observe that every point along πε is at least δ away from C \F. p1 = s, p2, . . . , pk−1, pk = t are depicted as black bullets, where k = 8. Brst n (pi) are depicted as blue circles, while the blue cross in each such circle represents qi. The next claim states that the length of πn is a constant factor from the optimum. Claim 4. Suppose that E1 n, E3 n, E4 n are satisfied. We have that cℓ(πn) ⩽(1 + ε)ξc∗ ℓ(with proba- bility 1). Proof. First, observe that by the triangle inequality, it holds that ∥qi−1 −qi∥ ⩽∥qi−1 −pi−1∥+ ∥pi −pi−1∥+ ∥qi −pi∥ ⩽∥qi−1 −pi−1∥+ cℓ(πi−1,i ε ) + ∥qi −pi∥ ⩽2rn + cℓ(πi−1,i ε ) = o(1) + cℓ(πi−1,i ε ). By Theorem 4 and the triangle inequality, it follows that cℓ(πn) = ∥s −q1∥+ dist(Gn, q1, qk) + ∥t −qk∥ ⩽2rst n + k X i=2 dist(Gn, qi−1, qi) ⩽o(1) + k X i=2 ξ∥qi−1 −qi∥ ⩽o(1) + ξ k X i=2 cℓ(πi−1,i ε ) = o(1) + ξcℓ(πε) ⩽o(1) + (1 + ε)ξc∗ ℓ. We note that in order to eliminate the o(1) summand one can fix ε but work with πε′ instead of πε, where ε′ < ε. For simplicity, we chose to leave the proof as is. 13 To conclude, Claim 3 and Claim 4 show the existence of a (collision free) path πn in Pn, whose length is at most (1 + ε)ξc∗ ℓ. It remains to bound the probability that E1 n, E3 n, E4 n hold simultaneously: Pr[E1 n ∧E3 n ∧E4 n] = Pr  E3 n ∧E4 n|E1 n  · Pr[E1 n] =  1 −Pr h E3n ∧E4n|E1 n i · Pr[E1 n] =  1 −Pr h E3n ∨E4n|E1 n i · Pr[E1 n] ⩾  1 −Pr h E3n|E1 n i −Pr h E4n|E1 n i · Pr[E1 n] By Claim 2 and Lemma 3, Pr[E4n|E1 n] ⩽k Pr h E2n|E1 n i ⩽kn−1. Also note that Pr[E3 n|E1 n] ⩾ Pr[E3 n], since Pr[E3 n|E1n] = 0. Therefore, Pr[E1 n ∧E3 n ∧E4 n] ⩾  1 −Pr h E3n|E1 n i −Pr h E4n|E1 n i · Pr[E1 n] ⩾ 1 −O(n−1) −kn−1  1 −exp  −αn1/d log−1 n  = 1 −O(n−1). 6.3 Case ii.2 We prove the asymptotic optimality of PRM for bottleneck cost, with appropriate values of rn, rst n . The proof follows very similar lines to that of the length cost. Fix ε > 0. For simplicity we assume that ε < 1 (the proof can be adapted to larger values of ε). By definition, there exists an M-robust path π′ ∈ΠF s,t and δ, δ′ > 0 such that (a) Bδ(π′) ⊂F, (b) cb(π′, M) ⩽(1 + ε/3)c∗ b, (c) and ∀x ∈Bδ′(π′) : M(x) ⩽(1 + ε/3)cb(π′, M). Now, define δ∗= min{δ, δ′}. By substituting δ with δ∗, and πε with π′ in the proof of Theorem 1.ii, it follows that if E1 n, E3 n, E4 n are satisfied, Pn contains a path π′ n ∈ΠF s,t such that Im(π′ n) ⊂Bδ′(π′). 
This implies that cb(π′ n, M) ⩽(1 + ε/3)cb(π′, M) ⩽(1 + ε/3)2c∗ b ⩽(1 + ε)c∗ b, which concludes this proof. 7 Analysis of other planners In this section we describe the implications of Theorem 1 to additional planners, which are closely related to PRM. In all the results below, the values r∗ n, γ∗, β, ξ are identical to those in Theorem 1. Furthermore, for simplicity of presentation, we assume that (F, s, t) is robustly feasible, and M is well behaved, where relevant. We first consider FMT∗and BTT, and then proceed to RRG, whose analysis is more involved. In all the following algorithms we assume that the sampling method is a PPP. 14 7.1 The FMT∗and the BTT planners The following two corollaries are direct consequences of Theorem 1 as the two algorithms FMT∗ and BTT traverse an implicitly-represented PRM graph while minimizing the cost functions cℓand cb, respectively. Corollary 1. If rn < r∗ n and rst n = ∞then FMT∗fails a.a.s. If rn > r∗ n and rst n = β log1/(d−1) n n1/d , then for any ε > 0 FMT∗returns a path πn ∈ΠF s,t with cℓ(πn) ⩽(1 + ε)ξc∗ ℓwith probability 1 −O(n−1). We remark that FMT∗can ignore certain edges of the underlying PRM graph, which may result in a path of lower quality than that obtained by PRM. However, this situation only occurs for vertices which lie within distance of rn from C \ F (see [21, Remark 3.3]). As our analysis of PRM in the previous section utilizes only vertices and edges that are far from obstacles our proofs extend to FMT∗straightforwardly. Corollary 2. If rn < r∗ n and rst n = ∞then BTT fails a.a.s. If rn > r∗ n and rst n = β log1/(d−1) n n1/d , then for any ε > 0 BTT returns a path π′ n ∈ΠF s,t with cb(π′ n, M) ⩽(1 + ε)c∗ b with probability 1 −O(n−1). 7.2 The RRG planner We consider the incremental planner RRG, which can be viewed as a cross between RRT and PRM. Due to this relation we can extend the analysis of PRM to RRG. (The term “incremental” refers to planners that generate samples one after the other, and connect the current samples to previous ones.) We introduce an incremental version of a PPP to extend our theory to RRG. Claim 5. Let N be a Poisson random variable with mean 1. At each iteration i draw a sample Ni ∈N and define Xi = {X1, . . . , XNi} to be Ni points chosen independently and uniformly at random from [0, 1]d. The set X = S 1⩽i⩽n Xi, obtained after n iterations, is a PPP of density λ = n. Note that the sum of n i.i.d. Poisson random variables with means λ1, λ2, . . . , λn is a Poisson random variable with mean λ = Pn i=1 λi. Claim 5 follows directly from this property. We now describe an adaptation of RRG for an incremental PPP. Given a start configuration s ∈F and a goal region Xgoal ⊆F, RRG initializes a roadmap with a single node s. Let η denote the constant used by RRG for local steering (see [23]). Similarly to PRM, RRG employs the connection radii rRRG n , rs n RRG. Let N be a Poisson random variable with mean 1 and let ni−1 denote the number of nodes of the constructed roadmap after i −1 iterations. At the ith iteration we draw a sample Ni ∈N and define Xi = {X1, . . . , XNi} to be Ni points chosen independently and uniformly at random from [0, 1]d. RRG will iteratively process the samples X1, . . . , XNi, as follows: For a sample Xk ∈Xi RRG will first locate the nearest node xnear to Xk. Let xnew be a node at distance at most η from xnear at the direction of Xk. RRG will attempt to connect xnew to xnear. If the connection attempt is successful, the new node xnew will be added to the roadmap. 
Then, RRG will attempt to connect xnew to all the existing nodes within a ball of radius min{rRRG n , η}. It will additionally attempt to connect xnew to s if ∥xnew −s∥⩽min{rs n RRG, µ}. Note that for the ith iteration of RRG we set n = ni−1. The connection radii rRRG n , rs n RRG that will be used during the ith iteration are fixed, as both are functions of the number of nodes in the roadmap at the beginning of the iteration. Both radii will 15 be set to decrease with the number of roadmap nodes, as in PRM. However, since we fix the radius at each iteration of RRG we use a slightly larger radius than the one obtained had we considered the current roadmap size. This will clearly keep all relevant connections and perhaps even add more. Theorem 5. Suppose that (F, s, Xgoal) is robustly feasible (we extend Definition 3 to describe a robustly feasible path (F, s, Xgoal) for RRG). Let Rn denote the roadmap constructed by RRG after n iterations with rRRG n = (1 + µ)rn, rs n RRG = (1 + µ)rst n , where rn, rst n are as in Theorem 1.ii, and µ is any positive constant. Then for any ε > 0 the following holds with probability 1 −O(n−1): 1. Rn contains a path πn ∈ΠF s,Xgoal with cℓ(πn) ⩽(1 + ε)ξc∗ ℓ; 2. If M is well behaved then, Rn contains a path π′ n ∈ΠF s,Xgoal with cb(π′ n, M) ⩽(1 + ε)c∗ b. 7.2.1 Proof of Theorem 5 We provide proof only for cℓ, as the case for cb is almost identical. In what follows we consider a robust path πε ∈ΠF s,Xgoal for which cℓ(πε) ⩽(1 + ε)c∗ ℓ. Let δ > 0 denote the clearance of πε, and L denote its length. Finally, set κ = min (δ, η)/5. Our proof uses Theorem 1 as a central ingredient, due to the following observation, which is formalized below: For n large enough, with probability 1−O(e−n), Rn contains a PRM roadmap Pn′ in the vicinity of πε, where n′ is slightly smaller than n. This in turn follows from the unique structure of RRG, which can be viewed as a combination of PRM and RRT, as RRG supplements the RRT tree with additional edges. In particular, the following corollary can be deduced from the probabilistic completeness of RRT with samples from a PPP (see Appendix B), which does not rely on the parameters rRRG n , rs n RRG. Claim 6. It is possible to tile πε with m = L/κ balls of radius κ, such that after n iterations of RRG, with probability at least 1 −a · e−bn for some constants a, b ∈R>0, every ball will contain at least one RRG vertex. The following lemma formalizes the connection between RRG and PRM. Lemma 4. Fix n and define n′ = (1 + µ)−dn. Let Rn be an RRG graph constructed after n iterations with connection radii rRRG n , rs n RRG. With probability at least 1 −a · e−b′n, for some constants a, b′ ∈R>0, the graph Rn ∩Bκ/2(π) contains a PRM graph Pn′ ∩Bκ/2(π), constructed with the radius rn′. Proof. Consider the n −n′ = n 1 −(1 + µ)−d first iterations of RRG. From Claim 6 we obtain that given a robust path πε, it can be tiled using a constant number of balls of radius κ such that with probability of at least 1 −ae−b(n−n′) = 1 −ae−b′n every ball will contain an RRG vertex after n iterations of RRG, for some constants a, b ∈R>0, and b′ = b 1 −(1 + µ)−d . Now, let us consider a new sample x ∈Bκ/2(π) that is added in iteration n′′ > n −n′. By Claim 6, Rn′′ includes (with certain probability) a node v that can be connected to x; that is, both ∥x −v∥⩽η and the straight-line path from x to v is collision free. Thus, v is a added as a node to RRG and then connected in a PRM-fashion to all its neighbors within a radius of rRRG n′′ . 
16 Note that the smallest value of the latter radius is when n′′ = n. Observe that rRRG n = (1 + µ)rn = γ  (1 + µ)−dn −1/d = γ(n′)−1/d = rn′. Additionally, rs n RRG = (1 + µ)rst n = (1 + µ)β log1/(d−1) n n1/d = β log1/(d−1) n ((1 + µ)−dn)1/d ⩾β log1/(d−1)((1 + µ)−dn) ((1 + µ)−dn)1/d = rs n′RRG. This implies that all samples added after iteration n −n′ are connected in a PRM-fashion with a radius rn′. Thus, Rn ∩Bκ/2(π) contains a PRM graph Pn′ ∩Bκ/2(π), constructed with the radius rn′ with probability 1 −a · e−b′n, as required. We are now ready to finish the proof of Theorem 5. Let us denote by A the event that the graph Rn contains a PRM graph Pn′ T Bκ/2(πε). Given that A occurs, we denote by B the event that the constructed roadmap Rn T Bκ/2(πε) maintains the good properties of PRM, as defined in Theorem 1.ii.1. That is, Pn′ T Bκ/2(πε) contains a path πn′ ∈ΠF s,Xgoal with cℓ(πn′) ⩽(1+ε)ξ·c∗ ℓ; Given that A occurs, from Theorem 1 we obtain that B occurs on the PRM-like graph inside Bκ/2(π) with probability of at least 1 −O (n′)−1 = 1 −O  (1 + µ)−dn−1 = 1 −O n−1 . That is, Pr[B|A] ⩾1 −O(n−1). Additionally, Lemma 4 states that Pr[A] ⩾1 −ae−b′n for some b′. Note that we would like to bound the probability that both A and B occur simultane- ously. From the definition of conditional probability we have that Pr[A ∩B] = Pr[B|A] · Pr[A]. Therefore, Pr[A ∩B] ⩾ 1 −O(n−1)  ·  1 −ae−b′n ⩾1 −O(e−b′n) −O(n−1) = 1 −O(n−1). 7.3 Multi-robot planners In this section we briefly state the implications of our analysis of PRM for sampling-based multi- robot motion planning. In particular, we consider the planners M∗[51], dRRT [46] and dRRT∗[10]. The multi-robot setting involves m ⩾2 robots operating in a shared workspace. In this more challenging setting collisions between different robots must be avoided, in addition to stan- dard robot-obstacle collisions. Single-robot sampling-based planners can be applied directly to the problem by considering the robot fleet as one highly-complex robot. However, recently- introduced planners that are tailored for the multi-robot case have proved to be much more effec- tive than the aforementioned naive approach. 17 Such recent approaches include M∗, dRRT, and dRRT∗. A common ground to these techniques is an implicit construction of a composite roadmap G [46], which results from a tensor product between m single-robot roadmaps G1, . . . , Gm—for every 1 ⩽i ⩽m the graph Gi is a PRM embedded in the configuration space of robot i. It was recently proved that if G1, . . . , Gm are AO, with respect to the individual robots for which they are defined, then G is AO with respect to the multi-robot problem [10]. Thus, by Theorem 1 it follows that the underlying single-robot PRM graphs G1, . . . , Gm can be constructed using a smaller connection radius, while still guaranteeing the AO (or AnO) of G. This in turn, guarantees AO (or AnO) of M∗, dRRT∗, and PC of dRRT. 8 Experimental results We present experiments demonstrating the effect of using different values of the connection radius rn on the running time, cost of solution for the length cost cℓ, and success rate, when running the algorithms PRM and FMT∗on problems of dimensions up to 12. In particular, rn ranges between the critical radius (Theorem 1) and previously obtained upper bounds from [21, 23]. We validate our theory for PRM (Theorem1) and FMT∗(Corollary 1). We observe that smaller connection radii, than previously-obtained bounds, still allow the planners to converge to high- quality, near-optimal paths. 
Furthermore, we identify situations in which using a radius that is close to r∗_n allows us to obtain a high-quality solution more quickly. Moreover, although the resulting cost for the smaller radii can be slightly worse, we observe that postprocessing the paths using standard simplification methods yields solutions that are only marginally inferior to the best (postprocessed) solution. Specifically, in harder scenarios the advantage of using a smaller connection radius is more prominent; in some cases we obtain a reduction of 50% in running time, with an improved cost, and similar success rates when compared to the results obtained using the original FMT∗ connection radius.

8.1 Implementation details

In our experiments, we used the Open Motion Planning Library (OMPL 1.3, [49]) on a 2.6GHz×2 Intel Core i5 processor with 16GB of memory. Results were averaged over 50 runs and computed for dimensions up to 12. The planners that we used are PRM and the batch variant of FMT∗, which were adapted for samples from a PPP, where n is the expected number of samples. Specifically, given the expected number of samples n, these variants generate a set of samples according to the recipe in Claim 1. The two planners use the connection radii rn, r^{st}_n. Note that r^{st}_n should be at least β log^{1/(d−1)}(n)/n^{1/d}, but the exact value of β is unknown. For simplicity, we set r^{st}_n to be identical to rPRM∗, defined in [23]. We emphasize that although we use an asymptotically smaller value, it still yields (empirically) convergence in cost. This suggests that the bound on r^{st}_n can be further reduced.

Given a scenario and a value n, we define a set of k + 1 increasing connection radii, {r0, . . . , rk}, as follows. We set the minimal connection radius to be r0 = γn^{−1/d}, where γ = 1. Note that γ is larger than γ∗ by a factor of roughly 2. The maximal connection radius, denoted by rk = rFMT∗, is as defined in [21]. For each 1 ⩽ i ⩽ k−1 we define ri = r0 + i·∆, where ∆ = (rk − r0)/k. Now, for every scenario and number of samples n we run our planning algorithm with r^{st}_n, and rn ∈ {r0, . . . , rk}. Note that all our plots are for k = 10, and that in some experiments an additional radius rk+1 = rPRM∗ > rFMT∗ appears as well (see [23]). The accompanying figure depicts the colors and labeling used throughout this section: r0, r1 = r0 + 1∆, r2 = r0 + 2∆, . . . , r10 = rFMT∗ = r0 + 10∆, and r11 = rPRM∗.

Figure 5: Scenarios used in experiments. (a) dD Hypercube, (b) dD Hypercube with 2^d hypercubical obstacles, and (c) 3D Cubicles. Start and target configurations for a robot are depicted in green and red, respectively. Scenario (c) is provided with the OMPL distribution.

8.2 Results

Euclidean space. The scenario we consider (see Figure 5a) consists of a point robot moving in the obstacle-free unit d-dimensional hypercube. Therefore, F = C = [0, 1]^d. We set the start and target positions of the robot to be s = (0.1, . . . , 0.1) and t = (0.9, . . . , 0.9), respectively. We run PRM and plot (i) the overall running time, (ii) the normalized cost (cℓ) of the obtained solution, where a value of 1 represents the best possible cost, and (iii) the portion of successful runs—all as a function of the expected number of samples n. Results are depicted in Figure 6.
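The radii used in these runs can be reproduced with a few lines of code. This is a sketch under our own assumptions: the constant c_fmt is merely a placeholder standing in for the FMT∗ constant of [21], which depends on d and the free-space volume and is not reproduced here, and the extra radius rPRM∗ is omitted.

```python
import numpy as np

def radius_schedule(n, d, k=10, c_fmt=2.0):
    """Connection radii r_0, ..., r_k used in the experiments.

    r_0 = n^(-1/d) (i.e., gamma = 1), and r_k is of the form
    c * (log(n)/n)^(1/d); c_fmt is a placeholder for the FMT* constant of [21].
    """
    r0 = n ** (-1.0 / d)
    rk = c_fmt * (np.log(n) / n) ** (1.0 / d)
    delta = (rk - r0) / k
    return [r0 + i * delta for i in range(k + 1)]

print(["%.4f" % r for r in radius_schedule(n=50000, d=4)])
```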
The plots demonstrate the following trend: for each radius r the cost obtained by PRM converges to some constant times the optimal cost, which is marked with the dashed red curve in the “Cost vs. n” plot. (We note that it is possible that for larger values of n the cost values for different radii will eventually converge to the same value.) Clearly, rPRM∗ yields the best cost but at the price of increased running time. r10 = rFMT∗ obtains the next best cost, with improved running time, and so on. Note that for d = 4, already for n = 1K a solution is found for all radii except r0, whereas for d = 8 and n = 5K a solution is found for all radii above r4. It is important to note that there is a clear speedup in the running times of PRM when using ri for i ⩽ 9, over rFMT∗, rPRM∗, with a slight penalty (a factor of roughly 2) in the resulting costs.

For the first set of experiments of PRM in an obstacle-free d-dimensional hypercube, we also study how the value of ri affects the size of the connected components. Note that Theorem 1 in [34] states that |Cn| = Θ(n). In particular, we measure the size of the two largest connected components, denoted by Cn and C′n, as a function of both the radius ri and the expected number of samples n. The results for d = 2, d = 12 are summarized in Table 1. Already for r2, Cn is significantly larger than C′n. However, for r0 there is no clear difference. That is, we do not see in practice the expected emergence of the “huge” component. As n increases, the proportion of Cn increases as well, whereas that of C′n shrinks. Also note that for specific ri, n the maximal component is smaller for d = 12 than for d = 2.

                     r0                   r2                   r10
  d     n      |Cn|/n   |C′n|/n     |Cn|/n   |C′n|/n     |Cn|/n   |C′n|/n
  2     1K     0.17     0.1         0.88     0.05        1.0      0.0
  2     5K     0.08     0.05        0.97     0.001       0.999    0.0
  2     10K    0.05     0.04        0.98     0.003       1.0      0.0
  2     50K    0.02     0.01        0.99     0.001       1.0      0.0
  12    1K     0.04     0.02        0.84     0.003       1.0      0.0
  12    5K     0.12     0.01        0.91     0.001       1.0      0.0
  12    10K    0.21     0.005       0.94     0.001       1.0      0.0

Table 1: The average proportion of the two largest connected components Cn, C′n obtained using PRM as a function of ri, n, d.

Figure 6: PRM in the (a) 4D, (b) 8D, and (c) 12D obstacle-free hypercube. [Each panel plots running time, normalized cost, and success rate as a function of n.]

General Euclidean space. We consider the following d-dimensional scenario (based on a scenario from [45]), for d ∈ {4, 8}, depicted in Figure 5b in 3D, and a point robot: C = [0, 1]^d is subdivided into 2^d sub-cubes by halving it along each axis. Each sub-cube contains a centered d-dimensional axis-aligned hypercubical obstacle that covers 25% of the sub-cube. The start position s of the robot is placed on the diagonal between (0, . . . , 0) and (1, . . . , 1), such that it is equidistant from the origin and from the closest hypercubical obstacle. t is selected similarly, with respect to (1, . . . , 1). Here we use FMT∗ with radii ranging from r0 to r10.
We also ran PRM which exhibited a similar behavior. Figure 7a presents the results for d = 4. We plot the average cost after simplifying the resulting paths in addition to the average original cost. We mention that there is a difference in cost between the various radii and that costs obtained using larger radii are often better. However, after applying OMPL’s default path-simplification procedure we obtain paths with negligible differences in cost. This suggests that paths obtained using the smaller connection radii are of the same homotopy class as the path obtained using rFMT∗. For d = 8 (Figure 7b), as in the case of d = 4, costs obtained using larger radii are often better. However, applying OMPL’s default path-simplification procedure yields paths with negligible differences in cost. We do mention, though, that the success rates of all radii deteriorate, when compared to d = 4. General non-Euclidean space. We use the Cubicles scenario (see Figure 5c) provided with the OMPL distribution [49]. Here the goal is to find a collision-free path for one or two L- shaped robots that need to exchange their positions. In the single-robot case, the robot needs to 20 move between the two configurations depicted in red and green. In the two-robot case, these two configurations represent the start positions of the robots, where the goal is for each robot to finish at the start of the other robot. Since the robots are allowed to translate and rotate, then d = 6 or d = 12, respectively, and C is non-Euclidean. Although our theory does not directly apply here, we chose to test it empirically for such a setting. We ran our experiments with FMT∗using a connection radii ranging from r0 to r10 = rFMT∗. Initially, no solution was found in all runs. Indeed, as was mentioned in [26], this is not surprising, since the theory from which the radius values are derived assumes that the configuration space is Euclidean. In the same paper the authors propose a heuristic for effectively using radius- based connections in motion planning. In our experiments (Figure 8b) we increased all radii by a multiplicative factor of 3 and 10, respectively, in order to increase the success rates. We mention that this yielded similar behavior to that when using the connection scheme suggested in [26]. Results for the single-robot case are depicted in 8a. In this case, even with 25K samples already with r3 (green) we were able to obtain high successes rates. The running time was slightly better than that of r10, whereas the cost was higher than the one obtained using r10. However, after applying the smoothing procedure the difference in cost decreased significantly. Let us proceed to the two-robot setting. Observe in Figure 8b, that the variance in cost after path simplification is again significantly smaller than the variance in the original cost. Clearly, smaller radii exhibit shorter running times, but with smaller success rates. However, since the success rate improves as the number of samples n increases, one could use the smaller radii with larger n and still obtain a solution with comparable cost and success rate in shorter running time. Indeed, using r3 (green) with 75K samples we obtain a solution whose cost (after simplification) is slightly better than the cost obtained using r10 with 42K samples (after simplification). Moreover, the running time of the former is roughly 50% of the time taken for the latter, and both obtain similar success rates. 
This indicates that in such settings one could benefit from smaller radii in terms of running time, while obtaining a comparable or even better cost with similar success rates.
9 Future work
In this work we leveraged techniques from percolation theory to develop a stronger analysis of PRM-based sampling-based motion planners. In the hope of providing a mathematically rigorous, yet accessible, presentation, we chose to focus the discussion on simplified, possibly unrealistic, robotic systems. In particular, we assumed a holonomic system with a Euclidean configuration space. Our immediate future goal is to extend our theory to non-Euclidean spaces, such as those arising from rigid bodies translating and rotating in space. The next challenge will be to extend the model to robots with differential constraints. While the latter task seems daunting, we should keep in mind that the state space of such systems can be modeled as a differentiable manifold. This may allow us to apply our Euclidean-space techniques locally in order to analyze such spaces. Indeed, a similar approach has already been considered in previous work [39, 40].
The next algorithmic challenge is to consider robotic systems for which precise steering, i.e., solving the two-point boundary value problem, cannot be performed, at least not efficiently (see the discussion in [30, Section 1.3]). In such cases, PRM-based planners (as considered here) or RRT∗-based techniques cannot be applied. We pose the following question: Is it possible to extend existing planners to work in the absence of a steering function, while maintaining their theoretical properties? We plan to tackle this question in the near future.

[Figure 7 panels: "Time vs. n", "Cost vs. n" (original and simplified), and "Success rate vs. n" for FMT∗ in $[0,1]^d$, for (a) $d = 4$ and (b) $d = 8$.]

Figure 7: FMT∗ for a point robot in the (a) 4D and (b) 8D hypercube-with-obstacles scenario (Figure 5b). Both the average original cost and the average cost after simplification are presented for each radius. There is a difference in cost between the various radii (dashed lines), which diminishes after simplifying the resulting paths (solid lines).

[Figure 8 panels: "Time vs. n", "Cost vs. n" (original and simplified), and "Success rate vs. n" for FMT∗ in the OMPL Cubicles scenario, for (a) one robot ($d = 6$) and (b) two robots ($d = 12$).]

Figure 8: FMT∗ for (a) one and (b) two rigid-body robot(s) moving in OMPL's Cubicles scenario (Figure 5c). Both the average original cost and the average cost after simplification are presented for each radius.

10 Acknowledgments
We thank the participants of the Workshop on Random Geometric Graphs and Their Applications to Complex Networks, 2015, and in particular Carl P. Dettmann, Mathew Penrose, Paul Balister, and Yuval Peres, for fruitful discussions.
We also thank Dan Halperin for commenting on early drafts of the paper. This work has been supported in part by the Israel Science Foundation (grant no. 825/15), by the Blavatnik Computer Science Research Fund, and by the Blavatnik Interdisciplinary Cyber Research Center at Tel Aviv University. M.K. is supported in part by the Yandex Foundation. This work was prepared as part of K.S.'s Ph.D. thesis at Tel Aviv University, with the support of the Clore Israel Foundation.

Appendix

A Theory for (semi-)deterministic sampling
So far we have studied the theoretical properties of various sampling-based planners constructed with the PPP sample set $X_n$. In this section we study similar aspects for a different sampling regime. Particularly, we replace $X_n$ with $L_n$, which represents a rectilinear lattice, also known as a Sukharev sequence. A recent paper [22] studies the theoretical properties of PRM and FMT∗ when constructed with $L_n$ (and other deterministic low-dispersion sequences). Here we introduce a new analysis which shows that PRM constructed with $L_n$ can be further sparsified while maintaining AO. This result can also be extended to FMT∗, BTT, RRG, and other planners, as was done in Section 7 for the setting of PPP.
A.1 The basics
We start with some basic definitions. For a given $n$ define $\tilde{r}^*_n = n^{-1/d}$ and $L_n = \tilde{r}^*_n \cdot \mathbb{Z}^d$. That is, $L_n$ contains all the points of a rectilinear lattice with edge length $\tilde{r}^*_n$. Observe that $G(L_n; \tilde{r}^*_n)$ is a deterministic graph which connects by an edge every two points $x, x' \in L_n$ that differ in exactly one coordinate and whose distance is exactly $\tilde{r}^*_n$. We proceed to define a randomized subgraph of $G(L_n; \tilde{r}^*_n)$:
Definition 11. Given $0 \leq p \leq 1$, the graph $G^p_n = G(L_n; \tilde{r}^*_n; p)$ is defined in the following manner. Every point of $L_n$ is added independently with probability $p$ to $G^p_n$ as a vertex. Two vertices $x, x' \in G^p_n$ are connected by an edge if $\|x - x'\| \leq \tilde{r}^*_n$.
This corresponds to a well-studied model of site percolation. Similarly to the continuous model studied in the previous section, we say that $G^p_n$ percolates if it contains an infinite connected component $C^p_\infty$ that includes the origin $o$. Also denote by $C^p_n$ the largest connected component of $[0, 1]^d \cap C^p_\infty$. The following theorem corresponds to a combination of Theorem 2, Lemma 1, and Theorem 3 for the continuous case.
Theorem 6 ([11]). There exists a critical value $0 < p^* < 1$ such that for $p < p^*$ the graph $G^p_n$ percolates with probability zero, whereas for $p > p^*$ it percolates with non-zero probability. Furthermore, if $p > p^*$ then $C^p_\infty$ is unique with probability 1.
The next two lemmata serve as discrete versions of Lemma 2 and Lemma 3, respectively. Their proofs are very similar to the original proofs and are therefore omitted. We do mention that they rely on Theorem 5 and Theorem 4 of [34], respectively.
Lemma 5. Suppose that $p > p^*$. Denote by $E'^1_n$ the event that $C^p_\infty \cap H_n \subset C^p_n$, where $H_n$ is as defined in Lemma 2. Then there exist $n_0 \in \mathbb{N}$ and $\alpha > 0$ such that for any $n > n_0$ it holds that $\Pr[E'^1_n] \geq 1 - \exp\left(-\alpha n^{1/d} \log^{-1} n\right)$.
Lemma 6. Suppose that $p > p^*$. Define $H''_n \subset H_n$ to be a hypercube of side length $h''_n = \beta' \frac{\log^{1/(d-1)} n}{n^{1/d}}$. Denote by $E'^2_n$ the event that $H''_n \cap C^p_n \neq \emptyset$. Then there exist $n_0 \in \mathbb{N}$ and $\beta'_0 > 0$ such that for any $n > n_0$ and $\beta' > \beta'_0$ it holds that $\Pr[E'^2_n \mid E'^1_n] \geq 1 - n^{-1}$.
Now we proceed to the discrete counterpart of Theorem 4. For any $v \in \mathbb{R}^d$ define $x(v) = \arg\min_{x \in L_n} \|x - v\|$. (A short sketch of this discrete model and of the mapping $x(\cdot)$ is given below.)
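The following is a minimal sketch of the site-percolation model of Definition 11, restricted to $[0,1]^d$, together with the rounding map $x(\cdot)$; the values of $n$, $d$, and $p$ are illustrative placeholders.

```python
import numpy as np
from itertools import product
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def sparsified_lattice(n, d, p, seed=0):
    """Build G_n^p restricted to [0,1]^d: lattice points with spacing r = n^(-1/d),
    each kept independently with probability p, with edges between axis-neighbors."""
    rng = np.random.default_rng(seed)
    r = n ** (-1.0 / d)
    k = int(np.floor(1.0 / r))                       # lattice steps per axis inside [0,1]
    pts = np.array(list(product(range(k + 1), repeat=d)), dtype=float) * r
    pts = pts[rng.random(len(pts)) < p]              # site percolation: keep each site w.p. p
    index = {tuple(np.round(q / r).astype(int)): i for i, q in enumerate(pts)}
    rows, cols = [], []
    for key, i in index.items():                     # connect kept points that differ by one step
        for axis in range(d):
            nb = list(key)
            nb[axis] += 1
            j = index.get(tuple(nb))
            if j is not None:
                rows.append(i)
                cols.append(j)
    adj = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(len(pts), len(pts)))
    _, labels = connected_components(adj, directed=False)
    return pts, labels, r

def x_of(v, r):
    """Round a point v in R^d to its nearest lattice point of L_n = r * Z^d."""
    return np.round(np.asarray(v) / r) * r

pts, labels, r = sparsified_lattice(n=10000, d=2, p=0.7)
largest = np.bincount(labels).max() / len(pts) if len(pts) else 0.0
print(largest, x_of([0.312, 0.777], r))
```

For $p$ above the critical value the reported fraction is bounded away from zero, while for $p$ well below it the connected components remain small.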
The following is a simplified version of [3, Theorem 1.1].
Theorem 7. Suppose that $p > p^*$. There exists a positive constant $\xi' \geq 1$ such that for any two fixed points $v, v' \in \mathbb{R}^d$ it holds that
$$\Pr\left[x(v), x(v') \in C^p_\infty,\ \mathrm{dist}_{G^p_n}\!\left(x(v), x(v')\right) > \xi' \|x(v) - x(v')\|\right] = o(1).$$
For the subcritical regime ($p < p^*$), we have the following theorem. Define
$$S_k = \left\{x \in L_n : \frac{\|x\|_1}{\tilde{r}^*_n} \leq k\right\}.$$
That is, $S_k$ represents all the points in $L_n$ that can be reached from $o$ by at most $k$ edges of length $\tilde{r}^*_n$ over the corresponding rectilinear lattice. Denote by $\partial S_k$ the boundary of $S_k$, i.e., all the points that are exactly $k$ edges from $o$. Let $A_k$ be the event that $G^p_n$ contains a path from $o$ to some vertex in $\partial S_k$.
Theorem 8 ([16, Theorem 5.4]). If $p < p^*$, there exists a constant $c > 0$ such that $\Pr[A_k] < e^{-ck}$ for all $k$.
This theorem implies that in the subcritical regime it is improbable that $G^p_n$ has a path connecting two lattice points $x(v), x(v')$ as defined above, since such a path must consist of $\omega(1)$ edges.
A.2 Back to motion planning
To conclude this section, we define a sparsified version of PRM over the deterministic samples $L_n$, and state its theoretical properties. The proof is omitted as it is almost identical to the one obtained in Section 4, with the difference that it now uses the theorems and lemmata presented earlier in this section, rather than the ones from Section 5. Recall that PRM uses the two connection radii $r_n, r^{st}_n$. For simplicity we fix $r_n = \tilde{r}^*_n$.
Definition 12. The sparsified PRM graph $P^p_n$ is the union of $G^p_n \cap F$ and the supplementary edges
$$\bigcup_{y \in \{s,t\}} \left\{(x, y) : x \in G^p_n \cap B_{r^{st}_n}(y) \ \text{and} \ xy \subset F\right\}.$$
Theorem 9. Suppose that $(F, s, t)$ is robustly feasible. Then there exists $p^* > 0$ such that the following holds:
i. If $p < p^*$ and $r^{st}_n = \infty$ then PRM fails (to find a solution) a.a.s.
ii. Suppose that $p > p^*$. There exists $\beta'_0 > 0$ such that for $r^{st}_n = \beta' \frac{\log^{1/(d-1)} n}{n^{1/d}}$, where $\beta' > \beta'_0$, and any $\varepsilon > 0$ the following holds a.a.s.:
1. $P^p_n$ contains a path $\pi_n \in \Pi^F_{s,t}$ with $c_\ell(\pi_n) \leq (1 + \varepsilon)\xi' \cdot c^*_\ell$;
2. If $M$ is well behaved then $P^p_n$ contains a path $\pi'_n \in \Pi^F_{s,t}$ with $c_b(\pi'_n, M) \leq (1 + \varepsilon) c^*_b$.
B Probabilistic completeness of RRT with samples from a PPP
This section is devoted to the proof of Claim 6. A probabilistic-completeness proof of RRT, for the setting of uniform random sampling, is given in [27, Section 3]. We describe here how to modify this proof for samples from a PPP, which are of interest in the present paper.
Note that RRT is an incremental planning algorithm, which generates samples one after the other and connects the current sample to previous ones. Therefore, it should employ the incremental PPP sampling defined in Claim 5. We wish to apply the same line of arguments as in the uniform-sampling case described in [27, Theorem 1]. This theorem assumes that a path $\pi$ from start to target exists, whose clearance is $\delta > 0$ and whose length is $L$. The path $\pi$ is covered with a set of $m$ balls of radius $r$, where $m = L/r$ and $r$ is a constant depending on both the steering parameter $\eta$ of RRT and the clearance $\delta$. The proof describes the process as a Markov chain with transition probability $p$, where $p$ is the probability of producing an RRT vertex in the $(i + 1)$st ball $B_r(x_{i+1})$, given that there exists an RRT vertex in the $i$th ball $B_r(x_i)$. It then shows that the probability that RRT fails to produce samples within all $m$ balls decays to zero exponentially as the number of samples tends to infinity. (A short sketch of the incremental PPP sampling, and of the transition-probability bound derived below, is given next.)
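For concreteness, the following is a minimal sketch of the incremental PPP sampling referred to above, together with a Monte Carlo check of the transition-probability bound derived below; the dimension, the ball radius, and the obstacle-free choice $|F| = 1$ are illustrative placeholders.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def ppp_batch(d):
    """One round of incremental PPP sampling: draw N_i ~ Poisson(1), then N_i
    points independently and uniformly at random from [0,1]^d."""
    n_i = rng.poisson(1.0)
    return rng.random((n_i, d))

def estimate_p(center, r, d, trials=100000):
    """Estimate p = Pr[some point of the current batch lands in B_r(center)]."""
    hits = 0
    for _ in range(trials):
        batch = ppp_batch(d)
        if len(batch) and np.any(np.linalg.norm(batch - center, axis=1) <= r):
            hits += 1
    return hits / trials

d, r = 2, 0.1
center = np.full(d, 0.5)                        # B_r(center) lies inside [0,1]^d
ball_volume = math.pi ** (d / 2) / math.gamma(d / 2 + 1) * r ** d
lower_bound = (1 - math.exp(-1)) * ball_volume  # (1 - e^{-1}) |B_r| / |F| with |F| = 1
print(estimate_p(center, r, d), lower_bound)
```

For these values the estimate concentrates around $1 - e^{-|B_r|} \approx 0.031$, which indeed exceeds the bound $(1 - e^{-1})|B_r| \approx 0.020$.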
To adapt the proof for samples from a PPP, it suffices to outline how to compute the transition probability $p$. Recall that at each iteration $i$ we draw a sample $N_i$ of a Poisson random variable $N$ with mean 1, and define $X_i = \{X_1, \ldots, X_{N_i}\}$ to be $N_i$ points chosen independently and uniformly at random from $[0, 1]^d$. Let $A$ denote the event that $N_i \geq 1$, and let $B$ denote the event that at least one of $X_1, \ldots, X_{N_i}$ falls inside $B_r(x_{i+1})$. Since $p = \Pr[A \cap B] = \Pr[A] \cdot \Pr[B \mid A]$, we bound $\Pr[A]$ and $\Pr[B \mid A]$. Observe that $\Pr[A] = \Pr[N_i \geq 1] = 1 - \Pr[N_i = 0] = 1 - e^{-1}$, and note that $\Pr[B \mid A] \geq |B_r|/|F|$. Therefore,
$$p = \Pr[A \cap B] \geq (1 - e^{-1}) \cdot \frac{|B_r|}{|F|}.$$
The rest of the proof of Claim 6 follows the same lines as [27, Theorem 1].
C Critical values
In this section we provide, for reference, a list of estimates of the values $\gamma^*$ and $p^*$.
C.1 Continuum percolation
Table 2 presents an estimate of $\gamma^*$ such that $G_n$ percolates only if $r_n > \gamma^* n^{-1/d}$, for $2 \leq d \leq 11$. It is directly derived from [50, Table I], where an estimate of the critical node degree $\Delta(n, r_n)$ is derived. We mention that for $d$ large enough $\gamma^* \sim \frac{1}{2 b_d^{1/d}}$ [50], where $b_d$ is the Lebesgue measure of the unit ball in $\mathbb{R}^d$.

 d     γ∗
 2     0.5992373341(3)
 3     0.4341989179(2)
 4     0.4031827664(5)
 5     0.4007817822(7)
 6     0.4067135508(5)
 7     0.4178609367(3)
 8     0.4317877097(6)
 9     0.447061366(4)
 10    0.462335684(3)
 11    0.4773913785(3)

Table 2: Values of $\gamma^*$ in continuum percolation.

C.2 Lattice percolation
Table 3 presents the value of $p^*$ for $d = 2$ and the best known estimates for $3 \leq d \leq 13$. Numbers in round brackets are single standard deviations.

 d     p∗
 2     0.5
 3     0.24881182(10)
 4     0.1601314(13)
 5     0.118172(1)
 6     0.0942019(6)
 7     0.0786752(3)
 8     0.06770839(7)
 9     0.05949601(5)
 10    0.05309258(4)
 11    0.04794969(1)
 12    0.04372386(1)
 13    0.04018762(1)

Table 3: Values of $p^*$ in lattice percolation. The case $d = 2$ is due to [11]; $d = 3$ is due to [52]; $d \in \{4, \ldots, 13\}$ is due to [15].

References
[1] Aviv Adler, Mark de Berg, Dan Halperin, and Kiril Solovey. Efficient multi-robot motion planning for unlabeled discs in simple polygons. IEEE Transactions on Automation Science and Engineering, 12(4):1309–1317, 2015.
[2] Ron Alterovitz, Sachin Patil, and Anna Derbakova. Rapidly-exploring roadmaps: Weighing exploration vs. refinement in optimal motion planning. In IEEE International Conference on Robotics and Automation (ICRA), pages 3706–3712, 2011.
[3] Peter Antal and Ágoston Pisztora. On the chemical distance for supercritical Bernoulli percolation. The Annals of Probability, 24(2):1036–1048, 1996.
[4] Oktay Arslan and Panagiotis Tsiotras. Use of relaxation methods in sampling-based algorithms for optimal motion planning. In IEEE International Conference on Robotics and Automation (ICRA), pages 2413–2420, 2013.
[5] Cenk Baykal and Ron Alterovitz. Asymptotically optimal design of piecewise cylindrical robots using motion planning. In Robotics: Science and Systems, 2017.
[6] Béla Bollobás and Oliver Riordan. Percolation. Cambridge University Press, 2006.
[7] John Canny. The Complexity of Robot Motion Planning. MIT Press, 1988.
[8] Andrew Dobson and Kostas E. Bekris. Sparse roadmap spanners for asymptotically near-optimal motion planning. International Journal of Robotics Research, 33(1):18–47, 2014.
[9] Andrew Dobson, George V. Moustakides, and Kostas E. Bekris. Geometric probability results for bounding path quality in sampling-based roadmaps after finite computation. In IEEE International Conference on Robotics and Automation (ICRA), pages 4180–4186, 2015.
[10] Andrew Dobson, Kiril Solovey, Rahul Shome, Dan Halperin, and Kostas E. Bekris. Scalable asymptotically-optimal multi-robot motion planning. In International Symposium on Multi-Robot and Multi-Agent Systems, 2017.
[11] Massimo Franceschetti and Ronald Meester. Random Networks for Communication: From Statistical Physics to Information Systems. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, 2008.
[12] Emilio Frazzoli and Marco Pavone. Multi-vehicle routing. In John Baillieul and Tariq Samad, editors, Encyclopedia of Systems and Control, pages 1–11. Springer London, 2013.
[13] Tobias Friedrich, Thomas Sauerwald, and Alexandre Stauffer. Diameter and broadcast time of random geometric graphs in arbitrary dimensions. Algorithmica, 67(1):65–88, 2013.
[14] Jonathan D. Gammell, Siddhartha S. Srinivasa, and Timothy D. Barfoot. Batch informed trees (BIT*): Sampling-based optimal planning via the heuristically guided search of implicit random geometric graphs. In IEEE International Conference on Robotics and Automation (ICRA), pages 3067–3074, 2015.
[15] Peter Grassberger. Critical percolation in high dimensions. Physical Review E, 67:036101, 2003.
[16] Geoffrey R. Grimmett. Percolation, volume 321 of Comprehensive Studies in Mathematics. Springer, 1999.
[17] Dan Halperin, Lydia Kavraki, and Kiril Solovey. Robotics. In Jacob E. Goodman, Joseph O'Rourke, and Csaba D. Toth, editors, Handbook of Discrete and Computational Geometry, chapter 51. CRC Press LLC, 3rd edition, 2016. URL http://www.csun.edu/~ctoth/Handbook/HDCG3.html.
[18] Dan Halperin, Oren Salzman, and Micha Sharir. Algorithmic motion planning. In Jacob E. Goodman, Joseph O'Rourke, and Csaba D. Toth, editors, Handbook of Discrete and Computational Geometry, chapter 50. CRC Press LLC, 3rd edition, 2016. URL http://www.csun.edu/~ctoth/Handbook/HDCG3.html.
[19] Rachel M. Holladay, Oren Salzman, and Siddhartha Srinivasa. Minimizing task space Fréchet error via efficient incremental graph search. CoRR, abs/1710.06738, 2017. URL http://arxiv.org/abs/1710.06738.
[20] David Hsu, Jean-Claude Latombe, and Rajeev Motwani. Path planning in expansive configuration spaces. International Journal of Computational Geometry and Applications, 9(4/5), 1999.
[21] Lucas Janson, Edward Schmerling, Ashley A. Clark, and Marco Pavone. Fast marching tree: A fast marching sampling-based method for optimal motion planning in many dimensions. International Journal of Robotics Research, 34(7):883–921, 2015.
[22] Lucas Janson, Brian Ichter, and Marco Pavone. Deterministic sampling-based motion planning: Optimality, complexity, and performance. International Journal of Robotics Research, 2017.
[23] Sertac Karaman and Emilio Frazzoli. Sampling-based algorithms for optimal motion planning. International Journal of Robotics Research, 30(7):846–894, 2011.
[24] Lydia E. Kavraki, Petr Svestka, Jean-Claude Latombe, and Mark Overmars. Probabilistic roadmaps for path planning in high dimensional configuration spaces. IEEE Transactions on Robotics and Automation, 12(4):566–580, 1996.
[25] Lydia E. Kavraki, Mihail N. Kolountzakis, and Jean-Claude Latombe. Analysis of probabilistic roadmaps for path planning. IEEE Transactions on Robotics and Automation, 14(1):166–171, 1998.
[26] Michal Kleinbort, Oren Salzman, and Dan Halperin. Collision detection or nearest-neighbor search? On the computational bottleneck in sampling-based motion planning. In International Workshop on the Algorithmic Foundations of Robotics (WAFR), 2016.
[27] Michal Kleinbort, Kiril Solovey, Zakary Littlefield, Kostas E. Bekris, and Dan Halperin. Probabilistic completeness of RRT for geometric and kinodynamic planning with forward propagation. IEEE Robotics and Automation Letters, 2018. To appear.
[28] Andrew M. Ladd and Lydia E. Kavraki. Measure theoretic analysis of probabilistic path planning. IEEE Transactions on Robotics and Automation, 20(2):229–242, 2004.
[29] Steven M. LaValle and James J. Kuffner Jr. Randomized kinodynamic planning. International Journal of Robotics Research, 20(5):378–400, 2001.
[30] Yanbo Li, Zakary Littlefield, and Kostas E. Bekris. Asymptotically optimal sampling-based kinodynamic planning. International Journal of Robotics Research, 35(5):528–564, 2016.
[31] Ronald Meester and Rahul Roy. Uniqueness of unbounded occupied and vacant components in Boolean models. The Annals of Applied Probability, 4(3):933–951, 1994.
[32] Oren Nechushtan, Barak Raveh, and Dan Halperin. Sampling-diagram automata: A tool for analyzing path quality in tree planners. In International Workshop on the Algorithmic Foundations of Robotics (WAFR), pages 285–301, 2010.
[33] Mathew D. Penrose. Random Geometric Graphs. Oxford University Press, 2003.
[34] Mathew D. Penrose and Ágoston Pisztora. Large deviations for discrete and continuous percolation. Advances in Applied Probability, 28(1):29–52, 1996.
[35] John H. Reif. Complexity of the mover's problem and generalizations. In IEEE Symposium on Foundations of Computer Science (FOCS), pages 421–427, 1979.
[36] Oren Salzman and Dan Halperin. Asymptotically-optimal motion planning using lower bounds on cost. In IEEE International Conference on Robotics and Automation (ICRA), pages 4167–4172, 2015.
[37] Oren Salzman and Dan Halperin. Asymptotically near-optimal RRT for fast, high-quality motion planning. IEEE Transactions on Robotics, 32(3):473–483, 2016.
[38] Oren Salzman, Doron Shaharabani, Pankaj K. Agarwal, and Dan Halperin. Sparsification of motion-planning roadmaps by edge contraction. International Journal of Robotics Research, 33(14):1711–1725, 2014.
[39] Edward Schmerling, Lucas Janson, and Marco Pavone. Optimal sampling-based motion planning under differential constraints: The drift case with linear affine dynamics. In IEEE Conference on Decision and Control, pages 2574–2581, 2015.
[40] Edward Schmerling, Lucas Janson, and Marco Pavone. Optimal sampling-based motion planning under differential constraints: The driftless case. In IEEE International Conference on Robotics and Automation (ICRA), pages 2368–2375, 2015.
[41] Kiril Solovey and Dan Halperin. Sampling-based bottleneck pathfinding with applications to Fréchet matching. In European Symposium on Algorithms, pages 76:1–76:16, 2016.
[42] Kiril Solovey and Dan Halperin. On the hardness of unlabeled multi-robot motion planning. International Journal of Robotics Research, 35(14):1750–1759, 2016.
[43] Kiril Solovey and Dan Halperin. Efficient sampling-based bottleneck pathfinding over cost maps. In IEEE/RSJ International Conference on Intelligent Robots and Systems, 2017. To appear.
[44] Kiril Solovey, Jingjin Yu, Or Zamir, and Dan Halperin. Motion planning for unlabeled discs with optimality guarantees. In Robotics: Science and Systems, 2015.
[45] Kiril Solovey, Oren Salzman, and Dan Halperin. New perspective on sampling-based motion planning via random geometric graphs. In Robotics: Science and Systems, 2016.
[46] Kiril Solovey, Oren Salzman, and Dan Halperin. Finding a needle in an exponential haystack: Discrete RRT for exploration of implicit roadmaps in multi-robot motion planning. International Journal of Robotics Research, 35(5):501–513, 2016.
[47] J. A. Starek, E. Schmerling, G. D. Maher, B. W. Barbee, and M. Pavone. Real-time, propellant-optimized spacecraft motion planning under Clohessy-Wiltshire-Hill dynamics. In IEEE Aerospace Conference, Big Sky, Montana, 2016.
[48] Joseph A. Starek, Javier V. Gómez, Edward Schmerling, Lucas Janson, Luis Moreno, and Marco Pavone. An asymptotically-optimal sampling-based algorithm for bi-directional motion planning. In IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 2072–2078, 2015.
[49] Ioan A. Şucan, Mark Moll, and Lydia E. Kavraki. The Open Motion Planning Library. IEEE Robotics & Automation Magazine, 19(4):72–82, 2012. http://ompl.kavrakilab.org.
[50] S. Torquato and Y. Jiao. Effect of dimensionality on the continuum percolation of overlapping hyperspheres and hypercubes. II. Simulation results and analyses. The Journal of Chemical Physics, 137(7):074106, 2012.
[51] Glenn Wagner and Howie Choset. Subdimensional expansion for multirobot path planning. Artificial Intelligence, 219:1–24, 2015.
[52] Junfeng Wang, Zongzheng Zhou, Wei Zhang, Timothy M. Garoni, and Youjin Deng. Bond and site percolation in three dimensions. Physical Review E, 87:052107, 2013.