An Incremental Sampling-based Algorithm for Stochastic Optimal Control

Vu Anh Huynh, Sertac Karaman, Emilio Frazzoli∗

∗The authors are with the Laboratory of Information and Decision Systems, Massachusetts Institute of Technology, 77 Massachusetts Ave., Cambridge, MA 02139. {vuhuynh, sertac, frazzoli}@mit.edu

Abstract

In this paper, we consider a class of continuous-time, continuous-space stochastic optimal control problems. Building upon recent advances in Markov chain approximation methods and sampling-based algorithms for deterministic path planning, we propose a novel algorithm called the incremental Markov Decision Process (iMDP) to compute incrementally control policies that approximate arbitrarily well an optimal policy in terms of the expected cost. The main idea behind the algorithm is to generate a sequence of finite discretizations of the original problem through random sampling of the state space. At each iteration, the discretized problem is a Markov Decision Process that serves as an incrementally refined model of the original problem. We show that with probability one, (i) the sequence of the optimal value functions for each of the discretized problems converges uniformly to the optimal value function of the original stochastic optimal control problem, and (ii) the original optimal value function can be computed efficiently in an incremental manner using asynchronous value iterations. Thus, the proposed algorithm provides an anytime approach to the computation of optimal control policies of the continuous problem. The effectiveness of the proposed approach is demonstrated on motion planning and control problems in cluttered environments in the presence of process noise.

1 Introduction

Stochastic optimal control has been an active research area for several decades with many applications in diverse fields ranging from finance, management science and economics [1, 2] to biology [3] and robotics [4]. Unfortunately, general continuous-time, continuous-space stochastic optimal control problems do not admit closed-form or exact algorithmic solutions and are known to be computationally challenging [5]. Many algorithms are available to compute approximate solutions of such problems. For instance, a popular approach is based on the numerical solution of the associated Hamilton-Jacobi-Bellman PDE (see, e.g., [6, 7, 8]). Other methods approximate a continuous problem with a discrete Markov Decision Process (MDP), for which an exact solution can be computed in finite time [9, 10]. However, the complexity of these two classes of deterministic algorithms scales exponentially with the dimension of the state and control spaces, due to discretization. Remarkably, algorithms based on random (or quasi-random) sampling of the state space provide a possibility to alleviate the curse of dimensionality in the case in which the control inputs take values from a finite set, as noted in [11, 12, 5]. Algorithms based on random sampling of the state space have recently been shown to be very effective, both in theory and in practice, for computing solutions to deterministic path planning problems in robotics and other disciplines. For example, the Probabilistic RoadMap (PRM) algorithm first proposed by Kavraki et al. [13] was the first practical planning algorithm that could handle high-dimensional path planning problems.
Their incremental counterparts, such as RRT [14], later emerged as sampling-based algorithms suited for online applications and systems with differ- ential constraints on the solution (e.g., dynamical systems). The RRT algorithm has been used in many applications and demonstrated on various robotic platforms [15, 16]. Recently, optimality properties of such algorithms were analyzed in [17]. In particular, it was shown that the RRT algo- rithm fails to converge to optimal solutions with probability one. The authors have proposed the RRT∗algorithm which guarantees almost-sure convergence to globally optimal solutions without any substantial computational overhead when compared to the RRT. Although the RRT∗algorithm is asymptotically optimal and computationally efficient (with respect to RRT), it can not handle problems involving systems with uncertain dynamics. In this work, building upon the Markov chain approximation method [18] and the rapidly-exploring sam- pling technique [14], we introduce a novel algorithm called the incremental Markov Decision Process (iMDP) to approximately solve a wide class of stochastic optimal control problems. More precisely, we consider a continuous-time optimal control problem with continuous state and control spaces, full state information, and stochastic process noise. In iMDP, we iteratively construct a sequence of discrete Markov Decision Processes (MDPs) as discrete approximations to the original continuous problem, as follows. Initially, an empty MDP model is created. At each iteration, the discrete MDP is refined by adding new states sampled from the boundary as well as from the interior of the state space. Subsequently, new stochastic transitions are constructed to connect the new states to those already in the model. For the sake of efficiency, stochastic transitions are computed only when needed. Then, an anytime policy for the refined model is computed using an incremental value iteration algorithm, based on the value function of the previous model. The policy for the discrete system is finally converted to a policy for the original continuous problem. This process is iterated until convergence. Our work is mostly related to the Stochastic Motion Roadmap (SMR) algorithm [19] and Markov chain approximation methods [18]. The SMR algorithm constructs an MDP over a sampling-based roadmap representation to maximize the probability of reaching a given goal region. However, in SMR, actions are discretized, and the algorithm does not offer any formal optimality guarantees. On the other hand, while available Markov chain approximation methods [18] provide formal optimality guarantees under very general conditions, a sequence of a priori discretizations of state and control spaces still impose expensive computation. The iMDP algorithm addresses this issue by sampling in the state space and sampling or discovering necessary controls. The main contribution of this paper is a method to incrementally refine a discrete model of the original continuous problem in a way that ensures convergence to optimality while maintaining low time and space complexity. We show that with probability one, the sequence of optimal value functions induced by optimal control policies for each of the discretized problems converges uniformly to the optimal value function of the original stochastic control problem. In addition, the optimal value function of the original problem can be computed efficiently in an incremental manner using asynchronous value iterations. 
Thus, the proposed algorithm provides an anytime approach to the computation of optimal control policies of the continuous problem. Distributions of approximating trajectories and control processes returned from the iMDP algorithm approximate arbitrarily well distributions of optimal trajectories and optimal control processes of the original problem. Each iteration of the iMDP algorithm can be implemented with the time complexity O(kθ log k) where 0 < θ ≤1 while the space complexity is O(k), where k is the number of states in an MDP model in the algorithm which increases linearly due to the sampling strategy. Thus, the entire processing time until the algorithm stops can be implemented in O(k1+θ log k). Hence, the 2 above space and time complexities make iMDP a practical incremental algorithm. The effectiveness of the proposed approach is demonstrated on motion planning and control problems in cluttered environments in the presence of process noise. This paper is organized as follows. In Section 2, a formal problem definition is given. The Markov chain approximation methods and the iMDP algorithm are described in Sections 3 and 4. The analysis of the iMDP algorithm is presented in Section 5. Section 6 is devoted to simulation examples and experimental results. The paper is concluded with remarks in Section 7. We provide additional notations and preliminary results as well as proofs for theorems and lemmas in Appendix. 2 Problem Definition In this section, we present a generic stochastic optimal control problem. Subsequently, we discuss how the formulation extends the standard motion planning problem of reaching a goal region while avoiding collision with obstacles. Stochastic Dynamics Let dx, du, and dw be positive integers. The dx-dimensional and du- dimensional Euclidean spaces are Rdx and Rdu respectively. Let S be a compact subset of Rdx, which is the closure of its interior So and has a smooth boundary ∂S. The state of the system at time t is x(t) ∈S, which is fully observable at all times. We also define a compact subset U of Rdu as a control set. Suppose that a stochastic process {w(t); t ≥0} is a dw-dimensional Brownian motion, also called a Wiener process, on some probability space (Ω, F, P). Let a control process {u(t); t ≥0} be a U-valued, measurable process also defined on the same probability space. We say that the control process u(·) is nonanticipative with respect to the Wiener process w(·) if there exists a filtration {Ft; t ≥0} defined on (Ω, F, P) such that u(·) is Ft-adapted, and w(·) is an Ft-Wiener process. In this case, we say that u(·) is an admissible control inputs with respect to w(·), or the pair (u(·), w(·)) is admissible. Let Rdx×dw denote the set of all dx by dw real matrices. We consider stochastic dynamical systems, also called controlled diffusions, of the form dx(t) = f(x(t), u(t)) dt + F(x(t), u(t)) dw(t), ∀t ≥0 (1) where f : S ×U →Rdx and F : S ×U →Rdx×dw are bounded measurable and continuous functions as long as x(t) ∈So. The matrix F(·, ·) is assumed to have full rank. More precisely, a solution to the differential form given in Eq. (1) is a stochastic process {x(t); t ≥0} such that x(t) equals the following stochastic integral in all sample paths: x(t) = x(0) + Z t 0 f(x(τ), u(τ)) dτ + Z t 0 F(x(τ), u(τ))dw(τ), (2) until x(·) exits So, where the last term on the right hand side is the usual Itˆo integral (see, e.g., [20]). When the process x(·) hits ∂S, the process x(·) is stopped. 
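To make the dynamics in Eqs. (1)-(2) concrete, the following sketch simulates one sample path of the controlled diffusion with a simple Euler-Maruyama discretization, stopping when the state leaves the interior So. The drift f, diffusion F, control u, domain S, step size, and horizon cap used here are placeholder choices for illustration only and are not taken from the paper.

```python
import numpy as np

def simulate_diffusion(f, F, u, x0, in_interior, dt=1e-3, t_max=10.0, rng=None):
    """Euler-Maruyama sample path of dx = f(x,u) dt + F(x,u) dw, stopped when x(.) leaves S^o."""
    rng = np.random.default_rng() if rng is None else rng
    x, t = np.array(x0, dtype=float), 0.0
    path = [(t, x.copy())]
    while t < t_max and in_interior(x):
        v = u(x, t)                                  # control evaluated along the path
        dw = rng.standard_normal(F(x, v).shape[1]) * np.sqrt(dt)
        x = x + f(x, v) * dt + F(x, v) @ dw          # one Euler-Maruyama step of Eq. (2)
        t += dt
        path.append((t, x.copy()))
    return path                                      # last entry is the (approximately) stopped state

# Placeholder single-integrator example on S = [-1, 1]^2 (illustrative only).
f = lambda x, v: np.asarray(v, dtype=float)
F = lambda x, v: 0.1 * np.eye(2)
u = lambda x, t: -x / max(np.linalg.norm(x), 1e-9)   # a Markov control: drive toward the origin
path = simulate_diffusion(f, F, u, x0=[0.8, -0.5],
                          in_interior=lambda x: bool(np.all(np.abs(x) < 1.0)))
print(len(path), path[-1][1])
```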
Weak Existence and Weak Uniqueness of Solutions Let Γ be the sample path space of admissible pairs (u(·), w(·)). Suppose we are given probability measures Λ and P0 on Γ and on S respectively. We say that solutions of (2) exist in the weak sense if there exists a probability space (Ω, F, P), a filtration {Ft; t ≥0}, an Ft-Wiener process w(·), an Ft-adapted control process u(·), and an Ft-adapted process x(·) satisfying Eq. (2), such that Λ and P0 are the distributions of (u(·), w(·)) and x(0) under P. We call such tuple {(Ω, F, P), Ft, w(·), u(·), x(·)} a weak sense solution of Eq. (1) [21, 18]. 3 Assume that we are given weak sense solutions {(Ωi, Fi, Pi), Ft,i, wi(·), ui(·), xi(·)}, i = 1, 2, to Eq. (1). We say solutions are weakly unique if equality of the joint distributions of (wi(·), ui(·), xi(0)) under Pi, i = 1, 2, implies the equality of the distributions (xi(·), wi(·), ui(·), xi(0)) under Pi, i = 1, 2 [21, 18]. In this paper, given the boundedness of the set S, and the definition of the functions f and F in Eq. (1), we have a weak solution to Eq. (1) that is unique in the weak sense [21]. The boundedness requirement is naturally satisfied in many applications and is also needed for the implementation of the proposed numerical method. We will also handle the case in which f and F are discontinuous with extra mild technical assumptions to ensure asymptotic optimality in Section 3. Policy and Cost-to-go Function A particular class of admissible controls, called Markov con- trols, depends only on the current state, i.e., u(t) is a function only of x(t), for all t ≥0. It is well known that in control problems with full state information, the best Markov control performs as well as the best admissible control (see, e.g., [20, 21]). A Markov control defined on S is also called a policy, and is represented by the function µ : S →U. The set of all policies is denoted by Π. Define the first exit time Tµ : Π →[0, +∞] under policy µ as Tµ = inf  t : x(t) /∈So and Eq. (1) and u(t) = µ(x(t)) . Intuitively, Tµ is the first time that the trajectory of the dynamical system given by Eq. (1) with u(t) = µ(x(t)) hits the boundary ∂S of S. By definition, Tµ = +∞if x(·) never exits So. Clearly, Tµ is a random variable. Then, the expected cost-to-go function under policy µ is a mapping from S to R defined as Jµ(z) = E Z Tµ 0 αt g x(t), µ(x(t))  dt + h(x(Tµ)) | x(0) = z  , where g : S × U →R and h : S →R are bounded measurable and continuous functions, called the cost rate function and the terminal cost function, respectively, and α ∈[0, 1) is the discount rate. We further assume that g(x, u) is uniformly H¨older continuous in x with exponent 2ρ ∈(0, 1] for all u ∈U. That is, there exists some constant C > 0 such that |g(x, u) −g(x′, u)| ≤C||x −x′||2ρ 2 , ∀x, x′ ∈S. We will address the discontinuity of g and h in Section 3. The optimal cost-to-go function J∗: S →R is defined as J∗(z) = infµ∈Π Jµ(z) for all z ∈S. A policy µ∗is called optimal if Jµ∗= J∗. For any ϵ > 0, a policy µ is called an ϵ-optimal policy if ||Jµ −J∗||∞≤ϵ. In this paper, we consider the problem of computing the optimal cost-to-go function J∗and an optimal policy µ∗if obtainable. Our approach, outlined in Section 4, approximates the optimal cost-to-go function and an optimal policy in an anytime fashion using incremental sampling-based algorithms. 
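Since the cost-to-go Jµ(z) is an expectation over sample paths stopped at the first exit time, it can be approximated by plain Monte Carlo simulation. The sketch below does this for a given policy µ using Euler-Maruyama stepping as in the previous sketch; the step size, horizon cap, and sample count are arbitrary illustration parameters and are not values used by the iMDP algorithm.

```python
import numpy as np

def estimate_cost_to_go(f, F, g, h, mu, z0, in_interior, alpha,
                        dt=1e-3, t_max=50.0, n_samples=200, rng=None):
    """Monte Carlo estimate of J_mu(z0) = E[ int_0^{T_mu} alpha^t g(x(t), mu(x(t))) dt + h(x(T_mu)) ],
    using Euler-Maruyama sample paths stopped at the first exit time from S^o (capped at t_max)."""
    rng = np.random.default_rng() if rng is None else rng
    costs = []
    for _ in range(n_samples):
        x, t, running = np.array(z0, dtype=float), 0.0, 0.0
        while t < t_max and in_interior(x):
            v = mu(x)                                      # Markov policy: control of the current state
            running += (alpha ** t) * g(x, v) * dt         # discounted running cost
            dw = rng.standard_normal(F(x, v).shape[1]) * np.sqrt(dt)
            x = x + f(x, v) * dt + F(x, v) @ dw
            t += dt
        costs.append(running + h(x))                       # terminal cost at the stopped state
    return float(np.mean(costs))
```

For instance, this routine can be applied to the placeholder system of the previous sketch with g(x, v) = ||x||^2 and h identically zero to compare two candidate policies.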
This sequence of approximations is guaranteed to converge uniformly to the optimal cost-to-go function and to find an ϵ-optimal policy for an arbitrarily small non-negative ϵ, almost surely, as the number of samples approaches infinity.

Relationship with Standard Motion Planning The standard motion planning problem of finding a collision-free trajectory that reaches a goal region for a deterministic dynamical system can be defined as follows (see, e.g., [17]). Let X ⊂ Rdx be a compact set. Let the open sets Xobs and Xgoal denote the obstacle region and the goal region, respectively. Define the obstacle-free space as Xfree := X \ Xobs. Let xinit ∈ Xfree. Consider the deterministic dynamical system ẋ(t) = f(x(t), u(t)), where f : X × U → Rdx. The feasible motion planning problem is to find a measurable control input u : [0, T] → U such that the resulting trajectory x(t) is collision free, i.e., x(t) ∈ Xfree, and reaches the goal region, i.e., x(T) ∈ Xgoal. The optimal motion planning problem is to find a measurable control input u such that the resulting trajectory x solves the feasible motion planning problem with minimum trajectory cost. The problem considered in this paper extends the classical motion planning problem with stochastic dynamics as described by Eq. (1). Given a goal set Xgoal and an obstacle set Xobs, define S := X \ (Xgoal ∪ Xobs) and thus ∂Xgoal ∪ ∂Xobs ∪ ∂X = ∂S. Due to the nature of Brownian motion, under most policies, there is some non-zero probability that collision with an obstacle set will occur. However, to penalize collision with obstacles in the control design process, the cost of terminating by hitting the obstacle set, i.e., h(z) for z ∈ ∂Xobs, can be made arbitrarily high. Clearly, the higher this number is, the more conservative the resulting policy will be. Similarly, the terminal cost function on the goal set, i.e., h(z) for z ∈ ∂Xgoal, can be set to a small value to encourage terminating by hitting the goal region.

3 Markov Chain Approximation

A discrete-state Markov decision process (MDP) is a tuple M = (X, A, P, G, H) where X is a finite set of states, A is a set of actions that is possibly a continuous space, P(· | ·, ·) : X × X × A → R≥0 is a function that denotes the transition probabilities satisfying Σ_{ξ′∈X} P(ξ′ | ξ, v) = 1 for all ξ ∈ X and all v ∈ A, G(·, ·) : X × A → R is an immediate cost function, and H : X → R is a terminal cost function. If we start at time 0 with a state ξ0 ∈ X, and at time i ≥ 0 we apply an action vi ∈ A at a state ξi to arrive at a next state ξi+1 according to the transition probability function P, we have a controlled Markov chain {ξi; i ∈ N}. The chain {ξi; i ∈ N} due to the control sequence {vi; i ∈ N} and an initial state ξ0 will also be called the trajectory of M under the said sequence of controls and initial state. Given a continuous-time dynamical system as described in Eq. (1), the Markov chain approximation method approximates the continuous stochastic dynamics using a sequence of MDPs {Mn}∞n=0 in which Mn = (Sn, U, Pn, Gn, Hn), where Sn is a discrete subset of S, and U is the original control set. We define ∂Sn = ∂S ∩ Sn. For each n ∈ N, let {ξn i; i ∈ N} be a controlled Markov chain on Mn until it hits ∂Sn. We associate with each state z in S a non-negative interpolation interval ∆tn(z), known as a holding time. We define tn i = Σ_{j=0}^{i−1} ∆tn(ξn j) for i ≥ 1 and tn 0 = 0. Let ∆ξn i = ξn i+1 − ξn i. Let un i denote the control used at step i for the controlled Markov chain.
In addition, we define Gn(z, v) = g(z, v)∆tn(z) and Hn(z) = h(z) for each z ∈Sn and v ∈U. Let Ωn be the sample space of Mn. Holding times ∆tn and transition probabilities Pn are chosen to satisfy the local consistency property given by the following conditions: 1. For all z ∈S, lim n→∞∆tn(z) = 0, (3) 2. For all z ∈S and all v ∈U: lim n→∞ EPn[∆ξn i | ξn i = z, un i = v] ∆tn(z) = f(z, v), (4) lim n→∞ CovPn[∆ξn i | ξn i = z, un i = v] ∆tn(z) = F(z, v)F(z, v)T , (5) lim n→∞ sup i∈N,ω∈Ωn ||∆ξn i ||2 = 0. (6) 5 The chain {ξn i ; i ∈N} is a discrete-time process. In order to approximate the continuous- time process x(·) in Eq. (2), we use an approximate continuous-time interpolation. We define the (random) continuous-time interpolation ξn(·) of the chain {ξn i ; i ∈N} and the continuous-time interpolation un(·) of the control sequence {un i ; i ∈N} under the holding times function ∆tn as follows: ξn(τ) = ξn i , and un(τ) = un i for all τ ∈[tn i , tn i+1). Let Ddx[0, +∞) denote the set of all Rdx-valued functions that are continuous from the left and has limits from the right. The process ξn can be thought of as a random mapping from Ωn to the function space Ddx[0, +∞). A control problem for the MDP Mn is analogous to that defined in Section 2. Similar to previous section, a policy µn is a function that maps each state z ∈Sn to a control µn(z) ∈U. The set of all such policies is Πn. Given a policy µn, the (discounted) cost-to-go due to µn is: Jn,µn(z) = EPn "In−1 X i=0 αtn i Gn(ξn i , µn(ξn i )) + αtn InHn(ξn In) ξn 0 = z # , where EPn denotes the conditional expectation under Pn, the sequence {ξn i ; i ∈N} is the controlled Markov chain under the policy µn, and In is termination time defined as In = min{i : ξn i ∈∂Sn}. The optimal cost function, denoted by J∗ n satisfies J∗ n(z) = inf µn∈Πn Jn,µn(z), ∀z ∈Sn. (7) An optimal policy, denoted by µ∗ n, satisfies Jn,µ∗n(z) = J∗ n(z) for all z ∈Sn. For any ϵ > 0, µn is an ϵ-optimal policy if ||Jn,µn −J∗ n||∞≤ϵ. As stated in the following theorem, under mild technical assumptions, local consistency implies the convergence of continuous-time interpolations of the trajectories of the controlled Markov chain to the trajectories of the stochastic dynamical system described by Eq. (1). Theorem 1 (see Theorem 10.4.1 in [18]) Let us assume that f(·, ·) and F(·, ·) are measurable, bounded and continuous. Thus, Eq. (1) has a weakly unique solution. Let {Mn}∞ n=0 be a sequence of MDPs, and {∆tn}∞ n=0 be a sequence of holding times that are locally consistent with the stochastic dynamical system described by Eq. (1). Let {un i ; i ∈N} be a sequence of controls defined for each n ∈N. For all n ∈N, let {ξn(t); t ∈R≥0} denote the continuous-time interpolation to the chain {ξn i ; i ∈N} under the control sequence {un i ; i ∈N} starting from an initial state zinit, and {un(t); t ∈R≥0} denote the continuous-time interpolation of {un i ; i ∈N}, according to the holding time ∆tn. Then, any subsequence of {(ξn(·), un(·))}∞ n=0 has a further subsequence that converges in distribution to (x(·), u(·)) satisfying x(t) = zinit + Z t 0 f(x(τ), u(τ))dτ + Z t 0 F(x(τ), u(τ))dw(τ). Under the weak uniqueness condition for solutions of Eq. (1), the sequence {(ξn(·), un(·))}∞ n=0 also converges to (x(·), u(·)). Furthermore, a sequence of minimizing controls guarantees pointwise convergence of the cost func- tion to the original optimal cost function in the following sense. 
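The continuous-time interpolation ξn(·) used above is simply a piecewise-constant path that holds each state ξn i (and control un i) over the interval [tn i, tn i+1) determined by the holding times. A minimal sketch of this construction, with a small hypothetical one-dimensional chain for illustration:

```python
import numpy as np

def interpolate_chain(states, controls, holding_times):
    """Piecewise-constant continuous-time interpolation of a controlled chain:
    xi_n(tau) = xi_i and u_n(tau) = u_i for tau in [t_i, t_{i+1}), where t_i is the
    cumulative sum of the holding times of the earlier states (t_0 = 0)."""
    t_grid = np.concatenate(([0.0], np.cumsum(holding_times)))

    def xi(tau):
        # index of the interval containing tau, clamped to the last available state
        i = min(np.searchsorted(t_grid, tau, side='right') - 1, len(states) - 1)
        return states[i], controls[i]
    return xi

# Hypothetical 1D chain with three states, three controls, and holding times 0.2, 0.3, 0.1.
xi = interpolate_chain(states=[0.0, 0.4, 0.9], controls=[1.0, 1.0, 0.5],
                       holding_times=[0.2, 0.3, 0.1])
print(xi(0.25))   # -> (0.4, 1.0): the second state/control, since 0.2 <= 0.25 < 0.5
```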
Theorem 2 (see Theorem 10.5.2 in [18]) Assume that f(·, ·), F(·, ·), g(·, ·) and h(·) are mea- surable, bounded and continuous. For any trajectory x(·) of the system described by Eq. (1), define ˆτ(x) := inf{t : x(t) /∈So}. Let {Mn = (Sn, U, Pn, Gn, Hn)}∞ n=0 and {∆tn}∞ n=0 be locally consistent with the system described by Eq. (1). 6 We suppose that the function ˆτ(·) is continuous (as a mapping from Ddx[0, +∞) to the com- pactified interval [0, +∞]) with probability one relative to the measure induced by any solution to Eq. (1) for an initial state z, which is satisfied when the matrix F(·, ·)F(·, ·)T is nondegenerate. Then, for any z ∈Sn, the following equation holds: lim n→∞|J∗ n(z) −J∗(z)| = 0. In particular, for any z ∈Sn, for any sequence {ϵn > 0}∞ n=0 such that limn→∞ϵn = 0, and for any sequence of policies {µn}∞ n=0 such that µn is an ϵn-optimal policy of Mn, we have: lim n→∞|Jn,µn(z) −J∗(z)| = 0. Moreover, the sequence {tn In; n ∈N} converges in distribution to the termination time of the optimal control problem for the system in Eq. (1) when the system is under optimal control processes. Under the assumption that the cost rate g is H¨older continuous [22] with exponent 2ρ, the sequence of optimal value functions for approximating chains J∗ n indeed converges uniformly to J∗with a proven rate. Let us denote ||b||Sn = supz∈Sn b(x) as the sup-norm over Sn of a function b with domain containing Sn. Let ζn = max z∈Sn min z′∈Sn ||z′ −z||2 (8) be the dispersion of Sn. Theorem 3 (see Theorem 2.3 in [23] and Theorem 2.1 in [24]) Consider an MDP sequence {Mn = (Sn, U, Pn, Gn, Hn)}∞ n=0 and holding times {∆tn}∞ n=0 that are locally consistent with the sys- tem described by Eq. (1). Let J∗ n be the optimal cost of Mn. Given the assumptions on the dynamics and cost rate functions in Section 2, as n approaches ∞, we have ||J∗ n −J∗||Sn = O(ζρ n). Discontinuity of dynamics and objective functions We note that the above theorems continue to hold even when the functions f, F, g, and h are discontinuous. In this case, the following conditions are sufficient to use the theorems: (i) For r to be f, F, g, or h, r(x, u) takes either the form r0(x)+r1(u) or r0(x)r1(u) where the control dependent terms are continuous and the x-dependent terms are measurable, and (ii) f(x, ·), F(x, ·), g(x, ·), and h(x) are nondegenerate for each x, and the set of discontinuity in x of each function is a uniformly smooth surface of lower dimension. Furthermore, instead of uniform H¨older continuity, the cost rate g can be relaxed to be locally H¨older continuous with exponent 2ρ on S (see, e.g., page 275 in [18]). Let us remark that the controlled Markov chain differs from the stochastic dynamical systems described in Section 2 in that the former possesses a discrete state structure and evolves in a discrete time manner while the latter is a continuous model both in terms of its state space and the evolution of time. Yet, both models possess a continuous control space. It will be clear in the following discussion that the control space does not have to be discretized if a certain optimization problem can be solved numerically or via sampling. The above theorems assert the asymptotic optimality given a sequence of a priori discretizations of the state space and the availability of ϵ-optimal policies. In what follows, we describe an algorithm that incrementally computes the optimal cost-to-go function and an optimal control policy of the continuous problem. 
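In practice, it is useful to check the local consistency conditions (4)-(5) numerically for a candidate discretization. The sketch below computes, for a single state-control pair, the residuals between the chain's conditional mean and covariance increments (scaled by the holding time) and the drift f(z, v) and diffusion matrix F(z, v)F(z, v)^T; the function names and calling convention are our own illustration, not part of the paper.

```python
import numpy as np

def consistency_residuals(z, v, support, probs, dt, f, FFt):
    """Residuals of the local consistency conditions (4)-(5) at one (state, control) pair:
    the conditional mean increment should be close to f(z,v)*dt and the conditional
    covariance should be close to F(z,v)F(z,v)^T * dt, up to o(dt) terms."""
    z = np.asarray(z, dtype=float)
    support = np.asarray(support, dtype=float)      # candidate next states, one per row
    probs = np.asarray(probs, dtype=float)          # transition probabilities over the support
    dxi = support - z                               # increments Delta xi = xi' - z
    mean = probs @ dxi                              # E[Delta xi | z, v]
    cov = (probs[:, None] * dxi).T @ dxi - np.outer(mean, mean)   # Cov[Delta xi | z, v]
    drift_err = np.linalg.norm(mean / dt - f(z, v))
    diffusion_err = np.linalg.norm(cov / dt - FFt(z, v))
    return drift_err, diffusion_err
```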
4 The iMDP Algorithm

Based on Markov chain approximation results, the iMDP algorithm incrementally builds a sequence of discrete MDPs with probability transitions and cost-to-go functions that consistently approximate the original continuous counterparts. The algorithm refines the discrete models by using a number of primitive procedures to add new states into the current approximate model. Finally, the algorithm improves the quality of discrete-model policies in an iterative manner by effectively reusing the computations inherited from previous iterations. Before presenting the algorithm, the primitive procedures on which it relies are described in this section.

4.1 Primitive Procedures

4.1.1 Sampling

The Sample() and SampleBoundary() procedures sample states independently and uniformly from the interior So and the boundary ∂S, respectively.

4.1.2 Nearest Neighbors

Given z ∈ S, a set Y ⊆ S of states, and k ∈ N, the procedure Nearest(z, Y, k) returns the k states z′ ∈ Y that are closest to z in terms of the Euclidean norm.

4.1.3 Time Intervals

Given a state z ∈ S and a number k ∈ N, the procedure ComputeHoldingTime(z, k) returns a holding time computed as follows:

ComputeHoldingTime(z, k) = γt (log k / k)^{θςρ/dx},

where γt > 0 is a constant, and ς and θ are constants in (0, 1) and (0, 1], respectively (typical values of ς lie in [0.999, 1)). The parameter ρ ∈ (0, 0.5] defines the Hölder continuity of the cost rate function g(·, ·) as in Section 2.

4.1.4 Transition Probabilities

Given a state z ∈ S, a subset Y ⊆ S, a control v ∈ U, and a positive number τ describing a holding time, the procedure ComputeTranProb(z, v, τ, Y) returns (i) a finite set Znear ⊂ S of states such that the state z + f(z, v)τ belongs to the convex hull of Znear and ||z′ − z||2 = O(τ) for all z′ ̸= z ∈ Znear, and (ii) a function p that maps Znear to non-negative real numbers such that p(·) is a probability distribution over the support Znear. It is crucial to ensure that these transition probabilities result in a sequence of locally consistent chains in the algorithm. There are several ways to construct such transition probabilities. One possible construction, by solving a system of linear equations, can be found in [18]. In particular, we choose Znear = Nearest(z + f(z, v)τ, Y, s) where s ∈ N is some constant. We define the transition probabilities p : Znear → R≥0 that satisfy:

(i) Σ_{z′∈Znear} p(z′)(z′ − z) = f(z, v)τ + o(τ),

(ii) Σ_{z′∈Znear} p(z′)(z′ − z)(z′ − z)T = F(z, v)F(z, v)T τ + f(z, v)f(z, v)T τ^2 + o(τ),

(iii) Σ_{z′∈Znear} p(z′) = 1.

An alternative way to compute the transition probabilities is to approximate using local Gaussian distributions. We choose Znear = Nearest(z + f(z, v)τ, Y, s) where s = Θ(log(|Y|)). Let Nm,σ(·) denote the density of the (possibly multivariate) Gaussian distribution with mean m and variance σ. Define the transition probabilities as follows:

p(z′) = Nm,σ(z′) / Σ_{y∈Znear} Nm,σ(y),

where m = z + f(z, v)τ and σ = F(z, v)F(z, v)T τ. This expression can be evaluated easily for any fixed v ∈ U. As |Znear| approaches infinity, the above construction satisfies local consistency almost surely. As we will discuss in Section 4.2, the size of the support Znear affects the complexity of the iMDP algorithm. We note that solving a system of linear equations requires computing and handling a matrix of size (d2x + dx + 1) × |Znear| where |Znear| is constant. When dx and |Znear| are large, the constant factor of the complexity is large.
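As a concrete illustration of the primitive procedures above, the following sketch implements the holding-time formula of Section 4.1.3 and the local Gaussian construction of ComputeTranProb. States are assumed to be stored as rows of a NumPy array, and the helper names and interfaces are ours; this is a sketch under those assumptions, not the paper's implementation.

```python
import numpy as np

def compute_holding_time(k, gamma_t, theta, varsigma, rho, d_x):
    """Holding time of Section 4.1.3: gamma_t * (log k / k)^(theta * varsigma * rho / d_x)."""
    return gamma_t * (np.log(k) / k) ** (theta * varsigma * rho / d_x)

def compute_tran_prob(z, v, tau, Y, f, FFt, s):
    """Local Gaussian construction of ComputeTranProb: the support Z_near consists of the s
    states of Y closest to m = z + f(z,v)*tau, and p(z') is proportional to the Gaussian
    density with mean m and covariance F(z,v) F(z,v)^T * tau, normalized over Z_near."""
    Y = np.atleast_2d(np.asarray(Y, dtype=float))            # states stored as rows
    m = np.asarray(z, dtype=float) + f(z, v) * tau           # predicted mean after one holding time
    sigma = np.atleast_2d(FFt(z, v)) * tau                   # local covariance
    idx = np.argsort(np.linalg.norm(Y - m, axis=1))[:s]      # Nearest(z + f(z,v)*tau, Y, s)
    Z_near = Y[idx]
    diff = Z_near - m
    quad = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(sigma), diff)
    w = np.exp(-0.5 * (quad - quad.min()))                    # normalization constant cancels
    return Z_near, w / w.sum()
```

Only the ratio of Gaussian densities matters, so the normalization constant of Nm,σ cancels and subtracting the smallest quadratic form keeps the exponentials numerically stable.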
In contrast, computing local Gaussian approximation requires only |Znear| evaluations. Thus, although local Gaussian approximation yields higher time complexity, this approximation is more convenient to compute. 4.1.5 Backward Extension Given T > 0 and two states z, z′ ∈S, the procedure ExtendBackwards(z, z′, T) returns a triple (x, v, τ) such that (i) ˙x(t) = f(x(t), u(t))dt and u(t) = v ∈U for all t ∈[0, τ], (ii) τ ≤T, (iii) x(t) ∈S for all t ∈[0, τ], (iv) x(τ) = z, and (v) x(0) is close to z′. If no such trajectory exists, then the procedure returns failure2. We can solve for the triple (x, v, τ) by sampling several controls v and choose the control resulting in x(0) that is closest to z′. 4.1.6 Sampling and Discovering Controls The procedure ConstructControls(k, z, Y, T) returns a set of k controls in U. We can uniformly sample k controls in U. Alternatively, for each state z′ ∈Nearest(z, Y, k), we solve for a control v ∈U such that (i) ˙x(t) = f(x(t), u(t))dt and u(t) = v ∈U for all t ∈[0, T], (ii) x(t) ∈S for all t ∈[0, T], (iii) x(0) = z and x(T) = z′. 4.2 Algorithm Description The iMDP algorithm is given in Algorithm 1. The algorithm incrementally refines a sequence of (finite-state) MDPs Mn = (Sn, U, Pn, Gn, Hn) and the associated holding time function ∆tn that consistently approximates the sytem in Eq. (1). In particular, given a state z ∈Sn and a holding time ∆tn(z), we can implicitly define the stage cost function Gn(z, v) = ∆tn(z)g(z, v) for all v ∈U and terminal cost function Hn(z) = h(z). We also associate with z ∈Sn a cost value Jn(z), and a control µn(z). We refer to Jn as a cost value function over Sn. In the following discussion, we describe how to construct Sn, Pn, Jn, µn over iterations. We note that, in most cases, we only need to construct and access Pn on demand. In every iteration of the main loop (Lines 4-16), we sample an additional state from the boundary of the state space S. We set Jn, µn, ∆tn for those states at Line 5. Subsequently, we also sample a 2This procedure is used in the algorithm solely for the purpose of inheriting the “rapid exploration” property of the RRT algorithm [14, 17]. 9 Algorithm 1: iMDP() 1 (n, S0, J0, µ0, ∆t0) ←(1, ∅, ∅, ∅, ∅); 2 while n < N do 3 (Sn, Jn, µn, ∆tn) ←(Sn−1, Jn−1, µn−1, ∆tn−1); // Add a new state to the boundary 4 zs ←SampleBoundary(); 5 (Sn, Jn(zs), µn(zs), ∆tn(zs)) ←(Sn ∪{zs}, h(zs), null, 0) ; // Add a new state to the interior 6 zs ←Sample(); 7 znearest ←Nearest(zs, Sn, 1); 8 if (xnew, unew, τ) ←ExtendBackwards(znearest, zs, T0) then 9 znew ←xnew(0); 10 cost = τg(znew, unew) + ατJn(znearest); 11 (Sn, Jn(znew), µn(znew), ∆tn(znew)) ←(Sn ∪{znew}, cost, unew, τ) ; // Perform Ln ≥1 (asynchronous) value iterations 12 for i = 1 →Ln do // Update znew and Kn = Θ |Sn|θ states 0 < θ ≤1, Kn < |Sn|  13 Zupdate ←Nearest(znew, Sn\∂Sn, Kn) ∪{znew}; 14 for z ∈Zupdate do 15 Update(z, Sn, Jn, µn, ∆tn); 16 n ←n + 1; state from the interior of S (Line 6) denoted as zs. We compute the nearest state znearest, which is already in the current MDP, to the sampled state (Line 7). The algorithm computes a trajectory that reaches znearest starting at some state near zs (Line 8) using a control signal unew(0..τ). The new trajectory is denoted by xnew : [0, τ] →S and the starting state of the trajectory, i.e., xnew(0), is denoted by znew. The new state znew is added to the state set, and the cost value Jn(znew), control µn(znew), and holding time ∆tn(znew) are initialized at Line 11. 
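The ExtendBackwards primitive can be realized exactly as suggested above: sample a few constant controls, integrate the noise-free dynamics backwards from z, and keep the candidate whose starting point lands closest to z′. In the sketch below, the duration τ is simply fixed to T and feasibility is checked with a user-supplied membership test; these simplifications and the interface are ours.

```python
import numpy as np

def extend_backwards(z, z_prime, T, f, in_S, sample_control, n_controls=32, dt=1e-2, rng=None):
    """Sketch of ExtendBackwards(z, z', T): sample candidate constant controls v, integrate the
    noise-free dynamics xdot = f(x, v) backwards from x(tau) = z over tau = T, and keep the
    candidate whose starting point x(0) is closest to z' while staying inside S."""
    rng = np.random.default_rng() if rng is None else rng
    best = None
    steps = max(int(round(T / dt)), 1)
    for _ in range(n_controls):
        v = sample_control(rng)
        x = np.array(z, dtype=float)
        traj, feasible = [x.copy()], True
        for _ in range(steps):                  # backward Euler steps: x(t - dt) = x(t) - f(x, v) dt
            x = x - f(x, v) * dt
            if not in_S(x):
                feasible = False
                break
            traj.append(x.copy())
        if not feasible:
            continue
        dist = np.linalg.norm(x - np.asarray(z_prime, dtype=float))
        if best is None or dist < best[0]:
            best = (dist, list(reversed(traj)), v, steps * dt)   # trajectory reordered as x(0..tau)
    return None if best is None else best[1:]                     # (x, v, tau), or None on failure
```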
Update of cost value and control The algorithm updates the cost values and controls of the finer MDP in Lines 13-15. We perform Ln ≥1 value iterations in which we update the new state znew and other Kn = Θ |Sn|θ states in the state set where Kn < |Sn|. When all states in the MDP are updated, i.e. Kn + 1 = |Sn|, Ln value iterations are implemented in a synchronous manner. Otherwise, Ln value iterations are implemented in an asynchronous manner. The set of states to be updated is denoted as Zupdate (Line 13). To update a state z ∈Zupdate that is not on the boundary, in the call to the procedure Update (Line 15), we solve the following Bellman equation:3 Jn(z) = min v∈U{Gn(z, v) + α∆tn(z)EPn [Jn−1(y)|z, v]}, (9) 3Although the argument of Update at Line 15 is Jn, we actually process the previous cost values Jn−1 due to Line 3. We can implement Line 3 by simply sharing memory for (Sn, Jn, µn, ∆tn) and (Sn−1, Jn−1, µn−1, ∆tn−1). 10 Algorithm 2: Update(z ∈Sn, Sn, Jn, µn, ∆tn) 1 τ ←ComputeHoldingTime(z, |Sn|); // Sample or discover Cn = Θ(log(|Sn|)) controls 2 Un ←ConstructControls(Cn, z, Sn, τ); 3 for v ∈Un do 4 (Znear, pn) ←ComputeTranProb(z, v, τ, Sn); 5 J ←τg(z, v) + ατ P y∈Znear pn(y)Jn(y); 6 if J < Jn(z) then 7 (Jn(z), µn(z), ∆tn(z), κn(z)) ←(J, v, τ, |Sn|); Algorithm 3: Policy(z ∈S, n) 1 znearest ←Nearest(z, Sn, 1); 2 return µ(z) = (µn(znearest), ∆tn(znearest)) and set µn(z) = v∗(z), where v∗(z) is the minimizing control of the above optimization problem. There are several ways to solve Eq. (9) over the the continuous control space U efficiently. If Pn(· | z, v) and g(z, v) are affine functions of v, and U is convex, the above optimization has a linear objective function and a convex set of constraints. Such problems are widely studied in the literature [25]. More generally, we can uniformly sample the set of controls, called Un, in the control space U. Hence, we can evaluate the right hand side (RHS) of Eq. (9) for each v ∈Un to find the best v∗in Un with the smallest RHS value and thus to update Jn(z) and µn(z). When limn→∞|Un| = ∞, we can solve Eq. (9) arbitrarily well (see Theorem 8). Thus, it is sufficient to construct the set Un with Θ(log(|Sn|)) controls using the procedure ConstructControls as described in Algorithm 2 (Line 2). The set Znear and the transition probability Pn(· | z, v) constructed consistently over the set Znear are returned from the procedure ComputeTranProb for each v ∈Un (Line 4). Depending on a particular method to build Pn (i.e. solving a system of linear equations or evaluating a local Gaussian distribution), the cardinality of Znear is set to a constant or increases as Θ(log(|Sn|)). Subsequently, the procedure chooses the best control among the constructed controls to update Jn(z) and µn(z) (Line 7). We note that in Algorithm 2, before making improvement for the cost value at z by comparing new controls, we can re-evaluate the cost value with the current control µn(z) over the holding time ∆tn(z) by adding the current control µn(z) to Un. The reason is that the current control may be still the best control compared to other controls in Un. Complexity of iMDP The time complexity per iteration of the implementation in Algorithms 1-2 is either O |Sn|θ log |Sn|  or O |Sn|θ(log |Sn|)2 . 
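A minimal sketch of the sampled Bellman update in Eq. (9), the core of the Update procedure, together with the asynchronous sweep of Lines 12-15 of Algorithm 1, is given below. It assumes the previous cost values are stored in a dictionary keyed by state tuples, that a locally consistent ComputeTranProb routine is available as a callback (e.g., the Gaussian construction above with its remaining arguments bound), and that `update_state` performs one such Bellman update at a given state index; all of these interface choices are illustrative.

```python
import numpy as np

def bellman_update(z, J, states, g, alpha, tau, controls, compute_tran_prob):
    """One sampled Bellman update (Eq. (9)) at state z: evaluate
    tau*g(z,v) + alpha^tau * sum_y p(y|z,v) J[y] for each candidate control v and keep the best."""
    best_cost, best_v = np.inf, None
    for v in controls:                                     # U_n: sampled/discovered controls
        Z_near, p = compute_tran_prob(z, v, tau, states)   # locally consistent transitions
        expected_next = sum(p_y * J[tuple(y)] for y, p_y in zip(Z_near, p))
        cost = tau * g(z, v) + (alpha ** tau) * expected_next
        if cost < best_cost:
            best_cost, best_v = cost, v
    return best_cost, best_v                               # new J_n(z) and mu_n(z)

def async_value_iteration(z_new, interior_states, K_n, L_n, update_state):
    """Lines 12-15 of Algorithm 1: L_n asynchronous passes over z_new and its K_n nearest
    non-boundary states, where K_n = Theta(|S_n|^theta) < |S_n|."""
    X = np.asarray(interior_states, dtype=float)
    order = np.argsort(np.linalg.norm(X - np.asarray(z_new, dtype=float), axis=1))[:K_n]
    for _ in range(L_n):
        for i in order:            # z_new is included since it has distance zero to itself
            update_state(i)        # performs a bellman_update at state i and stores J, mu
```

Including the current control µn(z) among the candidate controls, as noted above, only requires appending it to the `controls` iterable before the loop.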
In particular, if the procedure ComputeTranProb solves a set of linear equa- tions to construct Pn such that the cardinality of Znear can remain constant, the time complexity per iteration is O |Sn|θ log |Sn|  where log |Sn| accounts for the number of processed controls, and |Sn|θ accounts for the number of updated states in one iteration. Otherwise, if the procedure ComputeTranProb uses a local Gaussian distribution to construct Pn such that the cardinality of Znear increases as Θ(log |Sn|), the time complexity per iteration is O |Sn|θ(log |Sn|)2 . The pro- cessing time from the beginning until the iMDP algorithm stops after n iterations is thus either 11 O |Sn|1+θ log |Sn|  or O |Sn|1+θ(log |Sn|)2 . Since we only need to access locally consistent transi- tion probability on demand, the space complexity of the iMDP algorithm is O(|Sn|). Finally, the size of state space Sn is |Sn| = Θ(n) due to our sampling strategy. 4.3 Feedback Control As we will see in Theorems 7-8, the sequence of cost value functions Jn arbitrarily approximates the original optimal cost-to-go J∗. Therefore, we can perform a Bellman update based on the approxi- mated cost-to-go Jn (using the stochastic continuous-time dynamics) to obtain a policy control for any n. However, we will discuss in Theorem 9 that the sequence of µn also approximates arbitrarily well an optimal control policy. In other words, in the iMDP algorithm, we also incrementally con- struct an optimal control policy. In the following paragraph, we present an algorithm that converts a policy for a discrete system to a policy for the original continuous problem. Given a level of approximation n ∈N, the control policy µn generated by the iMDP algorithm is used for controlling the original system described by Eq. (1) using the procedure given in Al- gorithm 3. This procedure computes the state in Mn that is closest to the current state of the original system and applies the control attached to this closest state over the associated holding time. 5 Analysis In this section, let (Mn = (Sn, U, Pn, Gn, Hn), ∆tn, Jn, µn) denote the MDP, holding times, cost value function, and policy returned by Algorithm 1 at the end n iterations. The proofs of lemmas and theorems in this section can be found in Appendix. For large n, states in Sn are sampled uniformly in the state space S [17]. Moreover, the dispersion of Sn shrinks with the rate O((log |Sn|/|Sn|)1/dx) as described in the next lemma. Lemma 4 Recall that ζn measures of the dispersion of Sn (Eq. 8). We have the following event happens with probability one: ζn = O((log |Sn|/|Sn|)1/dx). The proof is based on the fact that, if we partition Rdx into cells of volume O (log(|Sn|)/|Sn|), then, almost surely, every cell contains at least an element of Sn, as |Sn| approaches infinity. The above lemma leads to the following results. Lemma 5 The MDP sequence {Mn}∞ n=0 and holding times {∆tn}∞ n=0 returned by Algorithm 1 are locally consistent with the system described by Eq. (1) for large n with probability one. Theorem 1 and Lemma 5 together imply that the trajectories of the controlled Markov chains approximate those of the original stochastic dynamical system in Eq. (1) arbitrarily well as n approaches to infinity. Moreover, recall that || · ||Sn is the sup-norm over Sn, the following theorem shows that J∗ n converges uniformly, with probability one, to the original optimal value function J∗. 
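Algorithm 3 admits a direct implementation: find the nearest discrete state, return its control and holding time, and apply that control until the next query. The sketch below assumes the discrete states, controls, and holding times are stored as parallel arrays/lists and that a user-supplied `step` function advances the true (stochastic) system; these are illustrative assumptions.

```python
import numpy as np

def policy(z, S_n, mu_n, dt_n):
    """Algorithm 3 sketch: nearest-neighbor interpolation of the discrete policy; returns the
    control attached to the closest discrete state together with its holding time."""
    S_n = np.asarray(S_n, dtype=float)
    i = int(np.argmin(np.linalg.norm(S_n - np.asarray(z, dtype=float), axis=1)))
    return mu_n[i], dt_n[i]

def run_closed_loop(z0, S_n, mu_n, dt_n, step, in_interior, t_max=20.0):
    """Apply each returned control over its holding time until the state exits S^o.
    `step(x, v, dt)` is assumed to advance the stochastic system by dt under constant control v."""
    x, t = np.asarray(z0, dtype=float), 0.0
    while t < t_max and in_interior(x):
        v, dt = policy(x, S_n, mu_n, dt_n)
        x = step(x, v, dt)
        t += dt
    return x, t
```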
Theorem 6 Given n ∈N, for all z ∈Sn, J∗ n(z) denotes the optimal value function evaluated at state z for the finite-state MDP Mn returned by Algorithm 1. Then, the following event holds with probability one: lim n→∞||J∗ n −J∗||Sn = 0. 12 In other words, J∗ n converges to J∗uniformly. In particular, ||J∗ n −J∗||Sn = O((log |Sn|/|Sn|)ρ/dx). The proof follows immediately from Lemmas 4-5 and Theorems 2-3. The theorem suggests that we can compute J∗ n for each discrete MDP Mn before sampling more states to construct Mn+1. Indeed, in Algorithm 1, when updated states are chosen randomly as subsets of Sn, and Ln is large enough, we compute J∗ n using asynchronous value iterations [26, 27]. Subsequent theorems present stronger results. We will prove the asymptotic optimality of the cost value Jn returned by the iMDP algorithm when n approaches infinity without directly approximating J∗ n for each n. We first consider the case when we can solve the Bellman update (Eq. 9) exactly and 1 ≤Ln, Kn = Θ(|Sn|θ) < |Sn|. Theorem 7 For all z ∈Sn, Jn(z) is the cost value of the state z computed by Algorithm 1 and Algorithm 2 after n iterations with 1 ≤Ln, and Kn = Θ(|Sn|θ) < |Sn|. Let Jn,µn be the cost-to-go function of the returned policy µn on the discrete MDP Mn. If the Bellman update at Eq. 9 is solved exactly, then, the following events hold with probability one: i. limn→∞||Jn −J∗ n||Sn = 0, and limn→∞||Jn −J∗||Sn = 0, ii. limn→∞|Jn,µn(z) −J∗(z)| = 0, ∀z ∈Sn. Theorem 7 enables an incremental computation of the optimal cost J∗without the need to com- pute J∗ n exactly before sampling more samples. Moreover, cost-to-go functions Jn,µn induced by approximating policies µn also converges pointwise to the optimal cost-to-go J∗with probability one. When we solve the Bellman update at Eq. 9 via sampling, the following result holds. Theorem 8 For all z ∈Sn, Jn(z) is the cost value of the state z computed by Algorithm 1 and Algorithm 2 after n iterations with 1 ≤Ln, and Kn = Θ(|Sn|θ) < |Sn|. Let Jn,µn be the cost-to-go function of the returned policy µn on the discrete MDP Mn. If the Bellman update at Eq. 9 is solved via sampling such that limn→∞|Un| = ∞, then i. ||Jn −J∗ n||Sn converges to 0 in probability. Thus, Jn converges uniformly to J∗in probability, ii. limn→∞|Jn,µn(z) −J∗(z)| = 0 for all z ∈Sn with probability one. We emphasize that while the convergence of Jn to J∗is weaker than the convergence in Theorem 7, the convergence of Jn,µn to J∗remains intact. Importantly, Theorem 1 and Theorems 7-8 together assert that starting from any initial state, trajectories and control processes provided by the iMDP algorithm approximate arbitrarily well optimal trajectories and optimal control processes of the original continuous problem. More precisely, with probability one, the induced random probability measures of approximating trajectories and approximating control processes converge weakly to the probability measures of optimal trajectories and optimal control processes of the continuous problem. Finally, the next theorem evaluates the quality of any-time control policies returned by Algo- rithm 3. Theorem 9 Let µn : S →U be the interpolated policy on S of µn : Sn →U as described in Algorithm 3: ∀z ∈S : µn(z) = µn(yn) where yn = argminz′∈Sn||z′ −z||2. 13 Then there exists an optimal control policy µ∗of the original problem4 so that for all z ∈S: lim n→∞µn(z) = µ∗(z) w.p.1, if µ∗is continuous at z. 
4 Otherwise, an optimal relaxed control policy m∗ exists [18], and µn approximates m∗ arbitrarily well.

6 Experiments

We used a computer with a 2.0-GHz Intel Core 2 Duo T6400 processor and 4 GB of RAM to run experiments.

Figure 1: Results of iMDP on a stochastic LQR problem. Figure 1(a) shows the convergence of the approximated cost-to-go to the optimal analytical cost-to-go over iterations. Anytime solutions are compared to the analytical optimal solution after 200 iterations (0.39s) and 600 iterations (2.16s) in Figs. 1(b)-1(c). The mean and 1-σ interval of the error ||Jn − J∗||Sn are shown in Fig. 1(d) using 50 trials. The corresponding mean and standard deviation of the error ||Jn − J∗||Sn are depicted on a log-log plot in Fig. 1(e). In Fig. 1(f), we plot the ratio of ||Jn − J∗||Sn to (log(|Sn|)/|Sn|)^0.5 to show the convergence rate of Jn to J∗. Figure 1(g) shows the ratio of running time per iteration Tn to |Sn|^0.5 log(|Sn|). Ratios in Figs. 1(f)-1(g) are averaged over 50 trials.

Figure 2: A system with stochastic single integrator dynamics in a cluttered environment. With appropriate cost structure assigned to the goal and obstacle regions, the system reaches the goal in the upper right corner and avoids obstacles. The standard deviation of noise in the x and y directions is 0.26. The maximum velocity is one. Panels 2(a)-2(c) and 2(g)-2(i) show anytime policies after 500 up to 20,000 iterations (0.5s to 80s), and panels 2(d)-2(f) and 2(j)-2(l) show the corresponding contours of the cost value functions J500 through J20,000. Anytime control policies and corresponding contours of the approximated cost-to-go indicate that iMDP quickly explores the state space and refines control policies over time.

In the first experiment, we investigated the convergence of the iMDP algorithm on a stochastic LQR problem:

inf E[ ∫_0^τ 0.95^t {3.5x(t)^2 + 200u(t)^2} dt + h(x(τ)) ] such that dx = (3x + 11u) dt + √0.2 dw

on the state space S = [−6, 6], where τ is the first hitting time to the boundary ∂S = {−6, 6}, and h(z) = 414.55 for z ∈ ∂S and 0 otherwise. The optimal cost-to-go from x(0) = z is 10.39z^2 + 40.51, and the optimal control policy is u(t) = −0.5714x(t).
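As a sanity check on the analytic solution quoted above, a plain Monte Carlo simulation of the closed-loop system under u = −0.5714x should roughly recover the cost-to-go 10.39z^2 + 40.51. The sketch below, with arbitrary step size, horizon cap, and sample count chosen by us, is independent of the iMDP implementation.

```python
import numpy as np

def lqr_cost_sample(z0, rng, dt=5e-3, t_max=100.0):
    """One discounted-cost sample for the stochastic LQR example under the analytic policy
    u = -0.5714 x, with dx = (3x + 11u) dt + sqrt(0.2) dw on S = [-6, 6] and h = 414.55 on dS."""
    x, t, cost = float(z0), 0.0, 0.0
    while abs(x) < 6.0 and t < t_max:
        u = -0.5714 * x
        cost += (0.95 ** t) * (3.5 * x * x + 200.0 * u * u) * dt       # discounted running cost
        x += (3.0 * x + 11.0 * u) * dt + np.sqrt(0.2 * dt) * rng.standard_normal()
        t += dt
    return cost + (414.55 if abs(x) >= 6.0 else 0.0)                    # terminal cost on the boundary

rng = np.random.default_rng(0)
z0 = 1.0
estimate = np.mean([lqr_cost_sample(z0, rng) for _ in range(300)])
print(estimate, 10.39 * z0 ** 2 + 40.51)      # Monte Carlo estimate vs. analytic cost-to-go
```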
Since the cost-rate function is bounded on S and Hölder continuous with exponent 1.0, we use ρ = 0.5. In addition, we choose θ = 0.5 and ς = 0.99 in the procedure ComputeHoldingTime. We used the procedure Update as presented in Algorithm 2 with log(n) sampled controls and transition probabilities having constant support size. Figures 1(a)-1(c) show the convergence of the approximated cost-to-go, anytime controls, and trajectories to the optimal analytical counterparts over iterations. We observe in Fig. 1(d) that both the mean and variance of the cost-to-go error decrease quickly to zero. The log-log plot in Fig. 1(e) clearly indicates that both the mean and standard deviation of the error ||Jn − J∗||Sn continue to decrease. This observation is consistent with Theorems 7-8. Moreover, Fig. 1(f) shows the ratio of ||Jn − J∗||Sn to (log(|Sn|)/|Sn|)^0.5, indicating the convergence rate of Jn to J∗, which agrees with Theorem 6. Finally, Fig. 1(g) plots the ratio of running time per iteration Tn to |Sn|^0.5 log(|Sn|), asserting that the time complexity per iteration is O(|Sn|^0.5 log(|Sn|)).

In the second experiment, we controlled a system with stochastic single integrator dynamics to a goal region with free ending time in a cluttered environment. The cost objective function is discounted with α = 0.95. The system pays zero cost for each action it takes and pays a cost of −1 when reaching the goal region Xgoal. The maximum velocity of the system is one. The system stops when it collides with obstacles. We show how the system reaches the goal in the upper right corner and avoids obstacles with different anytime controls. Anytime control policies after up to 2,000 iterations in Figs. 2(a)-2(c), which were obtained within 2.1 seconds, indicate that iMDP quickly explores the state space and refines control policies over time. The corresponding contours of cost value functions shown in Figs. 2(d)-2(f) further illustrate the refinement and convergence of cost value functions to the original optimal cost-to-go over time. We observe that the performance is suitable for real-time control. Furthermore, anytime control policies and cost value functions after up to 20,000 iterations are shown in Figs. 2(g)-2(i) and Figs. 2(j)-2(l), respectively.

Figure 3: Performance against different process noise magnitudes. The system starts from (0, −5) to reach the goal. Panels 3(a)-3(c) show the noise-free case after 1,000 iterations (1.2s) and the stochastic case after 300 iterations (0.4s) and 1,000 iterations (1.1s). In Fig. 3(a), the environment is noise-free. In Figs. 3(b)-3(c), the standard deviation of noise in the x and y directions is 0.37. In the latter, the system first discovers an unsafe route that is prone to collisions and then discovers a safer route after a few seconds. (In Fig. 3(b), we temporarily let the system continue even after collision to observe the entire trajectory.)
We note that the control policies seem to converge faster than the cost value functions over iterations. This phenomenon is due to the fact that the cost value functions Jn are estimates of the optimal cost-to-go J∗: when the error Jn(z) − J∗(z) is roughly constant over all z ∈ Sn, it does not affect the minimizing control, so controls updated by a Bellman update are already close to their optimal values. This behavior favors the use of the iMDP algorithm in real-time applications where only a small number of iterations are executed.

Figure 4: Results of a 6D manipulator example. The system is modeled as a single integrator with states representing angles between segments and the horizontal line. Control magnitude is bounded by 0.3. The standard deviation of noise at each joint is 0.032 rad. In Fig. 4(a), the manipulator is controlled to reach a goal with the final upright position; trajectory snapshots are shown after 3,000 iterations (15.8s). In Fig. 4(b), the mean and standard deviation of the computed cost values for the initial position are plotted using 50 trials.

In the third experiment, we tested the effect of the process noise magnitude on the solution trajectories. In Figs. 3(a)-3(c), the system wants to arrive at a goal area either by passing through a narrow corridor or by detouring around the two blocks. In Fig. 3(a), when the dynamics is noise-free (obtained by setting a small diffusion matrix), the iMDP algorithm quickly determines to follow the narrow corridor. In contrast, when the environment affects the dynamics of the system (Figs. 3(b)-3(c)), the iMDP algorithm decides to detour to obtain a safer route. This experiment demonstrates the benefit of iMDP in handling process noise compared to RRT-like algorithms [14, 17]. We emphasize that although iMDP spends slightly more time on computation per iteration, iMDP provides feedback policies rather than open-loop policies; thus, re-planning is not crucial in iMDP.

In the fourth experiment, we examined the performance of the iMDP algorithm for high-dimensional systems such as a manipulator with six degrees of freedom. The manipulator is modeled as a single integrator where the states represent angles between segments and the horizontal line. The maximum control magnitude for all joints is 0.3. The standard deviation of noise at each joint is 0.032 rad. The manipulator is controlled to reach a goal with the final upright position in minimum time. In Fig. 4(a), we show a resulting trajectory after 3,000 iterations computed in 15.8 seconds. In addition, we show the mean and standard deviation of the computed cost values for the initial position using 50 trials in Fig. 4(b). As shown in the plots, the solution converges quickly after about 1,000 iterations. These results highlight the suitability of the iMDP algorithm for computing feedback policies for complex high-dimensional systems in stochastic environments.

7 Conclusions

We have introduced and analyzed the incremental sampling-based iMDP algorithm for stochastic optimal control. The algorithm natively handles continuous time, continuous state space, and continuous control space. The main idea is to consistently approximate the underlying continuous problems by discrete structures in an incremental manner.
In particular, we incrementally build discrete MDPs by sampling and extending states in the state space. The iMDP algorithm refines the quality of anytime control policies from discrete MDPs in terms of expected costs over iterations and ensures almost sure convergence to an optimal continuous control policy. The iMDP algorithm can be implemented such that its time complexity per iteration grows as O kθ log k  with 0 < θ ≤1 leading to the total processing time O k1+θ log k  , where k is the number of states in MDPs which increases linearly over iterations. Together with linear space complexity, iMDP is a practical incremental algorithm. The enabling technical ideas lie in novel methods to compute Bellman updates. Further extension of the work is broad. In the future, we would like to study the effect of biased- sampling techniques on the performance of iMDP. The algorithm is also highly parallelizable, and efficient parallel versions of the iMDP algorithm are left for future study. Remarkably, Markov chain approximation methods are also tools to handle deterministic control and non-linear filtering problems. Thus, applications of the iMDP algorithm can be extended to classical path planning with deterministic dynamics. We emphasize that the iMDP algorithm would remove the necessity for exact point-to-point steering of RRT-like algorithms in path planning applications. In addition, we plan to investigate incremental sampling-based algorithms for online smoothing and estimation in the presence of sensor noise. The combination of incremental sampling-based algorithms for control and estimation will provide insights into addressing stochastic optimal control problems with imperfect state information, known as Partially Observable Markov Decision Processes (POMDPs). Although POMDPs are fundamentally more challenging than the problem that is studied in this paper, our approach differentiates itself from existing sampling-based POMDP solvers (see, e.g., [28, 29]) with its incremental nature and computationally-efficient search. Hence, the research presented in this paper opens a new alley to handle POMDPs in our future work. ACKNOWLEDGMENTS This research was supported in part by the National Science Foundation, grant CNS-1016213. V. A. Huynh gratefully thanks the Arthur Gelb Foundation for supporting him during this work. References [1] W. H. Fleming and J. L. Stein, “Stochastic optimal control, international finance and debt,” Journal of Banking and Finance, vol. 28, pp. 979–996, 2004. [2] S. P. Sethi and G. L. Thompson, Optimal Control Theory: Applications to Management Science and Economics, 2nd ed. Springer, 2006. [3] E. Todorov, “Stochastic optimal control and estimation methods adapted to the noise charac- teristics of the sensorimotor system,” Neural Computation, vol. 17, pp. 1084–1108, 2005. [4] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics (Intelligent Robotics and Au- tonomous Agents), 2001. [5] V. D. Blondel and J. N. Tsitsiklis, “A survey of computational complexity results in systems and control,” Automatica, vol. 36, no. 9, pp. 1249–1274, 2000. [6] L. Grne, “An adaptive grid scheme for the discrete hamilton-jacobi-bellman equation,” Nu- merische Mathematik, vol. 75, pp. 319–337, 1997. 18 [7] S. Wang, L. S. Jennings, and K. L. Teo, “Numerical solution of hamilton-jacobi-bellman equa- tions by an upwind finite volume method,” J. of Global Optimization, vol. 27, pp. 177–192, November 2003. [8] M. Boulbrachene and B. 
Chentouf, “The finite element approximation of hamilton-jacobi- bellman equations: the noncoercive case,” Applied Mathematics and Computation, vol. 158, no. 2, pp. 585–592, 2004. [9] C. Chow and J. Tsitsiklis, “An optimal one-way multigrid algorithm for discrete-time stochastic control,” IEEE Transactions on Automatic Control, vol. AC-36, pp. 898–914, 1991. [10] R. Munos, A. Moore, and S. Singh, “Variable resolution discretization in optimal control,” in Machine Learning, 2001, pp. 291–323. [11] J. Rust, “Using Randomization to Break the Curse of Dimensionality,,” Econometrica, vol. 56, no. 3, May 1997. [12] ——, “A comparison of policy iteration methods for solving continuous-state, infinite-horizon markovian decision problems using random, quasi-random, and deterministic discretizations,” 1997. [13] L. E. Kavraki, P. Svestka, L. E. K. P. Vestka, J. claude Latombe, and M. H. Overmars, “Probabilistic roadmaps for path planning in high-dimensional configuration spaces,” IEEE Transactions on Robotics and Automation, vol. 12, pp. 566–580, 1996. [14] S. M. Lavalle, “Rapidly-exploring random trees: A new tool for path planning,” Tech. Rep., 1998. [15] J. Kim and J. P. Ostrowski, “Motion planning of aerial robot using rapidly-exploring random trees with dynamic constraints,” in ICRA, 2003, pp. 2200–2205. [16] Y. Kuwata, J. Teo, G. Fiore, S. Karaman, E. Frazzoli, and J. How, “Real-time motion planning with applications to autonomous urban driving,” IEEE Trans. on Control Systems Technolo- gies, vol. 17, no. 5, pp. 1105–1118, 2009. [17] Karaman and Frazzoli, “Sampling-based algorithms for optimal motion planning,” Interna- tional Journal of Robotics Research, vol. 30, no. 7, pp. 846–894, June 2011. [18] H. J. Kushner and P. G. Dupuis, Numerical Methods for Stochastic Control Problems in Con- tinuous Time (Stochastic Modelling and Applied Probability). Springer, Dec. 2000. [19] R. Alterovitz, T. Simon, and K. Goldberg, “The stochastic motion roadmap: A sampling framework for planning with markov motion uncertainty,” in in Robotics: Science and Systems III (Proc. RSS 2007. MIT Press, 2008, pp. 246–253. [20] B. Oksendal, Stochastic differential equations (3rd ed.): an introduction with applications. New York, NY, USA: Springer-Verlag New York, Inc., 1992. [21] I. Karatzas and S. E. Shreve, Brownian Motion and Stochastic Calculus (Graduate Texts in Mathematics), 2nd ed. Springer, Aug. 1991. [22] L. C. Evans, Partial Differential Equations (Graduate Studies in Mathematics, V. 19) GSM/19. American Mathematical Society, Jun. 1998. 19 [23] J. L. Menaldi, “Some estimates for finite difference approximations,” SIAM J. on Control and Optimization, vol. 27, pp. 579–607, 1989. [24] P. Dupuis and M. R. James, “Rates of convergence for approximation schemes in optimal control,” SIAM J. Control Optim., vol. 36, pp. 719–741, March 1998. [25] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, Mar. 2004. [26] D. P. Bertsekas, Dynamic Programming and Optimal Control, Two Volume Set, 2nd ed. Athena Scientific, 2001. [27] D. P. Bertsekas and J. N. Tsitsiklis, Parallel and distributed computation: numerical methods. Upper Saddle River, NJ, USA: Prentice-Hall, Inc., 1989. [28] H. Kurniawati, D. Hsu, and W. Lee, “SARSOP: Efficient point-based POMDP planning by approximating optimally reachable belief spaces,” in Proc. Robotics: Science and Systems, 2008. [29] S. Prentice and N. 
[30] G. R. Grimmett and D. R. Stirzaker, Probability and Random Processes, 3rd ed. Oxford University Press, Aug. 2001.

Appendix

A Notations and Preliminaries

We denote by $\mathbb{N}$ the set of natural numbers and by $\mathbb{R}$ the set of real numbers. A sequence on a set $X$ is a mapping from $\mathbb{N}$ to $X$, denoted as $\{x_n\}_{n=0}^{\infty}$, where $x_n \in X$ for each $n \in \mathbb{N}$. Given a metric space $X$ endowed with a metric $d$, a sequence $\{x_n\}_{n=0}^{\infty} \subset X$ is said to converge if there is a point $x \in X$, denoted as $\lim_{n\to\infty} x_n$, with the following property: for every $\epsilon > 0$, there is an integer $N$ such that $n \ge N$ implies $d(x_n, x) < \epsilon$. On the one hand, a sequence of functions $\{f_n\}_{n=0}^{\infty}$, in which each function $f_n$ is a mapping from $X$ to $\mathbb{R}$, converges pointwise to a function $f$ on $X$ if, for every $x \in X$, the sequence of numbers $\{f_n(x)\}_{n=0}^{\infty}$ converges to $f(x)$. On the other hand, a sequence of functions $\{f_n\}_{n=0}^{\infty}$ converges uniformly to $f$ on $X$ if the sequence $\{M_n \mid M_n = \sup_{x\in X}|f_n(x) - f(x)|\}_{n=0}^{\infty}$ converges to $0$.

Let us consider a probability space $(\Omega, \mathcal{F}, P)$, where $\Omega$ is a sample space, $\mathcal{F}$ is a $\sigma$-algebra, and $P$ is a probability measure. A set $A \in \mathcal{F}$ is called an event. The complement of an event $A$ is denoted as $A^c$. Given a sequence of events $\{A_n\}_{n=0}^{\infty}$, we define $\limsup_{n\to\infty} A_n$ as $\cap_{n=0}^{\infty}\cup_{k=n}^{\infty} A_k$, i.e., the event that $A_n$ occurs infinitely often. In addition, the event $\liminf_{n\to\infty} A_n$ is defined as $\cup_{n=0}^{\infty}\cap_{k=n}^{\infty} A_k$.

A random variable is a measurable function mapping from $\Omega$ to $\mathbb{R}$. The expected value of a random variable $Y$ is defined as $E[Y] = \int_{\Omega} Y\,dP$. A sequence of random variables $\{Y_n\}_{n=0}^{\infty}$ converges surely to a random variable $Y$ if $\lim_{n\to\infty} Y_n(\omega) = Y(\omega)$ for all $\omega\in\Omega$. A sequence of random variables $\{Y_n\}_{n=0}^{\infty}$ converges almost surely, or with probability one (w.p.1), to a random variable $Y$ if $P(\{\omega\in\Omega \mid \lim_{n\to\infty} Y_n(\omega) = Y(\omega)\}) = 1$. Almost sure convergence of $\{Y_n\}_{n=0}^{\infty}$ to $Y$ is denoted as $Y_n \xrightarrow{a.s.} Y$. We say that a sequence of random variables $\{Y_n\}_{n=0}^{\infty}$ converges in distribution to a random variable $Y$ if $\lim_{n\to\infty} F_n(x) = F(x)$ for every $x\in\mathbb{R}$ at which $F$ is continuous, where $\{F_n\}_{n=0}^{\infty}$ and $F$ are the CDFs associated with $\{Y_n\}_{n=0}^{\infty}$ and $Y$, respectively. We denote this convergence as $Y_n \xrightarrow{d} Y$. Convergence in distribution is also called weak convergence. If $Y_n \xrightarrow{d} Y$, then $\lim_{n\to\infty} E[f(Y_n)] = E[f(Y)]$ for all bounded continuous functions $f$. As a corollary, when $\{Y_n\}_{n=0}^{\infty}$ converges in distribution to $0$ and $Y_n$ is bounded for all $n$, we have $\lim_{n\to\infty} E[Y_n] = 0$ and $\lim_{n\to\infty} E[Y_n^2] = 0$, which together imply $\lim_{n\to\infty}\mathrm{Var}(Y_n) = 0$. We say that a sequence of random variables $\{Y_n\}_{n=0}^{\infty}$ converges in probability to a random variable $Y$, denoted as $Y_n \xrightarrow{p} Y$, if for every $\epsilon > 0$ we have $\lim_{n\to\infty} P(|Y_n - Y| \ge \epsilon) = 0$. For every continuous function $f(\cdot)$, if $Y_n \xrightarrow{p} Y$, then we also have $f(Y_n) \xrightarrow{p} f(Y)$. If $Y_n \xrightarrow{p} Y$ and $Z_n \xrightarrow{p} Z$, then $(Y_n, Z_n) \xrightarrow{p} (Y, Z)$. If $|Z_n - Y_n| \xrightarrow{p} 0$ and $Y_n \xrightarrow{d} Y$, then $Z_n \xrightarrow{d} Y$. Finally, we say that a sequence of random variables $\{Y_n\}_{n=0}^{\infty}$ converges in $r$th mean to a random variable $Y$, denoted as $Y_n \xrightarrow{r} Y$, if $E[|Y_n|^r] < \infty$ for all $n$ and $\lim_{n\to\infty} E[|Y_n - Y|^r] = 0$. We have the following implications: (i) almost sure convergence or $r$th mean convergence ($r \ge 1$) implies convergence in probability, and (ii) convergence in probability implies convergence in distribution. The above results still hold for random vectors in higher-dimensional spaces.
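To illustrate the gap between these modes of convergence, consider the following standard construction (this example is not part of the original paper). Let $\{Y_n\}_{n=1}^{\infty}$ be independent random variables with
$$P(Y_n = 1) = 1/n, \qquad P(Y_n = 0) = 1 - 1/n.$$
For every $\epsilon \in (0,1)$, $P(|Y_n - 0| \ge \epsilon) = 1/n \to 0$, so $Y_n \xrightarrow{p} 0$. However, since $\sum_n 1/n = \infty$ and the events $\{Y_n = 1\}$ are independent, the second Borel-Cantelli lemma gives $P(Y_n = 1 \text{ infinitely often}) = 1$, so $Y_n$ does not converge to $0$ almost surely. The complementary, summable form of the Borel-Cantelli lemma is the key tool used in the proof of Lemma 4 below.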
Let $f(n)$ and $g(n)$ be two functions with domain and range $\mathbb{N}$ or $\mathbb{R}$. The function $f(n)$ is called $O(g(n))$ if there exist two constants $M$ and $n_0$ such that $f(n) \le M g(n)$ for all $n \ge n_0$. The function $f(n)$ is called $\Omega(g(n))$ if $g(n)$ is $O(f(n))$. Finally, the function $f(n)$ is called $\Theta(g(n))$ if $f(n)$ is both $O(g(n))$ and $\Omega(g(n))$.

B Proof of Lemma 4

For each $n \in \mathbb{N}$, divide the state space $S$ into grid cells with side length $\frac{1}{2}\gamma_r(\log|S_n|/|S_n|)^{1/d_x}$ as follows. Let $\mathbb{Z}$ denote the set of integers. Define the grid cell $i \in \mathbb{Z}^{d_x}$ as
$$W_n(i) := i\,\frac{\gamma_r}{2}\left(\frac{\log|S_n|}{|S_n|}\right)^{1/d_x} + \left[-\frac{1}{4}\gamma_r\left(\frac{\log|S_n|}{|S_n|}\right)^{1/d_x},\; \frac{1}{4}\gamma_r\left(\frac{\log|S_n|}{|S_n|}\right)^{1/d_x}\right]^{d_x},$$
where $[-a, a]^{d_x}$ denotes the $d_x$-dimensional cube with side length $2a$ centered at the origin. Hence, the expression above translates the $d_x$-dimensional cube with side length $\frac{1}{2}\gamma_r(\log|S_n|/|S_n|)^{1/d_x}$ to the point with coordinates $i\,\frac{\gamma_r}{2}(\log|S_n|/|S_n|)^{1/d_x}$. Let $Q_n$ denote the set of indices of all cells that lie completely inside the state space $S$, i.e., $Q_n = \{i \in \mathbb{Z}^{d_x} : W_n(i) \subseteq S\}$. Clearly, $Q_n$ is finite since $S$ is bounded. Let $\partial Q_n$ denote the set of all grid cells that intersect the boundary of $S$, i.e., $\partial Q_n = \{i \in \mathbb{Z}^{d_x} : W_n(i) \cap \partial S \ne \emptyset\}$. We claim that, for all large $n$, all grid cells in $Q_n$ contain one vertex of $S_n$, and all grid cells in $\partial Q_n$ contain one vertex from $\partial S_n$.

First, let us show that each cell in $Q_n$ contains at least one vertex. Given an event $A$, let $A^c$ denote its complement. Let $A_{n,k}$ denote the event that the cell $W_n(k)$, where $k \in Q_n$, contains a vertex from $S_n$, and let $A_n$ denote the event that all grid cells in $Q_n$ contain a vertex in $S_n$. Then, for all $k \in Q_n$,
$$P(A_{n,k}^c) = \left(1 - \frac{(\gamma_r/2)^{d_x}}{m(S)}\,\frac{\log|S_n|}{|S_n|}\right)^{|S_n|} \le \exp\left(-\frac{(\gamma_r/2)^{d_x}}{m(S)}\log|S_n|\right) = |S_n|^{-(\gamma_r/2)^{d_x}/m(S)},$$
where $m(S)$ denotes the Lebesgue measure of $S$. Then,
$$P(A_n^c) = P\Big(\big(\cap_{k\in Q_n} A_{n,k}\big)^c\Big) = P\Big(\cup_{k\in Q_n} A_{n,k}^c\Big) \le \sum_{k\in Q_n} P(A_{n,k}^c) = |Q_n|\,|S_n|^{-(\gamma_r/2)^{d_x}/m(S)},$$
where the inequality follows from the union bound and $|Q_n|$ denotes the cardinality of the set $Q_n$. By counting the maximum number of cubes that can fit into $S$, we can bound $|Q_n|$:
$$|Q_n| \le \frac{m(S)}{(\gamma_r/2)^{d_x}\,\frac{\log|S_n|}{|S_n|}} = \frac{m(S)}{(\gamma_r/2)^{d_x}}\,\frac{|S_n|}{\log|S_n|}.$$
Note that, by construction, $|S_n| = \Theta(n)$. Thus,
$$P(A_n^c) \le \frac{m(S)}{(\gamma_r/2)^{d_x}}\,\frac{|S_n|}{\log|S_n|}\,|S_n|^{-(\gamma_r/2)^{d_x}/m(S)} = \frac{m(S)}{(\gamma_r/2)^{d_x}}\,\frac{1}{\log|S_n|}\,|S_n|^{1-(\gamma_r/2)^{d_x}/m(S)} \le \frac{m(S)}{(\gamma_r/2)^{d_x}}\,|S_n|^{1-(\gamma_r/2)^{d_x}/m(S)},$$
which is summable for all $\gamma_r > 2(2\,m(S))^{1/d_x}$. Hence, by the Borel-Cantelli lemma, the probability that $A_n^c$ occurs infinitely often is zero, which implies that the probability that $A_n$ occurs for all large $n$ is one, i.e., $P(\liminf_{n\to\infty} A_n) = 1$. Similarly, each grid cell in $\partial Q_n$ can be shown to contain at least one vertex from $\partial S_n$ for all large $n$, with probability one. This implies that each grid cell in $Q_n$ and $\partial Q_n$ contains one vertex of $S_n$ and $\partial S_n$, respectively, for all large $n$, with probability one. Hence, the following event happens with probability one:
$$\zeta_n = \sup_{z\in S}\min_{z'\in S_n}\|z' - z\|_2 = O\big((\log|S_n|/|S_n|)^{1/d_x}\big). \qquad \Box$$

C Proof of Lemma 5

We show that each state added to the approximating MDPs is updated infinitely often; that is, for any $z \in S_n$, the set of iterations in which the procedure Update is applied to $z$ is unbounded. Indeed, let us denote $\zeta_n(z) = \min_{z'\in S_n}\|z' - z\|_2$. By Lemma 4, $\lim_{n\to\infty}\zeta_n(z) = 0$ almost surely. Therefore, with probability one, there are infinitely many $n$ such that $\zeta_n(z) < \zeta_{n-1}(z)$. In other words, with probability one, we can find infinitely many $z_{new}$ at Line 13 of Algorithm 1 such that $z$ is updated. For those $n$, the holding time at $z$ is recomputed as
$$\Delta t_n(z) = \gamma_t\left(\frac{\log|S_n|}{|S_n|}\right)^{\theta\varsigma\rho/d_x}$$
at Line 1 of Algorithm 2.
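As an aside (not part of the original proof), the holding-time rule just quoted is simple to evaluate. The following minimal Python sketch, with parameter names chosen here for illustration (gamma_t, theta, varsigma, rho, d_x), computes the formula and shows that $\Delta t_n$ shrinks to zero as the number of sampled states grows, which is the property used in the remainder of the proof.

import math

def compute_holding_time(num_states, gamma_t=1.0, theta=0.5, varsigma=0.5, rho=1.0, d_x=2):
    """Holding time Delta t_n = gamma_t * (log|S_n| / |S_n|)^(theta*varsigma*rho/d_x).

    The parameter names are illustrative; the exponent structure follows the
    ComputeHoldingTime rule quoted above.
    """
    exponent = theta * varsigma * rho / d_x
    return gamma_t * (math.log(num_states) / num_states) ** exponent

# The holding time shrinks to zero as the MDP is refined:
for n in [10, 100, 1000, 10000]:
    print(n, compute_holding_time(n))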
Thus, the following event happens with probability one: $\lim_{n\to\infty}\Delta t_n(z) = 0$, which satisfies the first condition of local consistency in Eq. 3. The other conditions of local consistency in Eqs. 4-6 are satisfied immediately by the way the transition probabilities are computed (see the description of the ComputeTranProb procedure given in Section 4.1). Hence, the MDP sequence $\{\mathcal{M}_n\}_{n=0}^{\infty}$ and holding times $\{\Delta t_n\}_{n=0}^{\infty}$ are locally consistent for large $n$ with probability one. $\Box$

D Proof of Theorem 7

To highlight the idea of the entire proof, we first prove convergence under synchronous value iterations before presenting convergence under asynchronous value iterations. As we will see, the shrinking rate of the holding times plays a crucial role in the convergence proof. The outline of the proof is as follows.

S1: Convergence under synchronous value iterations. In Algorithm 1, we take $L_n \ge 1$ and $K_n = |S_n| - 1$. In other words, in each iteration, we perform synchronous value iterations. Moreover, we assume that we are able to solve the Bellman equation (Eq. 9) exactly. We show that $J_n$ converges uniformly to $J^*$ almost surely in this setting.

S2: Convergence under asynchronous value iterations. When $K_n = \Theta(|S_n|^\theta) < |S_n|$, we only update a subset of $S_n$ in each of $L_n$ passes. We show that $J_n$ still converges uniformly to $J^*$ almost surely in this new setting.

In the following discussion and the next sections, we need to compare functions on different domains $S_n$. To ease the discussion and simplify notation, we adopt the following interpolation convention. Given $X \subset Y$ and $J : X \to \mathbb{R}$, we interpolate $J$ to $\overline{J}$ on the entire domain $Y$ via the nearest-neighbor value: $\forall y \in Y:\ \overline{J}(y) = J(z)$ where $z = \mathrm{argmin}_{z'\in X}\|z' - y\|$. To compare $J : X \to \mathbb{R}$ and $J' : Y \to \mathbb{R}$ where $X, Y \subset S$, we define the sup-norm $\|J - J'\|_\infty = \|\overline{J} - \overline{J'}\|_\infty$, where $\overline{J}$ and $\overline{J'}$ are the interpolations of $J$ and $J'$ from the domains $X$ and $Y$ to the entire domain $S$, respectively. In particular, given $J_n : S_n \to \mathbb{R}$ and $J : S \to \mathbb{R}$, we have $\|J_n - J\|_{S_n} \le \|J_n - J\|_\infty$. Thus, if $\|J_n - J\|_\infty$ approaches $0$ as $n$ approaches $\infty$, so does $\|J_n - J\|_{S_n}$. Hence, we will work with the (new) sup-norm $\|\cdot\|_\infty$ instead of $\|\cdot\|_{S_n}$ in the proofs of Theorems 7-8. The triangle inequality also holds for any functions $J, J', J''$ defined on subsets of $S$ with respect to the above sup-norm: $\|J - J'\|_\infty \le \|J - J''\|_\infty + \|J'' - J'\|_\infty$.

Let $B(X)$ denote the set of all real-valued bounded functions over a domain $X$. For $S_n \subset S_{n'}$ with $n < n'$, a function $J$ in $B(S_n)$ also belongs to $B(S_{n'})$, meaning that we can interpolate $J$ on $S_n$ to a function $J'$ on $S_{n'}$. In particular, we say that $J$ in $B(S_n)$ also belongs to $B(S)$. Lastly, due to random sampling, $S_n$ is a random set, and therefore the functions $J_n$ and $J^*_n$ defined on $S_n$ are random variables. In the following discussion, inequalities hold surely without further explanation when it is clear from the context, and inequalities hold almost surely if they are followed by "w.p.1".

S1: Convergence under synchronous value iterations

In this step, we first set $L_n \ge 1$ and $K_n = |S_n| - 1$ in Algorithm 1. Thus, for all $z \in S_n$, the holding time $\Delta t_n(z)$ equals $\gamma_t\big(\log|S_n|/|S_n|\big)^{\theta\varsigma\rho/d_x}$ and is denoted as $\Delta t_n$. We consider the MDP $\mathcal{M}_n = (S_n, U, P_n, G_n, H_n)$ at the $n$th iteration and define the following operator $T_n : B(S_n) \to B(S_n)$ that transforms every $J \in B(S_n)$ by a Bellman update:
$$T_n J(z) = \min_{v\in U}\big\{G_n(z, v) + \alpha^{\Delta t_n} E_{P_n}[J(y) \mid z, v]\big\}, \quad \forall z \in S_n, \qquad (10)$$
assuming that we can solve the minimization on the right-hand side of Eq. 10 exactly. For each $k \ge 2$, the operators $T_n^k$ are defined recursively as $T_n^k = T_n T_n^{k-1}$ and $T_n^1 = T_n$.
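Since the operator $T_n$ in Eq. 10 is the computational core of the algorithm, a concrete rendering may be helpful. The following Python sketch is not the authors' implementation; it assumes a finite sampled control set, a holding time that is uniform across states, and array-based data structures whose names are chosen here for illustration. The optional subset argument anticipates the asynchronous updates analyzed in Step S2, where only a subset $D_n \subset S_n$ is updated.

import numpy as np

def bellman_update(J, trans_prob, stage_cost, alpha, dt, subset=None):
    """One Bellman update J <- T_n J on a finite sampled MDP (cf. Eq. 10).

    J          : array of shape (num_states,), current cost values.
    trans_prob : array of shape (num_states, num_controls, num_states);
                 trans_prob[z, v, :] approximates P_n(. | z, v).
    stage_cost : array of shape (num_states, num_controls), G_n(z, v).
    alpha      : discount factor in (0, 1).
    dt         : holding time Delta t_n (taken uniform over states here).
    subset     : optional iterable of state indices; if given, only these
                 states are updated (an asynchronous pass), otherwise all
                 states are updated (a synchronous pass).
    """
    discount = alpha ** dt
    expected = trans_prob @ J                # E[J(y) | z, v], shape (num_states, num_controls)
    q_values = stage_cost + discount * expected
    J_new = J.copy()
    indices = range(len(J)) if subset is None else subset
    for z in indices:
        J_new[z] = q_values[z].min()         # minimize over the control samples
    return J_new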
When we apply $T_n$ to $J \in B(S_k)$ with $k < n$, $J$ is interpolated to $S_n$ before applying $T_n$. Thus, in Algorithms 1-2, we implement the update $J_n = T_n^{L_n} J_{n-1}$. From [26], we have the following results: $J^*_n = T_n J^*_n$, and $T_n$ is a contraction mapping; for any $J$ and $J'$ in $B(S_n)$, the following inequality holds surely:
$$\|T_n J - T_n J'\|_\infty \le \alpha^{\Delta t_n}\|J - J'\|_\infty.$$
Combining the above results:
$$\|J^*_n - J_n\|_\infty = \|T_n^{L_n} J^*_n - T_n^{L_n} J_{n-1}\|_\infty \le \alpha^{L_n\Delta t_n}\|J^*_n - J_{n-1}\|_\infty \le \alpha^{\Delta t_n}\big(\|J^*_n - J^*_{n-1}\|_\infty + \|J^*_{n-1} - J_{n-1}\|_\infty\big),$$
where the second inequality follows from the triangle inequality, $L_n \ge 1$, and $\alpha \in (0,1)$. Thus, by iterating over $n$, for any $N \ge 1$ and $n > N$, we have
$$\|J^*_n - J_n\|_\infty \le A_n + \alpha^{\Delta t_n + \Delta t_{n-1} + \dots + \Delta t_{N+1}}\|J^*_N - J_N\|_\infty, \qquad (11)$$
where the $A_n$ are defined recursively:
$$A_n = \alpha^{\Delta t_n}\big(\|J^*_n - J^*_{n-1}\|_\infty + A_{n-1}\big), \quad \forall n > N+1, \qquad (12)$$
$$A_{N+1} = \alpha^{\Delta t_{N+1}}\|J^*_{N+1} - J^*_N\|_\infty. \qquad (13)$$
Note that for any $N \ge 1$:
$$\lim_{n\to\infty} \Delta t_n + \Delta t_{n-1} + \dots + \Delta t_{N+1} = \infty,$$
due to the choice of holding times $\Delta t_n$ in the procedure ComputeHoldingTime. Therefore,
$$\lim_{n\to\infty} \alpha^{\Delta t_n + \dots + \Delta t_{N+1}}\|J^*_N - J_N\|_\infty = 0.$$
By Theorem 6, the following event happens with probability one (w.p.1): $\lim_{n\to\infty}\|J^*_n - J^*\|_\infty = 0$, and hence $\lim_{n\to\infty}\|J^*_n - J^*_{n-1}\|_\infty = 0$ w.p.1. Thus, for any fixed $\epsilon > 0$, we can choose $N$ large enough such that
$$\|J^*_n - J^*_{n-1}\|_\infty^{1-\varsigma} < \epsilon \quad \text{w.p.1 for all } n > N, \text{ and} \qquad (14)$$
$$\alpha^{\Delta t_n + \dots + \Delta t_{N+1}}\|J^*_N - J_N\|_\infty < \epsilon \quad \text{surely}, \qquad (15)$$
where $\varsigma \in (0,1)$ is the constant defined in the procedure ComputeHoldingTime. Now, for all $n > N$, we rearrange Eqs. 12-13 to obtain $A_n \le \epsilon B_n$ w.p.1, where
$$B_n = \alpha^{\Delta t_n}\big(\|J^*_n - J^*_{n-1}\|_\infty^{\varsigma} + B_{n-1}\big), \quad \forall n > N+1, \qquad B_{N+1} = \alpha^{\Delta t_{N+1}}\|J^*_{N+1} - J^*_N\|_\infty^{\varsigma}.$$
We can see that for $n > N+1$:
$$B_n = \alpha^{\Delta t_n}\big(\|J^*_n - J^*_{n-1}\|_\infty^{\varsigma} + B_{n-1}\big) < \epsilon^{\varsigma/(1-\varsigma)} + B_{n-1} \quad \text{w.p.1}, \qquad (16)$$
$$B_{N+1} = \alpha^{\Delta t_{N+1}}\|J^*_{N+1} - J^*_N\|_\infty^{\varsigma} < \epsilon^{\varsigma/(1-\varsigma)} \quad \text{w.p.1}. \qquad (17)$$
We now prove that, almost surely, $B_n$ is bounded for all $n > N$. Indeed, we derive conditions under which $B_{n-1} < B_n$ or $B_{n-1} \ge B_n$ as follows:
$$B_{n-1} < B_n \iff B_{n-1} < \alpha^{\Delta t_n}\big(\|J^*_n - J^*_{n-1}\|_\infty^{\varsigma} + B_{n-1}\big) \iff B_{n-1} < \frac{\alpha^{\Delta t_n}\|J^*_n - J^*_{n-1}\|_\infty^{\varsigma}}{1 - \alpha^{\Delta t_n}} \implies B_{n-1} < K\,\frac{\alpha^{\gamma_t\left(\frac{\log|S_n|}{|S_n|}\right)^{\theta\varsigma\rho/d_x}}\left(\frac{\log|S_n|}{|S_n|}\right)^{\varsigma\rho/d_x}}{1 - \alpha^{\gamma_t\left(\frac{\log|S_n|}{|S_n|}\right)^{\theta\varsigma\rho/d_x}}} \quad \text{w.p.1}.$$
The last inequality is due to Theorem 6 and $|S_n| = \Theta(n)$, $|S_{n-1}| = \Theta(n-1)$:
$$\|J^*_n - J^*_{n-1}\|_\infty = O\big((\log|S_{n-1}|/|S_{n-1}|)^{\rho/d_x}\big) < K\left(\frac{\log|S_n|}{|S_n|}\right)^{\rho/d_x} \quad \text{w.p.1},$$
for large $n$, where $K$ is some finite constant. Let $\beta = \alpha^{\gamma_t} \in (0,1)$. For large $n$, $\frac{\log|S_n|}{|S_n|}$ lies in $(0,1)$ and $\theta \in (0,1]$. Let us define
$$x_n = \left(\frac{\log|S_n|}{|S_n|}\right)^{\theta\varsigma\rho/d_x}, \quad \text{and} \quad y_n = \left(\frac{\log|S_n|}{|S_n|}\right)^{\varsigma\rho/d_x}.$$
Then $x_n \ge y_n > 0$. The above condition simplifies to
$$B_{n-1} < K\,\frac{\beta^{x_n} y_n}{1 - \beta^{x_n}} \le K\,\frac{\beta^{x_n} x_n}{1 - \beta^{x_n}} \quad \text{w.p.1}.$$
Consider the function $r : [0,\infty) \to \mathbb{R}$ with $r(x) = \frac{\beta^x x}{1-\beta^x}$; one can verify that $r(x)$ is non-increasing on $(0,\infty)$ and bounded above by $\lim_{x\to 0^+} r(x) = -1/\log(\beta)$. Therefore,
$$B_{n-1} < B_n \implies B_{n-1} < -\frac{K}{\log(\beta)} = -\frac{K}{\gamma_t\log(\alpha)} \quad \text{w.p.1}. \qquad (18)$$
Or, conversely,
$$B_{n-1} \ge -\frac{K}{\gamma_t\log(\alpha)} \;\text{w.p.1} \implies B_{n-1} \ge B_n \;\text{w.p.1}. \qquad (19)$$
The above discussion characterizes the random sequence $B_n$. In particular, Fig. 5 shows a possible realization of the random sequence $B_n$ for $n > N$. As shown visually in this plot, $B_{N+1}$ is less than $\epsilon^{\varsigma/(1-\varsigma)}$ w.p.1 and thus is less than $\epsilon^{\varsigma/(1-\varsigma)} - \frac{K}{\gamma_t\log(\alpha)}$ w.p.1. For $n > N+1$, assume that we have already shown that $B_{n-1}$ is bounded from above by $\epsilon^{\varsigma/(1-\varsigma)} - \frac{K}{\gamma_t\log(\alpha)}$ w.p.1. When $B_{n-1} \ge -\frac{K}{\gamma_t\log(\alpha)}$ w.p.1, the sequence is non-increasing w.p.1. Conversely, when the sequence is increasing, i.e., $B_{n-1} < B_n$, we conclude that $B_{n-1} < -\frac{K}{\gamma_t\log(\alpha)}$ w.p.1 due to Eq. 18, and the increment is less than $\epsilon^{\varsigma/(1-\varsigma)}$ due to Eq. 16. In both cases, $B_n$ is also bounded by $\epsilon^{\varsigma/(1-\varsigma)} - \frac{K}{\gamma_t\log(\alpha)}$ w.p.1.
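For completeness (this short verification is not spelled out in the original argument), the two properties claimed for $r$ can be checked directly. Writing $c = -\log(\beta) > 0$, we have
$$r(x) = \frac{\beta^x x}{1-\beta^x} = \frac{x}{e^{cx}-1}.$$
The derivative of $r$ has the sign of $e^{cx} - 1 - cx\,e^{cx} = e^{cx}(1-cx) - 1$, which equals $0$ at $x = 0$ and has derivative $-c^2 x\,e^{cx} < 0$ for $x > 0$; hence $r$ is non-increasing on $(0,\infty)$. Moreover, since $e^{cx} - 1 \sim cx$ as $x \to 0^+$,
$$\lim_{x\to 0^+} r(x) = \frac{1}{c} = -\frac{1}{\log(\beta)} = -\frac{1}{\gamma_t\log(\alpha)},$$
which, multiplied by the constant $K$, is exactly the upper bound used in Eq. 18.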
Hence, combining Eqs. 16-19, we infer that $B_n$ is bounded w.p.1 for all $n > N$:
$$B_n < \epsilon^{\varsigma/(1-\varsigma)} - \frac{K}{\gamma_t\log(\alpha)} \quad \text{w.p.1}.$$

Figure 5: A realization of the random sequence $B_n$. We have $B_{N+1}$ less than $\epsilon^{\varsigma/(1-\varsigma)}$ w.p.1. For $n$ larger than $N+1$, when $B_{n-1} \ge -\frac{K}{\gamma_t\log(\alpha)}$ w.p.1, the sequence is non-increasing w.p.1, i.e., $B_{n-1} \ge B_n$ w.p.1. Conversely, when the sequence is increasing, i.e., $B_{n-1} < B_n$, we have $B_{n-1} < -\frac{K}{\gamma_t\log(\alpha)}$ w.p.1, and the increment is less than $\epsilon^{\varsigma/(1-\varsigma)}$. Hence, the random sequence $B_n$ is bounded by $\epsilon^{\varsigma/(1-\varsigma)} - \frac{K}{\gamma_t\log(\alpha)}$ w.p.1.

Thus, for all $n > N$:
$$A_n \le \epsilon B_n < \epsilon\left(\epsilon^{\varsigma/(1-\varsigma)} - \frac{K}{\gamma_t\log(\alpha)}\right) \quad \text{w.p.1}. \qquad (20)$$
Combining Eqs. 11, 15, and 20, we conclude that for any $\epsilon > 0$ there exists $N \ge 1$ such that for all $n > N$ we have
$$\|J^*_n - J_n\|_\infty < \epsilon\left(\epsilon^{\varsigma/(1-\varsigma)} - \frac{K}{\gamma_t\log(\alpha)} + 1\right) \quad \text{w.p.1}.$$
Therefore, $\lim_{n\to\infty}\|J^*_n - J_n\|_\infty = 0$ w.p.1. Combining this with $\lim_{n\to\infty}\|J^*_n - J^*\|_\infty = 0$ w.p.1, we obtain $\lim_{n\to\infty}\|J_n - J^*\|_\infty = 0$ w.p.1.

In the above analysis, the shrinking rate $\big(\log|S_n|/|S_n|\big)^{\theta\varsigma\rho/d_x}$ of the holding times plays an important role in constructing an upper bound for the sequence $B_n$. This rate must be slower than the convergence rate $\big(\log|S_n|/|S_n|\big)^{\rho/d_x}$ of $J^*_n$ to $J^*$ so that the function $r(x)$ is bounded, enabling the convergence of the cost value functions $J_n$ to the optimal cost-to-go $J^*$. Remarkably, we have accomplished this convergence by carefully selecting the range $(0,1)$ of the parameter $\varsigma$. The role of the parameter $\theta$ in this convergence will become clear in Step S2. Lastly, we note that if we were able to obtain a faster convergence rate of $J^*_n$ to $J^*$, we could use a faster shrinking rate for the holding times.

S2: Convergence under asynchronous value iterations

When $1 \le L_n$ and $K_n = \Theta(|S_n|^\theta) < |S_n|$, we first claim the following result.

Lemma 10 Consider any increasing sequence $\{n_k\}_{k=0}^{\infty} \subset \mathbb{N}$ such that $n_0 = 0$ and $k \le |S_{n_k}| \le k^{1/\theta}$. For $J \in B(S)$, we define
$$A\big(\{n_j\}_{j=0}^k\big) = \alpha^{\Delta t_{n_k}+\Delta t_{n_{k-1}}+\dots+\Delta t_{n_1}}\|J^*_{n_1} - J\|_\infty + \alpha^{\Delta t_{n_k}+\Delta t_{n_{k-1}}+\dots+\Delta t_{n_2}}\|J^*_{n_2} - J^*_{n_1}\|_\infty + \dots + \alpha^{\Delta t_{n_k}}\|J^*_{n_k} - J^*_{n_{k-1}}\|_\infty.$$
The following event happens with probability one: $\lim_{k\to\infty} A\big(\{n_j\}_{j=0}^k\big) = 0$.

Proof We rewrite $A\big(\{n_j\}_{j=0}^k\big) = A_{n_k}$, where the $A_{n_k}$ are defined recursively:
$$A_{n_k} = \alpha^{\Delta t_{n_k}}\big(\|J^*_{n_k} - J^*_{n_{k-1}}\|_\infty + A_{n_{k-1}}\big), \quad \forall k > K, \qquad (21)$$
$$A_{n_K} = A\big(\{n_j\}_{j=0}^K\big), \quad \forall K \ge 1. \qquad (22)$$
We note that
$$\Delta t_{n_k} + \Delta t_{n_{k-1}} + \dots + \Delta t_{n_K} = \gamma_t\left(\frac{\log|S_{n_k}|}{|S_{n_k}|}\right)^{\theta\varsigma\rho/d_x} + \gamma_t\left(\frac{\log|S_{n_{k-1}}|}{|S_{n_{k-1}}|}\right)^{\theta\varsigma\rho/d_x} + \dots + \gamma_t\left(\frac{\log|S_{n_K}|}{|S_{n_K}|}\right)^{\theta\varsigma\rho/d_x} \ge \gamma_t\left(\frac{1}{|S_{n_k}|}\right)^{\theta\varsigma\rho/d_x} + \gamma_t\left(\frac{1}{|S_{n_{k-1}}|}\right)^{\theta\varsigma\rho/d_x} + \dots + \gamma_t\left(\frac{1}{|S_{n_K}|}\right)^{\theta\varsigma\rho/d_x} \ge \gamma_t\frac{1}{k^{\varsigma\rho/d_x}} + \gamma_t\frac{1}{(k-1)^{\varsigma\rho/d_x}} + \dots + \gamma_t\frac{1}{K^{\varsigma\rho/d_x}} \ge \gamma_t\left(\frac{1}{k} + \frac{1}{k-1} + \dots + \frac{1}{K}\right),$$
where the second inequality uses the given fact that $|S_{n_k}| \le k^{1/\theta}$. Therefore, for any $K \ge 1$:
$$\lim_{k\to\infty} \alpha^{\Delta t_{n_k}+\Delta t_{n_{k-1}}+\dots+\Delta t_{n_K}} = 0.$$
We choose a constant $\varrho > 1$ such that $\varrho\varsigma < 1$. For any fixed $\epsilon > 0$, we can choose $K$ large enough such that
$$\|J^*_{n_k} - J^*_{n_{k-1}}\|_\infty^{1-\varrho\varsigma} < \epsilon \quad \text{w.p.1 for all } k > K. \qquad (23)$$
For all $k > K$, we can write
$$A_{n_k} \le \epsilon B_{n_k} + \alpha^{\Delta t_{n_k}+\dots+\Delta t_{n_{K+1}}}A\big(\{n_j\}_{j=0}^K\big),$$
where
$$B_{n_k} = \alpha^{\Delta t_{n_k}}\big(\|J^*_{n_k} - J^*_{n_{k-1}}\|_\infty^{\varrho\varsigma} + B_{n_{k-1}}\big), \quad \forall k > K, \qquad B_{n_K} = 0.$$
Furthermore, we can choose $K'$ sufficiently large such that $K' \ge K$ and, for all $k > K'$:
$$\alpha^{\Delta t_{n_k}+\dots+\Delta t_{n_{K+1}}}A\big(\{n_j\}_{j=0}^K\big) \le \epsilon.$$
We obtain
$$A_{n_k} \le \epsilon B_{n_k} + \epsilon, \quad \forall k > K' \ge K \ge 1.$$
We can also see that, for $k > K$:
$$B_{n_k} = \alpha^{\Delta t_{n_k}}\big(\|J^*_{n_k} - J^*_{n_{k-1}}\|_\infty^{\varrho\varsigma} + B_{n_{k-1}}\big) < \epsilon^{\varrho\varsigma/(1-\varrho\varsigma)} + B_{n_{k-1}} \quad \text{w.p.1}. \qquad (24)$$
Similarly to Step S1, we characterize the random sequence $B_{n_k}$ as follows:
$$B_{n_{k-1}} < B_{n_k} \iff B_{n_{k-1}} < \frac{\alpha^{\Delta t_{n_k}}\|J^*_{n_k} - J^*_{n_{k-1}}\|_\infty^{\varrho\varsigma}}{1 - \alpha^{\Delta t_{n_k}}} \implies B_{n_{k-1}} < K\,\frac{\alpha^{\gamma_t\left(\frac{\log|S_{n_k}|}{|S_{n_k}|}\right)^{\theta\varsigma\rho/d_x}}\left(\frac{\log|S_{n_{k-1}}|}{|S_{n_{k-1}}|}\right)^{\varrho\varsigma\rho/d_x}}{1 - \alpha^{\gamma_t\left(\frac{\log|S_{n_k}|}{|S_{n_k}|}\right)^{\theta\varsigma\rho/d_x}}} \quad \text{w.p.1}.$$
Let $\beta = \alpha^{\gamma_t} \in (0,1)$. We define
$$x_k = \left(\frac{\log|S_{n_k}|}{|S_{n_k}|}\right)^{\theta\varsigma\rho/d_x}, \quad \text{and} \quad y_k = \left(\frac{\log|S_{n_{k-1}}|}{|S_{n_{k-1}}|}\right)^{\varrho\varsigma\rho/d_x}.$$
We note that $\log x / x$ is a decreasing function for $x \ge e$. Since $|S_{n_{k-1}}| \ge k-1$ and $|S_{n_k}| \le k^{1/\theta}$, we have the following inequalities for large $k$:
$$x_k \ge \left(\frac{\left(\frac{\log k}{\theta}\right)^{\theta}}{k}\right)^{\varsigma\rho/d_x}, \qquad y_k \le \left(\frac{(\log(k-1))^{\varrho}}{(k-1)^{\varrho}}\right)^{\varsigma\rho/d_x}.$$
Since $\theta \in (0,1]$ and $\varrho > 1$, we can find a finite constant $K_1$ such that $y_k < K_1 x_k$ for large $k$. Thus, the above condition leads to
$$B_{n_{k-1}} < K\,\frac{\beta^{x_k} y_k}{1-\beta^{x_k}} < K K_1\,\frac{\beta^{x_k} x_k}{1-\beta^{x_k}} \quad \text{w.p.1}.$$
Therefore,
$$B_{n_{k-1}} < B_{n_k} \implies B_{n_{k-1}} < -\frac{K K_1}{\log(\beta)} = -\frac{K K_1}{\gamma_t\log(\alpha)} \quad \text{w.p.1}.$$
Or, conversely,
$$B_{n_{k-1}} \ge -\frac{K K_1}{\gamma_t\log(\alpha)} \;\text{w.p.1} \implies B_{n_{k-1}} \ge B_{n_k} \;\text{w.p.1}.$$
Arguing as in Step S1, we infer that for all $k > K' \ge K \ge 1$:
$$B_{n_k} < \epsilon^{\varrho\varsigma/(1-\varrho\varsigma)} - \frac{K K_1}{\gamma_t\log(\alpha)} \quad \text{w.p.1}.$$
Thus, for any $\epsilon > 0$, we can find $K' \ge 1$ such that for all $k > K'$:
$$A_{n_k} \le \epsilon B_{n_k} + \epsilon < \epsilon\left(\epsilon^{\varrho\varsigma/(1-\varrho\varsigma)} - \frac{K K_1}{\gamma_t\log(\alpha)} + 1\right) \quad \text{w.p.1}.$$
We conclude that $\lim_{k\to\infty} A\big(\{n_j\}_{j=0}^k\big) = 0$ w.p.1. $\Box$

Returning to the main proof, we use the tilde notation to indicate asynchronous operations, to differentiate them from the synchronous operations in Step S1. We will also assume that $L_n = 1$ for all $n$ to simplify the notation; the proof for general $L_n \ge 1$ is exactly the same. We define the (asynchronous) mappings $\widetilde{T}_n : B(S_n) \to B(S_n)$ as the restrictions of $T_n$ to $D_n$, a non-empty random subset of $S_n$, such that for all $J \in B(S_n)$:
$$\widetilde{T}_n J(z) = \min_{v\in U}\big\{G_n(z, v) + \alpha^{\Delta t_n}E_{P_n}[J(y) \mid z, v]\big\}, \quad \forall z \in D_n \subset S_n, \qquad (25)$$
$$\widetilde{T}_n J(z) = J(z), \quad \forall z \in S_n \setminus D_n. \qquad (26)$$
We require that
$$\cap_{n=1}^{\infty}\cup_{k=n}^{\infty} D_k = S. \qquad (27)$$
In other words, every state in $S$ is sampled infinitely often. We can see that in Algorithm 1, if the set $Z_{update}$ is assigned to $D_n$ in every iteration (Line 13), the sequence $\{D_n\}_{n=1}^{\infty}$ has the above property, and $|D_n| = \Theta(|S_n|^\theta) < |S_n|$. Starting from any $\widetilde{J}_0 \in B(S_0)$, we perform the following asynchronous iterations:
$$\widetilde{J}_{n+1} = \widetilde{T}_{n+1}\widetilde{J}_n, \quad \forall n \ge 0. \qquad (28)$$
Consider a sequence $\{m_k\}_{k=0}^{\infty}$ such that $m_0 = 0$ and, for all $k \ge 0$, from iteration $m_k$ to $m_{k+1}-1$, all states in $S_{m_{k+1}-1}$ are chosen to be updated at least once, and a subset of states in $S_{m_{k+1}-1}$ is chosen to be updated exactly once. We observe that, as the size of $S_n$ increases linearly with $n$, if we schedule the states in $D_n \subset S_n$ to be updated in a round-robin manner, we have $k \le |S_{m_k}| \le k^{1/\theta}$. When $D_n$ is chosen as in Algorithm 1, the event $k \le |S_{m_k}| \le k^{1/\theta}$ holds with high probability. However, we will assume that this event happens surely, because we can always schedule a fraction of $D_n$ to be updated in a round-robin manner.

We define $W_n$ as the set of increasing sub-sequences of the sequence $\{0, 1, \dots, n\}$ such that each sub-sequence contains $\{m_j\}_{j=0}^{k}$, where $m_k \le n < m_{k+1}$:
$$W_n = \Big\{\{i_j\}_{j=0}^{T} \;\Big|\; \{m_j\}_{j=0}^{k} \subset \{i_j\}_{j=0}^{T} \subset \{0, 1, \dots, n\} \;\wedge\; T \ge 2 \;\wedge\; m_k \le n < m_{k+1}\Big\}.$$
Clearly, if $\{i_j\}_{j=0}^{T} \in W_n$, we have $i_0 = 0$. For each $\{i_j\}_{j=0}^{T} \in W_n$, we define
$$A\big(\{i_j\}_{j=0}^{T}\big) = \alpha^{\Delta t_{i_T}+\Delta t_{i_{T-1}}+\dots+\Delta t_{i_1}}\|J^*_{i_1} - \widetilde{J}_0\|_\infty + \alpha^{\Delta t_{i_T}+\Delta t_{i_{T-1}}+\dots+\Delta t_{i_2}}\|J^*_{i_2} - J^*_{i_1}\|_\infty + \dots + \alpha^{\Delta t_{i_T}}\|J^*_{i_T} - J^*_{i_{T-1}}\|_\infty.$$
We will prove by induction that
$$z \in D_n \implies |\widetilde{J}_n(z) - J^*_n(z)| \le \max_{\{i_j\}_{j=0}^{T}\in W_n} A\big(\{i_j\}_{j=0}^{T}\big). \qquad (29)$$
When $n = 1$, the only sub-sequence is $\{i_j\}_{j=0}^{T} = \{0, 1\} \in W_1$. It is clear that, for $z \in D_1$, due to the contraction property of $T_1$:
$$|J^*_1(z) - \widetilde{J}_1(z)| \le \max_{\{i_j\}_{j=0}^{T}\in W_1} A\big(\{i_j\}_{j=0}^{T}\big) = \alpha^{\Delta t_1}\|J^*_1 - \widetilde{J}_0\|_\infty.$$
Assuming that Eq. 29 holds up to $n = m_k$, we need to prove that the equation also holds for $n \in (m_k, m_{k+1})$ and $n = m_{k+1}$. Indeed, let us assume that Eq. 29 holds for some $n \in [m_k, m_{k+1}-1)$. Denote by $n_z \le n$ the index of the most recent update of $z$.
For $z \in D_{n+1}$, we compute new values for $z$ in $\widetilde{J}_{n+1}$, and by the contraction property of $T_{n+1}$ it follows that
$$|\widetilde{J}_{n+1}(z) - J^*_{n+1}(z)| \le \alpha^{\Delta t_{n+1}}\|J^*_{n+1} - \widetilde{J}_n\|_\infty = \alpha^{\Delta t_{n+1}}\max_{z\in S_{n+1}}|J^*_{n+1}(z) - \widetilde{J}_n(z)| = \alpha^{\Delta t_{n+1}}\max_{z\in S_{n+1}}|J^*_{n+1}(z) - \widetilde{J}_{n_z}(z)| \le \alpha^{\Delta t_{n+1}}\max_{z\in S_{n+1}}\big(|J^*_{n_z}(z) - \widetilde{J}_{n_z}(z)| + \|J^*_{n+1} - J^*_{n_z}\|_\infty\big) \le \max_{z\in S_{n+1}}\Big(\alpha^{\Delta t_{n+1}}\max_{\{i_j\}_{j=0}^{T}\in W_{n_z}} A\big(\{i_j\}_{j=0}^{T}\big) + \alpha^{\Delta t_{n+1}}\|J^*_{n+1} - J^*_{n_z}\|_\infty\Big) = \max_{\{i_j\}_{j=0}^{T}\in W_{n+1}} A\big(\{i_j\}_{j=0}^{T}\big).$$
The last equality is due to $n+1 \le m_{k+1}-1$ and $\{m_j\}_{j=0}^{k} \subset \{\{i_j\}_{j=0}^{T}, n+1\} \subset \{0, 1, \dots, n+1\}$ for any $\{i_j\}_{j=0}^{T} \in W_{n_z}$. Therefore, Eq. 29 holds for all $n \in (m_k, m_{k+1}-1]$. When $n = m_{k+1}-1$, we also have the above relation for all $z \in D_{n+1}$:
$$|\widetilde{J}_{n+1}(z) - J^*_{n+1}(z)| \le \max_{z\in S_{n+1}}\Big(\alpha^{\Delta t_{n+1}}\max_{\{i_j\}_{j=0}^{T}\in W_{n_z}} A\big(\{i_j\}_{j=0}^{T}\big) + \alpha^{\Delta t_{n+1}}\|J^*_{n+1} - J^*_{n_z}\|_\infty\Big) = \max_{\{i_j\}_{j=0}^{T}\in W_{n+1}} A\big(\{i_j\}_{j=0}^{T}\big).$$
The last equality is due to $n+1 = m_{k+1}$, and thus $\{m_j\}_{j=0}^{k+1} \subset \{\{i_j\}_{j=0}^{T}, n+1\} \subset \{0, 1, \dots, n+1\}$ for any $\{i_j\}_{j=0}^{T} \in W_{n_z}$. Therefore, Eq. 29 also holds for $n = m_{k+1}$, and this completes the induction.

We see that, for all $\{i_j\}_{j=0}^{T} \in W_n$, we have $j \le i_j \le m_j$, and thus $j \le |S_{i_j}| \le j^{1/\theta}$. By Lemma 10, $\lim_{n\to\infty} A\big(\{i_j\}_{j=0}^{T}\big) = 0$ w.p.1 for sequences in $W_n$. Therefore,
$$\lim_{n\to\infty}\sup_{z\in D_n}|\widetilde{J}_n(z) - J^*_n(z)| = 0 \quad \text{w.p.1}.$$
Since all states are updated infinitely often, and $J^*_n$ converges uniformly to $J^*$ with probability one, we conclude that $\lim_{n\to\infty}\|\widetilde{J}_n - J^*_n\|_\infty = 0$ w.p.1 and $\lim_{n\to\infty}\|\widetilde{J}_n - J^*\|_\infty = 0$ w.p.1.

In both Steps S1 and S2, we have $\lim_{n\to\infty}\|J_n - J^*_n\|_\infty = 0$ w.p.1 (the tilde notation is dropped at this point); therefore $\mu_n$ converges to $\mu^*_n$ pointwise w.p.1, as $\mu_n$ and $\mu^*_n$ are induced from Bellman updates based on $J_n$ and $J^*_n$, respectively. Hence, the sequence of policies $\{\mu_n\}_{n=0}^{\infty}$ has each policy $\mu_n$ as an $\epsilon_n$-optimal policy for the MDP $\mathcal{M}_n$ with $\lim_{n\to\infty}\epsilon_n = 0$. By Theorem 2, we conclude that
$$\lim_{n\to\infty}|J_{n,\mu_n}(z) - J^*(z)| = 0, \quad \forall z \in S_n \quad \text{w.p.1}. \qquad \Box$$

E Proof of Theorem 8

We fix an initial state $x(0) = z$. In Theorem 7, starting from the initial state $x(0) = z$, we construct a sequence of Markov chains $\{\xi^n_i;\ i \in \mathbb{N}\}_{n=1}^{\infty}$ under minimizing control sequences $\{u^n_i;\ i \in \mathbb{N}\}_{n=1}^{\infty}$. By convention, we denote the associated interpolated continuous-time trajectories and control processes as $\{\xi^n(t);\ t \in \mathbb{R}\}_{n=1}^{\infty}$ and $\{u^n(t);\ t \in \mathbb{R}\}_{n=1}^{\infty}$, respectively. By Theorem 1, $\{\xi^n(t);\ t \in \mathbb{R}\}_{n=1}^{\infty}$ converges in distribution to an optimal trajectory $\{x^*(t);\ t \in \mathbb{R}\}$ under an optimal control process $\{u^*(t);\ t \in \mathbb{R}\}$ with probability one. In other words, $(\xi^n(\cdot), u^n(\cdot)) \xrightarrow{d} (x^*(\cdot), u^*(\cdot))$ w.p.1. We will show that this result holds even when the Bellman equation is not solved exactly at each iteration.

In this theorem, we solve the Bellman equation (Eq. 9) by sampling uniformly in $U$ to form a control set $U_n$ such that $\lim_{n\to\infty}|U_n| = \infty$. Let us denote the resulting Markov chains and control sequences due to this modification as $\{\bar{\xi}^n_i;\ i \in \mathbb{N}\}_{n=1}^{\infty}$ and $\{\bar{u}^n_i;\ i \in \mathbb{N}\}_{n=1}^{\infty}$, with associated continuous-time interpolations $\{\bar{\xi}^n(t);\ t \in \mathbb{R}\}_{n=1}^{\infty}$ and $\{\bar{u}^n(t);\ t \in \mathbb{R}\}_{n=1}^{\infty}$. In this case, randomness is due to both state and control sampling. We will prove that there exist minimizing control sequences $\{u^n_i;\ i \in \mathbb{N}\}_{n=1}^{\infty}$ and induced Markov chains $\{\xi^n_i;\ i \in \mathbb{N}\}_{n=1}^{\infty}$ as in Theorem 7 such that
$$\big(\bar{\xi}^n(\cdot) - \xi^n(\cdot),\ \bar{u}^n(\cdot) - u^n(\cdot)\big) \xrightarrow{p} (0, 0), \qquad (30)$$
where $(0,0)$ denotes a pair of zero processes. To prove Eq. 30, we first prove the following lemmas. In the following analysis, we assume that the Bellman update (Eq. 9) has minima in a neighborhood of positive Lebesgue measure. We also assume additional continuity of the cost functions of the discrete MDPs.
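Before stating the lemmas, it may help to make the sampled-control Bellman minimization concrete. The following Python sketch is not the authors' implementation: the uniform sampling over a box-shaped control set and all names are illustrative assumptions. It simply replaces the exact minimization over $U$ used in the earlier sketch with a minimization over a finite sampled set $U_n$, which is the modification analyzed in Lemmas 11-12.

import numpy as np

def sampled_bellman_value(z, J, controls_low, controls_high, num_samples,
                          stage_cost, transition_expectation, alpha, dt,
                          rng=np.random.default_rng()):
    """Approximate min_{v in U} { G_n(z,v) + alpha^dt * E[J(y) | z, v] }
    by sampling num_samples controls uniformly from the box [low, high].

    stage_cost(z, v)                -> G_n(z, v)
    transition_expectation(z, v, J) -> E_{P_n}[J(y) | z, v]
    """
    best_value, best_control = np.inf, None
    for _ in range(num_samples):
        v = rng.uniform(controls_low, controls_high)
        q = stage_cost(z, v) + (alpha ** dt) * transition_expectation(z, v, J)
        if q < best_value:
            best_value, best_control = q, v
    return best_value, best_control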
Lemma 11 Consider the sequence of approximating MDPs $\{\mathcal{M}_n\}_{n=0}^{\infty}$. For each $n$ and a state $z \in S_n$, let $v^*_n$ be an optimal control minimizing the Bellman update, which is referred to as an optimal control from $z$:
$$v^*_n \in V^*_n = \mathrm{argmin}_{v\in U}\big\{G_n(z, v) + \alpha^{\Delta t_n(z)}E_{P_n}[J_{n-1}(y) \mid z, v]\big\},$$
$$J_n(z, v^*_n) = J^*_n(z) = G_n(z, v^*_n) + \alpha^{\Delta t_n(z)}E_{P_n}[J_{n-1}(y) \mid z, v^*_n], \quad \forall v^*_n \in V^*_n.$$
Let $\bar{v}_n$ be the best control in a sampled control set $U_n$ from $z$:
$$\bar{v}_n = \mathrm{argmin}_{v\in U_n}\big\{G_n(z, v) + \alpha^{\Delta t_n(z)}E_{P_n}[J_{n-1}(y) \mid z, v]\big\},$$
$$J_n(z, \bar{v}_n) = G_n(z, \bar{v}_n) + \alpha^{\Delta t_n(z)}E_{P_n}[J_{n-1}(y) \mid z, \bar{v}_n].$$
Then, when $\lim_{n\to\infty}|U_n| = \infty$, we have $|J_n(z, \bar{v}_n) - J^*_n(z)| \xrightarrow{p} 0$ as $n$ approaches $\infty$, and there exists a sequence $\{v^*_n \mid v^*_n \in V^*_n\}_{n=0}^{\infty}$ such that $\|\bar{v}_n - v^*_n\|_2 \xrightarrow{p} 0$.

Proof We assume that, for any $\epsilon > 0$, the set $A^n_\epsilon = \{v \in U \mid |J_n(z, v) - J^*_n(z)| \le \epsilon\}$ has positive Lebesgue measure; that is, $m(A^n_\epsilon) > 0$ for all $\epsilon > 0$, where $m$ is the Lebesgue measure assigned to $U$. For any $\epsilon > 0$, we have
$$P\big(|J_n(z, \bar{v}_n) - J^*_n(z)| \ge \epsilon\big) = \big(1 - m(A^n_\epsilon)/m(U)\big)^{|U_n|}.$$
Since $1 - m(A^n_\epsilon)/m(U) \in [0, 1)$ and $\lim_{n\to\infty}|U_n| = \infty$, we infer that
$$\lim_{n\to\infty} P\big(|J_n(z, \bar{v}_n) - J^*_n(z)| \ge \epsilon\big) = 0.$$
Hence, we conclude that $|J_n(z, \bar{v}_n) - J^*_n(z)| \xrightarrow{p} 0$ as $n \to \infty$. Under the mild assumption that $J_n(z, v)$ is continuous in $v$ on $U$ for all $z \in S_n$, there thus exists a sequence $\{v^*_n \mid v^*_n \in V^*_n\}_{n=0}^{\infty}$ such that $\|\bar{v}_n - v^*_n\|_2 \xrightarrow{p} 0$ as $n$ approaches $\infty$. $\Box$

Figure 6: An illustration for Lemma 12. The state $\bar{\xi}^n_0$ converges in probability to $\xi^n_0$. From $\xi^n_0$, the optimal control is $v^*_n$, which results in the next random state $\xi^n_1$. From $\bar{\xi}^n_0$, the optimal control and the best sampled control are $v_n$ and $\bar{v}_n$, respectively. The next random state from $\bar{\xi}^n_0$ due to the control $\bar{v}_n$ is $\bar{\xi}^n_1$.

By Lemma 11, we conclude that $\|J_n - J^*_n\|_\infty$ converges to $0$ in probability. Thus, the cost value $J_n$ returned by the iMDP algorithm when the Bellman update is solved via sampling converges uniformly to $J^*$ in probability. We claim, however, that $J_{n,\mu_n}$ still converges pointwise to $J^*$ almost surely, as shown in the following discussion.

Lemma 12 With the notation of Lemma 11, consider two states $\xi^n_0$ and $\bar{\xi}^n_0$ such that $\|\bar{\xi}^n_0 - \xi^n_0\|_2 \xrightarrow{p} 0$ as $n$ approaches $\infty$. Let $\bar{\xi}^n_1$ be the next random state from $\bar{\xi}^n_0$ under the best sampled control $\bar{v}_n$ from $\bar{\xi}^n_0$. Then, there exists a sequence of optimal controls $v^*_n$ from $\xi^n_0$ such that $\|\bar{v}_n - v^*_n\|_2 \xrightarrow{p} 0$ and $\|\bar{\xi}^n_1 - \xi^n_1\|_2 \xrightarrow{p} 0$ as $n$ approaches $\infty$, where $\xi^n_1$ is the next random state from $\xi^n_0$ under the optimal control $v^*_n$.

Proof We have $\bar{v}_n$ as the best sampled control from $\bar{\xi}^n_0$. By Lemma 11, there exists a sequence of optimal controls $v_n$ from $\bar{\xi}^n_0$ such that $\|\bar{v}_n - v_n\|_2 \xrightarrow{p} 0$. We assume that the mapping from the state space $S_n$, endowed with the usual Euclidean metric, to optimal controls in $U$ is continuous. As $\|\bar{\xi}^n_0 - \xi^n_0\|_2 \xrightarrow{p} 0$, there exists a sequence of optimal controls $v^*_n$ from $\xi^n_0$ such that $\|v_n - v^*_n\|_2 \xrightarrow{p} 0$. Now, $\|\bar{v}_n - v_n\|_2 \xrightarrow{p} 0$ and $\|v_n - v^*_n\|_2 \xrightarrow{p} 0$ lead to $\|\bar{v}_n - v^*_n\|_2 \xrightarrow{p} 0$ as $n \to \infty$. Figure 6 illustrates how $\bar{v}_n$, $v_n$, and $v^*_n$ relate $\bar{\xi}^n_1$ and $\xi^n_1$.

Using the transition probabilities $P_n$ of the MDP $\mathcal{M}_n$ that is locally consistent with the original continuous system, we have
$$E[\xi^n_1 \mid \xi^n_0, u^n_0 = v^*_n] = \xi^n_0 + f(\xi^n_0, v^*_n)\Delta t_n(\xi^n_0) + o(\Delta t_n(\xi^n_0)),$$
$$E[\bar{\xi}^n_1 \mid \bar{\xi}^n_0, \bar{u}^n_0 = \bar{v}_n] = \bar{\xi}^n_0 + f(\bar{\xi}^n_0, \bar{v}_n)\Delta t_n(\bar{\xi}^n_0) + o(\Delta t_n(\bar{\xi}^n_0)),$$
$$\mathrm{Cov}[\xi^n_1 \mid \xi^n_0, u^n_0 = v^*_n] = F(\xi^n_0, v^*_n)F(\xi^n_0, v^*_n)^T\Delta t_n(\xi^n_0) + o(\Delta t_n(\xi^n_0)),$$
$$\mathrm{Cov}[\bar{\xi}^n_1 \mid \bar{\xi}^n_0, \bar{u}^n_0 = \bar{v}_n] = F(\bar{\xi}^n_0, \bar{v}_n)F(\bar{\xi}^n_0, \bar{v}_n)^T\Delta t_n(\bar{\xi}^n_0) + o(\Delta t_n(\bar{\xi}^n_0)),$$
where $f(\cdot,\cdot)$ is the nominal dynamics, and $F(\cdot,\cdot)F(\cdot,\cdot)^T$ is the diffusion of the original system; both are assumed to be continuous almost everywhere.
We note that $\Delta t_n(\bar{\xi}^n_0) = \Delta t_n(\xi^n_0) = \gamma_t\big(\log|S_n|/|S_n|\big)^{\theta\varsigma\rho/d_x}$, as $\bar{\xi}^n_0$ and $\xi^n_0$ are updated at the $n$th iteration in this context, and the holding times converge to $0$ as $n$ approaches infinity. Therefore, when $\|\bar{\xi}^n_0 - \xi^n_0\|_2 \xrightarrow{p} 0$ and $\|\bar{v}_n - v^*_n\|_2 \xrightarrow{p} 0$, we have
$$E[\bar{\xi}^n_1 - \xi^n_1 \mid \xi^n_0, \bar{\xi}^n_0, u^n_0 = v^*_n, \bar{u}^n_0 = \bar{v}_n] \xrightarrow{p} 0, \qquad (31)$$
$$\mathrm{Cov}(\bar{\xi}^n_1 - \xi^n_1 \mid \xi^n_0, \bar{\xi}^n_0, u^n_0 = v^*_n, \bar{u}^n_0 = \bar{v}_n) \xrightarrow{p} 0. \qquad (32)$$
Since $\bar{\xi}^n_1$ and $\xi^n_1$ are bounded, the random vector $E[\bar{\xi}^n_1 - \xi^n_1 \mid \xi^n_0, \bar{\xi}^n_0, u^n_0 = v^*_n, \bar{u}^n_0 = \bar{v}_n]$ and the random matrix $\mathrm{Cov}(\bar{\xi}^n_1 - \xi^n_1 \mid \xi^n_0, \bar{\xi}^n_0, u^n_0 = v^*_n, \bar{u}^n_0 = \bar{v}_n)$ are bounded. We recall that if $Y_n \xrightarrow{p} 0$, and hence $Y_n \xrightarrow{d} 0$, and $Y_n$ is bounded for all $n$, then $\lim_{n\to\infty}E[Y_n] = 0$ and $\lim_{n\to\infty}\mathrm{Cov}(Y_n) = 0$. Therefore, Eqs. 31-32 imply
$$\lim_{n\to\infty}E\big[E[\bar{\xi}^n_1 - \xi^n_1 \mid \xi^n_0, \bar{\xi}^n_0, u^n_0 = v^*_n, \bar{u}^n_0 = \bar{v}_n]\big] = 0, \qquad (33)$$
$$\lim_{n\to\infty}\mathrm{Cov}\big(E[\bar{\xi}^n_1 - \xi^n_1 \mid \xi^n_0, \bar{\xi}^n_0, u^n_0 = v^*_n, \bar{u}^n_0 = \bar{v}_n]\big) = 0, \qquad (34)$$
$$\lim_{n\to\infty}E\big[\mathrm{Cov}(\bar{\xi}^n_1 - \xi^n_1 \mid \xi^n_0, \bar{\xi}^n_0, u^n_0 = v^*_n, \bar{u}^n_0 = \bar{v}_n)\big] = 0. \qquad (35)$$
The above outer expectations and covariance are with respect to the randomness of the states $\xi^n_0, \bar{\xi}^n_0$ and the sampled controls $U_n$. Using the law of iterated expectation for Eq. 33, we obtain
$$\lim_{n\to\infty}E[\bar{\xi}^n_1 - \xi^n_1] = 0.$$
Using the law of total covariance for Eqs. 34-35, we have
$$\lim_{n\to\infty}\mathrm{Cov}[\bar{\xi}^n_1 - \xi^n_1] = 0.$$
Since
$$E[\|\bar{\xi}^n_1 - \xi^n_1\|_2^2] = E[(\bar{\xi}^n_1 - \xi^n_1)^T(\bar{\xi}^n_1 - \xi^n_1)] = \|E[\bar{\xi}^n_1 - \xi^n_1]\|_2^2 + \mathrm{tr}\big(\mathrm{Cov}[\bar{\xi}^n_1 - \xi^n_1]\big),$$
the above limits together imply $\lim_{n\to\infty}E[\|\bar{\xi}^n_1 - \xi^n_1\|_2^2] = 0$. In other words, $\bar{\xi}^n_1$ converges in second mean to $\xi^n_1$, which leads to $\|\bar{\xi}^n_1 - \xi^n_1\|_2 \xrightarrow{p} 0$ as $n$ approaches $\infty$. $\Box$

Returning to the proof of Eq. 30, we know that $\xi^n_0 = \bar{\xi}^n_0 = z$ is the starting state. From any $y \in S_n$, an optimal control from $y$ is denoted as $v^*(y)$, and the best sampled control from the same state $y$ is denoted as $\bar{v}(y)$. By Lemma 12, as $\bar{u}^n_0 = \bar{v}(\bar{\xi}^n_0)$, there exists $u^n_0 = v^*(\xi^n_0)$ such that $\|\bar{u}^n_0 - u^n_0\|_2 \xrightarrow{p} 0$ and $\|\bar{\xi}^n_1 - \xi^n_1\|_2 \xrightarrow{p} 0$. Let us assume that $(\|\bar{u}^n_{k-1} - u^n_{k-1}\|_2, \|\bar{\xi}^n_k - \xi^n_k\|_2)$ converges in probability to $(0,0)$ up to index $k$. We have $\bar{u}^n_k = \bar{v}(\bar{\xi}^n_k)$. Using Lemma 12, there exists $u^n_k = v^*(\xi^n_k)$ such that $(\|\bar{u}^n_k - u^n_k\|_2, \|\bar{\xi}^n_{k+1} - \xi^n_{k+1}\|_2) \xrightarrow{p} (0,0)$. Thus, for any $i \ge 1$, we can construct a minimizing control $u^n_i$ as in Theorem 7 such that $(\|\bar{\xi}^n_i - \xi^n_i\|_2, \|\bar{u}^n_i - u^n_i\|_2) \xrightarrow{p} (0,0)$ as $n \to \infty$. Hence, Eq. 30 follows immediately:
$$\big(\bar{\xi}^n(\cdot) - \xi^n(\cdot),\ \bar{u}^n(\cdot) - u^n(\cdot)\big) \xrightarrow{p} (0, 0).$$
We have $(\xi^n(\cdot), u^n(\cdot)) \xrightarrow{d} (x^*(\cdot), u^*(\cdot))$ w.p.1. Thus, by hierarchical convergence of random variables [30], we achieve $(\bar{\xi}^n(\cdot), \bar{u}^n(\cdot)) \xrightarrow{d} (x^*(\cdot), u^*(\cdot))$ w.p.1. Therefore, for all $z \in S_n$:
$$\lim_{n\to\infty}|J_{n,\mu_n}(z) - J^*(z)| = 0 \quad \text{w.p.1}. \qquad \Box$$

F Proof of Theorem 9

Fix $n \in \mathbb{N}$. For all $z \in S$ and $y_n = \mathrm{argmin}_{z'\in S_n}\|z' - z\|_2$, we have $\mu_n(z) = \mu_n(y_n)$. We assume that optimal policies of the original continuous problem are obtainable. By Theorems 7-8, we have
$$\lim_{n\to\infty}|J_{n,\mu_n}(y_n) - J^*(y_n)| = 0 \quad \text{w.p.1}.$$
Thus, $\mu_n(y_n)$ converges to $\mu^*(y_n)$ almost surely, where $\mu^*$ is an optimal policy of the original continuous problem. Thus, for all $\epsilon > 0$, there exists $N$ such that for all $n > N$:
$$\|\mu_n(y_n) - \mu^*(y_n)\|_2 \le \frac{\epsilon}{2} \quad \text{w.p.1}.$$
Under the assumption that $\mu^*$ is continuous at $z$, and since $\lim_{n\to\infty}y_n = z$ almost surely, we can choose $N$ large enough such that for all $n > N$:
$$\|\mu^*(y_n) - \mu^*(z)\|_2 \le \frac{\epsilon}{2} \quad \text{w.p.1}.$$
From the above inequalities:
$$\|\mu_n(y_n) - \mu^*(z)\|_2 \le \|\mu_n(y_n) - \mu^*(y_n)\|_2 + \|\mu^*(y_n) - \mu^*(z)\|_2 \le \epsilon, \quad \forall n > N \quad \text{w.p.1}.$$
Therefore,
$$\lim_{n\to\infty}\|\mu_n(z) - \mu^*(z)\|_2 = \lim_{n\to\infty}\|\mu_n(y_n) - \mu^*(z)\|_2 = 0 \quad \text{w.p.1}. \qquad \Box$$
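As a final illustrative note (not part of the paper), the nearest-neighbor policy interpolation $\mu_n(z) = \mu_n(y_n)$ used in the proof above is straightforward to implement. The following minimal Python sketch uses a brute-force nearest-neighbor search and hypothetical array names; in practice a k-d tree or similar spatial index would typically replace the linear scan.

import numpy as np

def continuous_policy(z, sampled_states, discrete_policy):
    """Evaluate the continuous-domain policy mu_n at z by copying the control
    of the nearest sampled state y_n = argmin_{z' in S_n} ||z' - z||_2.

    sampled_states  : array of shape (k, d_x), the states in S_n.
    discrete_policy : array of shape (k, d_u), controls mu_n(z') for each z' in S_n.
    """
    distances = np.linalg.norm(sampled_states - np.asarray(z), axis=1)
    nearest = int(np.argmin(distances))
    return discrete_policy[nearest]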