An Incremental Sampling-based Algorithm for Stochastic Optimal Control Vu Anh Huynh Sertac Karaman Emilio Frazzoli ∗ Abstract In this paper, we consider a class of continuous-time, continuous-space stochastic optimal control problems. Building upon recent advances in Markov chain approximation methods and sampling-based algorithms for deterministic path planning, we propose a novel algorithm called the incremental Markov Decision Process (iMDP) to compute incrementally control policies that approximate arbitrarily well an optimal policy in terms of the expected cost. The main idea behind the algorithm is to generate a sequence of finite discretizations of the original problem through random sampling of the state space. At each iteration, the discretized problem is a Markov Decision Process that serves as an incrementally refined model of the original problem. We show that with probability one, (i) the sequence of the optimal value functions for each of the discretized problems converges uniformly to the optimal value function of the original stochastic optimal control problem, and (ii) the original optimal value function can be computed efficiently in an incremental manner using asynchronous value iterations. Thus, the proposed algorithm provides an anytime approach to the computation of optimal control policies of the continuous problem. The effectiveness of the proposed approach is demonstrated on motion planning and control problems in cluttered environments in the presence of process noise. 1 Introduction Stochastic optimal control has been an active research area for several decades with many applica- tions in diverse fields ranging from finance, management science and economics [1, 2] to biology [3] and robotics [4]. Unfortunately, general continuous-time, continuous-space stochastic optimal con- trol problems do not admit closed-form or exact algorithmic solutions and are known to be compu- tationally challenging [5]. Many algorithms are available to compute approximate solutions of such problems. For instance, a popular approach is based on the numerical solution of the associated Hamilton-Jacobi-Bellman PDE (see, e.g., [6, 7, 8]). Other methods approximate a continuous prob- lem with a discrete Markov Decision Process (MDP), for which an exact solution can be computed in finite time [9, 10]. However, the complexity of these two classes of deterministic algorithms scales exponentially with the dimension of the state and control spaces, due to discretization. Remarkably, algorithms based on random (or quasi-random) sampling of the state space provide a possibility to alleviate the curse of dimensionality in the case in which the control inputs take values from a finite set, as noted in [11, 12, 5]. Algorithms based on random sampling of the state space have recently been shown to be very effective, both in theory and in practice, for computing solutions to deterministic path planning ∗ The authors are with the Laboratory of Information and Decision Systems, Massachusetts Institute of Technology, 77 Massachusetts Ave., Cambridge, MA 02139. { vuhuynh , sertac , frazzoli } @mit.edu 1 arXiv:1202.5544v1 [cs.RO] 24 Feb 2012 problems in robotics and other disciplines. For example, the Probabilistic RoadMap (PRM) algo- rithm first proposed by Kavraki et al. [13] was the first practical planning algorithm that could han- dle high-dimensional path planning problems. 
Their incremental counterparts, such as RRT [14], later emerged as sampling-based algorithms suited for online applications and systems with differ- ential constraints on the solution (e.g., dynamical systems). The RRT algorithm has been used in many applications and demonstrated on various robotic platforms [15, 16]. Recently, optimality properties of such algorithms were analyzed in [17]. In particular, it was shown that the RRT algo- rithm fails to converge to optimal solutions with probability one. The authors have proposed the RRT ∗ algorithm which guarantees almost-sure convergence to globally optimal solutions without any substantial computational overhead when compared to the RRT. Although the RRT ∗ algorithm is asymptotically optimal and computationally efficient (with respect to RRT), it can not handle problems involving systems with uncertain dynamics. In this work, building upon the Markov chain approximation method [18] and the rapidly-exploring sam- pling technique [14], we introduce a novel algorithm called the incremental Markov Decision Process (iMDP) to approximately solve a wide class of stochastic optimal control problems. More precisely, we consider a continuous-time optimal control problem with continuous state and control spaces, full state information, and stochastic process noise. In iMDP, we iteratively construct a sequence of discrete Markov Decision Processes (MDPs) as discrete approximations to the original continuous problem, as follows. Initially, an empty MDP model is created. At each iteration, the discrete MDP is refined by adding new states sampled from the boundary as well as from the interior of the state space. Subsequently, new stochastic transitions are constructed to connect the new states to those already in the model. For the sake of efficiency, stochastic transitions are computed only when needed. Then, an anytime policy for the refined model is computed using an incremental value iteration algorithm, based on the value function of the previous model. The policy for the discrete system is finally converted to a policy for the original continuous problem. This process is iterated until convergence. Our work is mostly related to the Stochastic Motion Roadmap (SMR) algorithm [19] and Markov chain approximation methods [18]. The SMR algorithm constructs an MDP over a sampling-based roadmap representation to maximize the probability of reaching a given goal region. However, in SMR, actions are discretized, and the algorithm does not offer any formal optimality guarantees. On the other hand, while available Markov chain approximation methods [18] provide formal optimality guarantees under very general conditions, a sequence of a priori discretizations of state and control spaces still impose expensive computation. The iMDP algorithm addresses this issue by sampling in the state space and sampling or discovering necessary controls. The main contribution of this paper is a method to incrementally refine a discrete model of the original continuous problem in a way that ensures convergence to optimality while maintaining low time and space complexity. We show that with probability one, the sequence of optimal value functions induced by optimal control policies for each of the discretized problems converges uniformly to the optimal value function of the original stochastic control problem. In addition, the optimal value function of the original problem can be computed efficiently in an incremental manner using asynchronous value iterations. 
Thus, the proposed algorithm provides an anytime approach to the computation of optimal control policies of the continuous problem. Distributions of approximating trajectories and control processes returned from the iMDP algorithm approximate arbitrarily well distributions of optimal trajectories and optimal control processes of the original problem. Each iteration of the iMDP algorithm can be implemented with the time complexity O ( k θ log k ) where 0 < θ ≤ 1 while the space complexity is O ( k ), where k is the number of states in an MDP model in the algorithm which increases linearly due to the sampling strategy. Thus, the entire processing time until the algorithm stops can be implemented in O ( k 1+ θ log k ). Hence, the 2 above space and time complexities make iMDP a practical incremental algorithm. The effectiveness of the proposed approach is demonstrated on motion planning and control problems in cluttered environments in the presence of process noise. This paper is organized as follows. In Section 2, a formal problem definition is given. The Markov chain approximation methods and the iMDP algorithm are described in Sections 3 and 4. The analysis of the iMDP algorithm is presented in Section 5. Section 6 is devoted to simulation examples and experimental results. The paper is concluded with remarks in Section 7. We provide additional notations and preliminary results as well as proofs for theorems and lemmas in Appendix. 2 Problem Definition In this section, we present a generic stochastic optimal control problem. Subsequently, we discuss how the formulation extends the standard motion planning problem of reaching a goal region while avoiding collision with obstacles. Stochastic Dynamics Let d x , d u , and d w be positive integers. The d x -dimensional and d u - dimensional Euclidean spaces are R d x and R d u respectively. Let S be a compact subset of R d x , which is the closure of its interior S o and has a smooth boundary ∂S . The state of the system at time t is x ( t ) ∈ S , which is fully observable at all times. We also define a compact subset U of R d u as a control set. Suppose that a stochastic process { w ( t ); t ≥ 0 } is a d w -dimensional Brownian motion, also called a Wiener process, on some probability space (Ω , F , P ). Let a control process { u ( t ); t ≥ 0 } be a U -valued, measurable process also defined on the same probability space. We say that the control process u ( · ) is nonanticipative with respect to the Wiener process w ( · ) if there exists a filtration {F t ; t ≥ 0 } defined on (Ω , F , P ) such that u ( · ) is F t -adapted, and w ( · ) is an F t -Wiener process. In this case, we say that u ( · ) is an admissible control inputs with respect to w ( · ), or the pair ( u ( · ) , w ( · )) is admissible. Let R d x × d w denote the set of all d x by d w real matrices. We consider stochastic dynamical systems, also called controlled diffusions, of the form dx ( t ) = f ( x ( t ) , u ( t )) dt + F ( x ( t ) , u ( t )) dw ( t ) , ∀ t ≥ 0 (1) where f : S × U → R d x and F : S × U → R d x × d w are bounded measurable and continuous functions as long as x ( t ) ∈ S o . The matrix F ( · , · ) is assumed to have full rank. More precisely, a solution to the differential form given in Eq. 
(1) is a stochastic process { x ( t ); t ≥ 0 } such that x ( t ) equals the following stochastic integral in all sample paths: x ( t ) = x (0) + ∫ t 0 f ( x ( τ ) , u ( τ )) dτ + ∫ t 0 F ( x ( τ ) , u ( τ )) dw ( τ ) , (2) until x ( · ) exits S o , where the last term on the right hand side is the usual Itˆ o integral (see, e.g., [20]). When the process x ( · ) hits ∂S , the process x ( · ) is stopped. Weak Existence and Weak Uniqueness of Solutions Let Γ be the sample path space of admissible pairs ( u ( · ) , w ( · )). Suppose we are given probability measures Λ and P 0 on Γ and on S respectively. We say that solutions of (2) exist in the weak sense if there exists a probability space (Ω , F , P ), a filtration {F t ; t ≥ 0 } , an F t -Wiener process w ( · ), an F t -adapted control process u ( · ), and an F t -adapted process x ( · ) satisfying Eq. (2), such that Λ and P 0 are the distributions of ( u ( · ) , w ( · )) and x (0) under P . We call such tuple { (Ω , F , P ) , F t , w ( · ) , u ( · ) , x ( · ) } a weak sense solution of Eq. (1) [21, 18]. 3 Assume that we are given weak sense solutions { (Ω i , F i , P i ) , F t,i , w i ( · ) , u i ( · ) , x i ( · ) } , i = 1 , 2 , to Eq. (1). We say solutions are weakly unique if equality of the joint distributions of ( w i ( · ) , u i ( · ) , x i (0)) under P i , i = 1 , 2, implies the equality of the distributions ( x i ( · ) , w i ( · ) , u i ( · ) , x i (0)) under P i , i = 1 , 2 [21, 18]. In this paper, given the boundedness of the set S , and the definition of the functions f and F in Eq. (1), we have a weak solution to Eq. (1) that is unique in the weak sense [21]. The boundedness requirement is naturally satisfied in many applications and is also needed for the implementation of the proposed numerical method. We will also handle the case in which f and F are discontinuous with extra mild technical assumptions to ensure asymptotic optimality in Section 3. Policy and Cost-to-go Function A particular class of admissible controls, called Markov con- trols , depends only on the current state, i.e., u ( t ) is a function only of x ( t ), for all t ≥ 0. It is well known that in control problems with full state information, the best Markov control performs as well as the best admissible control (see, e.g., [20, 21]). A Markov control defined on S is also called a policy , and is represented by the function μ : S → U . The set of all policies is denoted by Π. Define the first exit time T μ : Π → [0 , + ∞ ] under policy μ as T μ = inf { t : x ( t ) / ∈ S o and Eq. (1) and u ( t ) = μ ( x ( t )) } . Intuitively, T μ is the first time that the trajectory of the dynamical system given by Eq. (1) with u ( t ) = μ ( x ( t )) hits the boundary ∂S of S . By definition, T μ = + ∞ if x ( · ) never exits S o . Clearly, T μ is a random variable. Then, the expected cost-to-go function under policy μ is a mapping from S to R defined as J μ ( z ) = E [∫ T μ 0 α t g ( x ( t ) , μ ( x ( t )) ) dt + h ( x ( T μ )) | x (0) = z ] , where g : S × U → R and h : S → R are bounded measurable and continuous functions, called the cost rate function and the terminal cost function , respectively, and α ∈ [0 , 1) is the discount rate . We further assume that g ( x, u ) is uniformly H ̈ older continuous in x with exponent 2 ρ ∈ (0 , 1] for all u ∈ U . That is, there exists some constant C > 0 such that | g ( x, u ) − g ( x ′ , u ) | ≤ C|| x − x ′ || 2 ρ 2 , ∀ x, x ′ ∈ S. We will address the discontinuity of g and h in Section 3. 
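To make the objects defined above concrete, the following minimal Python sketch simulates the controlled diffusion of Eq. (2) with an Euler–Maruyama discretization under a fixed Markov policy and estimates the cost-to-go J_μ(z) by Monte Carlo. The one-dimensional dynamics f, F, the domain S = [−1, 1], the costs g, h, the policy mu, and the step sizes are illustrative placeholders, not the systems studied later in the paper.

```python
import numpy as np

# Illustrative one-dimensional problem data (placeholders, not the paper's examples).
f = lambda x, u: -x + u            # drift f(x, u)
F = lambda x, u: 0.5               # diffusion coefficient F(x, u)
g = lambda x, u: x**2 + u**2       # cost rate
h = lambda x: 0.0                  # terminal cost on the boundary
mu = lambda x: -0.5 * x            # a Markov policy u = mu(x)
alpha = 0.95                       # discount rate
in_interior = lambda x: -1.0 < x < 1.0   # S^o = (-1, 1)

def simulate_cost(z0, dt=1e-3, t_max=50.0, rng=None):
    """One Euler-Maruyama rollout of Eq. (2) under mu; returns the sampled discounted cost."""
    if rng is None:
        rng = np.random.default_rng()
    x, t, cost = float(z0), 0.0, 0.0
    while in_interior(x) and t < t_max:
        u = mu(x)
        cost += (alpha ** t) * g(x, u) * dt                                 # running cost
        x += f(x, u) * dt + F(x, u) * np.sqrt(dt) * rng.standard_normal()   # one step of Eq. (2)
        t += dt
    return cost + (alpha ** t) * h(x)                                       # terminal cost at exit

def estimate_J(z0, rollouts=100, seed=0):
    """Monte Carlo estimate of the cost-to-go J_mu(z0)."""
    rng = np.random.default_rng(seed)
    return float(np.mean([simulate_cost(z0, rng=rng) for _ in range(rollouts)]))

print(estimate_J(0.3))
```

The cap t_max only truncates the discounted tail of long rollouts (here α^50 ≈ 0.08), so its effect on this illustrative estimate is small.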
The optimal cost-to-go function J* : S → R is defined as J*(z) = inf_{μ∈Π} J_μ(z) for all z ∈ S. A policy μ* is called optimal if J_{μ*} = J*. For any ε > 0, a policy μ is called an ε-optimal policy if ||J_μ − J*||_∞ ≤ ε. In this paper, we consider the problem of computing the optimal cost-to-go function J* and an optimal policy μ*, if obtainable. Our approach, outlined in Section 4, approximates the optimal cost-to-go function and an optimal policy in an anytime fashion using incremental sampling-based algorithms. This sequence of approximations is guaranteed to converge uniformly to the optimal cost-to-go function and to find an ε-optimal policy for an arbitrarily small non-negative ε, almost surely, as the number of samples approaches infinity.

Relationship with Standard Motion Planning  The standard motion planning problem of finding a collision-free trajectory that reaches a goal region for a deterministic dynamical system can be defined as follows (see, e.g., [17]). Let X ⊂ R^{d_x} be a compact set. Let the open sets X_obs and X_goal denote the obstacle region and the goal region, respectively. Define the obstacle-free space as X_free := X \ X_obs. Let x_init ∈ X_free. Consider the deterministic dynamical system ẋ(t) = f(x(t), u(t)), where f : X × U → R^{d_x}. The feasible motion planning problem is to find a measurable control input u : [0, T] → U such that the resulting trajectory x(·) is collision-free, i.e., x(t) ∈ X_free for all t, and reaches the goal region, i.e., x(T) ∈ X_goal. The optimal motion planning problem is to find a measurable control input u such that the resulting trajectory x solves the feasible motion planning problem with minimum trajectory cost.

The problem considered in this paper extends the classical motion planning problem with stochastic dynamics as described by Eq. (1). Given a goal set X_goal and an obstacle set X_obs, define S := X \ (X_goal ∪ X_obs), so that ∂X_goal ∪ ∂X_obs ∪ ∂X = ∂S. Due to the nature of Brownian motion, under most policies there is some non-zero probability that a collision with the obstacle set will occur. However, to penalize collisions with obstacles in the control design process, the cost of terminating by hitting the obstacle set, i.e., h(z) for z ∈ ∂X_obs, can be made arbitrarily high. Clearly, the higher this value is, the more conservative the resulting policy will be. Similarly, the terminal cost on the goal set, i.e., h(z) for z ∈ ∂X_goal, can be set to a small value to encourage terminating by hitting the goal region.

3 Markov Chain Approximation

A discrete-state Markov decision process (MDP) is a tuple M = (X, A, P, G, H), where X is a finite set of states, A is a set of actions that is possibly a continuous space, P(· | ·, ·) : X × X × A → R_{≥0} is a function that denotes the transition probabilities, satisfying Σ_{ξ′∈X} P(ξ′ | ξ, v) = 1 for all ξ ∈ X and all v ∈ A, G(·, ·) : X × A → R is an immediate cost function, and H : X → R is a terminal cost function. If we start at time 0 with a state ξ_0 ∈ X, and at time i ≥ 0 we apply an action v_i ∈ A at a state ξ_i to arrive at a next state ξ_{i+1} according to the transition probability function P, we obtain a controlled Markov chain {ξ_i; i ∈ N}. The chain {ξ_i; i ∈ N} due to the control sequence {v_i; i ∈ N} and an initial state ξ_0 will also be called the trajectory of M under that sequence of controls and initial state.
Given a continuous-time dynamical system as described in Eq. (1), the Markov chain approximation method approximates the continuous stochastic dynamics using a sequence of MDPs {M_n}_{n=0}^∞, in which M_n = (S_n, U, P_n, G_n, H_n), where S_n is a discrete subset of S and U is the original control set. We define ∂S_n = ∂S ∩ S_n. For each n ∈ N, let {ξ_i^n; i ∈ N} be a controlled Markov chain on M_n until it hits ∂S_n. We associate with each state z in S a non-negative interpolation interval Δt_n(z), known as a holding time. We define t_i^n = Σ_{j=0}^{i−1} Δt_n(ξ_j^n) for i ≥ 1 and t_0^n = 0. Let Δξ_i^n = ξ_{i+1}^n − ξ_i^n. Let u_i^n denote the control used at step i of the controlled Markov chain. In addition, we define G_n(z, v) = g(z, v) Δt_n(z) and H_n(z) = h(z) for each z ∈ S_n and v ∈ U. Let Ω_n be the sample space of M_n. Holding times Δt_n and transition probabilities P_n are chosen to satisfy the local consistency property given by the following conditions:

1. For all z ∈ S,
   lim_{n→∞} Δt_n(z) = 0,   (3)

2. For all z ∈ S and all v ∈ U:
   lim_{n→∞} E_{P_n}[Δξ_i^n | ξ_i^n = z, u_i^n = v] / Δt_n(z) = f(z, v),   (4)
   lim_{n→∞} Cov_{P_n}[Δξ_i^n | ξ_i^n = z, u_i^n = v] / Δt_n(z) = F(z, v) F(z, v)^T,   (5)
   lim_{n→∞} sup_{i∈N, ω∈Ω_n} ||Δξ_i^n||_2 = 0.   (6)

The chain {ξ_i^n; i ∈ N} is a discrete-time process. In order to approximate the continuous-time process x(·) in Eq. (2), we use an approximate continuous-time interpolation. We define the (random) continuous-time interpolation ξ^n(·) of the chain {ξ_i^n; i ∈ N} and the continuous-time interpolation u^n(·) of the control sequence {u_i^n; i ∈ N} under the holding-time function Δt_n as follows: ξ^n(τ) = ξ_i^n and u^n(τ) = u_i^n for all τ ∈ [t_i^n, t_{i+1}^n). Let D^{d_x}[0, +∞) denote the set of all R^{d_x}-valued functions that are continuous from the left and have limits from the right. The process ξ^n can be thought of as a random mapping from Ω_n to the function space D^{d_x}[0, +∞).

A control problem for the MDP M_n is analogous to that defined in Section 2. As in the previous section, a policy μ_n is a function that maps each state z ∈ S_n to a control μ_n(z) ∈ U. The set of all such policies is Π_n. Given a policy μ_n, the (discounted) cost-to-go due to μ_n is:

J_{n,μ_n}(z) = E_{P_n}[ Σ_{i=0}^{I_n−1} α^{t_i^n} G_n(ξ_i^n, μ_n(ξ_i^n)) + α^{t_{I_n}^n} H_n(ξ_{I_n}^n) | ξ_0^n = z ],

where E_{P_n} denotes the conditional expectation under P_n, the sequence {ξ_i^n; i ∈ N} is the controlled Markov chain under the policy μ_n, and I_n is the termination time defined as I_n = min{i : ξ_i^n ∈ ∂S_n}. The optimal cost function, denoted by J*_n, satisfies

J*_n(z) = inf_{μ_n ∈ Π_n} J_{n,μ_n}(z),  ∀ z ∈ S_n.   (7)

An optimal policy, denoted by μ*_n, satisfies J_{n,μ*_n}(z) = J*_n(z) for all z ∈ S_n. For any ε > 0, μ_n is an ε-optimal policy if ||J_{n,μ_n} − J*_n||_∞ ≤ ε.

As stated in the following theorem, under mild technical assumptions, local consistency implies the convergence of continuous-time interpolations of the trajectories of the controlled Markov chain to the trajectories of the stochastic dynamical system described by Eq. (1).

Theorem 1 (see Theorem 10.4.1 in [18])  Let us assume that f(·,·) and F(·,·) are measurable, bounded and continuous; thus, Eq. (1) has a weakly unique solution. Let {M_n}_{n=0}^∞ be a sequence of MDPs, and {Δt_n}_{n=0}^∞ be a sequence of holding times that are locally consistent with the stochastic dynamical system described by Eq. (1).
Let { u n i ; i ∈ N } be a sequence of controls defined for each n ∈ N . For all n ∈ N , let { ξ n ( t ); t ∈ R ≥ 0 } denote the continuous-time interpolation to the chain { ξ n i ; i ∈ N } under the control sequence { u n i ; i ∈ N } starting from an initial state z init , and { u n ( t ); t ∈ R ≥ 0 } denote the continuous-time interpolation of { u n i ; i ∈ N } , according to the holding time ∆ t n . Then, any subsequence of { ( ξ n ( · ) , u n ( · )) } ∞ n =0 has a further subsequence that converges in distribution to ( x ( · ) , u ( · )) satisfying x ( t ) = z init + ∫ t 0 f ( x ( τ ) , u ( τ )) dτ + ∫ t 0 F ( x ( τ ) , u ( τ )) dw ( τ ) . Under the weak uniqueness condition for solutions of Eq. (1) , the sequence { ( ξ n ( · ) , u n ( · )) } ∞ n =0 also converges to ( x ( · ) , u ( · )) . Furthermore, a sequence of minimizing controls guarantees pointwise convergence of the cost func- tion to the original optimal cost function in the following sense. Theorem 2 (see Theorem 10.5.2 in [18]) Assume that f ( · , · ) , F ( · , · ) , g ( · , · ) and h ( · ) are mea- surable, bounded and continuous. For any trajectory x ( · ) of the system described by Eq. (1) , define ˆ τ ( x ) := inf { t : x ( t ) / ∈ S o } . Let {M n = ( S n , U, P n , G n , H n ) } ∞ n =0 and { ∆ t n } ∞ n =0 be locally consistent with the system described by Eq. (1) . 6 We suppose that the function ˆ τ ( · ) is continuous (as a mapping from D d x [0 , + ∞ ) to the com- pactified interval [0 , + ∞ ] ) with probability one relative to the measure induced by any solution to Eq. (1) for an initial state z , which is satisfied when the matrix F ( · , · ) F ( · , · ) T is nondegenerate. Then, for any z ∈ S n , the following equation holds: lim n →∞ | J ∗ n ( z ) − J ∗ ( z ) | = 0 . In particular, for any z ∈ S n , for any sequence {  n > 0 } ∞ n =0 such that lim n →∞  n = 0 , and for any sequence of policies { μ n } ∞ n =0 such that μ n is an  n -optimal policy of M n , we have: lim n →∞ | J n,μ n ( z ) − J ∗ ( z ) | = 0 . Moreover, the sequence { t n I n ; n ∈ N } converges in distribution to the termination time of the optimal control problem for the system in Eq. (1) when the system is under optimal control processes. Under the assumption that the cost rate g is H ̈ older continuous [22] with exponent 2 ρ , the sequence of optimal value functions for approximating chains J ∗ n indeed converges uniformly to J ∗ with a proven rate. Let us denote || b || S n = sup z ∈ S n b ( x ) as the sup-norm over S n of a function b with domain containing S n . Let ζ n = max z ∈ S n min z ′ ∈ S n || z ′ − z || 2 (8) be the dispersion of S n . Theorem 3 (see Theorem 2.3 in [23] and Theorem 2.1 in [24]) Consider an MDP sequence {M n = ( S n , U, P n , G n , H n ) } ∞ n =0 and holding times { ∆ t n } ∞ n =0 that are locally consistent with the sys- tem described by Eq. (1) . Let J ∗ n be the optimal cost of M n . Given the assumptions on the dynamics and cost rate functions in Section 2, as n approaches ∞ , we have || J ∗ n − J ∗ || S n = O ( ζ ρ n ) . Discontinuity of dynamics and objective functions We note that the above theorems continue to hold even when the functions f, F, g, and h are discontinuous. 
In this case, the following conditions are sufficient to use the theorems: (i) for r equal to f, F, g, or h, the function r(x, u) takes either the form r_0(x) + r_1(u) or r_0(x) r_1(u), where the control-dependent terms are continuous and the x-dependent terms are measurable, and (ii) f(x, ·), F(x, ·), g(x, ·), and h(x) are nondegenerate for each x, and the set of discontinuity points in x of each function is a uniformly smooth surface of lower dimension. Furthermore, instead of uniform Hölder continuity, the cost rate g can be relaxed to be locally Hölder continuous with exponent 2ρ on S (see, e.g., page 275 in [18]).

Let us remark that the controlled Markov chain differs from the stochastic dynamical systems described in Section 2 in that the former possesses a discrete state structure and evolves in discrete time, while the latter is a continuous model both in terms of its state space and the evolution of time. Yet, both models possess a continuous control space. It will be clear in the following discussion that the control space does not have to be discretized if a certain optimization problem can be solved numerically or via sampling. The above theorems assert asymptotic optimality given a sequence of a priori discretizations of the state space and the availability of ε-optimal policies. In what follows, we describe an algorithm that incrementally computes the optimal cost-to-go function and an optimal control policy of the continuous problem.

4 The iMDP Algorithm

Based on the Markov chain approximation results, the iMDP algorithm incrementally builds a sequence of discrete MDPs with transition probabilities and cost-to-go functions that consistently approximate their original continuous counterparts. The algorithm refines the discrete models by using a number of primitive procedures to add new states into the current approximate model. Finally, the algorithm improves the quality of discrete-model policies in an iterative manner by effectively reusing the computations inherited from previous iterations. Before presenting the algorithm, the primitive procedures on which it relies are described in this section.

4.1 Primitive Procedures

4.1.1 Sampling  The Sample() and SampleBoundary() procedures sample states independently and uniformly from the interior S^o and the boundary ∂S, respectively.

4.1.2 Nearest Neighbors  Given z ∈ S, a set Y ⊆ S of states, and any k ∈ N, the procedure Nearest(z, Y, k) returns the k states z′ ∈ Y that are closest to z in terms of the Euclidean norm.

4.1.3 Time Intervals  Given a state z ∈ S and a number k ∈ N, the procedure ComputeHoldingTime(z, k) returns a holding time computed as follows:

ComputeHoldingTime(z, k) = γ_t (log k / k)^{θςρ/d_x},

where γ_t > 0 is a constant, and ς and θ are constants in (0, 1) and (0, 1], respectively.^1 The parameter ρ ∈ (0, 0.5] defines the Hölder continuity of the cost rate function g(·, ·) as in Section 2.

1 Typical values of ς lie in [0.999, 1).

4.1.4 Transition Probabilities  Given a state z ∈ S, a subset Y ⊆ S, a control v ∈ U, and a positive number τ describing a holding time, the procedure ComputeTranProb(z, v, τ, Y) returns (i) a finite set Z_near ⊂ S of states such that the state z + f(z, v)τ belongs to the convex hull of Z_near and ||z′ − z||_2 = O(τ) for all z′ ≠ z in Z_near, and (ii) a function p that maps Z_near to non-negative real numbers such that p(·) is a probability distribution over the support Z_near.
It is crucial to ensure that these transition probabilities result in a sequence of locally consistent chains in the algorithm. There are several ways to construct such transition probabilities. One possible construction, based on solving a system of linear equations, can be found in [18]. In particular, we choose Z_near = Nearest(z + f(z, v)τ, Y, s), where s ∈ N is some constant, and define the transition probabilities p : Z_near → R_{≥0} to satisfy:

(i)  Σ_{z′∈Z_near} p(z′)(z′ − z) = f(z, v)τ + o(τ),
(ii) Σ_{z′∈Z_near} p(z′)(z′ − z)(z′ − z)^T = F(z, v)F(z, v)^T τ + f(z, v)f(z, v)^T τ² + o(τ),
(iii) Σ_{z′∈Z_near} p(z′) = 1.

An alternative way to compute the transition probabilities is to approximate them with local Gaussian distributions. We choose Z_near = Nearest(z + f(z, v)τ, Y, s), where s = Θ(log(|Y|)). Let N_{m,σ}(·) denote the density of the (possibly multivariate) Gaussian distribution with mean m and covariance σ. Define the transition probabilities as follows:

p(z′) = N_{m,σ}(z′) / Σ_{y∈Z_near} N_{m,σ}(y),

where m = z + f(z, v)τ and σ = F(z, v)F(z, v)^T τ. This expression can be evaluated easily for any fixed v ∈ U. As |Z_near| approaches infinity, the above construction satisfies local consistency almost surely. As we will discuss in Section 4.2, the size of the support Z_near affects the complexity of the iMDP algorithm. We note that solving a system of linear equations requires computing and handling a matrix of size (d_x² + d_x + 1) × |Z_near|, where |Z_near| is constant. When d_x and |Z_near| are large, the constant factor of the complexity is large. In contrast, computing the local Gaussian approximation requires only |Z_near| evaluations. Thus, although the local Gaussian approximation yields higher time complexity, it is more convenient to compute.

4.1.5 Backward Extension  Given T > 0 and two states z, z′ ∈ S, the procedure ExtendBackwards(z, z′, T) returns a triple (x, v, τ) such that (i) ẋ(t) = f(x(t), u(t)) and u(t) = v ∈ U for all t ∈ [0, τ], (ii) τ ≤ T, (iii) x(t) ∈ S for all t ∈ [0, τ], (iv) x(τ) = z, and (v) x(0) is close to z′. If no such trajectory exists, the procedure returns failure.^2 We can solve for the triple (x, v, τ) by sampling several controls v and choosing the control whose resulting x(0) is closest to z′.

2 This procedure is used in the algorithm solely for the purpose of inheriting the "rapid exploration" property of the RRT algorithm [14, 17].

4.1.6 Sampling and Discovering Controls  The procedure ConstructControls(k, z, Y, T) returns a set of k controls in U. We can uniformly sample k controls in U. Alternatively, for each state z′ ∈ Nearest(z, Y, k), we solve for a control v ∈ U such that (i) ẋ(t) = f(x(t), u(t)) and u(t) = v ∈ U for all t ∈ [0, T], (ii) x(t) ∈ S for all t ∈ [0, T], and (iii) x(0) = z and x(T) = z′.
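As a concrete illustration of the primitive procedures in Sections 4.1.3 and 4.1.4, the sketch below computes a holding time and builds the local Gaussian transition probabilities over a nearest-neighbor support. It is a minimal sketch under stated assumptions: the constants γ_t, θ, ς, ρ, the brute-force nearest-neighbor search, the support-size rule, and the placeholder dynamics are illustrative choices, not the exact implementation used in the paper.

```python
import numpy as np

# Illustrative constants (assumed values, not tuned for any particular problem).
GAMMA_T, THETA, VARSIGMA, RHO, D_X = 1.0, 0.5, 0.99, 0.5, 2

def compute_holding_time(k):
    """Holding time gamma_t * (log k / k)^(theta * varsigma * rho / d_x) of Section 4.1.3."""
    return GAMMA_T * (np.log(k) / k) ** (THETA * VARSIGMA * RHO / D_X)

def nearest(z, Y, k):
    """Return the k states in Y (array of shape (m, d_x)) closest to z in Euclidean norm."""
    idx = np.argsort(np.linalg.norm(Y - z, axis=1))[:k]
    return Y[idx]

def compute_tran_prob(z, v, tau, Y, f, F):
    """Local Gaussian construction of ComputeTranProb (Section 4.1.4)."""
    s = max(2 * D_X, int(np.ceil(np.log(len(Y)))))    # support size Theta(log |Y|)
    m = z + f(z, v) * tau                              # mean z + f(z, v) tau
    Sigma = F(z, v) @ F(z, v).T * tau                  # covariance F F^T tau
    Z_near = nearest(m, Y, s)
    diffs = Z_near - m
    # Unnormalized Gaussian weights; the density's normalizing constant cancels below.
    expo = -0.5 * np.einsum('ij,jk,ik->i', diffs, np.linalg.inv(Sigma), diffs)
    w = np.exp(expo - expo.max())                      # shift exponents for numerical stability
    return Z_near, w / w.sum()

# Usage with placeholder single-integrator dynamics dx = v dt + 0.3 dw:
rng = np.random.default_rng(0)
Y = rng.uniform(-1.0, 1.0, size=(1000, D_X))
f = lambda z, v: v
F = lambda z, v: 0.3 * np.eye(D_X)
Z_near, p = compute_tran_prob(np.zeros(D_X), np.array([0.2, 0.0]),
                              compute_holding_time(len(Y)), Y, f, F)
print(len(Z_near), p.sum())
```

The returned pair (Z_near, p) plays the role of the support and the locally consistent transition probability P_n(· | z, v) used in the Bellman update of Eq. (9) below.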
4.2 Algorithm Description

The iMDP algorithm is given in Algorithm 1. The algorithm incrementally refines a sequence of (finite-state) MDPs M_n = (S_n, U, P_n, G_n, H_n) and the associated holding-time function Δt_n that consistently approximate the system in Eq. (1). In particular, given a state z ∈ S_n and a holding time Δt_n(z), we implicitly define the stage cost function G_n(z, v) = Δt_n(z) g(z, v) for all v ∈ U and the terminal cost function H_n(z) = h(z). We also associate with each z ∈ S_n a cost value J_n(z) and a control μ_n(z). We refer to J_n as a cost value function over S_n. In the following discussion, we describe how to construct S_n, P_n, J_n, and μ_n over iterations. We note that, in most cases, we only need to construct and access P_n on demand.

In every iteration of the main loop (Lines 4-16), we sample an additional state from the boundary of the state space S and set J_n, μ_n, and Δt_n for this state at Line 5. Subsequently, we also sample a state from the interior of S (Line 6), denoted as z_s. We compute the nearest state z_nearest to the sampled state among the states already in the current MDP (Line 7). The algorithm computes a trajectory that reaches z_nearest starting at some state near z_s (Line 8) using a control signal u_new(0..τ). The new trajectory is denoted by x_new : [0, τ] → S, and the starting state of the trajectory, i.e., x_new(0), is denoted by z_new. The new state z_new is added to the state set, and the cost value J_n(z_new), control μ_n(z_new), and holding time Δt_n(z_new) are initialized at Line 11.

Algorithm 1: iMDP()
 1  (n, S_0, J_0, μ_0, Δt_0) ← (1, ∅, ∅, ∅, ∅);
 2  while n < N do
 3      (S_n, J_n, μ_n, Δt_n) ← (S_{n−1}, J_{n−1}, μ_{n−1}, Δt_{n−1});
        // Add a new state to the boundary
 4      z_s ← SampleBoundary();
 5      (S_n, J_n(z_s), μ_n(z_s), Δt_n(z_s)) ← (S_n ∪ {z_s}, h(z_s), null, 0);
        // Add a new state to the interior
 6      z_s ← Sample();
 7      z_nearest ← Nearest(z_s, S_n, 1);
 8      if (x_new, u_new, τ) ← ExtendBackwards(z_nearest, z_s, T_0) then
 9          z_new ← x_new(0);
10          cost ← τ g(z_new, u_new) + α^τ J_n(z_nearest);
11          (S_n, J_n(z_new), μ_n(z_new), Δt_n(z_new)) ← (S_n ∪ {z_new}, cost, u_new, τ);
            // Perform L_n ≥ 1 (asynchronous) value iterations
12          for i = 1 → L_n do
                // Update z_new and K_n = Θ(|S_n|^θ) states (0 < θ ≤ 1, K_n < |S_n|)
13              Z_update ← Nearest(z_new, S_n \ ∂S_n, K_n) ∪ {z_new};
14              for z ∈ Z_update do
15                  Update(z, S_n, J_n, μ_n, Δt_n);
16      n ← n + 1;

Update of cost value and control  The algorithm updates the cost values and controls of the finer MDP in Lines 13-15. We perform L_n ≥ 1 value iterations, in which we update the new state z_new and K_n = Θ(|S_n|^θ) other states in the state set, where K_n < |S_n|. When all states in the MDP are updated, i.e., K_n + 1 = |S_n|, the L_n value iterations are implemented in a synchronous manner. Otherwise, the L_n value iterations are implemented in an asynchronous manner. The set of states to be updated is denoted as Z_update (Line 13). To update a state z ∈ Z_update that is not on the boundary, in the call to the procedure Update (Line 15), we solve the following Bellman equation:^3

J_n(z) = min_{v∈U} { G_n(z, v) + α^{Δt_n(z)} E_{P_n}[J_{n−1}(y) | z, v] },   (9)

and set μ_n(z) = v*(z), where v*(z) is the minimizing control of the above optimization problem.

3 Although the argument of Update at Line 15 is J_n, we actually process the previous cost values J_{n−1} due to Line 3. We can implement Line 3 by simply sharing memory for (S_n, J_n, μ_n, Δt_n) and (S_{n−1}, J_{n−1}, μ_{n−1}, Δt_{n−1}).
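Algorithm 2, given next, is the pseudocode that iMDP actually uses for this update. As a complementary, hedged illustration, the following sketch shows one way the right-hand side of Eq. (9) could be minimized over a sampled control set, reusing the compute_holding_time and compute_tran_prob helpers sketched in Section 4.1; the uniform control sampling, the control bounds u_lo and u_hi, and the dictionary layout of the cost values are assumptions made for illustration only.

```python
import numpy as np

def bellman_update(z, J, S, f, F, g, alpha, u_lo, u_hi, rng,
                   compute_holding_time, compute_tran_prob):
    """One Bellman update (Eq. 9) at state z over C_n = Theta(log |S_n|) sampled controls.

    J maps each sampled state (stored as a coordinate tuple) to its current cost
    value; S is the array of all sampled states.  Returns (best cost, best control,
    holding time), which Algorithm 2 would write back into J_n, mu_n, and Delta t_n.
    """
    n = len(S)
    tau = compute_holding_time(n)
    num_controls = max(1, int(np.ceil(np.log(n))))             # C_n = Theta(log |S_n|)
    best_cost, best_u = np.inf, None
    for _ in range(num_controls):
        v = rng.uniform(u_lo, u_hi)                            # a uniformly sampled control in U
        Z_near, p = compute_tran_prob(z, v, tau, S, f, F)
        expected_next = sum(pi * J[tuple(zi)] for pi, zi in zip(p, Z_near))
        cost = tau * g(z, v) + (alpha ** tau) * expected_next  # right-hand side of Eq. (9)
        if cost < best_cost:
            best_cost, best_u = cost, v
    return best_cost, best_u, tau
```

A caller following Algorithm 2 would accept the returned control only if best_cost improves on the stored value J_n(z) (Line 6 of Algorithm 2).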
Algorithm 2: Update(z ∈ S_n, S_n, J_n, μ_n, Δt_n)
 1  τ ← ComputeHoldingTime(z, |S_n|);
    // Sample or discover C_n = Θ(log(|S_n|)) controls
 2  U_n ← ConstructControls(C_n, z, S_n, τ);
 3  for v ∈ U_n do
 4      (Z_near, p_n) ← ComputeTranProb(z, v, τ, S_n);
 5      J ← τ g(z, v) + α^τ Σ_{y∈Z_near} p_n(y) J_n(y);
 6      if J < J_n(z) then
 7          (J_n(z), μ_n(z), Δt_n(z), κ_n(z)) ← (J, v, τ, |S_n|);

Algorithm 3: Policy(z ∈ S, n)
 1  z_nearest ← Nearest(z, S_n, 1);
 2  return μ(z) = (μ_n(z_nearest), Δt_n(z_nearest))

There are several ways to solve Eq. (9) over the continuous control space U efficiently. If P_n(· | z, v) and g(z, v) are affine functions of v, and U is convex, the above optimization has a linear objective function and a convex set of constraints. Such problems are widely studied in the literature [25]. More generally, we can uniformly sample a set of controls, called U_n, from the control space U, evaluate the right-hand side (RHS) of Eq. (9) for each v ∈ U_n to find the best v* in U_n, i.e., the control with the smallest RHS value, and thus update J_n(z) and μ_n(z). When lim_{n→∞} |U_n| = ∞, we can solve Eq. (9) arbitrarily well (see Theorem 8). Thus, it is sufficient to construct the set U_n with Θ(log(|S_n|)) controls using the procedure ConstructControls, as described in Algorithm 2 (Line 2). The set Z_near and the transition probability P_n(· | z, v), constructed consistently over the set Z_near, are returned by the procedure ComputeTranProb for each v ∈ U_n (Line 4). Depending on the particular method used to build P_n (i.e., solving a system of linear equations or evaluating a local Gaussian distribution), the cardinality of Z_near is either set to a constant or increases as Θ(log(|S_n|)). Subsequently, the procedure chooses the best control among the constructed controls to update J_n(z) and μ_n(z) (Line 7). We note that in Algorithm 2, before improving the cost value at z by comparing new controls, we can re-evaluate the cost value with the current control μ_n(z) over the holding time Δt_n(z) by adding the current control μ_n(z) to U_n. The reason is that the current control may still be the best control compared with the other controls in U_n.

Complexity of iMDP  The time complexity per iteration of the implementation in Algorithms 1-2 is either O(|S_n|^θ log |S_n|) or O(|S_n|^θ (log |S_n|)²). In particular, if the procedure ComputeTranProb solves a set of linear equations to construct P_n so that the cardinality of Z_near remains constant, the time complexity per iteration is O(|S_n|^θ log |S_n|), where log |S_n| accounts for the number of processed controls, and |S_n|^θ accounts for the number of updated states in one iteration. Otherwise, if the procedure ComputeTranProb uses a local Gaussian distribution to construct P_n so that the cardinality of Z_near increases as Θ(log |S_n|), the time complexity per iteration is O(|S_n|^θ (log |S_n|)²). The processing time from the beginning until the iMDP algorithm stops after n iterations is thus either O(|S_n|^{1+θ} log |S_n|) or O(|S_n|^{1+θ} (log |S_n|)²). Since we only need to access locally consistent transition probabilities on demand, the space complexity of the iMDP algorithm is O(|S_n|).
Finally, the size of state space S n is | S n | = Θ( n ) due to our sampling strategy. 4.3 Feedback Control As we will see in Theorems 7-8, the sequence of cost value functions J n arbitrarily approximates the original optimal cost-to-go J ∗ . Therefore, we can perform a Bellman update based on the approxi- mated cost-to-go J n (using the stochastic continuous-time dynamics) to obtain a policy control for any n . However, we will discuss in Theorem 9 that the sequence of μ n also approximates arbitrarily well an optimal control policy. In other words, in the iMDP algorithm, we also incrementally con- struct an optimal control policy. In the following paragraph, we present an algorithm that converts a policy for a discrete system to a policy for the original continuous problem. Given a level of approximation n ∈ N , the control policy μ n generated by the iMDP algorithm is used for controlling the original system described by Eq. (1) using the procedure given in Al- gorithm 3. This procedure computes the state in M n that is closest to the current state of the original system and applies the control attached to this closest state over the associated holding time. 5 Analysis In this section, let ( M n = ( S n , U, P n , G n , H n ) , ∆ t n , J n , μ n ) denote the MDP, holding times, cost value function, and policy returned by Algorithm 1 at the end n iterations. The proofs of lemmas and theorems in this section can be found in Appendix. For large n , states in S n are sampled uniformly in the state space S [17]. Moreover, the dispersion of S n shrinks with the rate O ((log | S n | / | S n | ) 1 /d x ) as described in the next lemma. Lemma 4 Recall that ζ n measures of the dispersion of S n (Eq. 8). We have the following event happens with probability one: ζ n = O ((log | S n | / | S n | ) 1 /d x ) . The proof is based on the fact that, if we partition R d x into cells of volume O (log( | S n | ) / | S n | ), then, almost surely, every cell contains at least an element of S n , as | S n | approaches infinity. The above lemma leads to the following results. Lemma 5 The MDP sequence {M n } ∞ n =0 and holding times { ∆ t n } ∞ n =0 returned by Algorithm 1 are locally consistent with the system described by Eq. (1) for large n with probability one. Theorem 1 and Lemma 5 together imply that the trajectories of the controlled Markov chains approximate those of the original stochastic dynamical system in Eq. (1) arbitrarily well as n approaches to infinity. Moreover, recall that || · || S n is the sup-norm over S n , the following theorem shows that J ∗ n converges uniformly, with probability one, to the original optimal value function J ∗ . Theorem 6 Given n ∈ N , for all z ∈ S n , J ∗ n ( z ) denotes the optimal value function evaluated at state z for the finite-state MDP M n returned by Algorithm 1. Then, the following event holds with probability one: lim n →∞ || J ∗ n − J ∗ || S n = 0 . 12 In other words, J ∗ n converges to J ∗ uniformly. In particular, || J ∗ n − J ∗ || S n = O ((log | S n | / | S n | ) ρ/d x ) . The proof follows immediately from Lemmas 4-5 and Theorems 2-3. The theorem suggests that we can compute J ∗ n for each discrete MDP M n before sampling more states to construct M n +1 . Indeed, in Algorithm 1, when updated states are chosen randomly as subsets of S n , and L n is large enough, we compute J ∗ n using asynchronous value iterations [26, 27]. Subsequent theorems present stronger results. 
We will prove the asymptotic optimality of the cost value J_n returned by the iMDP algorithm as n approaches infinity, without directly approximating J*_n for each n. We first consider the case when we can solve the Bellman update (Eq. 9) exactly and 1 ≤ L_n, K_n = Θ(|S_n|^θ) < |S_n|.

Theorem 7  For all z ∈ S_n, let J_n(z) be the cost value of the state z computed by Algorithm 1 and Algorithm 2 after n iterations with 1 ≤ L_n and K_n = Θ(|S_n|^θ) < |S_n|. Let J_{n,μ_n} be the cost-to-go function of the returned policy μ_n on the discrete MDP M_n. If the Bellman update in Eq. 9 is solved exactly, then the following events hold with probability one:
i.  lim_{n→∞} ||J_n − J*_n||_{S_n} = 0, and lim_{n→∞} ||J_n − J*||_{S_n} = 0,
ii. lim_{n→∞} |J_{n,μ_n}(z) − J*(z)| = 0, ∀ z ∈ S_n.

Theorem 7 enables an incremental computation of the optimal cost J* without the need to compute J*_n exactly before sampling more states. Moreover, the cost-to-go functions J_{n,μ_n} induced by the approximating policies μ_n also converge pointwise to the optimal cost-to-go J* with probability one. When we solve the Bellman update in Eq. 9 via sampling, the following result holds.

Theorem 8  For all z ∈ S_n, let J_n(z) be the cost value of the state z computed by Algorithm 1 and Algorithm 2 after n iterations with 1 ≤ L_n and K_n = Θ(|S_n|^θ) < |S_n|. Let J_{n,μ_n} be the cost-to-go function of the returned policy μ_n on the discrete MDP M_n. If the Bellman update in Eq. 9 is solved via sampling such that lim_{n→∞} |U_n| = ∞, then
i.  ||J_n − J*_n||_{S_n} converges to 0 in probability; thus, J_n converges uniformly to J* in probability,
ii. lim_{n→∞} |J_{n,μ_n}(z) − J*(z)| = 0 for all z ∈ S_n with probability one.

We emphasize that while the convergence of J_n to J* is weaker than the convergence in Theorem 7, the convergence of J_{n,μ_n} to J* remains intact. Importantly, Theorem 1 and Theorems 7-8 together assert that, starting from any initial state, trajectories and control processes provided by the iMDP algorithm approximate arbitrarily well optimal trajectories and optimal control processes of the original continuous problem. More precisely, with probability one, the induced random probability measures of approximating trajectories and approximating control processes converge weakly to the probability measures of optimal trajectories and optimal control processes of the continuous problem.

Finally, the next theorem evaluates the quality of the anytime control policies returned by Algorithm 3.

Theorem 9  Let μ_n : S → U be the interpolated policy on S of μ_n : S_n → U as described in Algorithm 3: for all z ∈ S, μ_n(z) = μ_n(y_n), where y_n = argmin_{z′∈S_n} ||z′ − z||_2. Then there exists an optimal control policy μ* of the original problem^4 such that, for all z ∈ S,

lim_{n→∞} μ_n(z) = μ*(z)  w.p.1,

if μ* is continuous at z.

4 Otherwise, an optimal relaxed control policy m* exists [18], and μ_n approximates m* arbitrarily well.

6 Experiments
Figure 1: Results of iMDP on a stochastic LQR problem. Figure 1(a) shows the convergence of the approximated cost-to-go to the optimal analytical cost-to-go over iterations. Anytime solutions are compared to the analytical optimal solution after 200 iterations (0.39 s) and 600 iterations (2.16 s) in Figs. 1(b)-1(c). The mean and 1-σ interval of the error ||J_n − J*||_{S_n} are shown in Fig. 1(d) using 50 trials. The corresponding mean and standard deviation of the error ||J_n − J*||_{S_n} are depicted on a log-log plot in Fig. 1(e). In Fig. 1(f), we plot the ratio of ||J_n − J*||_{S_n} to (log(|S_n|)/|S_n|)^{0.5} to show the convergence rate of J_n to J*. Figure 1(g) shows the ratio of the running time per iteration T_n to |S_n|^{0.5} log(|S_n|). The ratios in Figs. 1(f)-1(g) are averaged over 50 trials.

We used a computer with a 2.0-GHz Intel Core 2 Duo T6400 processor and 4 GB of RAM to run the experiments. In the first experiment, we investigated the convergence of the iMDP algorithm on a stochastic LQR problem:

inf E[ ∫_0^τ 0.95^t ( 3.5 x(t)² + 200 u(t)² ) dt + h(x(τ)) ]  such that  dx = (3x + 11u) dt + √0.2 dw,

on the state space S = [−6, 6], where τ is the first hitting time to the boundary ∂S = {−6, 6}, and h(z) = 414.55 for z ∈ ∂S and 0 otherwise. The optimal cost-to-go from x(0) = z is 10.39 z² + 40.51, and the optimal control policy is u(t) = −0.5714 x(t).
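As a sanity check on this example (not part of the paper's reported experiments), the short vectorized sketch below simulates the closed-loop system dx = (3x + 11u) dt + √0.2 dw under the stated policy u(t) = −0.5714 x(t) and compares the Monte Carlo discounted cost from a few starting states against the analytical value 10.39 z² + 40.51. The step size, horizon cap, and number of rollouts are arbitrary choices.

```python
import numpy as np

def mc_cost(z0, rollouts=2000, dt=1e-3, t_max=100.0, seed=0):
    """Monte Carlo discounted cost of the policy u = -0.5714 x for the LQR example above."""
    rng = np.random.default_rng(seed)
    x = np.full(rollouts, float(z0))
    cost = np.zeros(rollouts)
    alive = np.ones(rollouts, dtype=bool)        # rollouts that have not yet left S = [-6, 6]
    t = 0.0
    while t < t_max and alive.any():
        u = -0.5714 * x[alive]
        cost[alive] += (0.95 ** t) * (3.5 * x[alive]**2 + 200.0 * u**2) * dt
        x[alive] += (3.0 * x[alive] + 11.0 * u) * dt \
                    + np.sqrt(0.2 * dt) * rng.standard_normal(alive.sum())
        exited = alive & (np.abs(x) >= 6.0)
        cost[exited] += (0.95 ** t) * 414.55     # terminal cost h on the boundary
        alive &= ~exited
        t += dt
    return cost.mean()

for z in (0.0, 2.0, -4.0):
    print(z, round(mc_cost(z), 1), round(10.39 * z**2 + 40.51, 1))  # simulated vs analytical
```

With these settings the simulated costs should land close to the analytical cost-to-go, which is the quantity the iMDP approximations J_n are compared against below.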
For this problem, since the cost-rate function is bounded on S and Hölder continuous with exponent 1.0, we use ρ = 0.5. In addition, we choose θ = 0.5 and ς = 0.99 in the procedure ComputeHoldingTime. We used the procedure Update as presented in Algorithm 2 with log(n) sampled controls and transition probabilities having constant support size. Figures 1(a)-1(c) show the convergence of the approximated cost-to-go, anytime controls, and trajectories to the optimal analytical counterparts over iterations. We observe in Fig. 1(d) that both the mean and the variance of the cost-to-go error decrease quickly to zero. The log-log plot in Fig. 1(e) clearly indicates that both the mean and the standard deviation of the error ||J_n − J*||_{S_n} continue to decrease. This observation is consistent with Theorems 7-8. Moreover, Fig. 1(f) shows the ratio of ||J_n − J*||_{S_n} to (log(|S_n|)/|S_n|)^{0.5}, indicating the convergence rate of J_n to J*, which agrees with Theorem 6. Finally, Fig. 1(g) plots the ratio of the running time per iteration T_n to |S_n|^{0.5} log(|S_n|), asserting that the time complexity per iteration is O(|S_n|^{0.5} log(|S_n|)).

Figure 2: A system with stochastic single integrator dynamics in a cluttered environment. With an appropriate cost structure assigned to the goal and obstacle regions, the system reaches the goal in the upper right corner and avoids obstacles. The standard deviation of the noise in the x and y directions is 0.26. The maximum velocity is one. Panels: (a)-(c) policies after 500 (0.5 s), 1,000 (1.2 s), and 2,000 (2.1 s) iterations; (d)-(f) contours of J_500, J_1,000, and J_2,000; (g)-(i) policies after 4,000 (7.6 s), 10,000 (28 s), and 20,000 (80 s) iterations; (j)-(l) contours of J_4,000, J_10,000, and J_20,000. The anytime control policies and the corresponding contours of the approximated cost-to-go shown in Figs. 2(a)-2(l) indicate that iMDP quickly explores the state space and refines control policies over time.

In the second experiment, we controlled a system with stochastic single integrator dynamics to a goal region with free ending time in a cluttered environment. The cost objective is discounted with α = 0.95. The system pays zero cost for each action it takes and pays a cost of −1 when reaching the goal region X_goal. The maximum velocity of the system is one. The system stops when it collides with obstacles. We show how the system reaches the goal in the upper right corner and avoids obstacles under different anytime controls. Anytime control policies after up to 2,000 iterations in Figs. 2(a)-2(c), which were obtained within 2.1 seconds, indicate that iMDP quickly explores the state space and refines control policies over time. The corresponding contours of the cost value functions shown in Figs. 2(d)-2(f) further illustrate the refinement and convergence of the cost value functions to the original optimal cost-to-go over time. We observe that the performance is suitable for real-time control. Furthermore, anytime control policies and cost value functions after up to 20,000 iterations are shown in Figs. 2(g)-2(i) and Figs. 2(j)-2(l), respectively. We note that the control policies seem to converge faster than the cost value functions over iterations. This phenomenon is due to the fact that the cost value functions J_n are only estimates of the optimal cost-to-go J*: when J_n(z) − J*(z) is constant for all z ∈ S_n, the updated controls after a Bellman update are close to their optimal values. The phenomenon thus favors the use of the iMDP algorithm in real-time applications where only a small number of iterations are executed.
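To illustrate how the anytime policies evaluated in these experiments are applied to the continuous system, the following sketch mirrors Algorithm 3: at each step it looks up the control and holding time attached to the nearest sampled state and holds that control while integrating stochastic single-integrator dynamics. The state set, the policy table, the goal test, and the noise level are stand-in data, not the policies actually produced by iMDP in the figures above.

```python
import numpy as np

def policy_lookup(z, S_n, mu_n, dt_n):
    """Algorithm 3: control and holding time of the sampled state nearest to z."""
    i = int(np.argmin(np.linalg.norm(S_n - z, axis=1)))
    return mu_n[i], dt_n[i]

def rollout(z0, S_n, mu_n, dt_n, goal, sigma=0.26, t_max=30.0, seed=0):
    """Closed-loop rollout of dx = u dt + sigma dw under the interpolated policy."""
    rng = np.random.default_rng(seed)
    z, t, path = np.array(z0, dtype=float), 0.0, [np.array(z0, dtype=float)]
    while t < t_max and np.linalg.norm(z - goal) > 0.5:
        u, tau = policy_lookup(z, S_n, mu_n, dt_n)
        z = z + u * tau + sigma * np.sqrt(tau) * rng.standard_normal(2)   # hold u for tau
        t += tau
        path.append(z.copy())
    return np.array(path)

# Stand-in policy table: sampled states whose controls point toward the goal at unit speed.
rng = np.random.default_rng(1)
S_n = rng.uniform(-10.0, 10.0, size=(500, 2))
goal = np.array([8.0, 8.0])
directions = goal - S_n
mu_n = directions / np.linalg.norm(directions, axis=1, keepdims=True)
dt_n = np.full(len(S_n), 0.1)
trajectory = rollout((0.0, -5.0), S_n, mu_n, dt_n, goal)
print(trajectory.shape)
```

Because the control is a feedback law looked up from the current state, disturbances are absorbed at every step without re-planning, which is the behavior exploited in the comparison with RRT-like open-loop plans below.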
Figure 3: Performance against different process noise magnitudes: (a) noise-free, after 1,000 iterations (1.2 s); (b) stochastic, after 300 iterations (0.4 s); (c) stochastic, after 1,000 iterations (1.1 s). The system starts from (0, −5) to reach the goal. In Fig. 3(a), the environment is noise-free. In Figs. 3(b)-3(c), the standard deviation of the noise in the x and y directions is 0.37. In the latter case, the system first discovers an unsafe route that is prone to collisions and then discovers a safer route after a few seconds. (In Fig. 3(b), we temporarily let the system continue even after a collision to observe the entire trajectory.)

In the third experiment, we tested the effect of the process noise magnitude on the solution trajectories. In Figs. 3(a)-3(c), the system wants to arrive at a goal area either by passing through a narrow corridor or by detouring around the two blocks. In Fig. 3(a), when the dynamics are noise-free (obtained by setting a small diffusion matrix), the iMDP algorithm quickly decides to follow the narrow corridor. In contrast, when the environment affects the dynamics of the system (Figs. 3(b)-3(c)), the iMDP algorithm decides to detour along a safer route. This experiment demonstrates the benefit of iMDP in handling process noise compared to RRT-like algorithms [14, 17]. We emphasize that although iMDP spends slightly more time on computation per iteration, iMDP provides feedback policies rather than open-loop policies; thus, re-planning is not crucial in iMDP.

Figure 4: Results of a six-degree-of-freedom manipulator example: (a) trajectory snapshots after 3,000 iterations (15.8 s); (b) mean and standard deviation of the cost values J_n(x_0). The system is modeled as a single integrator with states representing the angles between the segments and the horizontal line. The control magnitude is bounded by 0.3. The standard deviation of the noise at each joint is 0.032 rad. In Fig. 4(a), the manipulator is controlled to reach a goal with a final upright position. In Fig. 4(b), the mean and standard deviation of the computed cost values for the initial position are plotted using 50 trials.

In the fourth experiment, we examined the performance of the iMDP algorithm for high-dimensional systems such as a manipulator with six degrees of freedom. The manipulator is modeled as a single integrator whose states represent the angles between the segments and the horizontal line. The maximum control magnitude for all joints is 0.3. The standard deviation of the noise at each joint is 0.032 rad. The manipulator is controlled to reach a goal with a final upright position in minimum time. In Fig. 4(a), we show a resulting trajectory after 3,000 iterations, computed in 15.8 seconds. In addition, we show the mean and standard deviation of the computed cost values for the initial position over 50 trials in Fig. 4(b). As shown in the plots, the solution converges quickly after about 1,000 iterations. These results highlight the suitability of the iMDP algorithm for computing feedback policies for complex high-dimensional systems in stochastic environments.

7 Conclusions

We have introduced and analyzed the incremental sampling-based iMDP algorithm for stochastic optimal control. The algorithm natively handles continuous time, continuous state space, and continuous control space. The main idea is to consistently approximate the underlying continuous problems by discrete structures in an incremental manner. In particular, we incrementally build discrete MDPs by sampling and extending states in the state space.
The iMDP algorithm refines the quality of anytime control policies from discrete MDPs in terms of expected costs over iterations and ensures almost sure convergence to an optimal continuous control policy. The iMDP algorithm can be implemented such that its time complexity per iteration grows as O ( k θ log k ) with 0 < θ ≤ 1 leading to the total processing time O ( k 1+ θ log k ) , where k is the number of states in MDPs which increases linearly over iterations. Together with linear space complexity, iMDP is a practical incremental algorithm. The enabling technical ideas lie in novel methods to compute Bellman updates. Further extension of the work is broad. In the future, we would like to study the effect of biased- sampling techniques on the performance of iMDP. The algorithm is also highly parallelizable, and efficient parallel versions of the iMDP algorithm are left for future study. Remarkably, Markov chain approximation methods are also tools to handle deterministic control and non-linear filtering problems. Thus, applications of the iMDP algorithm can be extended to classical path planning with deterministic dynamics. We emphasize that the iMDP algorithm would remove the necessity for exact point-to-point steering of RRT-like algorithms in path planning applications. In addition, we plan to investigate incremental sampling-based algorithms for online smoothing and estimation in the presence of sensor noise. The combination of incremental sampling-based algorithms for control and estimation will provide insights into addressing stochastic optimal control problems with imperfect state information, known as Partially Observable Markov Decision Processes (POMDPs). Although POMDPs are fundamentally more challenging than the problem that is studied in this paper, our approach differentiates itself from existing sampling-based POMDP solvers (see, e.g., [28, 29]) with its incremental nature and computationally-efficient search. Hence, the research presented in this paper opens a new alley to handle POMDPs in our future work. ACKNOWLEDGMENTS This research was supported in part by the National Science Foundation, grant CNS-1016213. V. A. Huynh gratefully thanks the Arthur Gelb Foundation for supporting him during this work. References [1] W. H. Fleming and J. L. Stein, “Stochastic optimal control, international finance and debt,” Journal of Banking and Finance , vol. 28, pp. 979–996, 2004. [2] S. P. Sethi and G. L. Thompson, Optimal Control Theory: Applications to Management Science and Economics , 2nd ed. Springer, 2006. [3] E. Todorov, “Stochastic optimal control and estimation methods adapted to the noise charac- teristics of the sensorimotor system,” Neural Computation , vol. 17, pp. 1084–1108, 2005. [4] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics (Intelligent Robotics and Au- tonomous Agents) , 2001. [5] V. D. Blondel and J. N. Tsitsiklis, “A survey of computational complexity results in systems and control,” Automatica , vol. 36, no. 9, pp. 1249–1274, 2000. [6] L. Grne, “An adaptive grid scheme for the discrete hamilton-jacobi-bellman equation,” Nu- merische Mathematik , vol. 75, pp. 319–337, 1997. 18 [7] S. Wang, L. S. Jennings, and K. L. Teo, “Numerical solution of hamilton-jacobi-bellman equa- tions by an upwind finite volume method,” J. of Global Optimization , vol. 27, pp. 177–192, November 2003. [8] M. Boulbrachene and B. 
[9] C. Chow and J. Tsitsiklis, “An optimal one-way multigrid algorithm for discrete-time stochastic control,” IEEE Transactions on Automatic Control, vol. AC-36, pp. 898–914, 1991.

[10] R. Munos, A. Moore, and S. Singh, “Variable resolution discretization in optimal control,” Machine Learning, 2001, pp. 291–323.

[11] J. Rust, “Using randomization to break the curse of dimensionality,” Econometrica, vol. 56, no. 3, May 1997.

[12] ——, “A comparison of policy iteration methods for solving continuous-state, infinite-horizon Markovian decision problems using random, quasi-random, and deterministic discretizations,” 1997.

[13] L. E. Kavraki, P. Svestka, J.-C. Latombe, and M. H. Overmars, “Probabilistic roadmaps for path planning in high-dimensional configuration spaces,” IEEE Transactions on Robotics and Automation, vol. 12, pp. 566–580, 1996.

[14] S. M. LaValle, “Rapidly-exploring random trees: A new tool for path planning,” Tech. Rep., 1998.

[15] J. Kim and J. P. Ostrowski, “Motion planning of aerial robot using rapidly-exploring random trees with dynamic constraints,” in ICRA, 2003, pp. 2200–2205.

[16] Y. Kuwata, J. Teo, G. Fiore, S. Karaman, E. Frazzoli, and J. How, “Real-time motion planning with applications to autonomous urban driving,” IEEE Trans. on Control Systems Technology, vol. 17, no. 5, pp. 1105–1118, 2009.

[17] S. Karaman and E. Frazzoli, “Sampling-based algorithms for optimal motion planning,” International Journal of Robotics Research, vol. 30, no. 7, pp. 846–894, June 2011.

[18] H. J. Kushner and P. G. Dupuis, Numerical Methods for Stochastic Control Problems in Continuous Time (Stochastic Modelling and Applied Probability). Springer, Dec. 2000.

[19] R. Alterovitz, T. Siméon, and K. Goldberg, “The stochastic motion roadmap: A sampling framework for planning with Markov motion uncertainty,” in Robotics: Science and Systems III (Proc. RSS 2007). MIT Press, 2008, pp. 246–253.

[20] B. Oksendal, Stochastic Differential Equations: An Introduction with Applications, 3rd ed. New York, NY, USA: Springer-Verlag New York, Inc., 1992.

[21] I. Karatzas and S. E. Shreve, Brownian Motion and Stochastic Calculus (Graduate Texts in Mathematics), 2nd ed. Springer, Aug. 1991.

[22] L. C. Evans, Partial Differential Equations (Graduate Studies in Mathematics, vol. 19). American Mathematical Society, Jun. 1998.

[23] J. L. Menaldi, “Some estimates for finite difference approximations,” SIAM J. on Control and Optimization, vol. 27, pp. 579–607, 1989.

[24] P. Dupuis and M. R. James, “Rates of convergence for approximation schemes in optimal control,” SIAM J. Control Optim., vol. 36, pp. 719–741, March 1998.

[25] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, Mar. 2004.

[26] D. P. Bertsekas, Dynamic Programming and Optimal Control, Two Volume Set, 2nd ed. Athena Scientific, 2001.

[27] D. P. Bertsekas and J. N. Tsitsiklis, Parallel and Distributed Computation: Numerical Methods. Upper Saddle River, NJ, USA: Prentice-Hall, Inc., 1989.

[28] H. Kurniawati, D. Hsu, and W. Lee, “SARSOP: Efficient point-based POMDP planning by approximating optimally reachable belief spaces,” in Proc. Robotics: Science and Systems, 2008.

[29] S. Prentice and N. Roy, “The belief roadmap: Efficient planning in linear POMDPs by factoring the covariance,” in Proceedings of the 13th International Symposium on Robotics Research (ISRR), Hiroshima, Japan, November 2007.

[30] G. R. Grimmett and D. R. Stirzaker, Probability and Random Processes, 3rd ed. Oxford University Press, USA, Aug. 2001.
Appendix

A Notations and Preliminaries

We denote by N the set of natural numbers and by R the set of real numbers. A sequence on a set X is a mapping from N to X , denoted as { x n } ∞ n =0 , where x n ∈ X for each n ∈ N . Given a metric space X endowed with a metric d , a sequence { x n } ∞ n =0 ⊂ X is said to converge if there is a point x ∈ X , denoted as lim n →∞ x n , with the following property: for every ε > 0, there is an integer N such that n ≥ N implies that d ( x n , x ) < ε . A sequence of functions { f n } ∞ n =1 , in which each function f n is a mapping from X to R , converges pointwise to a function f on X if for every x ∈ X , the sequence of numbers { f n ( x ) } ∞ n =1 converges to f ( x ). The sequence of functions { f n } ∞ n =1 converges uniformly to a function f on X if the sequence { M n | M n = sup x ∈ X | f n ( x ) − f ( x ) |} ∞ n =1 converges to 0.

Let us consider a probability space (Ω , F , P ) where Ω is a sample space, F is a σ -algebra, and P is a probability measure. A set A ∈ F is called an event. The complement of an event A is denoted as A c . Given a sequence of events { A n } ∞ n =0 , we define lim sup n →∞ A n as ∩ ∞ n =0 ∪ ∞ k = n A k , i.e., the event that A n occurs infinitely often. In addition, the event lim inf n →∞ A n is defined as ∪ ∞ n =0 ∩ ∞ k = n A k . A random variable is a measurable function mapping from Ω to R . The expected value of a random variable Y is defined as E [ Y ] = ∫ Ω Y d P .

A sequence of random variables { Y n } ∞ n =0 converges surely to a random variable Y if lim n →∞ Y n ( ω ) = Y ( ω ) for all ω ∈ Ω. A sequence of random variables { Y n } ∞ n =0 converges almost surely or with probability one (w.p.1) to a random variable Y if P ( ω ∈ Ω | lim n →∞ Y n ( ω ) = Y ( ω )) = 1. Almost sure convergence of { Y n } ∞ n =0 to Y is denoted as Y n a.s. → Y . We say that a sequence of random variables { Y n } ∞ n =0 converges in distribution to a random variable Y if lim n →∞ F n ( x ) = F ( x ) for every x ∈ R at which F is continuous, where { F n } ∞ n =0 and F are the CDFs associated with { Y n } ∞ n =0 and Y , respectively. We denote this convergence as Y n d → Y . Convergence in distribution is also called weak convergence. If Y n d → Y , then lim n →∞ E [ f ( Y n )] = E [ f ( Y )] for all bounded continuous functions f . As a corollary, when { Y n } ∞ n =0 converges in distribution to 0, and Y n is bounded for all n , we have lim n →∞ E [ Y n ] = 0 and lim n →∞ E [ Y 2 n ] = 0, which together imply lim n →∞ V ar ( Y n ) = 0. We say that a sequence of random variables { Y n } ∞ n =0 converges in probability to a random variable Y , denoted as Y n p → Y , if for every ε > 0, we have lim n →∞ P ( | Y n − Y | ≥ ε ) = 0. For every continuous function f ( · ), if Y n p → Y , then we also have f ( Y n ) p → f ( Y ). If Y n p → Y and Z n p → Z , then ( Y n , Z n ) p → ( Y, Z ). If | Z n − Y n | p → 0 and Y n d → Y , we have Z n d → Y . Finally, we say that a sequence of random variables { Y n } ∞ n =0 converges in r th mean to a random variable Y , denoted as Y n r → Y , if E [ | Y n | r ] < ∞ for all n , and lim n →∞ E [ | Y n − Y | r ] = 0.
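As a purely illustrative aside (not part of the original development), the short sketch below exercises these definitions on the classical example Y n = 1{ U n ≤ 1 /n } with independent uniform U n : here P ( | Y n | ≥ ε ) = 1 /n → 0, so Y n converges to 0 in probability and, being bounded, also in expectation, while the second Borel-Cantelli lemma shows that Y n does not converge to 0 almost surely. The random variables, sample sizes, and tolerance are arbitrary choices made only for this illustration.

```python
# Illustrative Monte Carlo check (not from the paper): Y_n = 1{U_n <= 1/n} with
# independent U_n ~ Uniform(0, 1).  Empirically, P(|Y_n| >= eps) ~ 1/n -> 0, so
# Y_n -> 0 in probability, and since Y_n is bounded, E[Y_n] -> 0 as well.  The
# events {Y_n = 1} are independent with diverging sum of probabilities, so by the
# second Borel-Cantelli lemma they occur infinitely often: no almost sure limit.
import numpy as np

rng = np.random.default_rng(0)
num_samples, eps = 200_000, 0.5

for n in [10, 100, 1_000, 10_000]:
    y_n = (rng.uniform(size=num_samples) <= 1.0 / n).astype(float)
    print(f"n = {n:6d}   P(|Y_n| >= {eps}) ~ {np.mean(y_n >= eps):.5f}   E[Y_n] ~ {np.mean(y_n):.5f}")
```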
We have the following implications: (i) almost sure convergence or r th mean convergence ( r ≥ 1) implies convergence in probability, and (ii) convergence in probability implies convergence in distribution. The above results still hold for random vectors in higher dimensional spaces. Let f ( n ) and g ( n ) be two functions with domain and range N or R . The function f ( n ) is called O ( g ( n )) if there exists two constants M and n 0 such that f ( n ) ≤ M g ( n ) for all n ≥ n 0 . The function f ( n ) is called Ω( g ( n )) if g ( n ) is O ( f ( n )). Finally, the function f ( n ) is called Θ( g ( n )) if f ( n ) is both O ( g ( n )) and Ω( g ( n )). B Proof of Lemma 4 For each n ∈ N , divide the state space S into grid cells with side length 1 / 2 γ r (log | S n | / | S n | ) 1 /d x as follows. Let Z denote the set of integers. Define the grid cell i ∈ Z d x as W n ( i ) := i ( γ r 2 log | S n | | S n | ) 1 /d x + [ − 1 4 γ r ( log | S n | | S n | ) 1 /d x , 1 4 γ r ( log | S n | | S n | ) 1 /d x ] d x , where [ − a, a ] d x denotes the d x -dimensional cube with side length 2 a centered at the origin. Hence, the expression above translates the d x -dimensional cube with side length (1 / 2) γ r (log | S n | / | S n | ) 1 /d x to the point with coordinates i γ r 2 (log n/n ) 1 /d x . Let Q n denote the indices of set of all cells that lie completely inside the state space S , i.e., Q n = { i ∈ Z d : W n ( i ) ⊆ S } . Clearly, Q n is finite since S is bounded. Let ∂Q n denote the set of all grid cells that intersect the boundary of S , i.e., ∂Q n = { i ∈ Z d : W n ( i ) ∩ ∂S 6 = ∅} . We claim for all large n , all grid cells in Q n contain one vertex of S n , and all grid cells in ∂Q n contain one vertex from ∂S n . First, let us show that each cell in Q n contains at least one vertex. Given an event A , let A c denote its complement. Let A n,k denote the event that the cell W n ( k ), where k ∈ Q n contains a vertex from S n , and let A n denote the event that all grid cells in Q n contain a vertex in S n . Then, for all k ∈ Q n , P ( A c n,k ) = ( 1 − ( γ r / 2) d x m ( S ) log | S n | | S n | ) | S n | ≤ exp ( − ( ( γ r / 2) d x /m ( S ) ) log | S n | ) = | S n | − ( γ r / 2) dx /m ( S ) , where m ( S ) denotes Lebesgue measure assigned to S . Then, P ( A c n ) = P ((⋂ k ∈ Q n A n,k ) c ) = P (⋃ k ∈ Q n A c n,k ) ≤ ∑ k ∈ Q n P ( A c n,k ) = | Q n | | S n | − ( γ r / 2) dx /m ( S ) , where the first inequality follows from the union bound and | Q n | denotes the cardinality of the set Q n . By calculating the maximum number of cubes that can fit into S , we can bound | Q n | : | Q n | ≤ m ( S ) ( γ r / 2) d x log | S n | | S n | = m ( S ) ( γ r / 2) d x | S n | log | S n | . 21 Note that by construction, we have | S n | = Θ( n ). Thus, P ( A c n ) ≤ m ( S ) ( γ r / 2) d x | S n | log | S n | | S n | − ( γ r / 2) dx /m ( S ) = m ( S ) ( γ r / 2) d x 1 log | S n | | S n | 1 − ( γ r / 2) dx /m ( S ) ≤ m ( S ) ( γ r / 2) d x | S n | 1 − ( γ r / 2) dx /m ( S ) , which is summable for all γ r > 2 (2 m ( S )) 1 /d x . Hence, by the Borel-Cantelli lemma, the probability that A c n occurs infinitely often is zero, which implies that the probability that A n occurs for all large n is one, i.e., P (lim inf n →∞ A n ) = 1 . Similarly, each grid cell in ∂Q n can be shown to contain at least one vertex from ∂S n for all large n , with probability one. This implies each grid cell in both sets Q n and ∂Q n contain one vertex of S n and ∂S n , respectively, for all large n , with probability one. 
Hence the following event happens with probability one: ζ n = max z ∈ S n min z ′ ∈ S n || z ′ − z || 2 = O ((log | S n | / | S n | ) 1 /d x ) .  C Proof of Lemma 5 We show that each state that is added to the approximating MDPs is updated infinitely often. That is, for any z ∈ S n , the set of all iterations in which the procedure Update is applied on z is unbounded. Indeed, let us denote ζ n ( z ) = min z ′ ∈ S n || z ′ − z || 2 . From Lemma 4, lim n →∞ ζ n ( z ) = 0 happens almost surely. Therefore, with probability one, there are infinitely many n such that ζ n ( z ) < ζ n − 1 ( z ) . In other words, with probability one, we can find infinitely many z new at Line 13 of Algorithm 1 such that z is updated. For those n , the holding time at z is recomputed as ∆ t n ( z ) = γ t ( log | S n | | S n | ) θςρ/d x at Line 1 of Algorithm 2. Thus, the following event happens with probability one: lim n →∞ ∆ t n ( z ) = 0 , which satisfies the first condition of local consistency in Eq. 3. The other conditions of local consistency in Eqs. 4-6 are satisfied immediately by the way that the transition probabilities are computed (see the description of the ComputeTranProb procedure given in Section 4.1). Hence, the MDP sequence {M n } ∞ n =0 and holding times { ∆ t n } ∞ n =0 are locally consistent for large n with probability one.  D Proof of Theorem 7 To highlight the idea of the entire proof, we first prove the convergence under synchronous value iterations before presenting the convergence under asynchronous value iterations. As we will see, the shrinking rate of holding times plays a crucial role in the convergence proof. The outline of the proof is as follows. S1: Convergence under synchronous value iterations: In Algorithm 1, we take L n ≥ 1 and K n = | S n |− 1. In other words, in each iteration, we perform synchronous value iterations. Moreover, we assume that we are able to solve the Bellman equation (Eq. 9) exactly. We show that J n converges uniformly to J ∗ almost surely in this setting. 22 S2: Convergence under asynchronous value iterations: When K n = Θ( | S n | θ ) < | S n | , we only update a subset of S n in each of L n passes. We show that J n still converges uniformly to J ∗ almost surely in this new setting. In the following discussion and next sections, we need to compare functions on different do- mains S n . To ease the discussion and simplify the notation, we adopt the following interpolation convention. Given X ⊂ Y and J : X → R , we interpolate J to J on the entire domain Y via nearest neighbor value: ∀ y ∈ Y : J ( y ) = J ( z ) where z = argmin z ′ ∈ X || z ′ − y || . To compare J : X → R and J ′ : Y → R where X, Y ⊂ S , we define the sup-norm: || J − J ′ || ∞ = || J − J ′ || ∞ , where J and J ′ are interpolations of J and J ′ from the domains X and Y to the entire domain S respectively. In particular, given J n : S n → R , and J : S → R , then || J n − J || S n ≤ || J n − J || ∞ . Thus, if || J n − J || ∞ approaches 0 when n approaches ∞ , so does || J n − J || S n . Hence, we will work with the (new) sup-norm || · || ∞ instead of || · || S n in the proofs of Theorems 7-8. The triangle inequality also holds for any functions J, J ′ , J ′′ defined on subsets of S with respect to the above sup-norm: || J − J ′ || ∞ ≤ || J − J ′′ || ∞ + || J ′′ − J ′ || ∞ . Let B ( X ) denote a set of all real-valued bounded functions over a domain X . 
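As a concrete rendering of this interpolation convention, the sketch below extends a function defined on a coarse sampled set X to a finer set Y (standing in for S ) by nearest-neighbor value and evaluates the resulting sup-norm distance to a second function; the sampled sets and function values are invented placeholders and are not objects produced by iMDP.

```python
# Illustrative sketch of the nearest-neighbour interpolation convention and the
# induced sup-norm comparison; the sets X, Y and the functions J, J' below are
# invented placeholders.
import numpy as np

rng = np.random.default_rng(1)
Y = rng.uniform(0.0, 1.0, (2000, 2))                      # fine discretization standing in for S
X = Y[rng.choice(len(Y), size=200, replace=False)]        # coarse subset X (e.g., S_n)

def interpolate(J_X, X, Y):
    # extend J : X -> R to Y via nearest-neighbour value, J(y) := J(argmin_{z in X} ||z - y||)
    nearest = np.argmin(np.linalg.norm(Y[:, None, :] - X[None, :, :], axis=2), axis=1)
    return J_X[nearest]

J = np.sin(3.0 * X[:, 0]) + X[:, 1]                       # J  : X -> R
J_prime = np.cos(2.0 * Y[:, 1]) - Y[:, 0]                 # J' : Y -> R

sup_norm = np.max(np.abs(interpolate(J, X, Y) - J_prime)) # ||J - J'||_inf on the common domain
print(f"||J - J'||_inf ~ {sup_norm:.3f}")
```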
For S n ⊂ S n ′ when n < n ′ , a function J in B ( S n ) also belongs to B ( S n ′ ), meaning that we can interpolate J on S n to a function J ′ on S n ′ . In particular, we say that J in B ( S n ) also belongs to B ( S ). Lastly, due to random sampling, S n is a random set, and therefore functions J n and J ∗ n defined on S n are random variables. In the following discussion, inequalities hold surely without further explanation when it is clear from the context, and inequalities hold almost surely if they are followed by “w.p.1”. S1: Convergence under synchronous value iterations In this step, we first set L n ≥ 1 and K n = | S n | − 1 in Algorithm 1. Thus, for all z ∈ S n , the holding time ∆ t n ( z ) equals γ t ( log | S n | | S n | ) θςρ/d x and is denoted as ∆ t n . We consider the MDP M n = ( S n , U, P n , G n , H n ) at n th iteration and define the following operator T n : B ( S n ) → B ( S n ) that transforms every J ∈ B ( S n ) after a Bellman update as: T n J ( z ) = min v ∈ U { G n ( z, v ) + α ∆ t n E P n [ J ( y ) | z, v ] } , ∀ z ∈ S n , (10) assuming that we can solve the minimization on the RHS of Eq. 10 exactly. For each k ≥ 2, operators T k n are defined recursively as T k n = T n T k − 1 n and T 1 n = T n . When we apply T n on J ∈ B ( S k ) where k < n , J is interpolated to S n before applying T n . Thus, in Algorithms 1-2, we implement the next update J n = T L n n J n − 1 . From [26], we have the following results: J ∗ n = T n J ∗ n , and T n is a contraction mapping. For any J and J ′ in B ( S n ), the following inequality happens surely: || T n J − T n J ′ || ∞ ≤ α ∆ t n || J − J ′ || ∞ . 23 Combining the above results: || J ∗ n − J n || ∞ = || T L n n J ∗ n − T L n n J n − 1 || ∞ ≤ α L n ∆ t n || J ∗ n − J n − 1 || ∞ ≤ α ∆ t n ( || J ∗ n − J ∗ n − 1 || ∞ + || J ∗ n − 1 − J n − 1 || ∞ ) , where the second inequality follows from the triangle inequality, and L n ≥ 1 , α ∈ (0 , 1). Thus, by iterating over n , for any N ≥ 1 and n > N , we have: || J ∗ n − J n || ∞ ≤ A n + α ∆ t n +∆ t n − 1 ... +∆ t N +1 || J ∗ N − J N || ∞ , (11) where A n are defined recursively: A n = α ∆ t n ( || J ∗ n − J ∗ n − 1 || ∞ + A n − 1 ) , ∀ n > N + 1 , (12) A N +1 = α ∆ t N +1 || J ∗ N +1 − J ∗ N || ∞ . (13) Note that for any N ≥ 1: lim n →∞ ∆ t n + ∆ t n − 1 ... + ∆ t N +1 = ∞ , due to the choice of holding times ∆ t n in the procedure ComputeHoldingTime . Therefore, lim n →∞ α ∆ t n + ... +∆ t N +1 || J ∗ N − J N || ∞ = 0 . By Theorem 6, the following event happens with probability 1 (w.p.1): lim n →∞ || J ∗ n − J ∗ || ∞ = 0 , hence, lim n →∞ || J ∗ n − J ∗ n − 1 || ∞ = 0 w.p.1. Thus, for any fixed  > 0, we can choose N large enough such that: || J ∗ n − J ∗ n − 1 || 1 − ς ∞ <  w.p.1 for all n > N, and (14) α ∆ t n + ... +∆ t N +1 || J ∗ N − J N || ∞ <  surely , (15) where ς ∈ (0 , 1) is the constant defined in the procedure ComputeHoldingTime . Now, for all n > N , we rearrange Eqs.12-13 to have A n ≤ B n w.p.1, where B n = α ∆ t n ( || J ∗ n − J ∗ n − 1 || ς ∞ + B n − 1 ) , ∀ n > N + 1 , B N +1 = α ∆ t N +1 || J ∗ N +1 − J ∗ N || ς ∞ . We can see that for n > N + 1: B n = α ∆ t n ( || J ∗ n − J ∗ n − 1 || ς ∞ + B n − 1 ) <  ς/ (1 − ς ) + B n − 1 w.p.1, (16) B N +1 = α ∆ t N +1 || J ∗ N +1 − J ∗ N || ς ∞ <  ς/ (1 − ς ) w.p.1. (17) 24 We now prove that almost surely, B n is bounded for all n ≥ N . 
Indeed, we derive the conditions so that B n − 1 < B n or B n − 1 ≥ B n as follows: B n − 1 < B n ⇔ B n − 1 < α ∆ t n ( || J ∗ n − J ∗ n − 1 || ς ∞ + B n − 1 ) ⇔ B n − 1 < α ∆ t n || J ∗ n − J ∗ n − 1 || ς ∞ 1 − α ∆ t n ⇒ B n − 1 < K α γ t ( log | Sn | | Sn | ) θςρ/dx ( log | S n | | S n | ) ςρ/d x 1 − α γ t ( log | Sn | | Sn | ) θςρ/dx w.p.1. The last inequality is due to Theorem 6 and | S n | = Θ( n ), | S n − 1 | = Θ( n − 1): || J ∗ n − J ∗ n − 1 || ∞ = O ((log | S n − 1 | / | S n − 1 | ) ρ/d x ) < K ( log | S n | | S n | ) ρ/d x w.p.1, for large n where K is some finite constant. Let β = α γ t ∈ (0 , 1). For large n, log | S n | | S n | are in (0 , 1) and θ ∈ (0 , 1]. Let us define x n = ( log | S n | | S n | ) θςρ/d x , and y n = ( log | S n | | S n | ) ςρ/d x . Then, x n ≥ y n > 0. The above condition is simplified to B n − 1 < K β x n y n 1 − β x n ≤ K β x n x n 1 − β x n , w.p.1. Consider the function r : [0 , ∞ ) → R such that r ( x ) = β x x 1 − β x , we can verify that r ( x ) is non- increasing and is bounded by r (0) = − 1 / log( β ). Therefore: B n − 1 < B n ⇒ B n − 1 < − K log( β ) = − K γ t log( α ) w.p.1. (18) Or conversely, B n − 1 ≥ − K γ t log( α ) w.p.1 ⇒ B n − 1 ≥ B n w.p.1 . (19) The above discussion characterizes the random sequence B n . In particular, Fig. 5 shows a possible realization of the random sequence B n for n > N . As shown visually in this plot, B N +1 is less than  ς/ (1 − ς ) w.p.1 and thus is less than  ς/ (1 − ς ) − K γ t log( α ) w.p.1. For n > N + 1, assume that we have already shown that B n − 1 is bounded from above by  ς/ (1 − ς ) − K γ t log( α ) w.p.1. When B n − 1 ≥ − K γ t log( α ) w.p.1, the sequence is non-increasing w.p.1. Conversely, when the sequence is increasing, i.e. B n − 1 < B n , we assert that B n − 1 < − K γ t log( α ) w.p.1 due to Eq. 18, and the increment is less than  ς/ (1 − ς ) due to Eq. 16. In both cases, we conclude that B n is also bounded by  ς/ (1 − ς ) − K γ t log( α ) w.p.1. Hence, from Eqs. 16-19, we infer that B n is bounded w.p.1 for all n > N : B n <  ς/ (1 − ς ) − K γ t log( α ) w.p.1. 25 N+1 N+2 N+3 N+k Figure 5: A realization of the random sequence B n . We have B N +1 less than  ς/ (1 − ς ) w.p.1. For n larger than N + 1, when B n − 1 ≥ − K γ t log( α ) w.p.1, the sequence is non-increasing w.p.1, i.e. B n − 1 ≥ B n w.p.1. Conversely, when the sequence is increasing, i.e. B n − 1 < B n , we have B n − 1 < − K γ t log( α ) w.p.1, and the increment is less than  ς/ (1 − ς ) . Hence, the random sequence B n is bounded by  ς/ (1 − ς ) − K γ t log( α ) w.p.1. Thus, for all n > N : A n ≤ B n <  (  ς/ (1 − ς ) − K γ t log( α ) ) w.p.1. (20) Combining Eqs. 11,15, and 20, we conclude that for any  > 0, there exists N ≥ 1 such that for all n > N , we have || J ∗ n − J n || ∞ <  (  ς/ (1 − ς ) − K γ t log( α ) + 1 ) w.p.1 . Therefore, lim n →∞ || J ∗ n − J n || ∞ = 0 w.p.1 . Combining with lim n →∞ || J ∗ n − J ∗ || ∞ = 0 w.p.1 , we obtain lim n →∞ || J n − J ∗ || ∞ = 0 w.p.1 . In the above analysis, the shrinking rate ( log | S n | | S n | ) θςρ/d x of holding times plays an important role to construct an upper bound of the sequence B n . This rate must be slower than the convergence rate ( log | S n | | S n | ) ρ/d x of J ∗ n to J ∗ so that the function r ( x ) is bounded, enabling the convergence of cost value functions J n to the optimal cost-to-go J ∗ . Remarkably, we have accomplished this convergence by carefully selecting the range (0 , 1) of the parameter ς . 
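To make the mechanics of Step S1 concrete, the following toy sketch implements the synchronous Bellman operator of Eq. 10 on a small, randomly generated finite MDP and checks the contraction inequality used above together with the convergence of value iteration to the fixed point. The states, controls, stage costs, transition probabilities, and constants are invented for illustration only; they are not the MDPs constructed by iMDP.

```python
# Toy sketch of the synchronous Bellman operator T_n of Eq. 10 (illustrative
# placeholders only): T J(z) = min_v { G(z, v) + alpha^{dt} * E_P[ J(y) | z, v ] }.
import numpy as np

rng = np.random.default_rng(2)
num_states, num_controls = 50, 5
alpha, dt = 0.95, 0.1                                    # discount rate, holding time
G = rng.uniform(0.0, 1.0, (num_states, num_controls))    # stage costs G_n(z, v)
P = rng.dirichlet(np.ones(num_states), (num_states, num_controls))  # P_n(y | z, v)

def bellman(J):
    return np.min(G + alpha**dt * (P @ J), axis=1)

# Contraction check: ||T J - T J'||_inf <= alpha^{dt} ||J - J'||_inf for any J, J'.
J = rng.uniform(0.0, 10.0, num_states)
J_prime = rng.uniform(0.0, 10.0, num_states)
print(np.max(np.abs(bellman(J) - bellman(J_prime))) <= alpha**dt * np.max(np.abs(J - J_prime)))

# Value iteration J <- T J converges to the unique fixed point J* = T J*.
for _ in range(3000):
    J = bellman(J)
print("fixed-point residual:", np.max(np.abs(J - bellman(J))))
```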
The role of the parameter θ in this convergence will be clear in Step S2. Lastly, we note that if we are able to obtain a faster convergence rate of J ∗ n to J ∗ , we can have faster shrinking rate for holding times. 26 S2: Convergence under asynchronous value iterations When 1 ≤ L n and K n = Θ( | S n | θ ) < | S n | , we first claim the following result: Lemma 10 Consider any increasing sequence { n k } ∞ k =0 as a subset of N such that n 0 = 0 and k ≤ | S n k | ≤ k 1 /θ . For J ∈ B ( S ) , we define: A ( { n j } k j =0 ) = α ∆ t nk +∆ t nk − 1 + ... +∆ t n 1 || J ∗ n 1 − J || ∞ + α ∆ t nk +∆ t nk − 1 + ... +∆ t n 2 || J ∗ n 2 − J ∗ n 1 || ∞ + ... + α ∆ t nk || J ∗ n k − J ∗ n k − 1 || ∞ . The following event happens with probability one: lim k →∞ A ( { n j } k j =0 ) = 0 . Proof We rewrite A ( { n j } k j =0 ) = A n k where A n k are defined recursively: A n k = α ∆ t nk ( || J ∗ n k − J ∗ n k − 1 || ∞ + A n k − 1 ) , ∀ k > K, (21) A n K = A ( { n j } K j =0 ) , ∀ K ≥ 1 . (22) We note that ∆ t n k + ∆ t n k − 1 + ... + ∆ t n K = γ t ( log | S n k | | S n k | ) θςρ/d x + γ t ( log | S n k − 1 | | S n k − 1 | ) θςρ/d x + ... + γ t ( log | S n K | | S n K | ) θςρ/d x ≥ γ t ( 1 | S n k | ) θςρ/d x + γ t ( 1 | S n k − 1 | ) θςρ/d x + ... + γ t ( 1 | S n K | ) θςρ/d x ≥ γ t 1 k ςρ/d x + γ t 1 ( k − 1) ςρ/d x + ... + γ t 1 ( K ) ςρ/d x ≥ γ t ( 1 k + 1 k − 1 + ... + 1 K ) , where the second inequality uses the given fact that | S n k | ≤ k 1 /θ . Therefore, for any K ≥ 1: lim k →∞ α ∆ t nk +∆ t nk − 1 ... +∆ t nK = 0 . We choose a constant % > 1 such that %ς < 1. For any fixed  > 0, we can choose K large enough such that: || J ∗ n k − J ∗ n k − 1 || 1 − %ς ∞ <  w.p.1 for all k > K. (23) For all k > K , we can write A n k ≤ B n k + α ∆ t nk + ... +∆ t nK +1 A ( { n j } K j =0 ) . where B n k = α ∆ t nk ( || J ∗ n k − J ∗ n k − 1 || %ς ∞ + B n k − 1 ) , ∀ k > K, B n K = 0 . Furthermore, we can choose K ′ sufficiently large such that K ′ ≥ K and for all k > K ′ : α ∆ t nk + ... +∆ t nK +1 A ( { n j } K j =0 ) ≤ . 27 We obtain: A n k ≤ B n k + , ∀ k > K ′ ≥ K ≥ 1 . We can also see that for k > K : B n k = α ∆ t nk ( || J ∗ n k − J ∗ n k − 1 || %ς ∞ + B n k − 1 ) <  %ς/ (1 − %ς ) + B n k − 1 w.p.1. (24) Similar to Step S1, we characterize the random sequence B n k as follows: B n k − 1 < B n k ⇔ B n k − 1 < α ∆ t nk || J ∗ n k − J ∗ n k − 1 || %ς ∞ 1 − α ∆ t nk ⇒ B n k − 1 < K α γ t ( log | Snk | | Snk | ) θςρ/dx ( log | S nk − 1 | | S nk − 1 | ) %ςρ/d x 1 − α γ t ( log | Snk | | Snk | ) θςρ/dx w.p.1. Let β = α γ t ∈ (0 , 1). We define: x k = ( log | S n k | | S n k | ) θςρ/d x , and y k = ( log | S n k − 1 | | S n k − 1 | ) %ςρ/d x . We note that log x x is a decreasing function for positive x . Since | S n k − 1 | ≥ k − 1 and | S n k | ≤ k 1 /θ , we have the following inequalities: x k ≥ ( ( log k θ ) θ k ) ςρ/d x , y k ≤ ( (log( k − 1)) % ( k − 1) % ) ςρ/d x . Since θ ∈ (0 , 1] and % > 1, we can find a finite constant K 1 such that y k < K 1 x k for large k . Thus, the above condition leads to B n k − 1 < K β x k y k 1 − β x k < KK 1 β x k x k 1 − β x k , w.p.1. Therefore: B n k − 1 < B n k ⇒ B n k − 1 < − KK 1 log( β ) = − KK 1 γ t log( α ) w.p.1. Or conversely, B n k − 1 ≥ − KK 1 γ t log( α ) w.p.1 ⇒ B n − 1 ≥ B n w.p.1 . Arguing similarly to Step S1, we infer that for all k > K ′ ≥ K ≥ 1: B n k <  %ς/ (1 − %ς ) − KK 1 γ t log( α ) w.p.1. Thus, for any  > 0, we can find K ′ ≥ 1 such that for all k > K ′ : A n k ≤ B n k +  <  (  %ς/ (1 − %ς ) − KK 1 γ t log( α ) + 1 ) w.p.1. 
We conclude that lim k →∞ A ( { n j } k j =0 ) = 0 . w.p.1 .  28 Returning to the main proof, we use the tilde notation to indicate asynchronous operations to differentiate with our synchronous operations in Step S1. We will also assume that L n = 1 for all n to simplify the following notations. The proof for general L n ≥ 1 is exactly the same. We define the following (asynchronous) mappings ̃ T n : B ( S n ) → B ( S n ) as the restricted mappings of T n on D n , a non-empty random subset of S n , such that for all J ∈ B ( S n ): ̃ T n J ( z ) = min v ∈ U { G n ( z, v ) + α ∆ t n E P n [ J ( y ) | z, v ]} , ∀ z ∈ D n ⊂ S n , (25) ̃ T n J ( z ) = J ( z ) , ∀ z ∈ S n \ D n . (26) We require that ∩ ∞ n =1 ∪ ∞ k = n D k = S. (27) In other words, every state in S are sampled infinitely often. We can see that in Algorithm 1, if the set Z update is assigned to D n in every iteration (Line 13), the sequence { D n } ∞ n =1 has the above property, and | D n | = Θ( | S n | θ ) < | S n | . Starting from any ̃ J 0 ∈ B ( S 0 ), we perform the following asynchronous iteration ̃ J n +1 = ̃ T n +1 ̃ J n , ∀ n ≥ 0 . (28) Consider the following sequence { m k } ∞ k =0 such that m 0 = 0 and for all k ≥ 0, from m k to m k +1 − 1, all states in S m k +1 − 1 are chosen to be updated at least once, and a subset of states in S m k +1 − 1 is chosen to be updated exactly once. We observe that as the size of S n increases linearly with n , if we schedule states in D n ⊂ S n to be updated in a round-robin manner, we have k ≤ S m k ≤ k 1 /θ . When D n is chosen as shown in Algorithm 1, with high probability, k ≤ S m k ≤ k 1 /θ . However, we will assume that the event k ≤ S m k ≤ k 1 /θ happens surely because we can always schedule a fraction of D n to be updated in a round-robin manner. We define W n as the set of increasing sub-sequences of the sequence { 0 , 1 , ..., n } such that each sub-sequence contains { m j } k j =0 where m k ≤ n < m k +1 : W n = { { i j } T j =0 ∣ ∣ { m j } k j =0 ⊂ { i j } T j =0 ⊂ { 0 , 1 , ..., n } ∧ T ≥ 2 ∧ m k ≤ n < m k +1 } . Clearly, if { i j } T j =0 ∈ W n , we have i 0 = 0. For each { i j } T j =0 ∈ W n , we define A ( { i j } T j =0 ) = α ∆ t iT +∆ t iT − 1 + ... +∆ t i 1 || J ∗ i 1 − ̃ J 0 || ∞ + α ∆ t iT +∆ t iT − 1 + ... +∆ t i 2 || J ∗ i 2 − J ∗ i 1 || ∞ + ... + α ∆ t iT || J ∗ i T − J ∗ i T − 1 || ∞ . We will prove by induction that ∀ z ∈ D n ⇒ | ̃ J n ( z ) − J ∗ n ( z ) | ≤ max { i j } T j =0 ∈ W n A ( { i j } T j =0 ) . (29) When n = 1, the only sub-sequence is { i j } T j =0 = { 0 , 1 } ∈ W 1 . It is clear that for z ∈ D 1 , due to the contraction property of T 1 : | J ∗ 1 ( z ) − ̃ J 1 ( z ) | ≤ max { i j } T j =0 ∈ W 1 A ( { i j } T j =0 ) = α ∆ t 1 || J ∗ 1 − ̃ J 0 || ∞ . Assuming that Eq. 29 holds upto n = m k , we need to prove that the equation also holds for those n ∈ ( m k , m k +1 ) and n = m k +1 . Indeed, let us assume that Eq. 29 holds for some n ∈ [ m k , m k +1 − 1). 29 Denote n z ≤ n as the index of the most recent update of z . 
For z ∈ D n , we compute new values for z in ̃ J n +1 , and by the contraction property of T n +1 , it follows that | ̃ J n +1 ( z ) − J ∗ n +1 ( z ) | ≤ α ∆ t n +1 || J ∗ n +1 − ̃ J n || ∞ = α ∆ t n +1 max z ∈ S n +1 | J ∗ n +1 ( z ) − ̃ J n ( z ) | = α ∆ t n +1 max z ∈ S n +1 | J ∗ n +1 ( z ) − ̃ J n z ( z ) | ≤ α ∆ t n +1 max z ∈ S n +1 ( | J ∗ n z ( z ) − ̃ J n z ( z ) | + || J ∗ n +1 − J ∗ n z || ∞ ) ≤ max z ∈ S n +1 ( α ∆ t n +1 max { i j } T j =0 ∈ W nz A ( { i j } T j =0 ) + α ∆ t n +1 || J ∗ n +1 − J ∗ n z || ∞ ) = max { i j } T j =0 ∈ W n +1 A ( { i j } T j =0 ) . The last equality is due to n + 1 ≤ m k +1 − 1 , and { m j } k j =0 ⊂ {{ i j } T j =0 , n + 1 } ⊂ { 0 , 1 , ..., n + 1 } for any { i j } T j =0 ∈ W n z . Therefore, Eq. 29 holds for all n ∈ ( m k , m k +1 − 1]. When n = m k +1 − 1, we also have the above relation for all z ∈ D n +1 : | ̃ J n +1 ( z ) − J ∗ n +1 ( z ) | ≤ max z ∈ S n +1 ( α ∆ t n +1 max { i j } T j =0 ∈ W nz A ( { i j } T j =0 ) + α ∆ t n +1 || J ∗ n +1 − J ∗ n z || ∞ ) = max { i j } T j =0 ∈ W n +1 A ( { i j } T j =0 ) . The last equality is due to n + 1 = m k +1 and thus { m j } k +1 j =0 ⊂ {{ i j } T j =0 , n + 1 } ⊂ { 0 , 1 , ..., n + 1 } for any { i j } T j =0 ∈ W n z . Therefore, Eq. 29 also holds for n = m k +1 and this completes the induction. We see that all { i j } T j =0 ∈ W n , we have j ≤ i j ≤ m j , and thus j ≤ S i j ≤ j 1 /θ . By Lemma 10, lim n →∞ A ( { i j } T j =0 ∈ W n ) = 0 w.p.1. Therefore, lim n →∞ sup z ∈ D n | ̃ J n ( z ) − J ∗ n ( z ) | = 0 w.p.1. Since all states are updated infinitely often, and J ∗ n converges uniformly to J ∗ with probability one, we conlude that: lim n →∞ || ̃ J n − J ∗ n || ∞ = 0 w.p.1. and lim n →∞ || ̃ J n − J ∗ || ∞ = 0 w.p.1. In both Steps S1 and S2, we have lim n →∞ || J n − J ∗ n || ∞ = 0 w.p.1 5 , therefore μ n converges to μ ∗ n pointwise w.p.1 as μ n and μ ∗ n are induced from Bellman updates based on J n and J ∗ n respectively. Hence, the sequence of policies { μ n } ∞ n =0 has each policy μ n as an  n -optimal policy for the MDP M n such that lim n →∞  n = 0. By Theorem 2, we conclude that lim n →∞ | J n,μ n ( z ) − J ∗ ( z ) | = 0 , ∀ z ∈ S n w.p.1.  E Proof of Theorem 8 We fix an initial starting state x (0) = z . In Theorem 7, starting from an initial state x (0) = z , we construct a sequence of Markov chains { ξ n i ; i ∈ N } ∞ n =1 under minimizing control sequences 5 The tilde notion is dropped at this point. 30 { u n i ; i ∈ N } ∞ n =1 . By convention, we denote the associated interpolated continuous time trajectories and control processes as { ξ n ( t ); t ∈ R } ∞ n =1 and { u n ( t ); t ∈ R } ∞ n =1 repsectively. By Theorem 1, { ξ n ( t ); t ∈ R } ∞ n =1 converges in distribution to an optimal trajectory { x ∗ ( t ); t ∈ R } under an optimal control process { u ∗ ( t ); t ∈ R } with probability one. In other words, ( ξ n ( · ) , u n ( · )) d → ( x ∗ ( · ) , u ∗ ( · )) w.p.1. We will show that this result can hold even when the Bellman equation is not solved exactly at each iteration. In this theorem, we solve the Bellman equation (Eq. 9) by sampling uniformly in U to form a control set U n such that lim n →∞ | U n | = ∞ . Let us denote the resulting Markov chains and control sequences due to this modification as { ξ n i ; i ∈ N } ∞ n =1 and { u n i ; i ∈ N } ∞ n =1 with associated continuous time interpolations { ξ n ( t ); t ∈ R } ∞ n =1 and { u n ( t ); t ∈ R } ∞ n =1 . In this case, randomness is due to both state and control sampling. 
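Before proceeding with the proof, the sketch below illustrates the sampled-control Bellman update just described: the bracketed term of Eq. 9 is minimized over a finite set U n drawn uniformly from U , and the optimality gap vanishes in probability as | U n | grows, mirroring the bound derived in Lemma 11 below. The surrogate cost q ( v ), the control set U = [ − 1 , 1] 2 , and all constants are invented for illustration.

```python
# Illustrative sketch of solving the Bellman minimization by sampling controls:
# q(v) stands in for G_n(z, v) + alpha^{dt} E[J_{n-1}(y) | z, v] as a function of
# the control v only; q, the control set U = [-1, 1]^2, and the constants below
# are invented placeholders.
import numpy as np

rng = np.random.default_rng(3)

def q(v):
    # smooth surrogate with minimum value 1.0 attained at v = (0.3, -0.5) inside U
    return 1.0 + np.sum((v - np.array([0.3, -0.5]))**2, axis=-1)

q_star, eps, num_trials = 1.0, 0.05, 2000
for num_controls in [4, 16, 64, 256]:
    gaps = np.array([np.min(q(rng.uniform(-1.0, 1.0, (num_controls, 2)))) - q_star
                     for _ in range(num_trials)])        # J_n(z, v_n) - J_n*(z)
    print(f"|U_n| = {num_controls:4d}   P(gap >= {eps}) ~ {np.mean(gaps >= eps):.3f}")
```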
We will prove that there exists minimizing control sequences { u n i ; i ∈ N } ∞ n =1 and the induced sequence of Markov chains { ξ n i ; i ∈ N } ∞ n =1 in Theorem 7 such that ( ξ n ( · ) − ξ n ( · ) , u n ( · ) − u n ( · )) p → (0 , 0) , (30) where (0 , 0) denotes a pair of zero processes. To prove Eq. 30, we first prove the following lemmas. In the following analysis, we assume that the Bellman update (Eq. 9) has minima in a neighborhood of positive Lebesgue measure. We also assume additional continuity of cost functions for discrete MDPs. Lemma 11 Let us consider the sequence of approximating MDPs {M n } ∞ n =0 . For each n and a state z ∈ S n , let v ∗ n be an optimal control minimizing the Bellman update, which is refered to as an optimal control from z : v ∗ n ∈ V ∗ n = argmin v ∈ U { G n ( z, v ) + α ∆ t n ( z ) E P n [ J n − 1 ( y ) | z, v ] } , J n ( z, v ∗ n ) = J ∗ n ( z ) = G n ( z, v ∗ n ) + α ∆ t n ( z ) E P n [ J n − 1 ( y ) | z, v ∗ n ] , ∀ v ∗ n ∈ V ∗ n . Let v n be the best control in a sampled control set U n from z : v n = argmin v ∈ U n { G n ( z, v ) + α ∆ t n ( z ) E P n [ J n − 1 ( y ) | z, v ] } , J n ( z, v n ) = G n ( z, v n ) + α ∆ t n ( z ) E P n [ J n − 1 ( y ) | z, v n ] . Then, when lim n →∞ | U n | = ∞ , we have | J n ( z, v n ) − J ∗ n ( z ) | p → 0 as n approaches ∞ , and there exists a sequence { v ∗ n | v ∗ n ∈ V ∗ n } ∞ n =0 such that || v n − v ∗ n || 2 p → 0 . Proof We assume that for any  > 0, the set A n  = { v ∈ U | | J n ( z, v ) − J ∗ n ( z ) | ≤  } has positive Lebesgue measure. That is, m ( A n  ) > 0 for all  > 0 where m is Lebesgue measure assigned to U . For any  > 0, we have: P ( {| J n ( z, v n ) − J ∗ n ( z ) | ≥  } ) = ( 1 − m ( A n  ) /m ( U ) ) | U n | . Since 1 − m ( A n  ) /m ( U ) ∈ [0 , 1) and lim n →∞ | U n | = ∞ , we infer that: lim n →∞ P ( {| J n ( z, v n ) − J ∗ n ( z ) | ≥  } ) = 0 . Hence, we conclude that | J n ( z, v n ) − J ∗ n ( z ) | p → 0 as n → ∞ . Under the mild assumption that J n ( z, v ) is continuous on U for all z ∈ S n , thus there exists a sequence { v ∗ n | v ∗ n ∈ V ∗ n } ∞ n =0 such that || v n − v ∗ n || 2 p → 0 as n approaches ∞ .  31 Figure 6: An illustration for Lemma 12. We have ξ n 0 converges in probability to ξ n 0 . From ξ n 0 , the optimal control is v ∗ n that results in the next random state ξ n 1 . From ξ n 0 , the optimal control and the best sampled control are v n and v n respectively. The next random state from ξ n 0 due to the control v n is ξ n 1 . By Lemma 11, we conclude that || J n − J ∗ n || ∞ converges to 0 in probability. Thus, J n returned from the iMDP algorithm when the Bellman update is solved via sampling converges uniformly to J ∗ in probability. We, however, claim that J n,μ n still converges pointwise to J ∗ almost surely in the next discussion. Lemma 12 With the notations in Lemma 11, consider two states ξ n 0 and ξ n 0 such that || ξ n 0 − ξ n 0 || 2 p → 0 as n approaches ∞ . Let ξ n 1 be the next random state of ξ n 0 under the best sampled control v n from ξ n 0 . Then, there exists a sequence of optimal controls v ∗ n from ξ n 0 such that || v n − v ∗ n || 2 p → 0 and || ξ n 1 − ξ n 1 || 2 p → 0 as n approaches ∞ , where ξ n 1 is the next random state of ξ n 0 under the optimal control v ∗ n from ξ n 0 . Proof We have v n as the best sampled control from ξ n 0 . By Lemma 11, there exists a sequence of optimal controls v n from ξ n 0 such that || v n − v n || 2 p → 0. 
We assume that the mapping from state space S n , which is endowed with the usual Euclidean metric, to optimal controls in U is continuous. As || ξ n 0 − ξ n 0 || 2 p → 0, there exists a sequence of optimal controls v ∗ n from ξ n 0 such that || v n − v ∗ n || 2 p → 0. Now, || v n − v n || 2 p → 0 and || v n − v ∗ n || 2 p → 0 lead to || v n − v ∗ n || 2 p → 0 as n → ∞ . Figure 6 illustrates how v n , v n , and v ∗ n relate ξ n 1 and ξ n 1 . Using the probability transition P n of the MDP M n that is locally consistent with the original continuous system, we have: E [ ξ n 1 | ξ n 0 , u n 0 = v ∗ n ] = ξ n 0 + f ( ξ n 0 , v ∗ n )∆ t n ( ξ n 0 ) + o (∆ t n ( ξ n 0 )) , E [ ξ n 1 | ξ n 0 , u n 0 = v n ] = ξ n 0 + f ( ξ n 0 , v n )∆ t n ( ξ n 0 ) + o (∆ t n ( ξ n 0 )) , Cov [ ξ n 1 | ξ n 0 , u n 0 = v ∗ n ] = F ( ξ n 0 , v ∗ n ) F ( ξ n 0 , v ∗ n ) T ∆ t n ( ξ n 0 ) + o (∆ t n ( ξ n 0 )) , Cov [ ξ n 1 | ξ n 0 , u n 0 = v n ] = F ( ξ n 0 ) , v n ) F ( ξ n 0 ) , v n ) T ∆ t n ( ξ n 0 )) + o (∆ t n ( ξ n 0 ))) , where f ( · , · ) is the nominal dynamics, and F ( · , · ) F ( · , · ) T is the diffusion of the original system that are assumed to be continuous almost everywhere. We note that ∆ t n ( ξ n 0 ) = ∆ t n ( ξ n 0 ) = γ t ( log( | S n | ) / | S n | ) θςρ/d x as ξ n 0 and ξ n 0 are updated at the n th iteration in this context, and the hold- ing times converge to 0 as n approaches infinity. Therefore, when || ξ n 0 − ξ n 0 || 2 p → 0, || v n − v ∗ n || 2 p → 0, we have: E [ ξ n 1 − ξ n 1 | ξ n 0 , ξ n 0 , u n 0 = v ∗ n , u n 0 = v n ] p → 0 , (31) Cov ( ξ n 1 − ξ n 1 | ξ n 0 , ξ n 0 , u n 0 = v ∗ n , u n 0 = v n ) p → 0 . (32) 32 Since ξ n 1 and ξ n 1 are bounded, the random vector E [ ξ n 1 − ξ n 1 | ξ n 0 , ξ n 0 , u n 0 = v ∗ n , u n 0 = v n ] and random matrix Cov ( ξ n 1 − ξ n 1 | ξ n 0 , ξ n 0 , u n 0 = v ∗ n , u n 0 = v n ) are bounded. We recall that if Y n p → 0, and hence Y n d → 0, when Y n is bounded for all n , lim n →∞ E [ Y n ] = 0 and lim n →∞ Cov ( Y n ) = 0. Therefore, Eqs. 31-32 imply: lim n →∞ E [ E [ ξ n 1 − ξ n 1 | ξ n 0 , ξ n 0 , u n 0 = v ∗ n , u n 0 = v n ] ] = 0 , (33) lim n →∞ Cov ( E [ ξ n 1 − ξ n 1 | ξ n 0 , ξ n 0 , u n 0 = v ∗ n , u n 0 = v n ] ) = 0 , (34) lim n →∞ E [ Cov ( ξ n 1 − ξ n 1 | ξ n 0 , ξ n 0 , u n 0 = v ∗ n , u n 0 = v n ) ] = 0 . (35) The above outer expectations and covariance are with resepect to the randomness of states ξ n 0 , ξ n 0 and sampled controls U n . Using the iterated expectation law for Eq. 33, we obtain: lim n →∞ E [ ξ n 1 − ξ n 1 ] = 0 . Using the law of total covariance for Eqs. 34-35, we have: lim n →∞ Cov [ ξ n 1 − ξ n 1 ] = 0 . Since E [ || ξ n 1 − ξ n 1 || 2 2 ] = E [( ξ n 1 − ξ n 1 ) T ( ξ n 1 − ξ n 1 )] = || E [ ξ n 1 − ξ n 1 )] || 2 2 + tr ( Cov [ ξ n 1 − ξ n 1 ]) , the above limits together imply: lim n →∞ E [ || ξ n 1 − ξ n 1 || 2 2 ] = 0 . In other words, ξ n 1 converges in 2 th -mean to ξ n 1 , which leads to || ξ n 1 − ξ n 1 || 2 p → 0 as n approaches ∞ .  Returning to the proof of Eq. 30, we know that ξ n 0 = ξ n 0 = z as the starting state. From any y ∈ S n , an optimal control from y is denoted as v ∗ ( y ), and the best sampled control from the same state y is denoted as v ( y ). By Lemma 12, as u n 0 = v ( ξ n 0 ), there exists u n 0 = v ∗ ( ξ n 0 ) such that || u n 0 − u n 0 || 2 p → 0 and || ξ n 1 − ξ n 1 || 2 p → 0. Let us assume that ( || u n k − 1 − u n k − 1 || 2 , || ξ n k − ξ n k || 2 ) converges in probability to (0 , 0) upto index k . We have u n k = v ( ξ n k ). 
Using Lemma 12, there exists u n k = v ∗ ( ξ n k ) such that ( || u n k − u n k || 2 , || ξ n k +1 − ξ n k +1 || 2 ) p → (0 , 0). Thus, for any i ≥ 1, we can construct a minimizing control u n i in Theorem 7 such that ( || ξ n i − ξ n i || 2 , || u n i − u n i || 2 ) p → (0 , 0) as n → ∞ . Hence, Eq. 30 follows immediately: ( ξ n ( · ) − ξ n ( · ) , u n ( · ) − u n ( · )) p → (0 , 0) . We have ( ξ n ( · ) , u n ( · )) d → ( x ∗ ( · ) , u ∗ ( · )) w.p.1. Thus, by hierarchical convergence of random variables [30], we achieve ( ξ n ( · ) , u n ( · )) d → ( x ∗ ( · ) , u ∗ ( · )) w.p.1. Therefore, for all z ∈ S n : lim n →∞ | J n,μ n ( z ) − J ∗ ( z ) | = 0 w.p.1.  33 F Proof of Theorem 9 Fix n ∈ N , for all z ∈ S , and y n = argmin z ′ ∈ S n || z ′ − z || 2 , we have μ n ( z ) = μ n ( y n ) . We assume that optimal policies of the original continuous problem are obtainable. By Theorems 7- 8, we have: lim n →∞ | J n,μ n ( y n ) − J ∗ ( y n )) | = 0 w.p.1. Thus, μ n ( y n ) converges to μ ∗ ( y n ) almost surely where μ ∗ is an optimal policy of the original continuous problem. Thus, for all  > 0, there exists N such that for all n > N : || μ n ( y n ) − μ ∗ ( y n ) || 2 ≤  2 w.p.1. Under the assumption that μ ∗ is continuous at z , and due to lim n →∞ y n = z almost surely, we can choose N large enough such that for all n > N : || μ ∗ ( y n ) − μ ∗ ( z ) || 2 ≤  2 w.p.1. From the above inequalities: || μ n ( y n ) − μ ∗ ( z ) || 2 ≤ || μ n ( y n ) − μ ∗ ( y n ) || 2 + || μ ∗ ( y n ) − μ ∗ ( z ) || 2 ≤ , ∀ n > N w.p.1. Therefore, lim n →∞ || μ n ( z ) − μ ∗ ( z ) || 2 = lim n →∞ || μ n ( y n ) − μ ∗ ( z ) || 2 = 0 w.p.1.  34