arXiv:1306.1520v1 [cs.LG] 6 Jun 2013

Policy Search: Any Local Optimum Enjoys a Global Performance Guarantee

Bruno Scherrer, LORIA – MAIA project-team, Nancy, France, bruno.scherrer@inria.fr
Matthieu Geist, Supélec – IMS-MaLIS Research Group, Metz, France, matthieu.geist@supelec.fr

June 7, 2018

Abstract

Local Policy Search is a popular reinforcement learning approach for handling large state spaces. Formally, it searches locally in a parameterized policy space in order to maximize the associated value function averaged over some predefined distribution. It is probably commonly believed that the best one can hope for in general from such an approach is a local optimum of this criterion. In this article, we show the following surprising result: any (approximate) local optimum enjoys a global performance guarantee. We compare this guarantee with the one satisfied by Direct Policy Iteration, an approximate dynamic programming algorithm that does some form of policy search: while the approximation error of Local Policy Search may generally be bigger (because local search requires considering a space of stochastic policies), we argue that the concentrability coefficient appearing in the performance bound is much nicer. Finally, we discuss several practical and theoretical consequences of our analysis.

1 Introduction

We consider the reinforcement learning problem formalized through Markov Decision Processes (MDP) (Sutton and Barto, 1998; Puterman, 1994), in the situation where the state space is large and approximation is required. On the one hand, Approximate Dynamic Programming (ADP) is a standard approach for handling large state spaces. It consists in mimicking, in an approximate form, the standard algorithms that were designed to optimize the policy globally (maximizing the associated value function for each state). On the other hand, Local Policy Search (LPS) consists in parameterizing the policy (the so-called "actor") and locally maximizing the associated expected value function, for example using a (natural) gradient ascent (Baxter and Bartlett, 2001; Kakade, 2001), possibly with a critic (Sutton et al., 1999; Peters and Schaal, 2008), expectation-maximization (EM) (Kober and Peters, 2011), or even directly using some black-box optimization algorithm (Heidrich-Meisner and Igel, 2008).

The distinction we make here between ADP and LPS relies on the overall algorithmic scheme that is considered (dynamic programming or local maximization of the expected value). For example, we see the Direct (or Classification-based) Policy Iteration (DPI) algorithm (Lagoudakis and Parr, 2003; Fern et al., 2006; Lazaric et al., 2010) as belonging to the ADP family: even if it can be seen as a policy search method (since there is no representation for the value function), the algorithm follows the general approximate policy iteration (API) scheme. The Conservative Policy Iteration (CPI) algorithm (Kakade and Langford, 2002), whose analysis has close connections with what we are going to argue in this paper, might be considered at the frontier of ADP and LPS: it is based on a damped version of API, where each new policy is a convex mixture of the current policy and the greedy one, the precise combination being chosen so as to guarantee an improvement in terms of the local fitness (the value function averaged over some predefined distribution).
Following the seminal work of Bertsekas and Tsitsiklis (1996), it has been shown that ADP algorithms enjoy global performance guarantees, bounding the loss of using the computed policy instead of the optimal one as a function of the approximation errors involved along the iterations, for example for approximate policy iteration (API) (Munos, 2003), for approximate value iteration (AVI) (Munos, 2007), or more generally for approximate modified policy iteration (AMPI) (Scherrer et al., 2012). To the best of our knowledge, similar general guarantees do not exist in the literature for LPS algorithms. Bounds have been derived for the CPI algorithm by Kakade and Langford (2002), and at first glance one may think that this was due to its closeness to the ADP family. In general though, the best one can hope for from LPS is a local optimum of the optimized fitness (that is, a local maximum of the averaged value function), and the important question of the loss with respect to the optimal policy remains open. It is for instance mentioned as the main "future work" in Bhatnagar et al. (2007), where the convergence of a family of natural actor-critic algorithms is proven: "[i]t is important to characterize the quality of converged solutions."

Experimentally, LPS methods seem to work rather well. Applications to standard benchmarks (Baxter and Bartlett, 2001; Kakade, 2001; Peters and Schaal, 2008) and to real problems such as robotics (Peters and Schaal, 2008; Kober and Peters, 2011) suggest that they are competitive with the ADP approach. Surprisingly, gradient-based and EM approaches, which are usually prone to getting stuck in local optima, do not seem to be penalized in practice. Even more surprisingly, it was shown (Kakade, 2001) that a natural gradient ascent in the policy space can outperform ADP on the Tetris game. The motivation of this paper is to fill this theoretical gap for LPS methods and to explain to some extent their empirical successes. Our main contribution (Theorem 3) is to show that any (approximate) local optimum of the expected value function enjoys a global performance guarantee, similar to the one provided by ADP algorithms. The proof technique we use is reminiscent of the one used for CPI, but our result is much more general and applies to a broad class of algorithms.

Section 2 provides the necessary background and states formally what we mean by local policy search. Section 3 states and proves our main result. Section 4 discusses it: notably, a comparison to similar bounds for ADP is proposed, and the practical consequences of the result are discussed. Section 5 opens some perspectives.

2 Background and notations

Write Δ_X for the set of probability distributions over a finite set X, and Y^X for the set of functions from X to the set Y. By convention, all vectors are column vectors, except distributions, which are row vectors (for left multiplication). We consider a discounted MDP M = {S, A, P, r, γ} (Puterman, 1994; Bertsekas, 1995), with S the finite state space (it is straightforward to extend our results to an infinite state space and a compact action space; we choose the finite setting for ease and clarity of exposition), A the finite action space, P ∈ (Δ_S)^{S×A} the Markovian dynamics (P(s'|s,a) denotes the probability of transiting to s' from the couple (s,a)), r ∈ R^{S×A} the bounded reward function, and γ ∈ [0,1) the discount factor. A stochastic policy π ∈ (Δ_A)^S associates to each state s a probability distribution π(·|s) over the action space A. For a given policy π, we write r_π ∈ R^S for the reward function defined as r_π(s) = Σ_{a∈A} π(a|s) r(s,a) = E_{a∼π(·|s)}[r(s,a)], and P_π ∈ (Δ_S)^S for the stochastic kernel defined as P_π(s'|s) = Σ_{a∈A} π(a|s) P(s'|s,a) = E_{a∼π(·|s)}[P(s'|s,a)].
The value function v_π quantifies the quality of a policy π at each state s by measuring the expected cumulative reward received when starting in this state and then following the policy:
$$ v_\pi(s) = \mathbb{E}\left[ \sum_{t \ge 0} \gamma^t r_\pi(s_t) \,\middle|\, s_0 = s,\ s_{t+1} \sim P_\pi(\cdot|s_t) \right]. $$
The Bellman operator T_π of policy π associates to each function v ∈ R^S the function defined as
$$ [T_\pi v](s) = \mathbb{E}\left[ r_\pi(s) + \gamma v(s') \,\middle|\, s' \sim P_\pi(\cdot|s) \right], $$
or, more compactly, T_π v = r_π + γ P_π v. The value function v_π is the unique fixed point of T_π.

It is well known that there exists a policy π* that is optimal in the sense that it satisfies v_{π*}(s) ≥ v_π(s) for all states s and policies π. The value function v* is the unique fixed point of the following nonlinear Bellman equation:
$$ v_* = T v_* \quad \text{with} \quad T v = \max_{\pi \in A^S} T_\pi v, $$
where the max is taken componentwise. Given any function v ∈ R^S, we say that a policy π' is greedy with respect to v if T_{π'} v = T v, and we write G(π) for the set of policies that are greedy with respect to the value v_π of some policy π. The notions of optimal value function and greedy policies are fundamental to optimal control because of the following property: any policy π* that is greedy with respect to the optimal value is an optimal policy, and its value v_{π*} is equal to v*. Therefore, another equivalent characterization is that π is optimal if and only if it is greedy with respect to its own value, that is, if and only if

π ∈ G(π).    (1)

For any distribution μ, we define the γ-weighted occupancy measure induced by the policy π when the initial state is sampled from μ as d_{μ,π} = (1−γ) μ (I − γP_π)^{-1} (recall that μ is a row vector by convention), with (I − γP_π)^{-1} = Σ_{t≥0} (γP_π)^t; when it exists, this measure tends to the stationary distribution of P_π as the discount factor tends to 1. It can easily be seen that μ v_π = (1/(1−γ)) d_{μ,π} r_π. For any two distributions μ and ν, write ‖μ/ν‖_∞ for the smallest constant C satisfying μ(s) ≤ C ν(s) for all s ∈ S (this constant is the supremum norm of the componentwise ratio, hence the notation).

From an algorithmic point of view, Dynamic Programming methods compute the optimal value-policy pair (v*, π*) in an iterative way. When the problem is large and cannot be solved exactly, Approximate Dynamic Programming (ADP) refers to noisy implementations of these exact methods, where the noise is due to approximations at each iteration. For instance, Approximate Value Iteration and Approximate Policy Iteration respectively correspond to the following schemes:
$$ v_{k+1} = T v_k + \epsilon_k \qquad \text{and} \qquad \left\{ \begin{array}{l} v_k = v_{\pi_k} + \epsilon_k, \\ \pi_{k+1} \in G(v_k). \end{array} \right. $$
In the Local Policy Search (LPS) context on which we focus in this paper, we further need to specify the space over which the search is performed. We write Π for this set and assume that it is a convex subset of (Δ_A)^S. For a predefined distribution ν of interest, the problem addressed by LPS can be cast as follows: find π ∈ Π that is a local maximum of
$$ J_\nu(\pi) = \mathbb{E}_{s \sim \nu}[v_\pi(s)]. $$
Assume that we are able to (approximately) find such a locally optimal policy π. A natural question is: how close is v_π to v* = v_{π*}? Quite surprisingly, and in contrast with most optimization problems, we will provide a generic answer to this question; this is the aim of the next section.
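To make the notation above concrete, here is a minimal numerical sketch (our illustration, not part of the original paper): it builds a small random MDP, computes v_π, d_{ν,π} and J_ν(π) exactly, and checks the identity μv_π = d_{μ,π} r_π/(1−γ). The toy MDP and all variable names are illustrative assumptions.

```python
import numpy as np

# A minimal sketch (not from the paper): exact computation of v_pi, d_{nu,pi}
# and J_nu(pi) on a small synthetic MDP. The toy MDP and names are assumptions.

rng = np.random.default_rng(0)
nS, nA, gamma = 4, 3, 0.9

P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a, s'] transition kernel
r = rng.uniform(size=(nS, nA))                  # bounded reward r(s, a)
nu = np.ones(nS) / nS                           # distribution of interest (row vector)

def r_pi(pi):
    """r_pi(s) = sum_a pi(a|s) r(s, a)."""
    return (pi * r).sum(axis=1)

def P_pi(pi):
    """P_pi(s'|s) = sum_a pi(a|s) P(s'|s, a)."""
    return np.einsum('sa,sat->st', pi, P)

def value(pi):
    """v_pi = (I - gamma P_pi)^{-1} r_pi, the unique fixed point of T_pi."""
    return np.linalg.solve(np.eye(nS) - gamma * P_pi(pi), r_pi(pi))

def d_occupancy(mu, pi):
    """d_{mu,pi} = (1 - gamma) mu (I - gamma P_pi)^{-1}, a row vector."""
    return (1 - gamma) * np.linalg.solve((np.eye(nS) - gamma * P_pi(pi)).T, mu)

def J(nu, pi):
    """J_nu(pi) = E_{s ~ nu}[v_pi(s)]."""
    return nu @ value(pi)

pi = rng.dirichlet(np.ones(nA), size=nS)        # a stochastic policy pi(a|s)
v = value(pi)
d = d_occupancy(nu, pi)
# Sanity check of the identity nu v_pi = d_{nu,pi} r_pi / (1 - gamma):
assert np.isclose(nu @ v, d @ r_pi(pi) / (1 - gamma))
print(J(nu, pi))
```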
3 Main result

In order to state our main result, we first need to define precisely what we mean by an approximate local optimum. For any pair of policies π and π' and any coefficient α ∈ (0,1), we write (1−α)π + απ' for the stochastic mixture of π and π' with weights (1−α) and α.

Definition 1 (ε-local optimum). We say that a policy π ∈ Π is an ε-local optimum of J_ν if
$$ \forall \pi' \in \Pi, \quad \lim_{\alpha \to 0} \frac{\nu v_{(1-\alpha)\pi + \alpha\pi'} - \nu v_\pi}{\alpha} \le \epsilon. $$
This condition is for instance satisfied when ‖∇_π J_ν(π)‖_∞ ≤ ε: it states that the gradient is "small enough". Notice that the convexity of the policy space is necessary for this definition to make sense.

Then, we define a relaxation of the set of policies that are greedy with respect to some given policy.

Definition 2 (μ-weighted ε-greedy policies). We write G_Π(π, μ, ε) for the set of policies that are ε-greedy with respect to π (in μ-expectation). It is formally defined as
$$ G_\Pi(\pi, \mu, \epsilon) = \left\{ \pi' \in \Pi \ \text{such that} \ \forall \pi'' \in \Pi, \ \mu T_{\pi'} v_\pi + \epsilon \ge \mu T_{\pi''} v_\pi \right\}. $$
This is indeed a relaxation of G, as it can be observed that for all policies π and π',
$$ \pi' \in G(\pi) \ \Leftrightarrow\ \forall \mu \in \Delta_S,\ \pi' \in G_\Pi(\pi, \mu, 0) \ \Leftrightarrow\ \exists \mu \in \Delta_S,\ \mu > 0,\ \pi' \in G_\Pi(\pi, \mu, 0). $$
We are now ready to state the first important result, which links the ε-local optimality of Definition 1 to a relaxed optimality characterization involving Definition 2.

Theorem 1. The policy π ∈ Π is an ε-local maximum of J_ν if and only if it is (1−γ)ε-greedy with respect to itself, in d_{ν,π}-expectation:
$$ \forall \pi' \in \Pi, \ \lim_{\alpha \to 0} \frac{\nu v_{(1-\alpha)\pi+\alpha\pi'} - \nu v_\pi}{\alpha} \le \epsilon \quad \Leftrightarrow \quad \pi \in G_\Pi(\pi, d_{\nu,\pi}, (1-\gamma)\epsilon). $$

The following technical (but simple) lemma will be useful for the proof.

Lemma 1. For any policies π and π', we have
$$ v_{\pi'} - v_\pi = (I - \gamma P_{\pi'})^{-1} (T_{\pi'} v_\pi - v_\pi). $$

Proof. The proof uses the fact that the linear Bellman equation v_π = r_π + γP_π v_π implies v_π = (I − γP_π)^{-1} r_π. Then,
$$ v_{\pi'} - v_\pi = (I - \gamma P_{\pi'})^{-1} r_{\pi'} - v_\pi = (I - \gamma P_{\pi'})^{-1} (r_{\pi'} + \gamma P_{\pi'} v_\pi - v_\pi) = (I - \gamma P_{\pi'})^{-1} (T_{\pi'} v_\pi - v_\pi). $$

Proof of Theorem 1. For any α and any π' ∈ Π, write π_α = (1−α)π + απ'. Using Lemma 1, we have
$$ \nu (v_{\pi_\alpha} - v_\pi) = \nu (I - \gamma P_{\pi_\alpha})^{-1} (T_{\pi_\alpha} v_\pi - v_\pi). $$
By observing that r_{π_α} = (1−α) r_π + α r_{π'} and P_{π_α} = (1−α) P_π + α P_{π'}, it can be seen that T_{π_α} v_π = (1−α) T_π v_π + α T_{π'} v_π. Thus, using the fact that v_π = T_π v_π, we get
$$ T_{\pi_\alpha} v_\pi - v_\pi = (1-\alpha) T_\pi v_\pi + \alpha T_{\pi'} v_\pi - v_\pi = \alpha (T_{\pi'} v_\pi - v_\pi). $$
In parallel, we have
$$ (I - \gamma P_{\pi_\alpha})^{-1} = \left(I - \gamma P_\pi + \alpha\gamma (P_\pi - P_{\pi'})\right)^{-1} = (I - \gamma P_\pi)^{-1} (I + \alpha M), $$
where the exact form of the matrix M does not matter. Putting these together, we obtain
$$ \nu (v_{\pi_\alpha} - v_\pi) = \alpha\, \nu (I - \gamma P_\pi)^{-1} (T_{\pi'} v_\pi - v_\pi) + O(\alpha^2). $$
Taking the limit, we obtain
$$ \lim_{\alpha \to 0} \frac{\nu (v_{\pi_\alpha} - v_\pi)}{\alpha} = \nu (I - \gamma P_\pi)^{-1} (T_{\pi'} v_\pi - v_\pi) = \frac{1}{1-\gamma}\, d_{\nu,\pi} (T_{\pi'} v_\pi - v_\pi), $$
from which the stated result follows.

With Theorem 1, we know that if the LPS algorithm has produced a policy π that is an ε-local maximum, then it satisfies, for some distribution μ,

π ∈ G_Π(π, μ, ε),    (2)

which can be seen as a relaxed version of the original optimality characterization of Equation (1).
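The identity established in the proof of Theorem 1 is easy to check numerically. The following self-contained sketch (ours, under the same illustrative toy-MDP assumptions as before) compares a finite-difference estimate of the directional derivative of J_ν with the expression d_{ν,π}(T_{π'}v_π − v_π)/(1−γ).

```python
import numpy as np

# Our own numerical check of the identity behind Theorem 1: the directional
# derivative of J_nu at pi toward pi' equals d_{nu,pi}(T_{pi'} v_pi - v_pi)/(1-gamma).
# The toy MDP below and all names are illustrative assumptions.

rng = np.random.default_rng(1)
nS, nA, gamma = 5, 3, 0.9
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a, s']
r = rng.uniform(size=(nS, nA))
nu = np.ones(nS) / nS

P_pi = lambda pi: np.einsum('sa,sat->st', pi, P)
r_pi = lambda pi: (pi * r).sum(axis=1)
value = lambda pi: np.linalg.solve(np.eye(nS) - gamma * P_pi(pi), r_pi(pi))

pi  = rng.dirichlet(np.ones(nA), size=nS)       # current stochastic policy
pi2 = rng.dirichlet(np.ones(nA), size=nS)       # direction policy pi'

# Finite-difference estimate of lim_{a->0} (nu v_{(1-a)pi + a pi'} - nu v_pi)/a:
alpha = 1e-6
lhs = (nu @ value((1 - alpha) * pi + alpha * pi2) - nu @ value(pi)) / alpha

# Closed form: d_{nu,pi} (T_{pi'} v_pi - v_pi) / (1 - gamma)
v = value(pi)
d = (1 - gamma) * np.linalg.solve((np.eye(nS) - gamma * P_pi(pi)).T, nu)
T_pi2_v = r_pi(pi2) + gamma * P_pi(pi2) @ v
rhs = d @ (T_pi2_v - v) / (1 - gamma)

print(lhs, rhs)   # the two quantities agree up to a term of order alpha
```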
As we are about to see in the theorem to come next, such a relaxed optimality characterization can be shown to imply a global guarantee. To state this result, we first need to define the "ν-greedy-complexity" of our policy space, which measures how well Π is designed to approximate the greedy operator, for a starting distribution ν.

Definition 3 (ν-greedy-complexity). We define the ν-greedy-complexity E_ν(Π) of the policy space Π as
$$ \mathcal{E}_\nu(\Pi) = \max_{\pi \in \Pi} \min_{\pi' \in \Pi} \ d_{\nu,\pi} (T v_\pi - T_{\pi'} v_\pi). $$
It is clear that if Π contains all deterministic policies (a strong assumption), then E_ν(Π) = 0.

Given this definition, we are ready to state our second important result.

Theorem 2. If π ∈ G_Π(π, d_{ν,π}, ε), then for any policy π' and for any distribution μ over S, we have
$$ \mu v_{\pi'} \le \mu v_\pi + \frac{1}{(1-\gamma)^2} \left\| \frac{d_{\mu,\pi'}}{\nu} \right\|_\infty \left( \mathcal{E}_\nu(\Pi) + \epsilon \right). $$
Notice that this theorem is actually a slight generalization of Theorem 6.2 of Kakade and Langford (2002): it holds for any policy π', not only for the optimal one, and the error term is split up, which is necessary to provide a more general result. We provide the proof, essentially the same as that of Kakade and Langford (2002), for the sake of completeness.

Proof. Using again Lemma 1 and the fact that T v_π ≥ T_{π'} v_π, we have
$$ \mu (v_{\pi'} - v_\pi) = \mu (I - \gamma P_{\pi'})^{-1} (T_{\pi'} v_\pi - v_\pi) = \frac{1}{1-\gamma} d_{\mu,\pi'} (T_{\pi'} v_\pi - v_\pi) \le \frac{1}{1-\gamma} d_{\mu,\pi'} (T v_\pi - v_\pi). $$
Since T v_π − v_π ≥ 0 and d_{ν,π} ≥ (1−γ)ν, we get
$$ \mu (v_{\pi'} - v_\pi) \le \frac{1}{1-\gamma} \left\| \frac{d_{\mu,\pi'}}{\nu} \right\|_\infty \nu (T v_\pi - v_\pi) \le \frac{1}{(1-\gamma)^2} \left\| \frac{d_{\mu,\pi'}}{\nu} \right\|_\infty d_{\nu,\pi} (T v_\pi - v_\pi). $$
Finally, writing d_{ν,π}(T v_π − v_π) = d_{ν,π} T v_π − d_{ν,π} v_π, we obtain
$$ \mu (v_{\pi'} - v_\pi) \le \frac{1}{(1-\gamma)^2} \left\| \frac{d_{\mu,\pi'}}{\nu} \right\|_\infty \left( d_{\nu,\pi} T v_\pi - \max_{\pi'' \in \Pi} d_{\nu,\pi} T_{\pi''} v_\pi + \max_{\pi'' \in \Pi} d_{\nu,\pi} T_{\pi''} v_\pi - d_{\nu,\pi} v_\pi \right) \le \frac{1}{(1-\gamma)^2} \left\| \frac{d_{\mu,\pi'}}{\nu} \right\|_\infty \left( \mathcal{E}_\nu(\Pi) + \epsilon \right). $$

The main result of the paper is a straightforward combination of both theorems.

Theorem 3. Assume that the policy π is an ε-local optimum of J_ν over Π, that is,
$$ \forall \pi' \in \Pi, \quad \lim_{\alpha \to 0} \frac{J_\nu(\pi_\alpha) - J_\nu(\pi)}{\alpha} \le \epsilon \qquad \text{(with } \pi_\alpha = (1-\alpha)\pi + \alpha\pi'\text{)}. $$
Then π enjoys the following global performance guarantee:
$$ 0 \le \mathbb{E}_{s \sim \mu}[v_*(s) - v_\pi(s)] \le \frac{1}{1-\gamma} \left\| \frac{d_{\mu,\pi_*}}{\nu} \right\|_\infty \left( \frac{\mathcal{E}_\nu(\Pi)}{1-\gamma} + \epsilon \right). $$
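As an illustration of Theorem 3 (ours, not an algorithm from the paper), the following sketch runs a simple mixture-based local search over the full simplex Π = (Δ_A)^S of a toy MDP, the greedy policy playing the role of the ascent direction. With this rich policy space E_ν(Π) = 0, the residual directional derivative at the returned policy plays the role of ε, and the global guarantee can be checked directly.

```python
import numpy as np

# Our own illustration of Theorem 3 (not an algorithm from the paper): a simple
# mixture-based local search over Pi = (Delta_A)^S on a toy MDP, followed by a
# numerical check of the global guarantee. The toy MDP and names are assumptions.

rng = np.random.default_rng(2)
nS, nA, gamma = 4, 3, 0.9
P = rng.dirichlet(np.ones(nS), size=(nS, nA))        # P[s, a, s']
r = rng.uniform(size=(nS, nA))
nu = np.ones(nS) / nS                                 # sampling distribution
mu = rng.dirichlet(np.ones(nS))                       # evaluation distribution

P_pi = lambda pi: np.einsum('sa,sat->st', pi, P)
r_pi = lambda pi: (pi * r).sum(axis=1)
value = lambda pi: np.linalg.solve(np.eye(nS) - gamma * P_pi(pi), r_pi(pi))
d_occ = lambda m, pi: (1 - gamma) * np.linalg.solve((np.eye(nS) - gamma * P_pi(pi)).T, m)
Q = lambda pi: r + gamma * np.einsum('sat,t->sa', P, value(pi))

def local_gap(pi):
    """Largest directional derivative of J_nu at pi over Pi (the epsilon of Definition 1)."""
    q, d = Q(pi), d_occ(nu, pi)
    greedy = np.eye(nA)[q.argmax(axis=1)]             # best mixing direction in Pi
    gap = d @ ((greedy * q).sum(axis=1) - (pi * q).sum(axis=1)) / (1 - gamma)
    return gap, greedy

pi = np.ones((nS, nA)) / nA                           # start from the uniform policy
for _ in range(200):
    gap, greedy = local_gap(pi)
    if gap < 1e-6:
        break
    # Crude line search over the mixtures (1 - a) pi + a greedy:
    alphas = np.linspace(0.01, 1.0, 50)
    pi = max(((1 - a) * pi + a * greedy for a in alphas), key=lambda p: nu @ value(p))

# Exact optimal policy via policy iteration (exact on this tiny MDP):
pi_star = np.ones((nS, nA)) / nA
for _ in range(100):
    pi_star = np.eye(nA)[Q(pi_star).argmax(axis=1)]

eps, _ = local_gap(pi)                                # residual local-optimality error
loss = mu @ (value(pi_star) - value(pi))
bound = (1 / (1 - gamma)) * (d_occ(mu, pi_star) / nu).max() * (0.0 / (1 - gamma) + eps)
print(loss, bound)                                    # loss <= bound, as Theorem 3 states
```

The stopping quantity `gap` is exactly the largest directional derivative of Definition 1, which Theorem 1 relates to the relaxed greedy condition; the line search over mixtures is one simple way among many to drive it to zero.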
Though there is an obvious connection with our work, the result there is limited since 1) it seems specific to the natural gradient approach (whereas our result is very general), and 2) it is not exploited so as to connect with a global performance guarantee.

A performance guarantee very similar to the one we provide in Theorem 2 was first derived for CPI in Kakade and Langford (2002). The main difference is that the term (E_ν(Π) + ε) of Theorem 2 is replaced there by some global precision ε, corresponding to the error made by a classifier used as an approximate greedy policy chooser. Similarly to the work we have just mentioned on the Natural Policy Gradient, this result of the literature was certainly considered specific to the CPI algorithm, which has unfortunately not been used widely in practice, probably because of its somewhat complex implementation. In contrast, we show in this paper that such a performance guarantee is valid for any method that finds a policy satisfying a relaxed Bellman identity like that of Equation (2), among which one finds many widely used methods that do Local Policy Search.

4.2 Relations to bounds of approximate dynamic programming

The performance guarantee of any approximate dynamic programming algorithm involves (i) a (quadratic) dependency on the average horizon 1/(1−γ), (ii) a concentration coefficient (which quantifies the divergence between the worst discounted average future state distribution when starting from the measure of interest and the distribution used to control the estimation errors), and (iii) an error term linked to the estimation error encountered at each iteration (which can be due to the approximation of value functions and/or policies). Depending on what quantity is estimated, a comparison of these estimation errors may be hard. To ease the comparison, the following discussion focuses on the Direct Policy Iteration algorithm, which does some form of policy search. Note however that several aspects of our comparison hold for other ADP algorithms.

Direct Policy Iteration (DPI) (Lagoudakis and Parr, 2003; Lazaric et al., 2010) is an approximate policy iteration algorithm where, at each iteration, (i) the value function is estimated for a set of states using Monte Carlo rollouts and (ii) the greedy policy (with respect to the current value function) is approximately chosen in some predefined policy space (through a weighted classification problem). Write P for this policy space, which is typically a set of deterministic policies. For an initial policy π_0 and a given distribution ν, DPI iterates as follows: pick π_{k+1} ∈ P so as to (approximately) minimize ν(T v_{π_k} − T_{π_{k+1}} v_{π_k}).
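For concreteness, here is a minimal sketch of the DPI scheme (our illustration, not the authors' implementation): the Monte Carlo rollouts and the weighted classifier are replaced by exact policy evaluation and an exact greedy choice within a small, randomly drawn set P of deterministic policies. The toy MDP and all names are assumptions.

```python
import numpy as np

# A minimal sketch of the DPI scheme (our illustration). For clarity, rollouts and
# the classifier are replaced by exact evaluation and an exact greedy choice within
# a small set of deterministic policies; the toy MDP and names are assumptions.

rng = np.random.default_rng(3)
nS, nA, gamma = 4, 3, 0.9
P_dyn = rng.dirichlet(np.ones(nS), size=(nS, nA))     # P[s, a, s']
r = rng.uniform(size=(nS, nA))
nu = np.ones(nS) / nS

P_pi = lambda pi: np.einsum('sa,sat->st', pi, P_dyn)
r_pi = lambda pi: (pi * r).sum(axis=1)
value = lambda pi: np.linalg.solve(np.eye(nS) - gamma * P_pi(pi), r_pi(pi))
Q = lambda pi: r + gamma * np.einsum('sat,t->sa', P_dyn, value(pi))

# A small deterministic policy space P (one-hot action choice per state):
policy_space = [np.eye(nA)[rng.integers(nA, size=nS)] for _ in range(20)]

pi = policy_space[0]
for k in range(10):
    q = Q(pi)
    Tv = q.max(axis=1)                                # [T v_{pi_k}](s)
    # Greedy step: pick pi_{k+1} in P minimizing nu (T v_{pi_k} - T_{pi'} v_{pi_k})
    regret = lambda p: nu @ (Tv - (p * q).sum(axis=1))
    pi = min(policy_space, key=regret)

print(nu @ value(pi))                                 # value reached by DPI within P
```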
To state the DPI bound, we need an alternative concentration coefficient as well as a new error term characterizing P. Let C*_{μ,ν} be the concentration coefficient defined as
$$ C^*_{\mu,\nu} = (1-\gamma)^2 \sum_{i=0}^{\infty} \sum_{j=0}^{\infty} \gamma^{i+j} \sup_{\pi \in A^S} \left\| \frac{\mu (P_{\pi_*})^i (P_\pi)^j}{\nu} \right\|_\infty. $$
We also need a measure of the complexity of the policy space P, similar to E_ν:
$$ \mathcal{E}'_\nu(P) = \max_{\pi \in P} \min_{\pi' \in P} \ \nu (T v_\pi - T_{\pi'} v_\pi). $$
Let also e be an estimation error term, which depends on the number of samples and tends to zero as the number of samples tends to infinity (at a rate depending on the chosen classifier). The performance guarantee of DPI (Lazaric et al., 2010; Ghavamzadeh and Lazaric, 2012) can then be expressed as follows:
$$ \limsup_{k \to \infty} \ \mu (v_* - v_{\pi_k}) \le \frac{C^*_{\mu,\nu}}{(1-\gamma)^2} \left( \mathcal{E}'_\nu(P) + e \right). $$
This bound is to be compared with the result of Theorem 3, regarding the three terms involved: the average horizon, the concentration coefficient and the greedy error term. Each term is discussed now, a brief summary being provided in Table 1. As said in Section 4.1, the LPS bound is very similar to the CPI one, and the CPI and DPI bounds have been extensively compared in Ghavamzadeh and Lazaric (2012); our discussion can be seen as complementary.

Table 1: Comparison of the performance guarantees for LPS and DPI

         bounded term                       horizon term    concentration term     error term
  LPS    μ(v* − v_π)                        1/(1−γ)²        ‖d_{μ,π*}/ν‖_∞         E_ν(Π) + (1−γ)ε
  DPI    limsup_{k→∞} μ(v* − v_{π_k})       1/(1−γ)²        C*_{μ,ν}               E'_ν(P) + e

Horizon term. Both bounds have a quadratic dependency on the average horizon 1/(1−γ). For approximate dynamic programming, this dependency can be shown to be tight (Scherrer et al., 2012), the only known way to improve it being to introduce non-stationary policies (Scherrer and Lesner, 2012). The tightness of this dependency for policy search is an open question. However, we suggest later in Section 4.3 a possible way to improve it.

Concentration coefficients. Both bounds involve a concentration coefficient. Even if they are different, they can be linked.

Theorem 4. We always have
$$ \left\| \frac{d_{\mu,\pi_*}}{\nu} \right\|_\infty \le \frac{1}{1-\gamma}\, C^*_{\mu,\nu}. $$
Moreover, while there always exists a ν such that ‖d_{μ,π*}/ν‖_∞ < ∞ (namely ν = d_{μ,π*}), there might not exist a ν such that C*_{μ,ν} < ∞.

Proof. Let us first consider the inequality. Using the definition of d_{μ,π*} and eventually the fact that d_{μ,π*} ≥ (1−γ)μ, we have
$$ C^*_{\mu,\nu} = (1-\gamma)^2 \sum_{i,j=0}^{\infty} \gamma^{i+j} \sup_{\pi} \left\| \frac{\mu (P_{\pi_*})^i (P_\pi)^j}{\nu} \right\|_\infty \ge (1-\gamma)^2 \left\| \frac{\sum_{i,j=0}^{\infty} \gamma^{i+j} \mu (P_{\pi_*})^{i+j}}{\nu} \right\|_\infty = (1-\gamma) \left\| \frac{\sum_{i=0}^{\infty} \gamma^{i} d_{\mu,\pi_*} (P_{\pi_*})^{i}}{\nu} \right\|_\infty \ge (1-\gamma)^2 \left\| \frac{\sum_{i=0}^{\infty} \gamma^{i} \mu (P_{\pi_*})^{i}}{\nu} \right\|_\infty = (1-\gamma) \left\| \frac{d_{\mu,\pi_*}}{\nu} \right\|_\infty. $$
For the second part of the theorem, consider an MDP with N states and N actions, with μ = δ_1 a Dirac on the first state, and such that from this state, action a ∈ {1, ..., N} leads to state a deterministically. Write c = sup_{π∈A^S} ‖μP_π/ν‖_∞ for the sup appearing in the first terms defining C*_{μ,ν}. For any π, we have μP_π ≤ cν; since from the first state any action can be chosen deterministically, μP_π can be any Dirac δ_a, so for any action a we have δ_a ≤ cν, which implies 1 ≤ cν(a). Consequently, 1 = Σ_{i=1}^N ν(i) ≥ N/c, that is, c ≥ N. This being true for arbitrary N ∈ ℕ, no choice of ν keeps c, and thus C*_{μ,ν}, bounded as N grows; on the countably infinite version of this MDP, c = ∞ and thus C*_{μ,ν} = ∞.

The first part of this result was already stated in Ghavamzadeh and Lazaric (2012), for the comparison of CPI (which involves the same concentration coefficient as LPS) and DPI. The second part is new: it tells us that we may have ‖d_{μ,π*}/ν‖_∞ ≪ C*_{μ,ν}, which is clearly in favor of LPS (and of CPI, as a side effect).
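The second part of Theorem 4 can be illustrated numerically on the star-shaped MDP of its proof. In the sketch below (ours; the absorbing behaviour of the non-initial states and the placement of the reward are assumptions added only to make the MDP complete), the sup appearing in the first terms of C*_{μ,ν} grows linearly with N for a uniform ν and is infinite for ν = d_{μ,π*}, whereas the LPS coefficient ‖d_{μ,π*}/ν‖_∞ equals 1 for that same choice of ν.

```python
import numpy as np

# Our own numerical illustration of the second part of Theorem 4, on the
# star-shaped MDP from its proof: mu is a Dirac on state 1 and, from state 1,
# action a leads deterministically to state a. To complete the MDP we make all
# other states absorbing and reward the last one (illustrative assumptions).

N, gamma = 50, 0.9
mu = np.zeros(N); mu[0] = 1.0                      # mu = delta_1 (0-indexed)

# Optimal policy: from state 1 take the action leading to the rewarding state N,
# then stay there; its transition matrix P_{pi*} is all we need here.
P_star = np.zeros((N, N))
P_star[0, N - 1] = 1.0
P_star[np.arange(1, N), np.arange(1, N)] = 1.0

d_star = (1 - gamma) * np.linalg.solve((np.eye(N) - gamma * P_star).T, mu)

def first_sup_of_C(nu):
    """sup_pi || mu P_pi / nu ||_inf: here mu P_pi can be any Dirac delta_a,
    so the sup equals max_s 1 / nu(s)."""
    return np.max(np.divide(1.0, nu, out=np.full(N, np.inf), where=nu > 0))

nu_unif = np.ones(N) / N
print(first_sup_of_C(nu_unif))                     # = N: grows with the size of the MDP
print(first_sup_of_C(d_star))                      # = inf: C*_{mu,nu} blows up for nu = d_{mu,pi*}
print((d_star[d_star > 0] / d_star[d_star > 0]).max())  # = 1: ||d_{mu,pi*}/nu||_inf for nu = d_{mu,pi*}
```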
Error terms. Both bounds involve an error term. The terms ε (LPS) and e (DPI) can be made arbitrarily small by increasing the computational effort (the time devoted to running the algorithm and the amount of samples used), though not much more can be said without studying a specific algorithmic instance (e.g., the type of local search for LPS or the type of classifier for DPI). The terms defining the "greedy complexity" of the policy spaces can be compared more easily. Because they use distributions that can be compared (d_{ν,π} ≥ (1−γ)ν), we have, for all policy spaces Π,
$$ \mathcal{E}'_\nu(\Pi) \le \frac{\mathcal{E}_\nu(\Pi)}{1-\gamma}. $$
However, this relation (already stated in Ghavamzadeh and Lazaric (2012)) does not take into account the fact that LPS (or CPI, for the discussion of Ghavamzadeh and Lazaric (2012)) works with stochastic policies while DPI works with deterministic policies. For example, assume that Π is the convex closure of P. In this case, we would have E'_ν(P) ≤ E'_ν(Π). Therefore, this error term is more in favor of DPI than of LPS: the search space of DPI is presumably smaller (while still allowing the same deterministic greedy policies to be represented).

4.3 Practical and theoretical consequences of our analysis

Finally, this section provides a few important consequences of our analysis, and of Theorem 3 in particular.

Rich policy space and equivalence between local and global optimality. If the policy space is very rich, one can easily show that any local optimum is actually global (this result being a direct corollary of Theorem 3).

Theorem 5. Let ν > 0 be a distribution. Assume that the policy space Π is rich in the sense that E_ν(Π) = 0, and that π is an (exact) local optimum of J_ν (ε = 0). Then v_π = v*.

While this result is well known in the case of tabular policies, it is to our knowledge new in such a general setting (acknowledging that E_ν(Π) = 0 is a rather strong assumption).

Choice of the sampling distribution. Given the result of Theorem 3, and as also mentioned for CPI in Kakade and Langford (2002) since it satisfies a similar bound, if one wants to optimize the policy according to a distribution μ (that is, so that μ(v* − v_π) is small), then one should optimize the fitness J_ν with a distribution ν ≃ d_{μ,π*}, so as to minimize the coefficient ‖d_{μ,π*}/ν‖_∞. Ideally, one should sample states based on trajectories obtained by following the optimal policy π* from states drawn according to μ (which is not surprising). This is in general not realistic, since we do not know the optimal policy π*, but practical solutions may be envisioned. First, this means that one should sample states in the "interesting" part of the state space, that is, where the optimal policy is believed to lead. This is a natural piece of information that a domain expert should be able to provide, and providing it is in general much easier than actually controlling the system with the optimal policy (or with a policy that leads to these interesting parts of the state space). Also, though we leave the precise study of this idea for future research, a natural practical approach for setting the distribution ν would be to compute a sequence of policies π_1, π_2, ..., such that for all i, π_i is a local optimum of π ↦ J_{d_{ν,π_{i-1}}}(π), that is, of the criterion weighted by the region visited by the previous policy π_{i-1}. It may be particularly interesting to study whether the convergence of such an iterative process leads to interesting guarantees.

One may also notice that Theorem 3 can straightforwardly be stated more generally, for any reference policy. If π is an ε-local optimum of J_ν over Π, then for any stochastic policy π' we have
$$ \mu v_{\pi'} \le \mu v_\pi + \frac{1}{1-\gamma} \left\| \frac{d_{\mu,\pi'}}{\nu} \right\|_\infty \left( \frac{\mathcal{E}_\nu(\Pi)}{1-\gamma} + \epsilon \right). $$
Therefore, one can sample trajectories according to an acceptable (and known) controller π' so as to get state samples with which to optimize J_{d_{μ,π'}}. More generally, if we know where a good policy π' leads the system to from some initial distribution μ, we can learn a policy π that is guaranteed to be approximately as good (and potentially better).
A better learning problem? The bound of Theorem 3 has a quadratic dependency on the effective average horizon 1/(1−γ). For approximate dynamic programming, this dependency is known to be tight (Scherrer and Lesner, 2012). At the current time, this is an open question for policy search. However, we can somewhat improve the bound. We have shown that the ε-local optimality of a policy π implies that it satisfies a relaxed Bellman global optimality characterization, π ∈ G_Π(π, d_{ν,π}, ε), which in turn implies Theorem 3. The following result, involving a slightly simpler relaxed Bellman equation, can be proved similarly to Theorem 2: if π ∈ G_Π(π, ν, ε), then for any policy π',
$$ \mu v_{\pi'} \le \mu v_\pi + \frac{1}{1-\gamma} \left\| \frac{d_{\mu,\pi'}}{\nu} \right\|_\infty \left( \mathcal{E}'_\nu(\Pi) + \epsilon \right). $$
A policy satisfying the left-hand-side condition would thus enjoy an improved dependency on the horizon (1/(1−γ) instead of 1/(1−γ)²). At the current time, we do not know whether there exists an efficient algorithm for computing a policy satisfying π ∈ G_Π(π, ν, ε). The above guarantee suggests that solving such a problem may improve over traditional policy search approaches.

5 Conclusion

In the past years, local policy search algorithms have been shown to be practical, viable alternatives to the more traditional approximate dynamic programming field. The derivation of global performance guarantees for such approaches, probably considered a desperate case, was to our knowledge never addressed in the literature. In this article, we have shown a surprising result: any Local Policy Search algorithm, as long as it is able to provide an approximate local optimum of J_ν(π), enjoys a global performance guarantee similar to the ones of approximate dynamic programming algorithms. From a theoretical viewpoint, there is thus no reason to prefer approximate dynamic programming over policy search (practical reasons, e.g., the necessity of a simulator, or other theoretical reasons, e.g., the rate of convergence, may come into play). Since the bounds of ADP are known to be tight, the question of whether the guarantee we have provided is tight constitutes an interesting future research direction. We suggested that it may be a better learning strategy to look for a policy π satisfying π ∈ G_Π(π, ν, ε) instead of searching for a local maximum of J_ν, as it leads to a better bound. Designing an algorithm that would do so efficiently is another interesting perspective. Finally, we only considered pure actor algorithms here, with only a parameterization of the policy. The extension of our analysis to situations where one also uses a critic (a parameterization of the value function) is a natural track to explore.

References

Baxter, J. and Bartlett, P. L. (2001). Infinite-horizon gradient-based policy search. Journal of Artificial Intelligence Research, 15, 319–350.

Bertsekas, D. and Tsitsiklis, J. (1996). Neuro-Dynamic Programming. Athena Scientific.

Bertsekas, D. P. (1995). Dynamic Programming and Optimal Control. Athena Scientific.

Bhatnagar, S., Sutton, R. S., Ghavamzadeh, M., and Lee, M. (2007). Incremental natural actor-critic algorithms. In Conference on Neural Information Processing Systems (NIPS), Vancouver, Canada.
Fern, A., Yoon, S., and Givan, R. (2006). Approximate Policy Iteration with a Policy Language Bias: Solving Relational Markov Decision Processes. Journal of Artificial Intelligence Research, 25, 75–118.

Ghavamzadeh, M. and Lazaric, A. (2012). Conservative and Greedy Approaches to Classification-based Policy Iteration. In Conference on Artificial Intelligence (AAAI).

Heidrich-Meisner, V. and Igel, C. (2008). Evolution strategies for direct policy search. In Proceedings of the 10th International Conference on Parallel Problem Solving from Nature (PPSN X), pages 428–437.

Kakade, S. (2001). A Natural Policy Gradient. In Neural Information Processing Systems (NIPS), pages 1531–1538.

Kakade, S. and Langford, J. (2002). Approximately optimal approximate reinforcement learning. In International Conference on Machine Learning (ICML).

Kober, J. and Peters, J. (2011). Policy Search for Motor Primitives in Robotics. Pages 171–203.

Lagoudakis, M. G. and Parr, R. (2003). Reinforcement learning as classification: Leveraging modern classifiers. In International Conference on Machine Learning (ICML), pages 424–431.

Lazaric, A., Ghavamzadeh, M., and Munos, R. (2010). Analysis of a classification-based policy iteration algorithm. In International Conference on Machine Learning (ICML), pages 607–614.

Munos, R. (2003). Error bounds for approximate policy iteration. In International Conference on Machine Learning (ICML), pages 560–567.

Munos, R. (2007). Performance bounds in Lp norm for approximate value iteration. SIAM Journal on Control and Optimization.

Peters, J. and Schaal, S. (2008). Natural Actor-Critic. Neurocomputing, 71, 1180–1190.

Puterman, M. L. (1994). Markov Decision Processes: Discrete Stochastic Dynamic Programming. Wiley-Interscience.

Scherrer, B. and Lesner, B. (2012). On the Use of Non-Stationary Policies for Stationary Infinite-Horizon Markov Decision Processes. In P. Bartlett, F. Pereira, C. Burges, L. Bottou, and K. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 1835–1843.

Scherrer, B., Gabillon, V., Ghavamzadeh, M., and Geist, M. (2012). Approximate Modified Policy Iteration. In International Conference on Machine Learning (ICML).

Sutton, R. and Barto, A. (1998). Reinforcement Learning: An Introduction. A Bradford Book. The MIT Press.

Sutton, R. S., McAllester, D. A., Singh, S. P., and Mansour, Y. (1999). Policy Gradient Methods for Reinforcement Learning with Function Approximation. In Neural Information Processing Systems (NIPS), pages 1057–1063.