An Optimal Control Approach for the Persistent Monitoring Problem

Christos G. Cassandras, Xu Chu Ding, Xuchao Lin

Abstract— We propose an optimal control framework for persistent monitoring problems where the objective is to control the movement of mobile agents to minimize an uncertainty metric in a given mission space. For a single agent in a one-dimensional space, we show that the optimal solution is obtained in terms of a sequence of switching locations, thus reducing it to a parametric optimization problem. Using Infinitesimal Perturbation Analysis (IPA) we obtain a complete solution through a gradient-based algorithm. We also discuss a receding horizon controller which is capable of obtaining a near-optimal solution on-the-fly. We illustrate our approach with numerical examples.

This work was supported in part by NSF under Grant EFRI-0735974, by AFOSR under grants FA9550-07-1-0361 and FA9550-09-1-0095, by DOE under grant DE-FG52-06NA27490, and by ONR under grant N00014-09-1-1051. The authors are with the Division of Systems Engineering and Center for Information and Systems Engineering, Boston University, Boston, MA; e-mail: {cgc,xcding,mmxclin}@bu.edu.

I. INTRODUCTION

Enabled by recent technological advances, the deployment of autonomous agents that can cooperatively perform complex tasks is rapidly becoming a reality. In particular, there has been considerable progress reported in the literature on sensor networks that can carry out coverage control [6], [13], [17], surveillance [10], [11] and environmental sampling [15], [19] missions. In this paper, we are interested in generating optimal control strategies for persistent monitoring tasks; these arise when agents must monitor a dynamically changing environment which cannot be fully covered by a stationary team of available agents. Persistent monitoring differs from traditional coverage tasks due to the perpetual need to cover a changing environment, i.e., all areas of the mission space must be visited infinitely often. The main challenge in designing control strategies in this case is in balancing the presence of agents in the changing environment so that it is optimally covered over time while still satisfying sensing and motion constraints. Examples of persistent monitoring missions include surveillance in a museum to prevent unexpected events or thefts, unmanned vehicles for border patrol missions, and environmental applications where routine sampling of an area is involved.

In this paper, we address the persistent monitoring problem through an optimal control framework to drive agents so as to minimize a metric of uncertainty over the environment. In coverage control [6], [13], it is common to model knowledge of the environment as a non-negative density function defined over the mission space, usually assumed to be fixed over time. However, since persistent monitoring tasks involve dynamically changing environments, it is natural to extend it to a function of both space and time to model uncertainty in the environment. We assume that uncertainty at a point grows in time if it is not covered by any agent sensors; for simplicity, we assume this growth is linear. To model sensor coverage, we define a probability of detecting events at each point of the mission space by agent sensors. Thus, the uncertainty of the environment decreases (for simplicity, linearly) with a rate proportional to the event detection probability, i.e., the higher the sensing effectiveness is, the faster the uncertainty is reduced.
While it is desirable to track the value of uncertainty over all points in the environment, this is generally infeasible due to computational complexity and memory constraints. Motivated by polling models in queueing theory, e.g., spatial queueing [1], [5], and by stochastic flow models [21], we assign sampling points of the environment to be monitored persistently (equivalently, we partition the environment into a discrete set of regions). We associate to these points “uncertainty queues” which are visited by one or more servers. The growth in uncertainty at a sampling point can then be viewed as a flow into a queue, and the reduction in uncertainty (when covered by an agent) can be viewed as the queue being visited by mobile servers as in a polling system. Moreover, the service flow rates depend on the distance of the sampling point to nearby agents. From this point of view, we aim to control the movement of the servers (agents) so that the total accumulated “uncertainty queue” content is minimized.

Control and motion planning for agents performing persistent monitoring tasks have been studied in the literature. In [17] the focus is on sweep coverage problems, where agents are controlled to sweep an area. In [14], [20] a similar metric of uncertainty is used to model knowledge of a dynamic environment. In [14], the sampling points in a 1-D environment are denoted as cells, and the optimal control policy for a two-cell problem is given; problems with more than two cells are addressed by a heuristic policy. In [20], the authors propose a stabilizing speed controller for a single agent so that the accumulated uncertainty over a set of points along a given path in the environment is bounded, as well as an optimal controller that minimizes the maximum steady-state uncertainty over points of interest, assuming that the agent travels along a closed path and does not change direction. The persistent monitoring problem is also related to robot patrol problems, where a team of robots is required to visit points in the workspace with frequency constraints [8], [9], [12].

Our ultimate goal is to optimally control a team of cooperating agents in a 2 or 3-D environment. The contribution of this paper is to take a first step toward this goal by formulating and solving an optimal control problem for one agent moving in a 1-D mission space in which we minimize the accumulated uncertainty over a given time horizon and over an arbitrary number of sampling points. Even in this simple case, determining a complete explicit solution is computationally hard. However, we show that the optimal trajectory of the agent is to oscillate in the mission space: move at full speed, then switch direction before reaching either end point. Thus, we show that the solution is reduced to a parametric optimization problem over the switching points for such a trajectory. We then use generalized Infinitesimal Perturbation Analysis (IPA) [4], [22] to determine these optimal switching locations, which fully characterize the optimal control for the agent. This establishes the basis for extending this approach, first to multiple agents and then to a 2-dimensional mission space. It also provides insights that motivate the use of a receding horizon approach for bypassing the computational complexity limiting real-time control actions. These next steps are the subject of ongoing research.

The rest of the paper is organized as follows. Section II formulates the optimal control problem.
Section III characterizes the solution of the optimal control problem in terms of switching points in the mission space, and includes IPA in conjunction with a gradient-based algorithm to compute the sequence of optimal switching locations. Section IV provides some numerical results. Section V discusses extensions of this result to a receding horizon framework and to multiple agents. Section VI concludes the paper.

II. PERSISTENT MONITORING PROBLEM FORMULATION

We consider a mobile agent in a 1-dimensional mission space of length L. Let the position of the agent be s(t) ∈ [0,L] with dynamics:

    ṡ(t) = u(t),  s(0) = 0    (1)

i.e., we assume that the agent can control its direction and speed. We assume that the speed is constrained by |u(t)| ≤ 1. We associate with every point x ∈ [0,L] a function p(x,s) at state s(t) that captures the probability of detecting an event at this point. We assume that p(x,s) = 1 if x = s, and that p(x,s) decays as the distance between x and s (i.e., |x − s|) increases. Assuming a finite sensing range r, we set p(x,s) = 0 when |x − s| > r. In this paper, we use the linear decay model below as our event detection probability function:

    p(x,s) = 1 − |x − s|/r  if |x − s| ≤ r,
    p(x,s) = 0              if |x − s| > r.    (2)

We consider a set of points {αi}, i = 1,...,M, αi ∈ [0,L], and associate a time-varying measure of uncertainty with each point αi, which we denote by Ri(t). Without loss of generality, we assume 0 ≤ α1 ≤ ··· ≤ αM ≤ L and, to simplify notation, we set pi(s(t)) ≡ p(αi,s(t)). This set may be selected to contain points of interest in the environment, or sampled points from the mission space. Alternatively, we may consider a partition of [0,L] into M intervals whose center points are αi = (2i − 1)L/2M, i = 1,...,M. We can then set p(x,s) = pi(s) for all x ∈ [αi − L/2M, αi + L/2M]. The uncertainty functions Ri(t) are defined to have the following properties: (i) Ri(t) increases at a fixed rate (dependent on αi) if pi(s(t)) = 0, (ii) Ri(t) decreases at a fixed rate if pi(s(t)) = 1, and (iii) Ri(t) ≥ 0 for all t. It is then natural to model uncertainty so that its decrease is proportional to the probability of detection. In particular, we model the dynamics of Ri(t), i = 1,...,M, as follows:

    Ṙi(t) = 0                  if Ri(t) = 0 and Ai < Bpi(s(t)),
    Ṙi(t) = Ai − Bpi(s(t))    otherwise,    (3)

where we assume that initial conditions Ri(0), i = 1,...,M, are given and that B > Ai > 0 for all i (thus, the uncertainty strictly decreases when s(t) = αi).

Viewing persistent monitoring as a polling system, each point αi (equivalently, the ith interval in [0,L]) is associated with a “virtual queue” where uncertainty accumulates with inflow rate Ai. The service rate of this queue is time-varying and given by Bpi(s(t)), controllable through the agent position at time t, as shown in Fig. 1. This interpretation is convenient for characterizing the stability of this system: for each queue, we may require that Ai < (1/T) ∫₀ᵀ Bpi(s(t)) dt. Alternatively, we may require that each queue becomes empty at least once over [0,T]. We may also impose conditions such as Ri(T) ≤ Rmax for each queue as additional constraints for our problem so as to provide bounded uncertainty guarantees, although we will not do so in this paper. Note that this analogy readily extends to multi-agent and 2 or 3-D settings. Also note that B can be made location dependent without affecting the analysis in this paper.

Fig. 1. A queueing system analog of the persistent monitoring problem.
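To fix ideas, the detection model (2) and the clipped uncertainty dynamics (3) are straightforward to simulate. The following minimal Python sketch is our own illustration (the function names and the forward-Euler discretization are our choices, not from the paper):

```python
def p(x, s, r):
    """Event detection probability (2): linear decay within sensing range r."""
    return max(0.0, 1.0 - abs(x - s) / r)

def R_step(R_i, A_i, B, p_i, dt):
    """One forward-Euler step of the uncertainty dynamics (3) for point alpha_i.
    Clipping at zero enforces R_i(t) >= 0, matching the switched dynamics."""
    if R_i <= 0.0 and A_i < B * p_i:
        return 0.0                 # an empty "virtual queue" with net outflow stays empty
    return max(0.0, R_i + (A_i - B * p_i) * dt)
```

For instance, with B > Ai, holding s(t) = αi long enough drives Ri to zero and keeps it there; this is exactly the boundary arc discussed in Section III.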
The goal of the optimal persistent monitoring problem we consider is to control the mobile agent direction and speed u(t) so that the cumulative uncertainty over all sensor points {αi}, i = 1,...,M, is minimized over a fixed time horizon T. Thus, we aim to solve the following optimal control problem:

Problem P1:

    min_{u(t)} J = (1/T) ∫₀ᵀ Σᵢ₌₁ᴹ Ri(t) dt    (4)

subject to the agent dynamics (1), uncertainty dynamics (3), state constraint 0 ≤ s(t) ≤ L, t ∈ [0,T], and control constraint |u(t)| ≤ 1, t ∈ [0,T].

III. OPTIMAL CONTROL SOLUTION

In this section we first characterize the optimal control solution of Problem P1 and show that it is reduced to a parametric optimization problem. This allows us to utilize the IPA method [4] to find a complete optimal solution.

A. Hamiltonian analysis

We define the state vector x(t) = [s(t), R1(t),...,RM(t)]ᵀ and the associated costate vector λ(t) = [λs(t), λ1(t),...,λM(t)]ᵀ. In view of the discontinuity in the dynamics of Ri(t) in (3), the optimal state trajectory may contain a boundary arc when Ri(t) = 0 for some i; otherwise, the state evolves in an interior arc. We first analyze the system operating in such an interior arc. Due to (1) and (3), the Hamiltonian is:

    H(x,λ,u) = Σᵢ₌₁ᴹ Ri(t) + λs(t)u(t) + Σᵢ₌₁ᴹ λi(t)(Ai − Bpi(s))    (5)

and the costate equations λ̇ = −∂H/∂x are:

    λ̇s(t) = −∂H/∂s = B Σᵢ₌₁ᴹ λi(t) ∂pi(s)/∂s = −(B/r) Σ_{i∈F−(t)} λi(t) + (B/r) Σ_{i∈F+(t)} λi(t)
    λ̇i(t) = −∂H/∂Ri = −1,  i = 1,...,M,    (6)

where we have used (2), and the sets F−(t) and F+(t) are defined as:

    F−(t) = {i : s(t) − r ≤ αi ≤ s(t)},  F+(t) = {i : s(t) < αi ≤ s(t) + r},

so that they identify all points αi within the agent’s sensing range. Since we impose no terminal state constraints, the boundary conditions are λs(T) = 0 and λi(T) = 0, i = 1,...,M. Applying the Pontryagin minimum principle to (5) with u⋆(t), t ∈ [0,T), denoting an optimal control, we have

    H(x⋆,λ⋆,u⋆) = min_{u∈[−1,1]} H(x,λ,u)

and it follows that an optimal control must satisfy:

    u⋆(t) = 1 if λs(t) < 0,  u⋆(t) = −1 if λs(t) > 0.    (7)

This condition excludes the case where λs(t) = 0 over some finite “singular intervals” [2]; it turns out this can arise only in some pathological cases which we shall not discuss in this paper. The implication of (6) with λi(T) = 0 is that λi(t) = T − t for all t ∈ [0,T] and all i = 1,...,M, i.e., λi(t) is monotonically decreasing starting with λi(0) = T. However, this is only true if the entire optimal trajectory is an interior arc, i.e., the state constraints remain inactive. On the other hand, looking at (6), observe that when the two end points, 0 and L, are not within the sensing range of the agent, we have |F−(t)| = |F+(t)|, since the number of indices i satisfying s(t) − r ≤ αi ≤ s(t) is the same as the number satisfying s(t) < αi ≤ s(t) + r. Consequently, λ̇s(t) = 0, i.e., λs(t) remains constant as long as this condition is satisfied and, in addition, none of the state constraints Ri(t) ≥ 0, i = 1,...,M, is active. Thus, as long as the optimal trajectory is an interior arc and λs(t) < 0, the agent moves at maximal speed u⋆(t) = 1 in the positive direction towards the point s = L. If λs(t) switches sign before any of the state constraints Ri(t) ≥ 0, i = 1,...,M, becomes active or the agent reaches the end point s = L, then u⋆(t) = −1 and the agent reverses its direction or, possibly, comes to rest.
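As a concrete reading of (6): on an interior arc every λi(t) equals T − t, so λ̇s(t) is proportional to the difference between the number of sampling points just ahead of and just behind the agent. A small sketch (our own; the function name and vectorized form are not from the paper) makes this explicit:

```python
import numpy as np

def lambda_s_dot(s, t, alpha, B, r, T):
    """Costate rate (6) on an interior arc, where lambda_i(t) = T - t for all i."""
    F_minus = (alpha >= s - r) & (alpha <= s)    # points within range, behind or at s
    F_plus  = (alpha > s) & (alpha <= s + r)     # points within range, ahead of s
    lam = T - t                                  # common value of lambda_i on an interior arc
    return (B / r) * lam * (F_plus.sum() - F_minus.sum())
```

When both end points are out of range and the αi are evenly spaced, the two counts coincide and the rate vanishes, which is the observation used above to conclude that λs(t) is piecewise constant.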
In what follows, we examine the effect of the state constraints and establish the fact that the complete solution of this problem boils down to determining a set of switching locations over (0,L), with the end points being infeasible on an optimal trajectory.

The dynamics in (3) indicate a discontinuity arising when the condition Ri(t) = 0 is satisfied while Ṙi(t) = Ai − Bpi(s(t)) < 0 for some i = 1,...,M. Thus, Ri = 0 defines an interior boundary condition which is not an explicit function of time. Following standard optimal control analysis [2], if this condition is satisfied at time t for some k ∈ {1,...,M},

    H(x(t−),λ(t−),u(t−)) = H(x(t+),λ(t+),u(t+))    (8)

where we note that one can choose to make the Hamiltonian continuous at the entry point of a boundary arc or at the exit point. Using (5) and (3), (8) implies:

    λ⋆s(t−)u⋆(t−) + λ⋆k(t−)(Ak − Bpk(s(t−))) = λ⋆s(t+)u⋆(t+)    (9)

In addition, λ⋆s(t−) = λ⋆s(t+) and λ⋆i(t−) = λ⋆i(t+) for all i ≠ k, but λ⋆k(t) may experience a discontinuity so that:

    λ⋆k(t−) = λ⋆k(t+) − πk    (10)

where πk ≥ 0. Recalling (7), since λ⋆s(t) remains unaffected, so does the optimal control, i.e., u⋆(t−) = u⋆(t+). Moreover, since this is an entry point of a boundary arc, it follows from (3) that Ṙk(t−) = Ak − Bpk(s(t−)) < 0. Therefore, (9) and (10) imply that λ⋆k(t−) = 0 and λ⋆k(t+) = πk ≥ 0.

The actual evaluation of the costate vector over the interval [0,T] requires solving (6), which in turn involves the determination of all points where the state variables Ri(t) reach their minimum feasible values Ri(t) = 0, i = 1,...,M. This generally involves the solution of a two-point-boundary-value problem. However, our analysis thus far has already established the structure of the optimal control (7), which we have seen to remain unaffected by the presence of boundary arcs where Ri(t) = 0 for one or more i = 1,...,M. Let us now turn our attention to the constraints s(t) ≥ 0 and s(t) ≤ L. The following proposition asserts that neither of these can become active on an optimal trajectory.

Proposition 1: On an optimal trajectory, s⋆(t) ≠ 0 and s⋆(t) ≠ L for all t ∈ (0,T].

Proof: Suppose s(t) ≥ 0 becomes active at some t ∈ (0,T). In this case, λi(t−) = λi(t+) for all i = 1,...,M, but λs(t) may experience a discontinuity so that λs(t−) = λs(t+) − π0, where π0 ≥ 0 is a scalar constant. Since the constraint s = 0 is not an explicit function of time, (8) holds and, using (5), we get

    λ⋆s(t−)u⋆(t−) = λ⋆s(t+)u⋆(t+)    (11)

Clearly, as the agent approaches s = 0 at time t, we must have ṡ⋆(t−) = u⋆(t−) < 0 and, from (7), λ⋆s(t−) > 0. It follows that λ⋆s(t+) = λ⋆s(t−) + π0 > 0. On the other hand, u⋆(t+) ≥ 0, since the agent must either come to rest or reverse its motion at s = 0, hence λ⋆s(t+)u⋆(t+) ≥ 0. From (11), this contradicts the fact that λ⋆s(t−)u⋆(t−) < 0 and we conclude that s⋆(t) = 0 cannot occur. By the exact same argument, s⋆(t) = L cannot occur either.

Based on this analysis, the optimal control in (7) depends entirely on the points where λs(t) switches sign and, in light of Prop. 1, the solution of the problem reduces to the determination of a parameter vector θ = [θ1,...,θN]ᵀ, where θj ∈ (0,L) denotes the jth location where the optimal control changes sign. Note that N is generally not known a priori and depends on the time horizon T. Since s(0) = 0, from Prop. 1 we have u⋆(0) = 1; thus θ1 corresponds to the optimal control switching from 1 to −1.
Furthermore, θj for j odd always corresponds to u⋆(t) switching from 1 to −1, and vice versa if j is even. Thus, we have the following constraints on the switching locations for all j = 2,...,N:

    θj ≤ θj−1 if j is even,  θj ≥ θj−1 if j is odd.    (12)

It is now clear that the behavior of the agent under the optimal control policy (7) is that of a hybrid system whose dynamics undergo switches when u⋆(t) changes between 1 and −1 or when Ri(t) reaches or leaves the boundary value Ri = 0. As a result, we are faced with a parametric optimization problem for a system with hybrid dynamics. This is a setting where one can apply the generalized theory of Infinitesimal Perturbation Analysis (IPA) in [4], [22] to obtain the gradient of the objective function J in (4) with respect to the vector θ and, therefore, determine an optimal vector θ⋆ through a gradient-based optimization approach.

Remark 1: If the agent dynamics are replaced by a model such as ṡ(t) = g(s) + bu(t), observe that (7) still holds, as does Prop. 1. The only difference lies in (6), which would involve a dependence on dg(s)/ds and further complicate the associated two-point-boundary-value problem. However, since the optimal solution is still defined by a parameter vector θ = [θ1,...,θN]ᵀ, we can apply the IPA approach presented in the next section.

B. Infinitesimal Perturbation Analysis (IPA)

Our analysis has shown that, on an optimal trajectory, the agent always moves at full speed and never reaches either boundary point, i.e., 0 < s⋆(t) < L (excluding the pathological cases mentioned earlier). Thus, the agent’s movement can be parametrized through θ = [θ1,...,θN]ᵀ, where θj is the jth control switching point, and the solution of Problem P1 reduces to the determination of an optimal parameter vector θ⋆. As we pointed out, the agent’s behavior on an optimal trajectory defines a hybrid system, and the switching locations translate to switching times between particular modes of this system. Hence, this is similar to switching-time optimization problems, e.g., [7], [18], [23], except that we can only control a subset of the mode switching times.

To describe an IPA treatment of the problem, we first present the hybrid automaton model corresponding to the system operating on an optimal trajectory.

Hybrid automaton model. We use a standard definition of a hybrid automaton (e.g., see [3]) as the formalism to model such a system. Thus, let q ∈ Q (a countable set) denote the discrete state (or mode) and x ∈ X ⊆ Rⁿ denote the continuous state. Let υ ∈ ϒ (a countable set) denote a discrete control input and u ∈ U ⊆ Rᵐ a continuous control input. Similarly, let δ ∈ ∆ (a countable set) denote a discrete disturbance input and d ∈ D ⊆ Rᵖ a continuous disturbance input. The state evolution is determined by means of (i) a vector field f : Q × X × U × D → X, (ii) an invariant (or domain) set Inv : Q × ϒ × ∆ → 2^X, (iii) a guard set Guard : Q × Q × ϒ × ∆ → 2^X, and (iv) a reset function r : Q × Q × X × ϒ × ∆ → X. The system remains at a discrete state q as long as the continuous (time-driven) state x does not leave the set Inv(q,υ,δ). If x reaches a set Guard(q,q′,υ,δ) for some q′ ∈ Q, a discrete transition can take place. If this transition does take place, the state instantaneously resets to (q′,x′), where x′ is determined by the reset map r(q,q′,x,υ,δ). Changes in υ and δ are discrete events that either enable a transition from q to q′ by making sure x ∈ Guard(q,q′,υ,δ) or force a transition out of q by making sure x ∉ Inv(q,υ,δ).
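Before specializing the automaton to our problem, it is useful to see the parametric problem in executable form: for a given θ satisfying (12), the closed-loop system can be simulated and the cost (4) evaluated. The sketch below is our own illustration (the discretization step and all names are our choices, not from the paper):

```python
import numpy as np

def simulate_cost(theta, alpha, A, B, r, T, L, R0, dt=1e-3):
    """Simulate the bang-bang trajectory defined by the switching vector theta
    (theta[0] is the first turning point, reached with u = +1 since s(0) = 0)
    and return the cost J from (4) by forward-Euler integration."""
    s, u, j = 0.0, 1.0, 0
    R = np.array(R0, dtype=float)
    J = 0.0
    for _ in range(int(T / dt)):
        # Guard condition s = theta_j: flip the control sign (a switching event).
        if j < len(theta) and ((u > 0 and s >= theta[j]) or (u < 0 and s <= theta[j])):
            u, j = -u, j + 1
        s = min(max(s + u * dt, 0.0), L)
        p = np.maximum(0.0, 1.0 - np.abs(alpha - s) / r)   # detection probability (2)
        R = np.maximum(0.0, R + (A - B * p) * dt)          # uncertainty dynamics (3)
        J += R.sum() * dt
    return J / T
```

With this in hand, ∇J(θ) could be approximated by finite differences, but each perturbation requires a full re-simulation; the point of IPA below is to obtain the exact gradient from a single trajectory.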
We will classify all events that cause discrete state transitions in a manner that suits the purposes of IPA. Since our problem is set in a deterministic framework, δ and d will not be used. We show in Fig. 2 a partial hybrid automaton model of the system: due to the size of the overall model, Fig. 2 is limited to the behavior of the agent with respect to a single αi, i ∈ {1,...,M}. The model consists of 14 discrete states (modes) and is symmetric in the sense that states 1–7 correspond to the agent operating with u(t) = 1, and states 8–14 correspond to the agent operating with u(t) = −1. The events that cause state transitions can be placed in three categories: (i) The value of Ri(t) becomes 0 and triggers a switch in the dynamics of (3). This can only happen when Ri(t) > 0 and Ṙi(t) = Ai − Bpi(s(t)) < 0 (e.g., in states 3 and 4), causing a transition to state 7, in which the invariant condition is Ri(t) = 0. (ii) The agent reaches a switching location, indicated by the guard condition s(t) = θj for any j = 1,...,N. In these cases, a transition results from a state q to q + 7 if q = 1,...,6, and to q − 7 otherwise. (iii) The agent position reaches one of several critical values that affect the dynamics of Ri(t) while Ri(t) > 0. When s(t) = αi − r, the value of pi(s(t)) becomes strictly positive and Ṙi(t) = Ai − Bpi(s(t)) > 0, as in the transition 1 → 2. Subsequently, when s(t) = αi − r(1 − Ai/B), as in the transition 2 → 3, the value of pi(s(t)) becomes sufficiently large to cause Ṙi(t) = Ai − Bpi(s(t)) < 0, so that a transition due to Ri(t) = 0 becomes feasible at this state. Similar transitions occur when s(t) = αi, s(t) = αi + r(1 − Ai/B), and s(t) = αi + r. The latter results in state 6, where Ṙi(t) = Ai > 0 and the only feasible event is s(t) = θj, j odd, when a switch must occur and a transition to state 13 takes place (similarly for state 8).

Fig. 2. Hybrid automaton for each αi. Red arrows represent events when the control switches between 1 and −1. Blue arrows represent events when Ri becomes 0. Black arrows represent all other events.

IPA review. Before proceeding, we provide a brief review of the IPA framework for general stochastic hybrid systems as presented in [4]. In our case, the system is deterministic, offering several simplifications. The purpose of IPA is to study the behavior of a hybrid system state as a function of a parameter vector θ ∈ Θ for a given compact, convex set Θ ⊂ Rˡ.
Let {τk(θ)}, k = 1,...,K, denote the occurrence times of all events in the state trajectory. For convenience, we set τ0 = 0 and τK+1 = T. Over an interval [τk(θ), τk+1(θ)), the system is at some mode during which the time-driven state satisfies ẋ = fk(x,θ,t). An event at τk is classified as (i) Exogenous if it causes a discrete state transition independent of θ, in which case dτk/dθ = 0; (ii) Endogenous if there exists a continuously differentiable function gk : Rⁿ × Θ → R such that τk = min{t > τk−1 : gk(x(θ,t),θ) = 0}; and (iii) Induced if it is triggered by the occurrence of another event at time τm ≤ τk. Since the system considered in this paper does not include induced events, we will limit ourselves to the first two event types.

IPA specifies how changes in θ influence the state x(θ,t) and the event times τk(θ) and, ultimately, how they influence performance metrics of interest, which are generally expressed in terms of these variables. Given θ = [θ1,...,θN]ᵀ, we use the following notation for Jacobian matrices:

    x′(t) ≡ ∂x(θ,t)/∂θ,  τ′k ≡ ∂τk(θ)/∂θ,  k = 1,...,K,

for all state and event time derivatives. It is shown in [4] that x′(t) satisfies:

    (d/dt) x′(t) = (∂fk(t)/∂x) x′(t) + ∂fk(t)/∂θ    (13)

for t ∈ [τk, τk+1), with boundary condition:

    x′(τk⁺) = x′(τk⁻) + [fk−1(τk⁻) − fk(τk⁺)] τ′k    (14)

for k = 0,...,K. In addition, in (14), the gradient vector for each τk is τ′k = 0 if the event at τk is exogenous, and

    τ′k = −[(∂gk/∂x) fk(τk⁻)]⁻¹ (∂gk/∂θ + (∂gk/∂x) x′(τk⁻))    (15)

if the event at τk is endogenous, defined as long as (∂gk/∂x) fk(τk⁻) ≠ 0.

IPA equations. To clarify the presentation, we first note that i = 1,...,M is used to index the points where uncertainty is measured; j = 1,...,N indexes the components of the parameter vector; and k = 1,...,K indexes event times. In order to apply the three fundamental IPA equations (13)–(15) to our system, we use the state vector x(t) = [s(t), R1(t),...,RM(t)]ᵀ and parameter vector θ = [θ1,...,θN]ᵀ. We then identify all events that can occur in Fig. 2 and consider intervals [τk(θ), τk+1(θ)) over which the system is in one of the 14 states shown, for each i = 1,...,M. Applying (13) to s(t) with fk(t) = 1 or −1 due to (1) and (7), the solution yields the gradient vector ∇s(t) = [∂s/∂θ1(t),...,∂s/∂θN(t)]ᵀ, where

    ∂s/∂θj(t) = ∂s/∂θj(τk⁺),  for t ∈ [τk, τk+1)    (16)

for all k = 1,...,K, i.e., for all states q(t) ∈ {1,...,14}. Similarly, let ∇Ri(t) = [∂Ri/∂θ1(t),...,∂Ri/∂θN(t)]ᵀ for i = 1,...,M. We note from (3) that fk(t) = 0 for states q(t) ∈ Q1 ≡ {7,14}; fk(t) = Ai for states q(t) ∈ Q2 ≡ {1,6,8,13}; and fk(t) = Ai − Bpi(s(t)) for all other states, which we further classify into Q3 ≡ {2,3,11,12} and Q4 ≡ {4,5,9,10}. Thus, solving (13) and using (16) gives:

    ∇Ri(t) = ∇Ri(τk⁺) − { 0 if q(t) ∈ Q1 ∪ Q2;  B (∂pi(s)/∂s) ∇s(τk⁺)·(t − τk) otherwise },

where ∂pi(s)/∂s = ±1/r, as evaluated from (2) depending on the sign of αi − s(t) at each automaton state.

We now turn our attention to the determination of ∇s(τk⁺) and ∇Ri(τk⁺) from (14), which involves the event time gradient vectors ∇τk = [∂τk/∂θ1,...,∂τk/∂θN]ᵀ for k = 1,...,K. Looking at Fig. 2, there are three readily distinguishable cases regarding the events that cause state transitions:

Case 1: An event at time τk which is neither Ri = 0 nor s = θj for any j = 1,...,N.
In this case, it is easy to see that the dynamics of both s(t) and Ri(t) are continuous, so that fk−1(τk⁻) = fk(τk⁺) in (14) applied to s(t) and Ri(t), i = 1,...,M, and we get

    ∇s(τk⁺) = ∇s(τk⁻),  ∇Ri(τk⁺) = ∇Ri(τk⁻),  i = 1,...,M    (17)

Case 2: An event Ri = 0 at time τk. This corresponds to transitions 3 → 7, 4 → 7, 10 → 14 and 11 → 14 in Fig. 2, where the dynamics of s(t) are still continuous, but the dynamics of Ri(t) switch from fk−1(τk⁻) = Ai − Bpi(s(τk⁻)) to fk(τk⁺) = 0. Thus, ∇s(τk⁻) = ∇s(τk⁺), but we need to evaluate τ′k to determine ∇Ri(τk⁺). Observing that this event is endogenous, (15) applies with gk = Ri = 0 and we get

    ∂τk/∂θj = −(∂Ri/∂θj(τk⁻)) / (Ai − Bpi(s(τk⁻))),  j = 1,...,N,  k = 1,...,K.

It follows from (14) that

    ∂Ri/∂θj(τk⁺) = ∂Ri/∂θj(τk⁻) − [Ai − Bpi(s(τk⁻))] · (∂Ri/∂θj(τk⁻)) / (Ai − Bpi(s(τk⁻))) = 0.

Thus, ∂Ri/∂θj(τk⁺) is always reset to 0, regardless of ∂Ri/∂θj(τk⁻).

Case 3: An event at time τk due to a control sign change at s = θj, j = 1,...,N. This corresponds to any transition between the upper and lower parts of the hybrid automaton in Fig. 2. In this case, the dynamics of Ri(t) are continuous and we have ∂Ri/∂θj(τk⁺) = ∂Ri/∂θj(τk⁻) for all i, j, k. On the other hand, we have ṡ(τk⁺) = u(τk⁺) = −u(τk⁻) = ±1. Observing that any such event is endogenous, (15) applies with gk = s − θj = 0 for some j = 1,...,N and we get

    ∂τk/∂θj = (1 − ∂s/∂θj(τk⁻)) / u(τk⁻)    (18)

Combining (18) with (14) and recalling that u(τk⁺) = −u(τk⁻), we have

    ∂s/∂θj(τk⁺) = ∂s/∂θj(τk⁻) + [u(τk⁻) − u(τk⁺)] · (1 − ∂s/∂θj(τk⁻)) / u(τk⁻) = 2,

where ∂s/∂θj(τk⁻) = 0 because ∂s/∂θj(0) = 0 = ∂s/∂θj(t) for all t ∈ [0,τk), since the position of the agent cannot be affected by θj prior to this event. Now, let us consider the effect of perturbations to θn for n < j, i.e., prior to the current event time τk. In this case, we have ∂gk/∂θn = 0 and (15) becomes

    ∂τk/∂θn = −(∂s/∂θn(τk⁻)) / u(τk⁻),

so that using this in (14) gives:

    ∂s/∂θn(τk⁺) = ∂s/∂θn(τk⁻) − [u(τk⁻) − u(τk⁺)] · (∂s/∂θn(τk⁻)) / u(τk⁻) = −∂s/∂θn(τk⁻).

Combining the above results, the components of ∇s(τk⁺), where τk is the event time when s(τk) = θj for some j, are given by

    ∂s/∂θn(τk⁺) = { −∂s/∂θn(τk⁻) if n = 1,...,j−1;  2 if n = j;  0 if n = j+1,...,N }.    (19)

It follows from (16) and the analysis of all three cases above that ∂s/∂θj(t) for all j is constant throughout an optimal trajectory except at transitions caused by control switching locations (Case 3). In particular, for the kth event, corresponding to s(τk) = θj, and t ∈ [τk,T]: if u(t) = 1, then ∂s/∂θj(t) = −2 if j is odd and ∂s/∂θj(t) = 2 if j is even; similarly, if u(t) = −1, then ∂s/∂θj(t) = 2 if j is odd and ∂s/∂θj(t) = −2 if j is even. In summary, we can write ∂s/∂θj(t) as

    ∂s/∂θj(t) = { (−1)ʲ · 2u(t) if t ≥ τk;  0 if t < τk },  j = 1,...,N    (20)

Finally, we can combine (20) with our results for ∂Ri/∂θj(t) in all three cases above. Letting s(τl) = θj, we obtain the following expression for ∂Ri/∂θj(t) for all k ≥ l, t ∈ [τk, τk+1):

    ∂Ri/∂θj(t) = ∂Ri/∂θj(τk⁺) + { 0 if q(t) ∈ Q1 ∪ Q2;
                                  (−1)ʲ⁺¹ (2B/r) u(τk⁺)·(t − τk) if q(t) ∈ Q3;
                                  −(−1)ʲ⁺¹ (2B/r) u(τk⁺)·(t − τk) if q(t) ∈ Q4 }    (21)

with boundary condition

    ∂Ri/∂θj(τk⁺) = { 0 if q(τk⁺) ∈ Q1;  ∂Ri/∂θj(τk⁻) otherwise }.    (22)
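In words, (20)–(22) say that ∂s/∂θj is piecewise constant, flipping sign with u(t) once θj has been visited, while ∂Ri/∂θj accumulates a slope of magnitude 2B/r inside the sensing range, is frozen outside it, and resets to zero whenever Ri hits the boundary Ri = 0. A minimal per-step propagation sketch (our own naming and discretization, not from the paper):

```python
def propagate_dR(dR_ij, j, u, mode, B, r, dt, theta_j_reached):
    """Advance dR_i/dtheta_j over a small step dt inside one mode, per (20)-(22).
    `mode` classifies q(t) into 'Q1' (boundary arc R_i = 0), 'Q2' (agent out of
    range), 'Q3' or 'Q4' (in range, distinguished by the sign of dp_i/ds).
    j is 1-indexed as in the paper."""
    if not theta_j_reached:      # theta_j not yet visited: ds/dtheta_j = 0, see (20)
        return dR_ij
    if mode == 'Q1':             # reset to 0 on the boundary arc and stay there, (22)
        return 0.0
    if mode == 'Q2':             # pi(s) = 0 here, so the derivative is frozen, (21)
        return dR_ij
    slope = (-1) ** (j + 1) * (2.0 * B / r) * u
    return dR_ij + (slope if mode == 'Q3' else -slope) * dt
```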
Objective Function Gradient Evaluation. Since we are ultimately interested in minimizing the objective function J(θ) (now a function of θ instead of u) in (4) with respect to θ, we first rewrite it as:

    J(θ) = (1/T) Σᵢ₌₁ᴹ Σₖ₌₀ᴷ ∫_{τk(θ)}^{τk+1(θ)} Ri(t,θ) dt

where we have explicitly indicated the dependence on θ. We then obtain:

    ∇J(θ) = (1/T) Σᵢ₌₁ᴹ Σₖ₌₀ᴷ [ ∫_{τk}^{τk+1} ∇Ri(t) dt + Ri(τk+1)∇τk+1 − Ri(τk)∇τk ]

Observing the cancellation of all terms of the form Ri(τk)∇τk for all k, we finally get

    ∇J(θ) = (1/T) Σᵢ₌₁ᴹ Σₖ₌₀ᴷ ∫_{τk}^{τk+1} ∇Ri(t) dt.    (23)

The evaluation of ∇J(θ) therefore depends entirely on ∇Ri(t), which is obtained from (21)–(22), and the event times τk, k = 1,...,K, given initial conditions s(0) = 0, Ri(0) for i = 1,...,M, and ∇Ri(0) = 0.

Objective Function Optimization. We now seek to obtain θ⋆ minimizing J(θ) through a standard gradient-based optimization scheme of the form

    θ^(l+1) = θ^l − ηl ∇̃J(θ^l)    (24)

where {ηl} is an appropriate step size sequence and ∇̃J(θ) is the projection of the gradient ∇J(θ) onto the feasible set (the set of θ satisfying the constraint (12)). The optimization scheme terminates when |∇̃J(θ)| < ε (for a fixed threshold ε) for some θ. Our IPA-based algorithm to obtain θ⋆ minimizing J(θ) is summarized in Alg. 1, where we have adopted the Armijo step-size rule (see [16]) for {ηl}.

Algorithm 1: IPA-based optimization algorithm to find θ⋆
1: Set N = ⌊T/L⌋ (⌊·⌋ is the floor function), and set θ = [θ1,...,θN]ᵀ satisfying constraint (12)
2: repeat
3:   Compute s(t), t ∈ [0,T], using θ
4:   Compute ∇̃J(θ) and update θ through (24)
5: until |∇̃J(θ)| < ε
6: if θ satisfies Prop. 1 then
7:   Stop; return θ as θ⋆
8: else
9:   Set N + 1 → N and set θN = s(T)
10:  Go to Step 2
11: end if

Recalling that the dimension N of θ⋆ is unknown (it depends on T), a distinctive feature of Alg. 1 is that we vary N by possibly increasing it after a vector θ locally minimizing J is obtained, if that vector does not satisfy the necessary optimality condition in Prop. 1. We start the search with N = ⌊T/L⌋, the minimal N for which θ can satisfy Prop. 1, and only need to increase N if the locally optimal θ vector violates Prop. 1. It is possible to increase N further after Alg. 1 stops and obtain a locally optimal θ vector with a lower cost; this is due to the possible non-convexity of the problem in terms of θ and N. In practice, this computation can take place in the background while the agent is in operation. Alternatively, we can adopt a receding horizon formulation to compute the optimal control on-line. This approach is explained in more detail in Sec. V.
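To summarize the optimization step, here is a minimal sketch of the update (24) with Armijo backtracking and a feasibility pass for (12). This is our own illustration of the scheme, not the authors' implementation; `cost` and `gradient` stand for the trajectory simulation and the IPA computation (23), and the clipping pass is a simple feasibility heuristic rather than the exact Euclidean projection:

```python
import numpy as np

def make_feasible(theta):
    """Enforce (12): theta_j <= theta_{j-1} for even j, theta_j >= theta_{j-1}
    for odd j (1-indexed). A clipping pass that returns a feasible point."""
    theta = theta.copy()
    for j in range(1, len(theta)):          # loop index is 0-based; j+1 is the paper's index
        if (j + 1) % 2 == 0:
            theta[j] = min(theta[j], theta[j - 1])
        else:
            theta[j] = max(theta[j], theta[j - 1])
    return theta

def optimize(theta, cost, gradient, eps=1e-6, beta=0.5, sigma=1e-4, max_iter=100):
    """Projected gradient descent (24) with Armijo backtracking step sizes."""
    for _ in range(max_iter):
        g = gradient(theta)
        if np.linalg.norm(g) < eps:
            break
        eta = 1.0
        while eta > 1e-12 and cost(make_feasible(theta - eta * g)) > cost(theta) - sigma * eta * g @ g:
            eta *= beta                     # backtrack until sufficient decrease
        theta = make_feasible(theta - eta * g)
    return theta
```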
IV. NUMERICAL RESULTS

In this section we present two numerical examples where we have used Alg. 1 to obtain an optimal persistent monitoring trajectory. The results are shown in Fig. 3.

Fig. 3. Numerical results. Top figures correspond to L = 20, T = 36, 21 sampling points in [0,L]. Bottom figures correspond to L = 100, T = 980, 101 sampling points in [0,L]. Left plots: optimal trajectories. Right plots: J versus iterations.

The top two figures correspond to an example with L = 20, M = 21, α1 = 0, αM = 20, and the remaining sampling points evenly spaced in between. Moreover, Ai = 0.01 for all i, B = 3, r = 4, Ri(0) = 2 for all i, and T = 36. We start the algorithm with θ = [12]ᵀ and ε = 2 × 10⁻¹⁰. The algorithm stopped after 13 iterations (about 9 sec) using Armijo step-sizes, and the cost J was decreased from 16.63 to J⋆ = 10.24 with θ⋆ = [17.81, 1.29]ᵀ, i.e., the dimension of θ increased by 1. In the top-left plot, the optimal trajectory s⋆(t) is shown; in the top-right plot, J is plotted against iterations. We also increased N to 3 with initial θ = [12, 16, 4]ᵀ; Alg. 1 converged to a local minimum with J = 13.27 > J⋆ = 10.24 obtained under N = 2.

The bottom two figures correspond to an example with L = 100, M = 101 and evenly spaced sampling points over [0,L], Ai = 0.01 for all i, B = 3, r = 4, Ri(0) = 2 for all i, and T = 980. We start the algorithm with N = 9, θ = [95, 95, 95, 95, 95, 5, 5, 5, 5]ᵀ, and the same ε. The algorithm stopped after 14 iterations (about 10 min, an indication of the rapid increase in computational complexity) using Armijo step-sizes, and J was decreased from 88.10 to J⋆ = 70.49 with θ⋆ = [98.03, 96.97, 96.65, 96.35, 95.70, 2.94, 3.21, 3.61, 4.08, 4.57]ᵀ, where N = 10. Note that the cost is much higher in this case due to the larger number of sampling points. Moreover, none of the optimal switching locations is at 0 or L, consistent with Prop. 1. We also increased N to 11 with θ = [90, 90, 90, 90, 90, 90, 10, 10, 10, 10, 10]ᵀ; Alg. 1 converged to J = 101.56 > J⋆ = 70.49 obtained under N = 10.

V. EXTENSIONS

In this section we briefly discuss extensions to a “myopic” Receding Horizon (RH) framework and to a setting with multiple agents. Our proposed uncertainty model can be directly used to solve the persistent monitoring problem with a RH approach by solving Problem P1 not for the time horizon T, but for a smaller time window H ≤ T, repeatedly every time interval h ≤ H. Because H is usually much smaller than T, and since the optimal control is shown to be “bang-bang” when not inside a singular arc, it can be assumed that the control is constant (denoted by u) during the horizon [t, t + H]. In this case, the problem of minimizing the cost function (4) over u ∈ [−1,1] is a scalar optimization problem and its solution can be obtained explicitly, given the initial conditions of s(t) and Ri(t). The RH controller operates as follows: at time t, the optimal control is computed for [t, t + H] and is used for the time interval [t, t + h]. This process is repeated every h units of time, until t = T. In our numerical examples, the cost obtained using the RH framework is very close to the optimal cost (consistently within 5%), and since an explicit solution is available, the control can be computed quickly and in real-time. The RH framework can also accommodate situations where events are triggered in real-time at some sampling points; in the virtual queue analogy, this means the inflow rates Ai of some queues are time-varying.

This approach also opens up future work for multiple agents in 2-D or 3-D mission spaces. In a multi-agent framework, we can use the same model for uncertainty, but with a joint event detection probability function p(x,s1,...,sn), where there are n agents. This joint probability can be expressed in terms of the individual detection probabilities p(x,si) as:

    p(x,s1,...,sn) = 1 − ∏ᵢ₌₁ⁿ (1 − p(x,si)).
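The joint detection model has a direct complementary-probability reading: an event at x goes undetected only if every agent misses it. A one-function sketch (ours, reusing the single-agent model (2)):

```python
def p_joint(x, agent_positions, r):
    """Joint detection probability for n agents: 1 minus the probability
    that all agents miss an event at x, with each p(x, s_i) from (2)."""
    miss = 1.0
    for s in agent_positions:
        miss *= 1.0 - max(0.0, 1.0 - abs(x - s) / r)
    return 1.0 - miss
```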
Although the optimal control problem can still be fully solved for multiple agents in the 1-D mission space, the problem quickly becomes intractable in higher dimensions. In this case, we aim to develop a unified receding horizon approach that integrates with our previous cooperative coverage control strategies [13].

VI. CONCLUSIONS

We have formulated a persistent monitoring problem in which we consider a dynamic environment where the uncertainty at selected points changes depending on the proximity of the agent. We obtained an optimal control solution that minimizes the accumulated uncertainty over the environment, in the case of a single agent and a 1-D mission space. The solution is characterized by a sequence of switching points, and we use an IPA-based gradient algorithm to compute it. We also discussed extensions of our approach using a receding horizon framework. Ongoing work aims at solving the problem with multiple agents and a richer dynamical model for each agent, as well as addressing the persistent monitoring problem in 2-D and 3-D mission spaces.

REFERENCES

[1] D.J. Bertsimas and G. Van Ryzin. Stochastic and dynamic vehicle routing in the Euclidean plane with multiple capacitated vehicles. Operations Research, pages 60–76, 1993.
[2] A.E. Bryson and Y.C. Ho. Applied Optimal Control. Wiley, New York, 1975.
[3] C.G. Cassandras and J. Lygeros, editors. Stochastic Hybrid Systems. CRC Press/Taylor & Francis, 2007.
[4] C.G. Cassandras, Y. Wardi, C.G. Panayiotou, and C. Yao. Perturbation analysis and optimization of stochastic hybrid systems. European Journal of Control, 16(6):642–664, 2010.
[5] R.B. Cooper. Introduction to Queueing Theory. Edward Arnold, 1981.
[6] J. Cortes, S. Martinez, T. Karatas, and F. Bullo. Coverage control for mobile sensing networks. IEEE Transactions on Robotics and Automation, 20(2):243–255, 2004.
[7] M. Egerstedt, Y. Wardi, and H. Axelsson. Transition-time optimization for switched-mode dynamical systems. IEEE Transactions on Automatic Control, 51(1):110–115, 2006.
[8] Y. Elmaliach, N. Agmon, and G.A. Kaminka. Multi-robot area patrol under frequency constraints. In 2007 IEEE International Conference on Robotics and Automation, pages 385–390. IEEE, 2007.
[9] Y. Elmaliach, A. Shiloni, and G.A. Kaminka. A realistic model of frequency-based multi-robot polyline patrolling. In Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems, Volume 1, pages 63–70. International Foundation for Autonomous Agents and Multiagent Systems, 2008.
[10] A.R. Girard, A.S. Howell, and J.K. Hedrick. Border patrol and surveillance missions using multiple unmanned air vehicles. In 43rd IEEE Conference on Decision and Control, volume 1, pages 620–625. IEEE, 2005.
[11] B. Grocholsky, J. Keller, V. Kumar, and G. Pappas. Cooperative air and ground surveillance. IEEE Robotics & Automation Magazine, 13(3):16–25, 2006.
[12] P.F. Hokayem, D. Stipanovic, and M.W. Spong. On persistent coverage control. In 46th IEEE Conference on Decision and Control, pages 6130–6135. IEEE, 2007.
[13] W. Li and C.G. Cassandras. A cooperative receding horizon controller for multivehicle uncertain environments. IEEE Transactions on Automatic Control, 51(2):242–257, 2006.
[14] N. Nigam and I. Kroo. Persistent surveillance using multiple unmanned air vehicles. In IEEE Aerospace Conference, pages 1–14. IEEE, 2008.
[15] D.A. Paley, F. Zhang, and N.E. Leonard. Cooperative control for ocean sampling: The glider coordinated control system.
IEEE Transactions on Control Systems Technology, 16(4):735–744, 2008.
[16] E. Polak. Optimization: Algorithms and Consistent Approximations. Springer-Verlag, 1997.
[17] I. Rekleitis, V. Lee-Shue, A.P. New, and H. Choset. Limited communication, multi-robot team based coverage. In 2004 IEEE International Conference on Robotics and Automation, volume 4, pages 3462–3468. IEEE, 2004.
[18] M.S. Shaikh and P.E. Caines. On the hybrid optimal control problem: Theory and algorithms. IEEE Transactions on Automatic Control, 52(9):1587–1603, 2007.
[19] R. N. Smith, M. Schwager, S. L. Smith, D. Rus, and G. S. Sukhatme. Persistent ocean monitoring with underwater gliders: Adapting sampling resolution. Journal of Field Robotics, 28(5):714–741, 2011.
[20] S. L. Smith, M. Schwager, and D. Rus. Persistent robotic tasks: Monitoring and sweeping in changing environments. IEEE Transactions on Robotics, July 2011. To appear.
[21] G. Sun, C.G. Cassandras, Y. Wardi, C.G. Panayiotou, and G.F. Riley. Perturbation analysis and optimization of stochastic flow networks. IEEE Transactions on Automatic Control, 49(12):2143–2159, 2004.
[22] Y. Wardi, R. Adams, and B. Melamed. A unified approach to infinitesimal perturbation analysis in stochastic flow models: the single-stage case. IEEE Transactions on Automatic Control, 55(1):89–103, 2009.
[23] X. Xu and P.J. Antsaklis. Optimal control of switched systems based on parameterization of the switching instants. IEEE Transactions on Automatic Control, 49(1):2–16, 2004.