Optimal Pruning for Multi-Step Sensor Scheduling

Marco F. Huber, Member, IEEE

Abstract—In the considered linear Gaussian sensor scheduling problem, only one sensor out of a set of sensors performs a measurement. To minimize the estimation error over multiple time steps in a computationally tractable fashion, the so-called information-based pruning algorithm is proposed. It utilizes the information matrices of the sensors and the monotonicity of the Riccati equation. This allows ordering sensors according to their information contribution and excluding many of them from scheduling. Additionally, a tight lower bound is calculated for branch-and-bound search, which further improves the pruning performance.

Index Terms—Kalman filtering, linear systems, sensor scheduling, sensor networks, stochastic optimal control

M. F. Huber is with the AGT Group (R&D) GmbH, Darmstadt, Germany. Email: marco.huber@ieee.org

I. INTRODUCTION

In sensor systems consisting of multiple sensors, e.g., in sensor networks, most information about an observed process is obtained if each sensor performs measurements at each time instant. But due to constrained communication bandwidth, simultaneously sending all acquired data may not be possible. Furthermore, to save energy, sensors should be switched off if their measurements are not required. To increase the operational lifetime of such sensor systems, the measurement rate should be as low as possible, which on the other hand increases the estimation error. Sensor scheduling constitutes a promising solution to this trade-off: only one sensor per time step is activated in order to minimize a function of the estimation error over a finite time horizon.

In this paper, the sensor scheduling problem for discrete-time linear Gaussian systems is studied. Determining the optimal sensor schedule, i.e., the sequence of active sensors over the considered time horizon that minimizes the estimation error, requires traversing a search tree as shown in [1]. The depth and branching factor of the tree are equal to the time horizon length and the number of sensors, respectively. Each path from the root to a leaf node of the tree corresponds to one possible sensor schedule. The tree search corresponds to solving a binary integer program and thus, sensor scheduling is NP-hard.

Especially for long time horizons and/or a large number of sensors, an exhaustive tree search can be avoided by employing pruning techniques. Existing pruning techniques can be classified into suboptimal and optimal methods. By employing optimal pruning [2], [3], [4], deleting the optimal sensor schedule is impossible, but many complete sensor schedules may still be evaluated. Drastic savings in computational demand arise from suboptimal pruning [5], [6], but ending up with the optimal schedule is no longer guaranteed. The suboptimal methods in [4], [7], [8] utilize convex relaxation and optimization instead of traversing the tree. In [9], [10], the sensor scheduling problem is formulated as a partially observable Markov decision process and solved approximately by means of discretizing the belief space.

In the following, an optimal pruning technique named information-based pruning (IBP) is introduced, aiming at pruning complete sub-trees as early as possible. To achieve this, two contributions are made: First, a partial ordering of sensors based on the so-called sensor information matrix (see Section III). This allows excluding sensors from the tree search merely on the basis of the measurement matrix and noise covariance matrix. Second, with the sensor information matrices of the remaining sensors a so-called bounding sensor is calculated, which provides a lower bound to the estimation error (see Section IV). Both contributions are combined in a branch-and-bound algorithm for efficiently traversing the tree for the optimal sensor schedule.
II. PROBLEM FORMULATION

The dynamics of the observed process are given by the linear model
$$x_{k+1} = \mathbf{A}_k \cdot x_k + w_k \ , \quad (1)$$
where $k = 0, 1, \ldots, N-1$ is the discrete time index and $N$ is the time horizon length. (Random variables are denoted by lowercase bold-face, vectors by underlined, and matrices by uppercase bold-face letters.) A finite set $\mathcal{S}$ of sensors is considered, where the measurement $z^i_k$ from sensor $i \in \mathcal{S} = \{1, \ldots, S\}$ is related to the latent system state $x_k \in \mathbb{R}^{n_x}$ via the linear measurement model
$$z^i_k = \mathbf{H}^i_k \cdot x_k + v^i_k \ .$$
Both $\mathbf{A}_k$ and $\mathbf{H}^i_k$ are time-variant matrices. The noise terms $w_k$ and $v^i_k$ are zero-mean white Gaussian with covariance matrices $\mathbf{C}^w_k$ and $\mathbf{C}^{v,i}_k$, respectively, with $\mathbf{C}^{v,i}_k$ being positive definite. A measurement value $\hat{z}^i_k$ of sensor $i \in \mathcal{S}$ is a realization of $z^i_k$. The initial system state $x_0 \sim \mathcal{N}(x_0; \hat{x}_0, \mathbf{C}^x_0)$ at time step $k = 0$ is known and assumed to be Gaussian with mean vector $\hat{x}_0$ and covariance matrix $\mathbf{C}^x_0$. A sensor schedule is indicated by $u = (u_0, u_1, \ldots, u_{N-1})$, where an element $u_k$ encodes the index of the sensor to be scheduled for measurement at time step $k$, i.e., if sensor $i$ is scheduled at time step $k$, then $u_k = i$.

For linear Gaussian systems, the Kalman filter is the optimal estimator of $x_k$ in the minimum mean-square sense. For a given sensor schedule $u$, the state covariance matrix $\mathbf{C}^x_k$ evolves according to the Riccati equation (partly in information form)
$$\mathbf{C}^x_{k+1}(u) = r_{u_k}\big(\mathbf{C}^x_k(u)\big) := \mathbf{C}^w_k + \mathbf{A}_k \Big( \big(\mathbf{C}^x_k(u)\big)^{-1} + \underbrace{\big(\mathbf{H}^{u_k}_k\big)^{\mathrm{T}} \big(\mathbf{C}^{v,u_k}_k\big)^{-1} \mathbf{H}^{u_k}_k}_{:=\,\mathbf{M}^{u_k}_k} \Big)^{-1} \mathbf{A}^{\mathrm{T}}_k \ ,$$
where the function $r_i(\mathbf{C}^x_k)$ propagates the state covariance $\mathbf{C}^x_k$ to the next time step given the measurement of sensor $i$. Further, $\mathbf{M}^i_k$ is the symmetric and positive semi-definite sensor information matrix. It subsumes the information contribution of sensor $i$ at time step $k$.
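To make the recursion concrete, the following minimal sketch (assuming NumPy; all function names are illustrative and not part of the paper) evaluates $r_i$ in the information form given above. The sensor information matrix is formed once from $\mathbf{H}^i_k$ and $\mathbf{C}^{v,i}_k$ and can then be reused for every node of the search tree at time step $k$.

```python
import numpy as np

def sensor_information_matrix(H, Cv):
    """Sensor information matrix M = H^T Cv^{-1} H."""
    return H.T @ np.linalg.solve(Cv, H)

def riccati_step(Cx, A, Cw, M):
    """One Riccati step in information form:
    r_i(Cx) = Cw + A (Cx^{-1} + M)^{-1} A^T."""
    return Cw + A @ np.linalg.inv(np.linalg.inv(Cx) + M) @ A.T
```

For numerical robustness one would typically replace the explicit inverses by Cholesky-based solves; the explicit form is kept here to mirror the equation.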
The aim of multi-step sensor scheduling is to determine the sensors for the time steps $k = 0, 1, \ldots, N-1$ such that the uncertainty of the state estimate, and thus the estimation error, is minimized. For this purpose, the optimal sensor schedule $u^* = (u^*_0, \ldots, u^*_{N-1})$ results from solving
$$u^* = \arg\min_u \{J(u)\} \quad (2)$$
with
$$J(u) = \sum_{k=1}^{N} g_k\big(\mathbf{W}_k \mathbf{C}^x_k(u) \mathbf{W}^{\mathrm{T}}_k\big) \ ,$$
where $\mathbf{W}_k$ is a positive semi-definite weighting matrix. By means of $\mathbf{W}_k$, an adaptation to specific scheduling objectives is possible. For instance, with $\mathbf{W}_k = \mathbf{I}$ for $k = 1, \ldots, N$ the focus is on minimizing the average estimation error, while for $\mathbf{W}_k = \mathbf{0}$ for $k = 1, \ldots, N-1$ and $\mathbf{W}_N = \mathbf{I}$ merely the terminal estimation error is minimized. The cost function $g_k$ can be the trace, determinant, or maximum eigenvalue of the weighted state covariance matrix. Thus, $g_k$ maps the weighted estimation error or uncertainty subsumed in $\mathbf{W}_k \mathbf{C}^x_k(u) \mathbf{W}^{\mathrm{T}}_k$, $k = 1, \ldots, N$, to a scalar value. $J(u)$ is the total cost of the schedule $u$.

[Fig. 1. Search tree for time horizon length N = 2 and S = 2 sensors with root node $\mathbf{C}^x_0$.]

Solving the optimization problem in (2) corresponds to finding the path with lowest cost from the root to a leaf node in a tree like the one depicted in Fig. 1. Obviously, enumerating all possible paths is computationally infeasible for many sensors and/or a long time horizon.

III. PARTIAL ORDERING OF SENSORS

The first step of the proposed IBP is based on the monotonicity property of the Riccati equation.

Theorem 1: For any $i \in \mathcal{S}$, positive semi-definite matrix $\mathbf{W}_k$, and covariance matrices $\mathbf{C}^x_k$, $\tilde{\mathbf{C}}^x_k$ with $\mathbf{C}^x_k \succeq \tilde{\mathbf{C}}^x_k$, i.e., $\mathbf{C}^x_k - \tilde{\mathbf{C}}^x_k$ is positive semi-definite, it holds:
(Monotonicity) $r_i(\mathbf{C}^x_k) \succeq r_i(\tilde{\mathbf{C}}^x_k)$ ,
(Cost ordering) $g_k(\mathbf{C}^x_k) \ge g_k(\tilde{\mathbf{C}}^x_k)$ ,
(Symmetric weighting) $\mathbf{W}_k \mathbf{C}^x_k \mathbf{W}^{\mathrm{T}}_k \succeq \mathbf{W}_k \tilde{\mathbf{C}}^x_k \mathbf{W}^{\mathrm{T}}_k$ .

For a proof of the monotonicity see Lemma 2 in [11]; for the cost ordering property see [12]. The monotonicity implies that the positive semi-definite ordering between covariance matrices is preserved under the Riccati equation. This ordering is also not affected by multiplying the covariance matrices with the positive semi-definite weighting matrices $\mathbf{W}_k$ in (2).

Furthermore, the ordering property automatically implies an ordering of the (scalar) cost functions. But pruning nodes merely on the basis of the scalar costs $g_k$, without considering the ordering of the covariance matrices, generally leads to suboptimal pruning. This follows from the fact that the cost ordering is only a necessary but not a sufficient condition for the positive semi-definite ordering between $\mathbf{C}^x_k$ and $\tilde{\mathbf{C}}^x_k$. Hence, at each node of the search tree, the ordering of covariance matrices needs to be determined explicitly for the next time step in order to ensure optimal pruning. On this account, the optimal pruning procedure proposed in [2] evaluates the Riccati equation for each sensor, which is computationally demanding for large state spaces or high-dimensional measurements. The proposed pruning technique instead allows determining the order of covariance matrices without evaluating the Riccati equation multiple times.

Theorem 2 (Order of Sensor Information Matrices): Given the covariance matrix $\mathbf{C}^x_k$ and the sensor information matrices $\mathbf{M}^i_k$ and $\mathbf{M}^j_k$ of two sensors $i, j \in \mathcal{S}$ such that
$$\mathbf{M}^i_k \succeq \mathbf{M}^j_k \ , \quad (3)$$
then $r_i(\mathbf{C}^x_k) \preceq r_j(\mathbf{C}^x_k)$.

PROOF. Adding $(\mathbf{C}^x_k)^{-1}$ to both sides of (3) and inverting both sides yields
$$\big((\mathbf{C}^x_k)^{-1} + \mathbf{M}^i_k\big)^{-1} \preceq \big((\mathbf{C}^x_k)^{-1} + \mathbf{M}^j_k\big)^{-1} \ .$$
Both sides are now multiplied from the right with $\mathbf{A}^{\mathrm{T}}_k$ and from the left with $\mathbf{A}_k$. Adding $\mathbf{C}^w_k$ results in $r_i(\mathbf{C}^x_k) \preceq r_j(\mathbf{C}^x_k)$. □

Thus, for checking the positive semi-definite ordering of covariance matrices, merely the ordering (3) of the sensor information matrices of two sensors $i$ and $j$ needs to be determined. Selecting sensor $i$ will then provide a smaller covariance matrix at time step $k+1$. Due to the monotonicity of the Riccati equation, this even holds for arbitrary sensor schedules from time step $k+1$ on.

Corollary 1 (Pruning via Sensor Information Matrices): Suppose that the sensor schedule $u' = (u_0, u_1, \ldots, u_{k-1})$ up to time step $k-1$ results in the covariance matrix $\mathbf{C}^x_k(u')$. If for two sensors $u_k$ and $\tilde{u}_k$ the order $\mathbf{M}^{u_k}_k \preceq \mathbf{M}^{\tilde{u}_k}_k$ holds, then for all $u'' = (u_{k+1}, \ldots, u_{N-1})$ there exists a schedule $\tilde{u}'' = (\tilde{u}_{k+1}, \ldots, \tilde{u}_{N-1})$ such that $J(u) \ge J(\tilde{u})$, where $u = (u', u_k, u'')$ and $\tilde{u} = (u', \tilde{u}_k, \tilde{u}'')$.

PROOF. Due to Theorem 2, at least the sensor schedule $\tilde{u}'' = u''$ yields $\mathbf{C}^x_n(u) \succeq \mathbf{C}^x_n(\tilde{u})$ for all subsequent time steps $n$. The cost ordering according to Theorem 1 then leads to $J(u) \ge J(\tilde{u})$. □

Without evaluating the Riccati equation at time step $k$ and only by comparing the sensor information matrices of the sensors $u_k$ and $\tilde{u}_k$, it can be decided that only sensor $\tilde{u}_k$ needs to be considered for determining the optimal sensor schedule, while the covariance matrices $\mathbf{C}^x_{k+1}(u), \ldots, \mathbf{C}^x_N(u)$ need not be determined for any sequence $u''$. Hence, the complete sub-tree of sensor $u_k$ can be pruned.
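As an illustration of how Corollary 1 translates into a pruning test (a sketch; the names and the eigenvalue-based semi-definiteness test are choices made here, not prescribed by the paper), dominance between sensors can be decided directly on their information matrices:

```python
import numpy as np

def dominates(Mi, Mj, tol=1e-10):
    """True if Mi >= Mj in the positive semi-definite sense."""
    # eigvalsh applies since the difference of symmetric matrices is symmetric
    return np.linalg.eigvalsh(Mi - Mj).min() >= -tol

def prune_dominated(M_list):
    """Indices of sensors whose information matrix is not strictly
    dominated by that of another sensor. Since the order is only
    partial, several incomparable sensors may survive."""
    keep = []
    for i, Mi in enumerate(M_list):
        dominated = any(dominates(Mj, Mi) and not dominates(Mi, Mj)
                        for j, Mj in enumerate(M_list) if j != i)
        if not dominated:
            keep.append(i)
    return keep
```

Only strictly dominated sensors are discarded, so ties (identical information matrices) keep both candidates and optimality is preserved.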
A. Order of Sensor Information Matrices

When comparing sensor information matrices (or equivalently covariance matrices), the difference is in some cases indefinite, i.e., it is not determinable whether one information matrix is "larger" than the other. This is due to the fact that the order relation $\preceq$ on positive semi-definite matrices is merely a partial order. Thus, in general, not all sensors can be pruned at a specific time step.

Scalar systems, i.e., systems where the dimension of the state $x$ is one, form an exception. Here, the partial order becomes a total order. Per time step, it is now possible to prune all sensors except one, which is equivalent to selecting the sensor that minimizes the state variance at each time step. This greedy strategy automatically leads to the optimal sensor schedule. Similar results can be found in [13] for the case of scheduling multiple scalar systems to a single sensor.

B. Branch-and-Bound

Even if pruning based on sensor information matrices reduces the number of possible sensor schedules, the remaining number of nodes in the search tree may still be large due to the partial order. To further prune the tree, the proposed technique is combined with branch-and-bound search. Branch-and-bound is a common search technique for classical decision problems like traveling salesman or knapsack. The basic idea is to assign a lower bound of the achievable total cost value to any visited node. By means of these bounds, the tree is further expanded (branched), where nodes with a smaller lower bound are considered more promising to lead to the optimal sensor schedule and thus are expanded first. Sub-trees are pruned if their lower bound is larger than the total cost value of an already completely evaluated sensor schedule.

Algorithm 1 IBP branch-and-bound algorithm. The algorithm is initialized with $J_{\min} = \infty$.
 1: For a given sensor schedule $(u_0, \ldots, u_{k-1})$ do:
 2: if leaf node, i.e., $k = N$ then
 3:   $J_{\min} \leftarrow J(u_0, \ldots, u_{N-1})$
 4: else
 5:   $U \leftarrow$ child($u_{k-1}$)  // children of sensor $u_{k-1}$
 6:   $U' \leftarrow$ prune($U$)  // use ordering of sensor information matrices
 7:   bound($U'$)  // calculate lower bounds for all sensors in $U'$
 8:   $U' \leftarrow$ sort($U'$)  // sort in ascending order based on lower bounds
 9:   $U'' \leftarrow$ prune($U'$)  // use lower bounds and $J_{\min}$
10:   expand($U''$)  // call Algorithm 1 for remaining sensors $u_k \in U''$
11: end if

For a particular node that was reached during the search by employing the sensor schedule $(u_0, \ldots, u_{k-1})$, the total costs can be written as
$$J(u) = \underbrace{J(u_0, \ldots, u_{k-1})}_{\text{known}} + \underbrace{J(u_k, \ldots, u_{N-1})}_{\text{unknown}} \ , \quad (4)$$
where only the value of the first summand has been evaluated and is thus known. While the value of the second summand has not been calculated yet, a lower bound can easily be assigned by exploiting the sensor information matrices, as shown in the next section. Based on this bound, the sub-tree corresponding to the remaining schedules $(u_k, \ldots, u_{N-1})$ can be pruned if the lower bound is larger than a global bound $J_{\min}$, which is the total cost of the currently best completely evaluated sensor schedule. $J_{\min}$ forms an upper bound of the optimal cost. Obviously, the closer the lower bound is to the true value of $J(u_k, \ldots, u_{N-1})$, the earlier complete sub-trees can be pruned and thus, the better the pruning performance. A runnable sketch of Algorithm 1 is given below.
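The following sketch shows one possible depth-first realization of Algorithm 1, assuming for brevity time-invariant $\mathbf{A}$, $\mathbf{C}^w$, and sensor information matrices, $\mathbf{W}_k = \mathbf{I}$, and the trace as stage cost. It reuses riccati_step and prune_dominated from above; lower_bound is the bounding-sensor routine sketched in Section IV. All names are illustrative.

```python
import numpy as np

def ibp_search(Cx, k, cost_so_far, schedule, ctx, best):
    """Depth-first branch-and-bound over sensor schedules (Algorithm 1).
    ctx: dict with 'N', 'A', 'Cw', 'M' (list of sensor information
    matrices), and 'Mbar' (bounding sensor information matrix).
    best: dict carrying the incumbent J_min and its schedule."""
    if k == ctx['N']:                              # leaf node (line 2)
        # line 9 guarantees a reached leaf improves on J_min (line 3)
        best['J_min'], best['u'] = cost_so_far, schedule
        return
    children = []
    for i in prune_dominated(ctx['M']):            # lines 5-6
        C_next = riccati_step(Cx, ctx['A'], ctx['Cw'], ctx['M'][i])
        g = np.trace(C_next)                       # stage cost g_{k+1}
        lb = cost_so_far + g + lower_bound(C_next, k + 1, ctx)  # line 7
        children.append((lb, i, g, C_next))
    children.sort(key=lambda c: c[0])              # line 8
    for lb, i, g, C_next in children:
        if lb >= best['J_min']:                    # line 9: prune
            continue
        ibp_search(C_next, k + 1, cost_so_far + g,
                   schedule + (i,), ctx, best)     # line 10: expand

# usage sketch:
#   ctx = {'N': N, 'A': A, 'Cw': Cw, 'M': M_list,
#          'Mbar': bounding_information_matrix(M_list)}
#   best = {'J_min': np.inf, 'u': None}
#   ibp_search(Cx0, 0, 0.0, (), ctx, best)
```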
IV. BOUNDING SENSOR

Due to the cumulative structure of the total costs $J(u)$ with non-negative summands, a simple lower bound can be obtained by setting the second summand in (4) equal to zero. In the following, this simple bound is referred to as zero bound (ZB). This bound can be interpreted as expecting a complete reduction of uncertainty at each of the time steps $k, \ldots, N-1$. Such a reduction can merely be achieved by a sensor information matrix with infinite trace or determinant. Obviously, such a sensor information matrix provides the coarsest lower bound possible.

A significantly tighter lower bound can be derived by means of a so-called bounding sensor with sensor information matrix $\bar{\mathbf{M}}_k$, for which the covariance matrices evolve according to
$$\bar{\mathbf{C}}^x_{n+1} = \mathbf{A}_n \Big( \big(\bar{\mathbf{C}}^x_n\big)^{-1} + \bar{\mathbf{M}}_n \Big)^{-1} \mathbf{A}^{\mathrm{T}}_n + \mathbf{C}^w_n \quad (5)$$
for $n = k, \ldots, N-1$, commencing from $\bar{\mathbf{C}}^x_k = \mathbf{C}^x_k$. For each time step $n$, this special information matrix fulfills
$$\bar{\mathbf{M}}_n \succeq \mathbf{M}^{u_n}_n \quad (6)$$
for all sensors $u_n \in \mathcal{S}$, where $\bar{\mathbf{M}}_n$ is as close to $\mathbf{M}^{u_n}_n$ as possible. According to Theorem 2, the covariance matrices obtained from (5) satisfy $\bar{\mathbf{C}}^x_{n+1} \preceq \mathbf{C}^x_{n+1}(u_n)$ for all time steps $n = k, \ldots, N-1$ and all sensor schedules $(u_k, \ldots, u_{N-1})$.

To prune the tree at an arbitrary time step (or level) $k$, merely the cost for the bounding sensor needs to be evaluated for the time steps $k, \ldots, N-1$ in order to obtain the lower bound. This calculation can be done in linear time. In comparison, calculating the costs of all possible sensor schedules $(u_k, \ldots, u_{N-1})$ requires exponential computation time. The complete IBP search algorithm is summarized in Algorithm 1, where a depth-first search is applied as the basic structure. Breadth-first search can be utilized as well. The determination of the bounding sensor with information matrix $\bar{\mathbf{M}}_k$, which is required for calculating the lower bounds in line 7, is described in the following.

A. Bounding Sensor for Two Sensors

Since sensor information matrices are symmetric and positive semi-definite, they can be graphically interpreted as ellipsoids, similar to the covariance ellipsoids that correspond to covariance matrices. In the following, the ellipsoid corresponding to a sensor information matrix is referred to as sensor information ellipsoid. Unlike covariance ellipsoids, sensor information ellipsoids have no distinguished position. Hence, it can be assumed that all sensor information ellipsoids are centered around the origin. Based on this interpretation, determining the bounding sensor information matrix $\bar{\mathbf{M}}_k$ corresponds to determining the covering ellipsoid that contains the ellipsoids of all sensor information matrices. This problem is similar to the so-called Löwner ellipsoid problem (see, e.g., [14]). Thanks to the fact that all sensor information ellipsoids have the same center, the Löwner ellipsoid problem can be significantly simplified.

At first, the two-sensor case is considered. Let $u^{(1)}, u^{(2)} \in \mathcal{S}$ be the two sensors with information matrices $\mathbf{M}^{(1)}_k$, $\mathbf{M}^{(2)}_k$. For this case, a bounding sensor information matrix with minimum determinant, i.e., a minimum volume covering ellipsoid, results from solving a generalized eigenvalue problem, as summarized in the following theorem and as depicted in Fig. 2.

Theorem 3 (Minimum Bounding Sensor Information Matrix): The bounding sensor information matrix $\bar{\mathbf{M}}_k$ with minimum determinant that fulfills (6) for two sensor information matrices $\mathbf{M}^{(1)}_k$ and $\mathbf{M}^{(2)}_k$ is given by
$$\bar{\mathbf{M}}_k = \big(\mathbf{V}^{\mathrm{T}}\big)^{-1} \cdot \max\big(\mathbf{V}^{\mathrm{T}} \mathbf{M}^{(1)}_k \mathbf{V},\ \mathbf{V}^{\mathrm{T}} \mathbf{M}^{(2)}_k \mathbf{V}\big) \cdot \mathbf{V}^{-1} \ , \quad (7)$$
where $\max(\,\cdot\,)$ is the element-wise maximum of matrices and $\mathbf{V}$ is the matrix of generalized eigenvectors of $\mathbf{M}^{(1)}_k$ and $\mathbf{M}^{(2)}_k$.

PROOF. At first, the sensor information matrices $\mathbf{M}^{(1)}_k$ and $\mathbf{M}^{(2)}_k$ have to be diagonalized simultaneously, which requires solving the generalized eigenvalue problem
$$\big|\mathbf{M}^{(1)}_k - \lambda \mathbf{M}^{(2)}_k\big| = 0 \ . \quad (8)$$
The solution of (8) is given by the eigenvalues $\lambda_i$, $i \in \{1, 2, \ldots, n_x\}$, and the matrix of eigenvectors $\mathbf{V}$. This allows diagonalizing $\mathbf{M}^{(1)}_k$ and $\mathbf{M}^{(2)}_k$ according to
$$\mathbf{V}^{\mathrm{T}} \mathbf{M}^{(1)}_k \mathbf{V} = \mathrm{diag}([\lambda_1, \ldots, \lambda_{n_x}]) \ , \quad \mathbf{V}^{\mathrm{T}} \mathbf{M}^{(2)}_k \mathbf{V} = \mathbf{I} \ . \quad (9)$$

[Fig. 2. Illustration of the determination of the minimum bounding sensor information matrix for the two-sensor case. (a) Sensor information ellipsoids of two sensor information matrices $\mathbf{M}^{(1)}_k$, $\mathbf{M}^{(2)}_k$. (b) Result of the simultaneous diagonalization of $\mathbf{M}^{(1)}_k$ and $\mathbf{M}^{(2)}_k$; the covering ellipsoid corresponding to the bounding sensor information matrix results from taking the maximum eigenvalue (or longest principal axis) for each dimension. (c) Transforming back yields the desired minimum covering ellipsoid corresponding to $\bar{\mathbf{M}}_k$.]

Determining the minimum volume covering ellipsoid for diagonal matrices is straightforward by taking the maximum of each diagonal element of the matrices in (9) according to
$$\max\big(\mathbf{V}^{\mathrm{T}} \mathbf{M}^{(1)}_k \mathbf{V},\ \mathbf{V}^{\mathrm{T}} \mathbf{M}^{(2)}_k \mathbf{V}\big) \ . \quad (10)$$
This corresponds to taking the longest principal axis for each dimension or, equivalently, to taking the maximum of $(1, \lambda_i)$ for each $i \in \{1, 2, \ldots, n_x\}$ (see Fig. 2 (b)). Multiplying (10) with $\big(\mathbf{V}^{\mathrm{T}}\big)^{-1}$ from the left and $\mathbf{V}^{-1}$ from the right reverses the diagonalization and leads to the desired minimum bounding sensor information matrix $\bar{\mathbf{M}}_k$ (see Fig. 2 (c)). □

B. Recursive Calculation for an Arbitrary Number of Sensors

Determining the bounding sensor with minimum sensor information matrix for an arbitrary number of sensors is computationally expensive in general, as numerical optimization is required. However, a very tight but not necessarily minimum bounding sensor information matrix can be efficiently calculated by employing the solution of the two-sensor case recursively in a pairwise fashion. In doing so, the complexity of determining the bounding sensor information matrix is linear in the number of sensors. The recursion commences from the initial solution $\bar{\mathbf{M}}^{(2)}_k$ according to (7) for the first two sensors $u^{(1)}, u^{(2)} \in \mathcal{S}$. For the remaining sensors $u^{(i)} \in \mathcal{S}$, with $i \in \{3, 4, \ldots, |\mathcal{S}|\}$, the recursion
$$\bar{\mathbf{M}}^{(i)}_k = \big(\mathbf{V}^{\mathrm{T}}\big)^{-1} \cdot \max\big(\mathbf{V}^{\mathrm{T}} \bar{\mathbf{M}}^{(i-1)}_k \mathbf{V},\ \mathbf{V}^{\mathrm{T}} \mathbf{M}^{(i)}_k \mathbf{V}\big) \cdot \mathbf{V}^{-1} \quad (11)$$
applies, where $\mathbf{V}$ is the matrix of generalized eigenvectors of $\bar{\mathbf{M}}^{(i-1)}_k$ and $\mathbf{M}^{(i)}_k$. The final solution $\bar{\mathbf{M}}^{(i)}_k$ for $i = |\mathcal{S}|$ is the desired bounding sensor information matrix $\bar{\mathbf{M}}_k$ that fulfills (6).
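A sketch of the pairwise recursion and of the resulting lower bound (line 7 of Algorithm 1) is given below. scipy.linalg.eigh solves the generalized eigenvalue problem (8) and returns eigenvectors normalized exactly as in (9); it requires the second matrix to be positive definite, so for merely positive semi-definite information matrices a small ridge would have to be added first. All names are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

def bound_pair(M1, M2):
    """Minimum-determinant Mbar with Mbar >= M1 and Mbar >= M2, cf. (7);
    assumes M2 is positive definite."""
    lam, V = eigh(M1, M2)      # V^T M1 V = diag(lam), V^T M2 V = I, cf. (9)
    D = np.diag(np.maximum(lam, 1.0))   # max of (lam_i, 1) per axis, cf. (10)
    V_inv = np.linalg.inv(V)
    return V_inv.T @ D @ V_inv          # undo the diagonalization, cf. (7)

def bounding_information_matrix(M_list):
    """Pairwise recursion (11) over all sensor information matrices."""
    Mbar = M_list[0]
    for M in M_list[1:]:
        Mbar = bound_pair(Mbar, M)
    return Mbar

def lower_bound(C, k, ctx):
    """Propagate C with the bounding sensor, cf. (5), and accumulate the
    stage costs; lower-bounds the remaining cost of any schedule."""
    lb = 0.0
    for _ in range(k, ctx['N']):
        C = riccati_step(C, ctx['A'], ctx['Cw'], ctx['Mbar'])
        lb += np.trace(C)
    return lb
```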
V. SIMULATION EXAMPLE

The proposed IBP is evaluated by means of a simplified target tracking example. The state $x_k = [x_k, \dot{x}_k, y_k, \dot{y}_k]^{\mathrm{T}}$ of the observed target comprises the two-dimensional position $[x_k, y_k]^{\mathrm{T}}$ and the velocities $[\dot{x}_k, \dot{y}_k]^{\mathrm{T}}$ in $x$ and $y$ direction. The system matrix and the noise covariance matrix of $w_k$ of the dynamics model (1) are
$$\mathbf{A} = \mathbf{I} \otimes \begin{bmatrix} 1 & T \\ 0 & 1 \end{bmatrix} \quad \text{and} \quad \mathbf{C}^w = q \cdot \mathbf{I} \otimes \begin{bmatrix} \tfrac{T^3}{3} & \tfrac{T^2}{2} \\ \tfrac{T^2}{2} & T \end{bmatrix} \ , \quad (12)$$
respectively, where $\otimes$ is the Kronecker matrix product. In (12), $T = 1\,\mathrm{s}$ is the sampling interval and $q = 0.02$ is the scalar diffusion strength. Mean vector and covariance matrix of the initial state $x_0$ are $\hat{x}_0 = [0, 1, 0, 1]^{\mathrm{T}}$ and $\mathbf{C}^x_0 = \mathbf{I}$, respectively.
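For reference, the simulation model (12) and the initial state can be set up as follows (a plain NumPy sketch; the 2×2 identity in the Kronecker product reflects the $[x_k, \dot{x}_k, y_k, \dot{y}_k]^{\mathrm{T}}$ state ordering):

```python
import numpy as np

T, q = 1.0, 0.02                    # sampling interval, diffusion strength
A = np.kron(np.eye(2), np.array([[1.0, T],
                                 [0.0, 1.0]]))          # system matrix, cf. (12)
Cw = q * np.kron(np.eye(2), np.array([[T**3 / 3, T**2 / 2],
                                      [T**2 / 2, T      ]]))  # noise covariance, cf. (12)
x0_hat = np.array([0.0, 1.0, 0.0, 1.0])                 # initial mean
Cx0 = np.eye(4)                                         # initial covariance
```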
[Fig. 3. (a) Average number of expanded nodes. (b) Average deviation of greedy scheduling and convex scheduling from optimal scheduling in terms of total costs J.]

A sensor network observes the target. It consists of eight sensors with measurement matrices
$$\mathbf{H}^1 = [1\ 0\ 0\ 0] \ , \quad \mathbf{H}^2 = [0\ 1\ 0\ 0] \ , \quad \mathbf{H}^3 = [0\ 0\ 1\ 0] \ , \quad \mathbf{H}^4 = [0\ 0\ 0\ 1] \ ,$$
$$\mathbf{H}^5 = \big[(\mathbf{H}^1)^{\mathrm{T}}, (\mathbf{H}^3)^{\mathrm{T}}\big]^{\mathrm{T}} \ , \quad \mathbf{H}^6 = \big[(\mathbf{H}^2)^{\mathrm{T}}, (\mathbf{H}^4)^{\mathrm{T}}\big]^{\mathrm{T}} \ , \quad \mathbf{H}^7 = \big[(\mathbf{H}^1)^{\mathrm{T}}, (\mathbf{H}^2)^{\mathrm{T}}\big]^{\mathrm{T}} \ , \quad \mathbf{H}^8 = \big[(\mathbf{H}^3)^{\mathrm{T}}, (\mathbf{H}^4)^{\mathrm{T}}\big]^{\mathrm{T}} \ .$$
The noise covariance matrices $\mathbf{C}^v_k$ are diagonal, where the diagonal elements are drawn randomly from a uniform distribution on the interval $[0, 1]$. For each $k$, the weighting matrix is $\mathbf{W}_k = \mathbf{I}$, i.e., the objective is to minimize the average estimation error. The cost functions $g_k(\,\cdot\,)$ are set to the trace for each $k$.

Six different pruning methods are compared for time horizon lengths $N = 1, \ldots, 8$:
GRE: Greedy scheduling, i.e., one-step lookahead.
CVX: Suboptimal scheduling based on convex optimization [8].
ZB: Branch-and-bound with zero bound, see Section IV.
COV: Pruning method proposed in [2].
SIM: ZB plus the order of sensor information matrices.
IBP: Proposed information-based pruning.

For each time horizon length, $M = 50$ Monte Carlo runs are performed. In Fig. 3 (a), the average number of expanded nodes in the search tree is plotted. Compared to the total number of nodes, which is $\sum_{i=0}^{N} |\mathcal{S}|^i \in O\big(|\mathcal{S}|^N\big)$ with $O(\,\cdot\,)$ being the big-O in Landau notation, optimal pruning significantly reduces the search space. The much better performance of SIM compared to ZB shows the benefit of utilizing the order of sensor information matrices. Employing this order can be considered a pre-selection of candidate sensor schedules, while branch-and-bound thins out this candidate set. IBP provides a much tighter lower bound than ZB and SIM, respectively, which further improves pruning.

The average deviation of the suboptimal methods GRE and CVX from optimal scheduling is depicted in Fig. 3 (b). The average deviation for a given time horizon length $N$ is defined as $\frac{1}{M} \sum_{i=1}^{M} \big(J^i_{N,\bullet} - J^i_N\big)$, with $J^i_N$ being the optimal total cost of simulation run $i$ calculated by means of IBP and $J^i_{N,\bullet}$ being the total cost of pruning method $\bullet \in \{\text{GRE}, \text{CVX}\}$. More significant deviations from optimal scheduling, especially in case of GRE, are expected for example in scenarios where additional sensor constraints like energy or communication costs (see Section VI-B) have to be considered, where sensors are temporarily unavailable, or where the dynamics and sensor models are nonlinear (consider, for example, the results in [15], [16]). Regarding the runtime, IBP is outperformed only by GRE, while all optimal scheduling algorithms and CVX are up to three orders of magnitude slower than IBP.

VI. POTENTIAL EXTENSIONS

The proposed information-based pruning algorithm can be extended in many ways in order to apply it to more general sensor scheduling tasks and to further improve the pruning performance. Three possible extensions are discussed.

A. Multiple Sensors per Time Step

So far, merely one sensor per time step is selected for performing a measurement. The extension to multiple sensors requires scheduling a subset of sensors $\mathcal{S}' \subset \mathcal{S}$ with $|\mathcal{S}'| = S'$ per time step. For this purpose, each of the $\binom{S}{S'}$ possible subsets has to be evaluated at each time step if no pruning is performed. This additional subset selection has to be included in the tree search in order to employ IBP for reducing the search effort. A straightforward realization is to perform nested branches per time step instead of a single branch as in the single sensor case. Each nested branch can be considered a tree of depth $S'$ but with a branching factor that decreases with each level: in the first level, the branching factor is $S$; in the second level it is $S-1$, since the sensor selected in the previous branch must not be considered in the current branch; finally, in level $S'$, the branching factor is reduced to $S - S' + 1$. Each path from the root to a leaf node in this subset selection tree corresponds to a sensor subset that can be scheduled for a time step. At each leaf node, the subset selection tree for the next time step begins. Overall, the whole search tree has a depth of $N \cdot S'$ with varying branching factor; for comparison, the search tree for the single sensor case has a depth of $N$ with a constant branching factor of $S$. IBP can now be directly employed as in the single sensor case, as sketched below.
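Besides the nested branching described above, one simple way to reuse the single-sensor machinery is to enumerate each subset as one "super sensor". Under the usual assumption of mutually independent measurement noises across sensors (an assumption made here, not stated by the paper), the information matrix of a subset is simply the sum of the individual sensor information matrices:

```python
from itertools import combinations

def subset_information_matrices(M_list, subset_size):
    """Enumerate all sensor subsets of the given size together with the
    combined information matrix of each subset. Every (subset, M) pair
    then plays the role of a single sensor in the tree search."""
    for subset in combinations(range(len(M_list)), subset_size):
        M = sum(M_list[i] for i in subset)
        yield subset, M
```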
B. Sensor Constraints

In many sensor scheduling problems, there is an additional budget constraint of the form
$$\mathbf{B} \cdot u \le b \ , \quad (13)$$
where $u = [u^{\mathrm{T}}_0, \ldots, u^{\mathrm{T}}_{N-1}]^{\mathrm{T}} \in \{0,1\}^{N \cdot S}$ is a binary sensor scheduling vector with elements $u_k \in \{0,1\}^S$ for each time step $k = 0, \ldots, N-1$. Such a budget can, for example, be a maximum total number of measurements $K \in \mathbb{N}$ that may be performed over the horizon. Here, $\mathbf{B} = [1, 1, \ldots, 1]$ is a vector of ones and $b = K$. A further example is an energy budget $K_i \in \mathbb{N}$ that is assigned to each sensor $i$ and reduced by one after each measurement of sensor $i$. In this case, $b = [K_1, K_2, \ldots, K_S]^{\mathrm{T}}$ and $\mathbf{B}$ is a binary matrix, where only the elements $i, i+S, \ldots, i+(N-1) \cdot S$ of row $i$ of $\mathbf{B}$ are equal to one.

If such sensor constraints are present, IBP cannot be applied directly. This is basically due to pruning based on the partial ordering of sensors, as this ordering does not take the sensor constraints into account. In addition to considering the sensor information matrices, it is also necessary to check whether the budget constraint can still be fulfilled if a node of the search tree is pruned. Consider the example mentioned above, where at most $K \in \mathbb{N}$ measurements are allowed. To map IBP onto this constrained scheduling problem, one possibility is to include a virtual sensor with information matrix $\mathbf{M} = \mathbf{0}$. Selecting this virtual sensor corresponds to performing no measurement. No budget costs are assigned to this virtual sensor, i.e., the elements in $\mathbf{B}$ corresponding to the virtual sensor are zero. Of course, the information matrix of the virtual sensor is always dominated by those of all other sensors, but selecting the virtual sensor will never violate the constraint $K$. Thus, by ensuring that this sensor is never pruned, IBP can be applied directly.

In contrast to the partial ordering, calculating the lower bound based on the sensor information matrices remains directly possible. Certainly, the bound will not be as tight as in the unconstrained case: calculating the bound at a particular node in the search tree requires assuming the best case where each sensor is available at all times. Thus, all sensor information matrices are incorporated into calculating the bounding sensor, even if some sensors might not be available in the future.
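The two budget examples can be written down concretely as follows (a sketch with 0-based indexing; u is the stacked binary scheduling vector from (13), and the function names are illustrative):

```python
import numpy as np

def total_measurement_budget(N, S, K):
    """B u <= b limiting the total number of measurements to K."""
    return np.ones((1, N * S)), np.array([float(K)])

def per_sensor_energy_budget(N, S, K_list):
    """Row i of B picks the decision variables of sensor i at all time
    steps (columns i, i+S, ..., i+(N-1)S), so B u <= b enforces the
    individual energy budgets K_1, ..., K_S."""
    B = np.zeros((S, N * S))
    for i in range(S):
        B[i, i::S] = 1.0
    return B, np.asarray(K_list, dtype=float)
```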
It is worth mentioning that the constraint (13) itself can be considered a kind of pruning: avoiding the selection of a sensor that would violate the constraint corresponds to pruning the whole sub-tree of this particular sensor.

C. Improving the Bounds

Scheduling algorithms based on convex optimization (see, for example, [4], [7], [8]) are suboptimal, but they can be employed in an optimal branch-and-bound algorithm for calculating lower bounds, as shown in [4]. In many cases, these bounds are tighter than the bound proposed in this paper. However, calculating lower bounds based on convex optimization is computationally significantly more complex. Thus, both bounds could be combined, where the proposed bound is applied first in order to reduce the need for calculating bounds via convex optimization.

For IBP, merely the total cost of the currently best completely evaluated sensor schedule is used for setting the global bound value $J_{\min}$. By means of tight upper bounds, $J_{\min}$ can be decreased more rapidly, which will significantly improve the pruning performance. One way to calculate upper bounds is to use Kalman predictions without measurement updates. Better upper bounds can be provided if the sensor information matrices are used for calculating a further bounding sensor with information matrix $\mathbf{M}'_k \preceq \mathbf{M}^{u_k}_k$, analogous to (6). To this end, the recursive algorithm proposed in Section IV can be employed, where $\max$ in (10) and (11) is replaced by $\min$.

VII. CONCLUSION

The proposed information-based pruning excludes complete sub-trees very early, while preserving the optimal sensor schedule is guaranteed. Compared to existing optimal pruning methods, the effectiveness of the proposed method relies on exploiting the properties of the sensor information matrices. This allows pruning without explicitly evaluating the Riccati equation and enables calculating a tight lower bound on the total costs. IBP can be extended, for example, to schedule multiple sensors per time step or to consider constraints like energy budgets.

VIII. ACKNOWLEDGMENTS

The author gratefully thanks Yilin Mo and Bruno Sinopoli from the Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, USA, for providing the source code of their algorithm proposed in [8]. This research work was partially supported by the Intelligent Sensor-Actuator-Systems Laboratory (ISAS), Karlsruhe Institute of Technology (KIT), Germany, as well as by the Fraunhofer Institute of Optronics, System Technologies and Image Exploitation IOSB, Karlsruhe, Germany.

REFERENCES

[1] L. Meier, J. Peschon, and R. M. Dressler, "Optimal Control of Measurement Subsystems," IEEE Transactions on Automatic Control, vol. AC-12, no. 5, pp. 528–536, Oct. 1967.
[2] A. Logothetis and A. Isaksson, "On Sensor Scheduling via Information Theoretic Criteria," in Proceedings of the 1999 American Control Conference (ACC), vol. 4, 1999, pp. 2402–2406.
[3] M. F. Huber and U. D. Hanebeck, "Priority List Sensor Scheduling using Optimal Pruning," in Proceedings of the 11th International Conference on Information Fusion (Fusion), Cologne, Germany, Jul. 2008, pp. 350–357.
[4] M. F. Huber, "On Multi-Step Sensor Scheduling via Convex Optimization," in Proceedings of the 2nd International Workshop on Cognitive Information Processing (CIP), Elba Island, Italy, Jun. 2010, pp. 376–381.
[5] P. Alriksson and A. Rantzer, "Sub-Optimal Sensor Scheduling with Error Bounds," in Proceedings of the 16th IFAC World Congress, Prague, Czech Republic, Jul. 2005.
[6] V. Gupta, T. H. Chung, B. Hassibi, and R. M. Murray, "Sensor Scheduling Algorithms Requiring Limited Computations," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2004, pp. 825–828.
[7] S. Joshi and S. Boyd, "Sensor Selection via Convex Optimization," IEEE Transactions on Signal Processing, vol. 57, no. 2, pp. 451–462, Feb. 2009.
[8] Y. Mo, R. Ambrosino, and B. Sinopoli, "Sensor Selection Strategies for State Estimation in Energy Constrained Wireless Sensor Networks," Automatica, vol. 47, pp. 1330–1338, 2011.
[9] Y. He and E. K. P. Chong, "Sensor Scheduling for Target Tracking in Sensor Networks," in Proceedings of the 43rd IEEE Conference on Decision and Control (CDC), Dec. 2004, pp. 743–748.
[10] L. Chen and R. K. Mehra, "Reformulating Kalman Filter Based Optimal Dynamic Coverage Control," in Proceedings of the 17th IFAC World Congress, Seoul, Korea, Jul. 2008, pp. 4174–4179.
[11] V. Gupta, T. H. Chung, B. Hassibi, and R. M. Murray, "On a Stochastic Sensor Selection Algorithm with Applications in Sensor Scheduling and Sensor Coverage," Automatica, vol. 42, no. 2, pp. 251–260, Feb. 2006.
[12] K. M. Abadir and J. R. Magnus, Matrix Algebra. Cambridge University Press, 2005.
[13] S. Howard, S. Suvorova, and B. Moran, "Optimal Policy for Scheduling of Gauss-Markov Systems," in Proceedings of the 7th International Conference on Information Fusion (Fusion), Stockholm, Sweden, 2004, pp. 888–892.
[14] E. A. Yildirim, "On the Minimum Volume Covering Ellipsoid of Ellipsoids," SIAM Journal on Optimization, vol. 17, no. 3, pp. 621–641, Sep. 2006.
[15] A. S. Chhetri, D. Morrell, and A. Papandreou-Suppappola, "Nonmyopic Sensor Scheduling and its Efficient Implementation for Target Tracking Applications," EURASIP Journal on Applied Signal Processing, vol. 2006, pp. 1–18, 2006.
[16] W. Xiao, L. Xie, J. Chen, and L. Shue, "Multi-Step Adaptive Sensor Scheduling for Target Tracking in Wireless Sensor Networks," in Proceedings of the 31st International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2006, pp. IV-705–IV-708.