Joint Estimation and Localization in Sensor Networks Nikolay Atanasov, Roberto Tron, Victor M. Preciado, and George J. Pappas Abstract —This paper addresses the problem of collaborative tracking of dynamic targets in wireless sensor networks. A novel distributed linear estimator, which is a version of a distributed Kalman filter , is derived. We prove that the filter is mean square consistent in the case of static target estimation. When large sensor networks are deployed, it is common that the sensors do not have good knowledge of their locations, which affects the target estimation procedure. Unlike most existing approaches for target tracking, we investigate the performance of our filter when the sensor poses need to be estimated by an auxiliary localization procedure. The sensors are localized via a distributed Jacobi algorithm from noisy relative measurements. We prove strong convergence guarantees for the localization method and in turn for the joint localization and target estimation approach. The performance of our algorithms is demonstrated in simulation on environmental monitoring and target tracking tasks. I. I NTRODUCTION A central problem in networked sensing systems is the estimation and tracking of the states of dynamic phenomena of interest that evolve in the sensing field. Potential applications include environmental monitoring [1], [2], surveillance and reconnaissance [3], [4], social networks [5]. In most situations, individual sensors receive partially informative measurements which are insufficient to estimate the target state in isolation. The sensors need to engage in information exchange with one another and solve a distributed estimation problem. To compli- cate matters, it is often the case that the sensors need to know their own locations with respect to a common reference in order to utilize the target measurements meaningfully. Hence, in general, the sensors face a joint localization and estima- tion problem. Virtually all existing work in distributed target estimation assumes implicitly that the localization problem is solved, while all the literature on localization does not consider the effect of the residual errors on a common estimation task. The goal of this paper is to show that the two problems can be solved jointly, and that, with simple measurement models, the resulting estimates have strong convergence guarantees. Assumptions and contributions. We assume that the sensors obtain linear Gaussian measurements of the target state and repeated sequential measurements of their relative positions along the edges of a graph. Our contributions are as follows: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible. This work was supported by ONR-HUNT grant N00014-08-1-0696 and by TerraSwarm, one of six centers of STARnet, a Semiconductor Research Corporation program sponsored by MARCO and DARPA. N. Atanasov, V. Preciado, and G. Pappas are with the Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, PA 19104, { atanasov, preciado, pappasg } @seas.upenn.edu . R. Tron is with the Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA 19104, tron@seas.upenn.edu • We derive a distributed linear estimator for tracking dynamic targets. We prove that the filter is mean-square consistent in the case of a static target. 
• We provide a distributed algorithm for sensor localization from sequential relative measurements and prove mean-square and strong consistency.
• We prove mean-square consistency of the joint localization and target estimation procedure.

Related work. Our target estimation algorithm was inspired by Rahnama Rad and Tahbaz-Salehi [6], who propose an algorithm for distributed static parameter estimation using nonlinear sensing models. We specialize their model to heterogeneous sensors with linear Gaussian observations, show stronger convergence results (mean-square consistency instead of weak consistency), and then generalize the solution to dynamic targets. Our filter is similar to the Kalman-Consensus filter [7], [8] and the filters proposed by Khan et al. [9] and Shahrampour et al. [10]. Khan et al. [9] show that a dynamic target can be tracked with bounded error if the norm of the target system matrix is less than the network tracking capacity. Shahrampour et al. [10] quantify the estimation performance using a global loss function and show that the asymptotic estimation error depends on its decomposition. Kar et al. [11] study distributed static parameter estimation with nonlinear observation models and noisy inter-sensor communication. Related work also includes [12], which combines the Jacobi over-relaxation method with dynamic consensus to compute distributed weighted least squares. Our localization algorithm follows the lines of the Jacobi algorithm, first proposed for localization in sensor networks by Barooah and Hespanha [13], [14]. In contrast with their approach, we consider repeated relative measurements and show strong convergence guarantees for the resulting sequential localization algorithm. Other work in sensor network localization considers nonlinear and less informative measurement models than those used in this paper. For instance, [15], [16], [17], [18] address the problem of localization using range-only measurements, which is challenging because a graph with specified edge lengths can have several embeddings in the plane. Khan et al. [19] introduce a distributed localization (DILOC) algorithm, which uses the barycentric coordinates of a node with respect to its neighbors, and show its convergence via a Markov chain. Diao et al. [20] relax the assumption of DILOC that all nodes must be inside the convex hull of the anchors. Localization has also been considered in the context of camera networks [21].

Paper organization. The joint localization and estimation problem is formulated precisely in Sec. II. The distributed linear estimator for target tracking is derived in Sec. III assuming known sensor locations. A distributed Jacobi algorithm is introduced in Sec. IV to localize the sensors using relative measurements when the true locations are unknown. Mean-square and strong consistency are proven. In Sec. V, we show that the error of the target estimator, when combined with the localization procedure, remains arbitrarily small. All proofs are provided in the Appendix.

II. PROBLEM FORMULATION

Consider a static sensor network composed of n sensors with configurations {x_1, . . . , x_n} ⊂ X ≅ R^d. The configuration of a sensor may include its position, orientation, and other operational parameters but we will refer to it, informally, as the sensor's location. The communication network interconnecting the sensors is represented by an undirected graph G = (V, E) with vertices V := {1, . . . , n} corresponding to the sensors and |E| = m edges.
An edge (j, i) ∈ E from sensor j to sensor i exists if they can communicate. The set of nodes (neighbors) connected to sensor i is denoted by N_i. The task of the sensors is to estimate and track the state y(t) ∈ Y ≅ R^{d_y} of a phenomenon of interest (target), where Y is a convex set. The target evolves according to the following target motion model:

y(t+1) = F y(t) + η(t),   η(t) ∼ N(0, W),   (1)

where η(t) is the process noise, whose values at any pair of times are independent. Sensor i, depending on its location x_i, can obtain a measurement z_i(t) of the target state y(t) at time t according to the following sensor observation model:

z_i(t) = H_i(x_i) y(t) + v_i(t, x_i),   v_i(t, x_i) ∼ N(0, V_i(x_i)),   (2)

where v_i(t, x_i) is a sensor-state-dependent measurement noise specific to sensor i, which is independent at any pair of times and across different sensors. The measurement noise is independent of the target noise η(t) as well. The signals z_i(t) observed by a single sensor, although potentially informative, do not reveal the target state completely, i.e. each sensor faces a local identification problem. We assume, however, that the target is observable if one has access to the signals received by all sensors.

The sensors need to know their locations in order to use the signals z_i(t) to estimate the target's state. However, when large sensor networks are deployed, it is common that the sensors do not have good knowledge of their positions but instead only a rough estimate (prior). We suppose that each sensor has access to noisy relative measurements of the positions of its neighbors¹, which can be used to localize the sensors. In particular, at time t sensor i receives the following noisy relative configuration measurement from its neighbor j:

s_{ij}(t) = x_j − x_i + ε_{ij}(t),   ε_{ij}(t) ∼ N(0, E_{ij}),   (3)

where ε_{ij}(t) is a measurement noise, which is independent at any pair of times and across sensor pairs. The relative measurement noises are independent of the target measurement and motion noises too. Since there is translation ambiguity in the measurements (3), we assume that all sensors agree to localize themselves in the reference frame of sensor 1. The location estimates can then be used in place of the unknown sensor positions during the target estimation procedure. The joint localization and estimation problem is summarized below.

¹The graphs describing the communication and the relative measurement topologies might be different in practice. However, we assume that they are the same in order to simplify the presentation.

Problem (Joint Estimation and Localization). The task of each sensor i is to construct estimators x̂_i(t) and ŷ_i(t) of its own location x_i and of the target state y in a distributed manner, i.e. using information only from its neighbors and the measurements {s_{ij}(t) | j ∈ N_i} and {z_i(t)}.
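To make the models concrete before introducing the two running examples, the following sketch generates synthetic data from (1)-(3) for a toy network. It is only an illustration: the dimensions, noise covariances, observation matrices, and function names are assumptions made here, not values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance of the models in Sec. II (all sizes and values are illustrative).
n, d, d_y = 4, 2, 2                       # number of sensors, location dim, target dim
x_true = rng.uniform(-10, 10, (n, d))     # unknown sensor locations
F, W = np.eye(d_y), 0.01 * np.eye(d_y)    # target dynamics (1)
H = [rng.standard_normal((1, d_y)) for _ in range(n)]   # per-sensor linear models (2)
V = [0.1 * np.eye(1) for _ in range(n)]                  # per-sensor noise covariances
E_rel = 0.05 * np.eye(d)                  # relative measurement covariance (3)

def step_target(y):
    """Propagate the target through the motion model (1)."""
    return F @ y + rng.multivariate_normal(np.zeros(d_y), W)

def sense_target(i, y):
    """Linear Gaussian observation (2) taken by sensor i."""
    return H[i] @ y + rng.multivariate_normal(np.zeros(1), V[i])

def sense_relative(i, j):
    """Noisy relative position measurement (3) of neighbor j taken by sensor i."""
    return x_true[j] - x_true[i] + rng.multivariate_normal(np.zeros(d), E_rel)
```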
To illustrate the results, we use two scenarios which fit our models throughout the paper. The first is an environmental monitoring problem in which a sensor network of remote methane leak detectors (RMLD), based on tunable diode laser absorption spectroscopy, is deployed to estimate the methane concentration in a landfill. The methane field is assumed static (i.e. F = I_{d_y}, W = 0) and can be modeled by discretizing the environment into cells and representing the gas concentration with a discrete Gaussian random field, y ∈ R^{d_y} (see Fig. 5). It was verified experimentally in [22] that the RMLD sensors fit the linear model in (2).

Second, we consider tracking a swarm of mobile vehicles via a sensor network using range and bearing measurements (see Fig. 1). The position (y_j^1, y_j^2) ∈ R^2 and velocity (ẏ_j^1, ẏ_j^2) ∈ R^2 of the j-th target have discretized double-integrator dynamics driven by Gaussian noise:

y_j(t+1) = [ I_2  τI_2 ; 0  I_2 ] y_j(t) + η_j(t),   W := q [ (τ^3/3)I_2  (τ^2/2)I_2 ; (τ^2/2)I_2  τI_2 ],

where y_j = [y_j^1, y_j^2, ẏ_j^1, ẏ_j^2]^T is the j-th target state, τ is the sampling period in seconds, and q is a diffusion strength measured in (m/sec^2)^2/Hz. Each sensor in the network takes noisy range and bearing measurements of the target's position:

z_{ij}(t) = [ √((y_j^1 − x_i^1)^2 + (y_j^2 − x_i^2)^2) ; arctan((y_j^2 − x_i^2)/(y_j^1 − x_i^1)) ] + v(t, x_i, y_j),   (4)

where x_i := (x_i^1, x_i^2) ∈ R^2 is the sensor's location and the noise v grows linearly with the distance between the sensor and the target. The observation model is nonlinear in this case so we resort to linearization in order to apply our framework.
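Since the filter developed below assumes linear observations, the range-bearing model (4) can be handled by linearizing it about the current target position estimate, in the spirit of an extended Kalman filter update. The following sketch shows one way to compute the predicted measurement and the Jacobian; the function name, the use of arctan2, and the EKF-style usage are implementation choices assumed here, not details given in the paper.

```python
import numpy as np

def range_bearing_jacobian(x_i, y_pos):
    """Linearize the range-bearing model (4) about a target position estimate.

    x_i   : sensor position, shape (2,).
    y_pos : target position estimate, shape (2,), used as the linearization point
            (assumed distinct from the sensor position so the range is nonzero).
    Returns the predicted measurement and the 2x2 Jacobian with respect to the
    target position; zeros would be appended for the velocity components of the
    full 4-dimensional target state.
    """
    d = y_pos - x_i
    r = np.hypot(d[0], d[1])
    z_pred = np.array([r, np.arctan2(d[1], d[0])])
    # Jacobian of [range; bearing] with respect to (y^1, y^2).
    H_pos = np.array([[ d[0] / r,     d[1] / r],
                      [-d[1] / r**2,  d[0] / r**2]])
    return z_pred, H_pos
```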
III. DISTRIBUTED TARGET ESTIMATION

We begin with the task of estimating and tracking the dynamic state of a target via the sensor network. For now we assume that the sensors know their positions and concentrate on the estimation task. We specialize the general parameter estimation scheme of Rahnama Rad and Tahbaz-Salehi [6] to linear Gaussian observation models such as (2). We show that the resulting distributed linear filter is mean-square consistent² when the target is stationary. This result is stronger than the weak consistency² shown in the general non-Gaussian case in [6, Thm. 1].

²A distributed estimator of a parameter y is weakly consistent if all estimates ŷ_i(t) converge in probability to y, i.e. lim_{t→∞} P(‖ŷ_i(t) − y‖ ≥ ε) = 0 for any ε > 0 and all i. It is mean-square consistent if all estimates converge in L_2 to y, i.e. lim_{t→∞} E[‖ŷ_i(t) − y‖^2] = 0, ∀i.

Fig. 1: A realization of the target tracking scenario in which a sensor network with 40 nodes tracks 10 mobile targets via range and bearing measurements.

Fig. 2: Initial and final (after 20 steps) node locations estimated by the distributed localization algorithm on a randomly generated graph with 300 nodes and 1288 edges.

Suppose for now that the target is stationary, i.e. y := y(0) = y(1) = . . . . To introduce the estimation scheme from [6], suppose also that instead of the linear Gaussian measurements in (2), the sensor measurements z_i(t) are drawn from a general distribution with conditional probability density function (pdf) l_i(· | y). As before, the signals observed by sensor i are iid over time and independent from the observations of all other sensors. In order to aggregate the information provided to it over time, either through observations or communication with neighbors, each sensor i holds and updates a pdf p_{i,t} over the target state space Y. Consider the following distributed estimation algorithm:

p_{i,t+1}(y) = ξ_{i,t} l_i(z_i(t+1) | y) ∏_{j∈N_i∪{i}} (p_{j,t}(y))^{κ_{ij}},   ŷ_i(t) ∈ arg max_{y∈Y} p_{i,t}(y),   (5)

where ξ_{i,t} is a normalization constant ensuring that p_{i,t+1} is a proper pdf and κ_{ij} > 0 are weights such that ∑_{j∈N_i∪{i}} κ_{ij} = 1. The update is the same as the standard Bayes rule with the exception that sensor i does not just use its own prior but a geometric average of its neighbors' priors. Given a connected graph, the authors of [6] show that (5) is weakly consistent under broad assumptions on the observation models l_i.

Next, we specialize the estimator in (5) to the linear Gaussian measurement model in (2). Let G(ω, Ω) denote a Gaussian distribution (in information space) with mean Ω^{-1}ω and covariance matrix Ω^{-1}. The quantities ω and Ω are conventionally called information vector and information matrix, respectively. Suppose that the pdfs p_{i,t} of all sensors i ∈ V at time t are those of Gaussian distributions G(ω_{i,t}, Ω_{i,t}). We claim that the posteriors resulting from applying the update in (5) remain Gaussian.

Lemma 1 ([23, Thm. 2]). Let Y_i ∼ G(ω_i, Ω_i) for i = 1, . . . , n be a collection of random Gaussian vectors with associated weights κ_i. The weighted geometric mean, ∏_{i=1}^n p_i^{κ_i}, of their pdfs p_i is proportional to the pdf of a random vector with distribution G(∑_{i=1}^n κ_i ω_i, ∑_{i=1}^n κ_i Ω_i).

Lemma 2 ([23, Thm. 2]). Let Y ∼ G(ω, Ω) and V ∼ G(0, V^{-1}) be random vectors. Consider the linear transformation Z = HY + V. The conditional distribution of Y | Z = z is proportional to G(ω + H^T V^{-1} z, Ω + H^T V^{-1} H).

Lemma 1 says that if the sensor priors are Gaussian G(ω_{i,t}, Ω_{i,t}), then after applying the geometric averaging in (5) the resulting distribution will still be Gaussian and its information vector and information matrix will be weighted averages of the prior ones. Lemma 2 says that after applying Bayes rule the distribution will remain Gaussian. Combining the two allows us to derive the following linear Gaussian version of the estimator in (5):

ω_{i,t+1} = ∑_{j∈N_i∪{i}} κ_{ij} ω_{j,t} + H_i^T V_i^{-1} z_i(t),   Ω_{i,t+1} = ∑_{j∈N_i∪{i}} κ_{ij} Ω_{j,t} + H_i^T V_i^{-1} H_i,   (6)

where H_i := H_i(x_i) and V_i := V_i(x_i). The estimate of sensor i at time t of the true target state y is:

ŷ_i(t) := Ω_{i,t}^{-1} ω_{i,t}.   (7)

In this linear Gaussian case, we prove a strong result about the quality of the estimates.

Theorem 1. Suppose that the communication graph G is connected and the matrix [H_1^T . . . H_n^T]^T has rank d_y. Then, the estimates (7) of all sensors converge in mean square to y, i.e. lim_{t→∞} E[‖ŷ_i(t) − y‖_2^2] = 0 for all i.

The estimation procedure in (6), (7) can be extended to track a dynamic target as in (1) by adding a local prediction step, the same as that of the Kalman filter, at each sensor. The distributed linear filter is summarized in Alg. 1 and Thm. 1 guarantees its mean-square consistency for stationary targets. Its performance on dynamic targets was studied in the target tracking scenario introduced in Sec. II and the results are presented in Fig. 1 and Fig. 3.

Algorithm 1 Distributed Linear Estimator
Input: Prior (ω_{i,t}, Ω_{i,t}), messages (ω_{j,t}, Ω_{j,t}), ∀j ∈ N_i, and measurement z_i(t)
Output: (ω_{i,t+1}, Ω_{i,t+1})
Update Step:
  ω_{i,t+1} = ∑_{j∈N_i∪{i}} κ_{ij} ω_{j,t} + H_i^T V_i^{-1} z_i(t)
  Ω_{i,t+1} = ∑_{j∈N_i∪{i}} κ_{ij} Ω_{j,t} + H_i^T V_i^{-1} H_i
  ŷ_i(t+1) = Ω_{i,t+1}^{-1} ω_{i,t+1}
Prediction Step:
  Ω_{i,t+1} = (F Ω_{i,t+1}^{-1} F^T + W)^{-1}
  ω_{i,t+1} = Ω_{i,t+1} F ŷ_i(t+1)

Fig. 3: Root mean squared error (RMSE) of the estimated target position and velocity obtained from averaging 50 simulated runs of the distributed linear estimator in the target tracking scenario (Fig. 1). The error increases because as targets move away from the sensor network, the covariance of the measurement noise grows linearly with distance. The errors of node 1 (blue) are lower because it was always placed at the origin and thus close to the starting target positions.

Fig. 4: Root mean squared error (RMSE) of the location estimates obtained from averaging 50 simulated runs of the distributed localization algorithm with randomly generated graphs with 300 nodes (e.g. Fig. 2).
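To make Alg. 1 concrete, the following Python sketch implements one iteration at a single sensor i. The data structures (dictionaries keyed by node id) and the function name are choices made for this illustration; they are not prescribed by the paper.

```python
import numpy as np

def estimator_step(omega, Omega, z, H, V, neighbors, kappa, F, W):
    """One iteration of the distributed linear estimator (Alg. 1) at sensor i.

    omega, Omega : dicts mapping node id -> information vector / matrix
                   received from neighbors (i's own prior included).
    z            : measurement z_i(t) of sensor i.
    H, V         : observation matrix and noise covariance of sensor i.
    neighbors    : list of neighbor ids, including i itself.
    kappa        : dict of averaging weights summing to one over `neighbors`.
    F, W         : target dynamics matrix and process noise covariance from (1).
    """
    # Update step (eq. (6)): geometric averaging of Gaussian priors in
    # information form, followed by the Bayes update with the local measurement.
    V_inv = np.linalg.inv(V)
    omega_new = sum(kappa[j] * omega[j] for j in neighbors) + H.T @ V_inv @ z
    Omega_new = sum(kappa[j] * Omega[j] for j in neighbors) + H.T @ V_inv @ H

    # Point estimate (eq. (7)).
    y_hat = np.linalg.solve(Omega_new, omega_new)

    # Prediction step: propagate the information pair through the target dynamics.
    Omega_pred = np.linalg.inv(F @ np.linalg.inv(Omega_new) @ F.T + W)
    omega_pred = Omega_pred @ (F @ y_hat)
    return omega_pred, Omega_pred, y_hat
```

Each sensor would run this step after exchanging its current (ω, Ω) pair with its neighbors; in the static-target case of Thm. 1 (F = I, W = 0) the prediction step reduces to the identity.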
IV. LOCALIZATION FROM RELATIVE MEASUREMENTS

Target tracking via the distributed estimator in Alg. 1 requires knowledge of the true sensor locations. As mentioned earlier, this is typically not the case, especially when large sensor networks are deployed. This section describes a method for localization from relative measurements (3), whose strong convergence guarantees can be used to analyze the convergence of a joint localization and estimation procedure.

The relative measurements, received by all sensors at time t, can be written in matrix form as follows:

s(t) = (B ⊗ I_d)^T x + ε(t),

where B ∈ R^{n×m} is the incidence matrix of the communication graph G. All sensors agree to localize relative to node 1 and know that x_1 = 0. Let B̃ ∈ R^{(n−1)×m} be the incidence matrix with the row corresponding to sensor 1 removed. Further, define E := E[ε(t)ε(t)^T] = diag(E_1, . . . , E_m), where {E_k} is an enumeration of the noise covariances associated with the edges of G. Given t measurements, the least squares estimate of x leads to the classical Best Linear Unbiased Estimator (BLUE), given by:

x̂(t) := (B̃ E^{-1} B̃^T)^{-1} B̃ E^{-1} (1/t) ∑_{τ=0}^{t−1} s(τ),   (8)

where the inverse of B̃ E^{-1} B̃^T exists as long as the graph G is connected [13]. Among all linear estimators of x, BLUE has the smallest variance for the estimation error [24].

The computation in (8) can be distributed via a Jacobi algorithm for solving a linear system as follows. Each sensor maintains an estimate x̂_i(t) of its own state at time t and a history of the averaged measurements, σ_i(t) := (1/(t+1)) ∑_{τ=0}^{t} ∑_{j∈N_i} E_{ij}^{-1} s_{ij}(τ), received up to time t. Given prior estimates (x̂_i(t), σ_i(t)), the update of the distributed Jacobi algorithm at sensor i is:

x̂_i(t+1) = (∑_{j∈N_i} E_{ij}^{-1})^{-1} (∑_{j∈N_i} E_{ij}^{-1} x̂_j(t) − σ_i(t)),
σ_i(t+1) = (1/(t+1)) (t σ_i(t) + ∑_{j∈N_i} E_{ij}^{-1} s_{ij}(t)).   (9)

Barooah and Hespanha [13], [14] show that, with a single round of relative measurements, the Jacobi algorithm provides an unbiased estimate of x. Here, we incorporate repeated sequential measurements and prove a much stronger performance guarantee.
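For concreteness, here is a Python sketch of one round of the update (9) at sensor i; the dictionary-based interface and the function name are illustrative assumptions, and sensor 1 would simply keep x̂_1 = 0 as the reference.

```python
import numpy as np

def jacobi_localization_step(x_hat_nbrs, sigma_i, s_new, E_inv, t):
    """One round of the distributed Jacobi localization update (9) at sensor i.

    x_hat_nbrs : dict mapping neighbor id j -> current location estimate of j.
    sigma_i    : running average of sum_j E_ij^{-1} s_ij(tau) over past rounds.
    s_new      : dict mapping neighbor id j -> new relative measurement s_ij(t).
    E_inv      : dict mapping neighbor id j -> inverse covariance E_ij^{-1}.
    t          : number of measurement rounds already folded into sigma_i.
    """
    # Jacobi step (first line of (9)): combine the neighbors' current estimates
    # with the measurement average sigma_i(t).
    D_ii = sum(E_inv[j] for j in E_inv)   # generalized degree block of sensor i
    rhs = sum(E_inv[j] @ x_hat_nbrs[j] for j in x_hat_nbrs) - sigma_i
    x_hat_i = np.linalg.solve(D_ii, rhs)

    # Running-average update (second line of (9)) with the newest measurements.
    weighted_new = sum(E_inv[j] @ s_new[j] for j in s_new)
    sigma_i = (t * sigma_i + weighted_new) / (t + 1)
    return x_hat_i, sigma_i
```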
Theorem 2. Suppose that the communication graph G is connected. Then, the estimates x̂_i(t) of the sensor configurations in (9) are mean-square and strongly consistent estimators of the true sensor states, i.e.:

lim_{t→∞} E[‖x̂_i(t) − x_i‖_2^2] = 0,   P(lim_{t→∞} ‖x̂_i(t) − x_i‖_2 = 0) = 1,   ∀i.

The performance of our distributed localization algorithm was analyzed on randomly generated graphs with 300 nodes. The location priors were drawn from a normal distribution centered at the true node positions with a standard deviation of 5 meters. An instance of the localization task is illustrated in Fig. 2, while the estimation error is shown in Fig. 4.

V. JOINT LOCALIZATION AND ESTIMATION

Having derived separate estimators for the sensor locations and the target state, we are ready to return to the original problem of joint localization and estimation. At time t, the location estimates {x̂_i(t)} in (9) can be used in the target estimator (6), (7) instead of the true sensor positions. It is important to analyze the evolution of the coupled estimation procedure because it is not clear that the convergence result in Thm. 1 will continue to hold.

Define the sensor information matrix M_i(x) := H_i(x)^T V_i(x)^{-1} H_i(x). In analogy with the centralized Kalman filter, the sensor information matrix captures the amount of information added to the inverse of the covariance matrix during an update step of the Riccati map. From this point of view, it is natural to describe sensor properties in terms of the sensor information matrix. A regularity assumption which stipulates that nearby sensing locations provide similar information gain is necessary.

Fig. 5: Methane emission monitoring via a sensor network. The true (unknown) sensor locations (red dots), the sensing range (red circle), and a typical realization of the methane field are shown on the left. The root mean squared error (RMSE) of the location estimates and of the field estimates obtained from averaging 50 simulated runs of the joint localization and estimation algorithm with continuous sensor observation models are shown in the two middle plots. In an additional experiment, the sensors were placed on the boundaries of the cells of the discretized field. As the observation model for each sensor was defined in terms of the proximal environment cells, this made the model discontinuous. The rightmost plot illustrates that the field estimation error does not vanish when discontinuities are present.

Assumption (Observation Model Continuity). The sensor information matrices M_i(x) are bounded³ continuous functions of x for all i.

The following theorem ensures that the target state estimator retains its convergence properties when used jointly with the distributed localization procedure.

Theorem 3. Let {x̂_i(t)} be strongly consistent estimators of the sensor configurations, i.e. x̂_i(t) → x_i a.s., ∀i. Suppose that the communication graph G is connected and the matrix [H_1(x_1)^T . . . H_n(x_n)^T]^T has rank d_y. Let δ > 0 be arbitrary.
If each sensor i updates its target estimate ( ω i,t , Ω i,t ) as follows: ω i,t +1 = ∑ j ∈N i ∪{ i } κ ij ω j,t + ̂ H T i,t ̂ V − 1 i,t z i ( t ) , Ω i,t +1 = ∑ j ∈N i ∪{ i } κ ij Ω j,t + ̂ H T i,t ̂ V − 1 it ̂ H i,t , ˆ y i ( t + 1) = ( Ω i,t +1 + ( t + 1) δI d ) − 1 ω i,t +1 , (10) where ̂ H i,t := H i (ˆ x i ( t )) and ̂ V i,t := V i (ˆ x i ( t )) , then the asymptotic mean-square error of target estimates is O ( δ 2 ) : lim t →∞ E [ ‖ ˆ y i ( t ) − y ‖ 2 2 ] = δ 2 y T ( n ∑ j =1 π j M j ( x j ) + δI ) − 2 y, for all i , where y is the true target state and x j is the true position of sensor j . The combined procedure specified by (9) and (10) provides a mean-square consistent way to estimate the sensor locations and the target state jointly. The performance of the joint algo- rithm was evaluated on the methane concentration estimation problem and the results are summarized in Fig. 5. VI. C ONCLUSION This paper studied the problem of joint target tracking and node localization in sensor networks. A distributed linear estimator for tracking dynamic targets was derived. It was 3 There exists a constant q such that ‖ M i ( x ) ‖ ≤ q < ∞ for all i and x . proven that the filter is mean-square consistent when esti- mating static states. Next, a distributed Jacobi algorithm was proposed for localization and its mean-square and almost sure consistency were shown. Finally, the combined localization and target estimation procedure was shown to have arbirarily small asymptotic estimation error. Future work will focus on strengthening the result in Thm. 3 to mean-square consistency and relaxing the assumption of a strongly consistent localization procedure. Studying the rela- tionship between our distributed linear estimator, the Kalman- Consensus filter [8], and the filter proposed by Khan et al. [9] is of interest as well. A PPENDIX A: P ROOF OF T HEOREM 1 Define the following: ω t := [ ω T 1 t . . . ω T nt ] T Ω t := [ Ω T 1 t . . . Ω T nt ] T M i := H i ( x i ) T V − 1 i ( x i ) H i ( x i ) M := [ M T 1 . . . M T n ] T ζ ( t ) := [ H 1 V − T 1 v 1 ( t ) T . . . H n V − T n v n ( t ) T ] T . The update equations of the filter (6) in matrix form are: ω t +1 = ( K ⊗ I d y ) ω t + M y + ζ ( t ) , Ω t +1 = ( K ⊗ I d y ) Ω t + M, (11) where K = [ κ ij ] with κ ij = 0 if j / ∈ N i ∪ { i } is a stochastic matrix. The solutions of the linear systems are: ω t = ( K ⊗ I d y ) t ω 0 + t − 1 ∑ τ =0 ( K ⊗ I d y ) t − 1 − τ ( M y + ζ ( τ ) ) , Ω t = ( K ⊗ I d y ) t Ω 0 + t − 1 ∑ τ =0 ( K ⊗ I d y ) t − 1 − τ M. Looking at the i -th components again, we have: ω it t + 1 := 1 t + 1 n ∑ j =1 [ K t ] ij ω j 0 + 1 t + 1 t − 1 ∑ τ =0 n ∑ j =1 [ K t − τ − 1 ] ij ( M j y + H T j V − 1 j v j ( τ )) , Ω it t + 1 := 1 t + 1 n ∑ j =1 [ K t ] ij Ω j 0 + 1 t + 1 t − 1 ∑ τ =0 n ∑ j =1 [ K t − τ − 1 ] ij M j . Define the following to simplify the notation: g it := 1 t +1 ∑ n j =1 [ K t ] ij ω j 0 , G it := 1 t +1 ∑ n j =1 [ K t ] ij Ω j 0 , φ it := 1 t +1 ∑ t − 1 τ =0 ∑ n j =1 [ K t − τ − 1 ] ij H T j V − 1 j v j ( τ ) , C it := 1 t +1 ∑ t − 1 τ =0 ∑ n j =1 [ K t − τ − 1 ] ij M j , b it := g it − G it y, B it := 1 t +1 Ω it . (12) With the shorthand notation: ω it t + 1 = g it + φ it + C it y, B it = Ω it t + 1 = G it + C it , (13) where φ it is the only random quantity. 
Its mean is zero because the measurement noise has zero mean, while its covariance is: E [ φ it φ T it ] = 1 ( t + 1) 2 E [( t − 1 ∑ τ =0 n ∑ j =1 [ K t − τ − 1 ] ij H T j V − 1 j v j ( τ ) ) × ( t − 1 ∑ s =0 n ∑ η =1 [ K t − s − 1 ] iη H T η V − 1 η v η ( s ) ) T ] = 1 ( t + 1) 2 n ∑ j =1 t − 1 ∑ τ =0 [ K t − τ − 1 ] 2 ij H T j V − 1 j E [ v j ( τ ) v j ( τ ) T ] V − 1 j H j = 1 ( t + 1) 2 n ∑ j =1 t − 1 ∑ τ =0 [ K t − τ − 1 ] 2 ij M j  1 t + 1 C it , (14) where the second equality uses the fact that v j ( τ ) and v η ( s ) are independent unless the indices coincide, i.e. E v j ( τ ) v η ( s ) T = δ τ s δ jη V j . The L ̈ owner ordering inequality in the last step uses that 0 ≤ [ K t − τ − 1 ] ij ≤ 1 and M j  0 . Since G is connected, K corresponds to the transition matrix of an aperiodic irreducible Markov chain with a unique stationary distribution π so that K t → π 1 T with π j > 0 . This implies that, as t → ∞ , the numerators of g it and G it remain bounded and therefore g it → 0 and G it → 0 . Since Ces ́ aro means preserve convergent sequences and their limits: 1 t + 1 t − 1 ∑ τ =0 [ K t − τ − 1 ] ij → π j , ∀ i, which implies that C it → ∑ n j =1 π j M j . The full-rank as- sumption on [ H T 1 . . . H T n ] T and π j > 0 guarantee that ∑ n j =1 π j M j is positive definite. Finally, consider the mean squared error: E [ (ˆ y i ( t ) − y ) T (ˆ y i ( t ) − y ) ] = E ∥ ∥ ∥ ∥ ( Ω it t + 1 ) − 1 ω it t + 1 − ( Ω it t + 1 ) − 1 ( Ω it t + 1 ) y ∥ ∥ ∥ ∥ 2 2 = E ∥ ∥ B − 1 it ( g it + C it y + φ it − ( G it + C it ) y )∥ ∥ 2 2 = E ‖ B − 1 it ( b it + φ it ) ‖ 2 2 = E [ b T it B − T it B − 1 it b it + 2 b T it B − T it B − 1 it φ it + φ T it B − T it B − 1 it φ it ] ( a ) = = = b T it B − T it B − 1 it b it + tr( B − 1 it E [ φ it φ T it ] B T it ) ( b ) ≤ b T it B − T it B − 1 it b it + 1 t + 1 tr( B − 1 it C it B − T it ) → 0 , where ( a ) holds because the first term is deterministic, while the cross term contains E [ φ it ] = 0 . Inequality ( b ) follows from (14). In the final step, as shown before B − 1 it → (∑ n j =1 π j M j ) − 1 and C it → ∑ n j =1 π j M j  0 remain bounded, while b it → 0 and 1 / ( t + 1) → 0 . A PPENDIX B: P ROOF OF T HEOREM 2 Define the generalized (matrix-weighted) degree matrix D ∈ R nd × nd of the graph G as a block-diagonal matrix with D ii := ∑ j ∈N i E − 1 ij . Since E ij  0 for all { i, j } ∈ E , the generalized degree matrix is positive definite, D  0 . Define also the generalized adjacency matrix A ∈ R nd × nd as follows: A ij := { E − 1 ij if { i, j } ∈ E, 0 else . The generalized Laplacian and the generalized signless Lapla- cian of G are defined as L := D − A and | L | := D + A , respectively. Further, let R := ( B ⊗ I d ) T ∈ R md × nd and define the block-diagonal matrix E ∈ R md × md with blocks E ij for { i, j } ∈ E . It is straightforward to verify that L = R T E − 1 R  0 and | L | = ( | B | ⊗ I d ) E − 1 ( | B | ⊗ I d ) T  0 , where | B | ∈ R n × m is the signless incidence matrix of G . Let ̃ B ∈ R ( n − 1) × m and ̃ R ∈ R md × ( n − 1) d be the matrices resulting after removing the row corresponding to sensor 1 from B . Similarly, let ̃ D, ̃ A, ̃ L, | ̃ L | ∈ R ( n − 1) d × ( n − 1) d denote the generalized degree, adjacency, Laplacian, and signless Laplacian matrices with the row and column corresponding to sensor 1 removed. Thm. 2.2.1 in [14] shows that ̃ L  0 provided that G is connected. The same approach can be used to show that | ̃ L |  0 . Let ̃ x ∈ R ( n − 1) d be the locations of sensors 2 , . . . 
, n in the reference frame of sensor 1 and ˆ x ( t ) ∈ R ( n − 1) d be their estimates at time t obtained from (9). The update in (9) can be written in matrix form as follows: ̃ D ˆ x ( t + 1) = ̃ A ˆ x ( t ) + ̃ R T E − 1 ( ̃ R ̃ x + 1 t + 1 t ∑ τ =0  ( τ ) ) . (15) Define the estimation error at time t as e ( t ) := ̃ x − ˆ x ( t ) and let u ( t ) := 1 t +1 ∑ t τ =0  ( τ ) . The dynamics of the error state can be obtained from (15): e ( t + 1) = ̃ x − ̃ D − 1 ̃ A ˆ x ( t ) − ̃ D − 1 ̃ L ̃ x − ̃ D − 1 ̃ R T E − 1 u ( t ) = ̃ x − ̃ D − 1 ̃ A ˆ x ( t ) − ̃ D − 1 ( ̃ D − ̃ A ) ̃ x − ̃ D − 1 ̃ R T E − 1 u ( t ) = ̃ D − 1 ̃ Ae ( t ) − ̃ D − 1 ̃ R T E − 1 u ( t ) . The error dynamics are governed by a stochastic linear time- invariant system, whose internal stability depends on the eigenvalues of ̃ D − 1 ̃ A . To show that the error dynamics are stable, we resort to the following lemma. Lemma 3 ([25, Lemma 4.2]) . Let L = D − A ∈ C n × n be such that D + D ∗  0 and L θ = D + D ∗ − ( e iθ A + e − iθ A ∗ )  0 for all θ ∈ R . Then ρ ( D − 1 A ) < 1 . Consider ̃ L θ := 2( ̃ D − cos( θ ) ̃ A ) . If cos θ = 0 , then ̃ L θ = 2 ̃ D  0 . If cos θ ∈ (0 , 1] , then ̃ L θ  2 cos θ ̃ L  0 . Finally, if cos θ ∈ [ − 1 , 0) , then ̃ L θ  2 | cos θ || ̃ L |  0 . Thus, we can conclude that ρ ( ̃ D − 1 ̃ A ) < 1 . The proof of the theorem is completed by the following lemma with F := ̃ D − 1 ̃ A and G := − ̃ D − 1 ̃ R T E − 1 . Lemma 4. Consider the discrete-time stochastic linear time- invariant system: e ( t + 1) = F e ( t ) + G 1 t +1 ∑ t τ =0  ( τ ) (16) driven by Gaussian noise  ( τ ) ∼ N (0 , E ) , which is indepen- dent at any pair of times. If the spectral radius of F satisfies ρ ( F ) < 1 , then e ( t ) a.s.,L 2 − − − − → 0 . Proof. By the strong law of large numbers [26, Thm.2.4.1], u ( t ) := 1 t +1 ∑ t τ =0  ( τ ) converges to 0 almost surely. Let Ω be the set with measure 1 on which u ( t ) converges so that for any γ > 0 , ∃ T ∈ N such that ∀ t ≥ T , ‖ u ( t ) ‖ ≤ γ . For realizations in Ω , the solution to (16) with initial time T is: e ( t ) = F t − T e ( T ) + ∑ t − 1 τ = T F t − τ − 1 G u ( τ ) . Then, ‖ e ( t ) ‖ ≤ ‖ F t − T e ( T ) ‖ + t − 1 ∑ τ = T ∥ ∥ F t − τ − 1 ∥ ∥ ‖ G ‖ γ . Taking the limit of t and using that F is stable, we have lim t →∞ ‖ e ( t ) ‖ ≤ ( ∞ ∑ τ =0 ∥ ∥ F τ ∥ ∥ ) ‖ G ‖ γ. Since ρ ( F ) < 1 , the system is internally (uniformly) exponen- tially stable, which is equivalent to ∑ ∞ τ =0 ‖ F τ ‖ ≤ β for some finite constant β [27, Ch.22]. Thus, lim t →∞ ‖ e ( t ) ‖ ≤ β ‖ G ‖ γ , which can be made arbitrarily small by choice of γ . We conclude that e ( t ) → 0 on Ω and consequently e ( t ) a.s. − − → 0 . Next, we show convergence in L 2 . First, consider the propagation of the cross term C ( t ) := ( t + 1) E e ( t ) u ( t ) T . Note that E u ( t ) = 0 and E u ( t ) u ( t ) T = E t +1 . Using the fact that  ( t + 1) is independent of e ( t ) and u ( t ) we have C ( t + 1) = E ( F e ( t ) + G u ( t ) )( ( t + 1) u ( t ) +  ( t + 1) ) T = F C ( t ) + ( t + 1) G E u ( t ) u ( t ) T = F C ( t ) + G E . The solution of the above linear time-invariant system is: C ( t ) = F t C (0) + ∑ t − 1 τ =0 F t − τ − 1 G E and since F is stable: lim t →∞ E e ( t ) u ( t ) T = lim t →∞ 1 t + 1 t − 1 ∑ τ =0 F τ G E = 0 . Now, consider the second moment of the error: Σ( t + 1) := E e ( t + 1) e ( t + 1) T = F Σ( t ) F T + F ( E e ( t ) u ( t ) T ) G T + G ( E u ( t ) e ( t ) T ) F T + 1 t +1 G E G T = F Σ( t ) F T + Q ( t ) , where Q ( t ) := 1 t + 1 ( F C ( t ) G T + G C ( t ) T F T + G E G T ) . 
As shown above Q ( t ) → 0 as t → ∞ , i.e. for any δ > 0 , ∃ T ′ ∈ N such that ∀ t ≥ T ′ , ‖ Q ( t ) ‖ ≤ δ . With initial time T ′ , Σ( t ) = F t − T ′ Σ( T ′ )( F T ) t − T ′ + t − 1 ∑ τ = T ′ F t − τ − 1 Q ( τ )( F T ) t − τ − 1 for t ≥ T ′ . Then: ‖ Σ( t ) ‖ ≤ ∥ ∥ F t − T ′ ∥ ∥ 2 ‖ Σ( T ′ ) ‖ + t − T ′ − 1 ∑ τ =0 ‖ F τ ‖ 2 δ ≤ α 2 μ 2( t − T ′ ) + δα 2 t − T ′ − 1 ∑ τ =0 μ 2 τ , where the existence of the constants α > 0 and 0 ≤ μ < 1 is guaranteed by the stability of F . We conclude that lim t →∞ ‖ Σ( t ) ‖ ≤ δα 2 1 − μ 2 , which can be made arbitrarily small by choice of δ . In other words, e ( t ) L 2 − − → 0 . A PPENDIX C: P ROOF OF T HEOREM 3 We use the same notation and follow the same steps as in the proof of Thm. 1, except that now the terms H i , V i , M i , M, ζ ( t ) , φ it , C it , B it are time-varying and stochastic because they depend on the location estimates ˆ x i ( t ) . To emphasize this, we denote them by ̂ H it , ̂ V it , ̂ M it , ̂ M t , ̂ ζ ( t ) , ̂ φ it , ̂ C it , ̂ B it , where for example ̂ M it := M i (ˆ x i ( t )) . The same linear systems (11) describe the evolutions of ω t and Ω t except that they are stochastic now and (13) becomes: ω it t + 1 = g it + ̂ C it y + ̂ φ it , ̂ B it := Ω it t + 1 = G it + ̂ C it . We still have that K t → π 1 T with π j > 0 . Also, g it , G it , and b it are still deterministic and converge to zero as t → ∞ . The following observations are necessary to conclude that ̂ C it still converges to ∑ n j =1 π j M j . Lemma 5. If ˆ x i ( t ) a.s. − − → x i , then ̂ M it a.s.,L 2 − − − − → M i . Proof. Almost sure convergence follows from the continuity of M i ( · ) and the continuous mapping theorem [26, Thm.3.2.4]. L 2 -convergence follows from the boundedness of M i ( · ) and the dominated convergence theorem [26, Thm.1.6.7]. Lemma 6. If a t → a and b t → b , then 1 t ∑ t − 1 τ =0 a t − τ b τ → ab . Proof. The convergence of a t implies its boundedness, | a t | ≤ q < ∞ . Then, notice ab = 1 t ∑ t − 1 τ =0 ab and ∣ ∣ ∣ ∣ 1 t t − 1 ∑ τ =0 a t − τ b τ − ab ∣ ∣ ∣ ∣ = ∣ ∣ ∣ ∣ 1 t t − 1 ∑ τ =0 ( a t − τ ( b τ − b ) + ( a t − τ − a ) b )∣ ∣ ∣ ∣ ≤ ∣ ∣ ∣ ∣ 1 t t − 1 ∑ τ =0 a t − τ ( b τ − b ) ∣ ∣ ∣ ∣ + ∣ ∣ ∣ ∣ 1 t t − 1 ∑ τ =0 ( a t − τ − a ) b ∣ ∣ ∣ ∣ ≤ ∣ ∣ ∣ ∣ q ( 1 t t − 1 ∑ τ =0 b τ − b )∣ ∣ ∣ ∣ + ∣ ∣ ∣ ∣ ( 1 t t ∑ τ =1 a τ − a ) b ∣ ∣ ∣ ∣ , where both terms converge to zero since Ces ́ aro means pre- serve convergent sequences and their limits. Combining Lemma 5, [ K t ] ij → π j , and Lemma 6, we have: 1 t + 1 t − 1 ∑ τ =0 [ K t − τ − 1 ] ij ̂ M jτ a.s. − − → [ π 1 T ] ij M j = π j M j . Moreover, 0 ≤ [ K t ] ij ≤ 1 and the boundedness of ̂ M jt imply, by the bounded convergence theorem [26, Thm.1.6.7], that the sequence above converges in L 2 as well: ̂ C it a.s.,L 2 − − − − → ∑ n j =1 π j M j  0 . (17) In turn, (17) guarantees that: ̂ B − 2 it = ( G it + ̂ C it ) − 2 a.s. − − → (∑ n j =1 π j M j ) − 2 (18) but is not enough to ensure that E [ ̂ B − 2 it ] remains bounded as t → ∞ . The parameter δ > 0 is needed to guarantee the boundedness. In particular, define ̂ B it ( δ ) := ̂ B it + δI d y . Then ̂ B it ( δ ) − 2 = ( G it + ̂ C it + δI d y ) − 2 ≺ 1 δ 2 I d y and by the bounded convergence theorem and (18): ̂ B it ( δ ) − 2 a.s.,L 1 − − − − → (∑ n j =1 π j M j + δI d y ) − 2 , (19) so that lim t →∞ E [ ̂ B it ( δ ) − 2 ] < ∞ . 
From (17) and the bound- edness of ̂ B it ( δ ) − 1 and ̂ C it , we also have: ̂ B it ( δ ) − 1 ̂ C it ̂ B it ( δ ) − T a.s.,L 2 − − − − → (20) ( n ∑ j =1 π j M j + δI d y ) − 1 ( n ∑ j =1 π j M j )( n ∑ j =1 π j M j + δI d y ) − T . Since ̂ H it and ̂ V it depend solely on ˆ x i ( t ) , they are independent of v i ( t ) . Because E [ v j ( τ )] = 0 , E [ ̂ H T jτ ̂ V − 1 jτ v j ( τ )] = 0 and as before E [ ̂ φ it ] = 0 . Since ̂ B it ( δ ) is independent of v i ( t ) as well, E [ ̂ B it ( δ ) − 2 ̂ φ it ] = 0 and a result equivalent to (14) holds: E [ ̂ B it ( δ ) − 1 ̂ φ it ̂ φ T it ̂ B it ( δ ) − T ] = E [ ̂ B it ( δ ) − 1 ( 1 ( t + 1) 2 n ∑ j =1 t − 1 ∑ τ =0 [ K t − τ − 1 ] 2 ij ̂ M jτ ) ̂ B it ( δ ) − T ]  1 t + 1 E [ ̂ B it ( δ ) − 1 ̂ C it ̂ B it ( δ ) − T ] . (21) Finally, we can consider the mean squared error: E [ ‖ ˆ y i ( t ) − y ‖ 2 2 ] = E ∥ ∥ ∥ ∥ ̂ B it ( δ ) − 1 ω it t + 1 − ̂ B it ( δ ) − 1 ̂ B it ( δ ) y ∥ ∥ ∥ ∥ 2 2 = E ∥ ∥ ∥ ∥ ̂ B it ( δ ) − 1 ( g it + ̂ C it y + ̂ φ it − ( G it + ̂ C it + δI d y ) y )∥ ∥ ∥ ∥ 2 2 = E ‖ ̂ B it ( δ ) − 1 ( b it + ̂ φ it + δy ) ‖ 2 2 = E [ b T it ̂ B it ( δ ) − 2 b it + ̂ φ T it ̂ B it ( δ ) − 2 ̂ φ it + δ 2 y T ̂ B it ( δ ) − 2 y + 2 b T it ̂ B it ( δ ) − 2 ̂ φ it + 2 δy T ̂ B it ( δ ) − 2 ̂ φ it + 2 δb T it ̂ B it ( δ ) − 2 y ] = b T it E [ ̂ B it ( δ ) − 2 ] b it + tr( E [ ̂ B it ( δ ) − 1 ̂ φ it ̂ φ T it ̂ B it ( δ ) − T ] ) + δ 2 y T E [ ̂ B it ( δ ) − 2 ] y + 2 δb T it E [ ̂ B it ( δ ) − 2 ] y (21) ≤ b T it E ̂ B it ( δ ) − 2 b it + 2 δb T it E ̂ B it ( δ ) − 2 y + δ 2 y T E ̂ B it ( δ ) − 2 y + 1 t + 1 tr ( E [ ̂ B it ( δ ) − 1 ̂ C it ̂ B it ( δ ) − T ]) → δ 2 y T ( n ∑ j =1 π j M j + δI d y ) − 2 y. In the final step, the first two terms go to zero because b it → 0 and lim t E ̂ B it ( δ ) − 2 < ∞ from (19), the third term converges in view of (19) again, while the last term goes to zero because the trace is bounded in the limit in view of (20). R EFERENCES [1] A. Mainwaring, D. Culler, J. Polastre, R. Szewczyk, and J. Anderson, “Wireless Sensor Networks for Habitat Monitoring,” in Int. Workshop on Wireless Sensor Networks and Applications , 2002. [2] N. Leonard, D. Paley, F. Lekien, R. Sepulchre, D. Fratantoni, and R. Davis, “Collective Motion, Sensor Networks, and Ocean Sampling,” Proceedings of the IEEE , vol. 95, no. 1, 2007. [3] T. Yan, T. He, and J. Stankovic, “Differentiated Surveillance for Sensor Networks,” in Int. Conf. on Embedded Networked Sensor Systems , 2003. [4] J. Lu and T. Suda, “Differentiated Surveillance for Static and Random Mobile Sensor Networks,” IEEE Trans. on Wireless Communications , vol. 7, no. 11, 2008. [5] A. Jadbabaie, P. Molavi, A. Sandroni, and A. Tahbaz-Salehi, “Non- Bayesian Social Learning,” Games and Economic Behavior , vol. 76, no. 1, 2012. [6] K. Rahnama Rad and A. Tahbaz-Salehi, “Distributed Parameter Estima- tion in Networks,” in Conf. on Decision and Control (CDC) , 2010. [7] R. Olfati-Saber, “Distributed Kalman Filtering for Sensor Networks,” in Conf. on Decision and Control (CDC) , 2007. [8] ——, “Kalman-Consensus Filter: Optimality, Stability, and Perfor- mance,” in Joint Conf. on Decision and Control (CDC) and Chinese Control Conference , 2009. [9] U. Khan, S. Kar, A. Jadbabaie, and J. Moura, “On connectivity, observ- ability, and stability in distributed estimation,” in Conf. on Decision and Control (CDC) , 2010. [10] S. Shahrampour, A. Rakhlin, and A. 
Jadbabaie, “Online Learning of Dynamic Parameters in Social Networks,” in Advances in Neural Information Processing Systems (NIPS) , 2013. [11] S. Kar, J. Moura, and K. Ramanan, “Distributed Parameter Estimation in Sensor Networks: Nonlinear Observation Models and Imperfect Communication,” Trans. on Information Theory , vol. 58, no. 6, 2012. [12] J. Cort ́ es, “Distributed Kriged Kalman filter for spatial estimation,” Trans. on Automatic Control (TAC) , vol. 54, no. 12, 2009. [13] P. Barooah and J. Hespanha, “Estimation on Graphs from Relative Measurements,” IEEE Control Systems , vol. 27, no. 4, 2007. [14] P. Barooah, “Estimation and Control with Relative Measurements: Al- gorithms and Scaling Laws,” Ph.D. dissertation, University of California Santa Barbara, 2007. [15] J. Aspnes, T. Eren, D. Goldenberg, A. Morse, W. Whiteley, Y. Yang, B. Anderson, and P. Belhumeur, “A Theory of Network Localization,” IEEE Trans. on Mobile Computing , vol. 5, no. 12, 2006. [16] D. Moore, J. Leonard, D. Rus, and S. Teller, “Robust Distributed Network Localization with Noisy Range Measurements,” in Int. Conf. on Embedded Networked Sensor Systems , 2004. [17] A. So and Y. Ye, “Theory of semidefinite programming for Sensor Network Localization,” Mathematical Programming , vol. 109, no. 2-3, 2007. [18] N. Priyantha, H. Balakrishnan, E. Demaine, and S. Teller, “Anchor-Free Distributed Localization in Sensor Networks,” in Int. Conf. on Embedded Networked Sensor Systems , 2003. [19] U. Khan, S. Kar, and J. Moura, “Distributed Sensor Localization in Random Environments Using Minimal Number of Anchor Nodes,” Trans. on Signal Processing , vol. 57, no. 5, 2009. [20] Y. Diao, Z. Lin, M. Fu, and H. Zhang, “A New Distributed Localization Method for Sensor Networks,” in Asian Control Conf. (ASCC) , 2013. [21] R. Tron and R. Vidal, “Distributed Image-Based 3-D Localization in Camera Sensor Networks,” in Joint Conf. on Decision and Control (CDC) and Chinese Control Conference , 2009. [22] V. Hernandez Bennetts, A. Lilienthal, A. Khaliq, V. Pomareda Sese, and M. Trincavelli, “Towards Real-World Gas Distribution Mapping and Leak Localization Using a Mobile Robot with 3D and Remote Gas Sensing Capabilities,” in Int. Conf. on Robotics and Automation , 2013. [23] A. Barker, D. Brown, and W. Martin, “Bayesian estimation and the Kalman filter,” Computers and Mathematics with Applications , vol. 30, no. 10, 1995. [24] J. Mendel, Lessons in Estimation Theory for Signal Processing, Com- munications, and Control . Prentice-Hall, 1995. [25] L. Elsner and V. Mehrmann, “Convergence of block iterative methods for linear systems arising in the numerical solution of Euler equations,” Numerische Mathematik , vol. 59, no. 1, 1991. [26] R. Durrett, Probability: Theory and Examples , 4th ed. Cambridge University Press, 2010. [27] W. Rugh, Linear System Theory , 2nd ed. Prentice-Hall, 1996.