On the Covariance of ICP-based Scan-matching Techniques

Silvère Bonnabel, Martin Barczyk and François Goulette

Abstract — This paper considers the problem of estimating the covariance of roto-translations computed by the Iterative Closest Point (ICP) algorithm. The problem is relevant for localization of mobile robots and vehicles equipped with depth-sensing cameras (e.g., Kinect) or Lidar (e.g., Velodyne). The closed-form formulas for covariance proposed in previous literature generally build upon the fact that the solution to ICP is obtained by minimizing a linear least-squares problem. In this paper, we show this approach needs caution because the rematching step of the algorithm is not explicitly accounted for, and applying it to the point-to-point version of ICP leads to completely erroneous covariances. We then provide a formal mathematical proof why the approach is valid in the point-to-plane version of ICP, which validates the intuition and experimental results of practitioners.

I. INTRODUCTION

This paper considers the covariance of relative roto-translations obtained by applying the well-known Iterative Closest-Point (ICP) algorithm [1], [2] to pairs of successive point clouds captured by a scanning sensor moving through a structured environment. This so-called scan matching [3], [4], [5] is used in mobile robotics, and more generally autonomous navigation, to incrementally compute the global pose of the vehicle. The resulting estimates are typically fused with other measurements, such as odometry, visual landmark detection, and/or GPS. In order to apply probabilistic filtering and sensor fusion techniques such as the extended Kalman filter (EKF), e.g. [6], [7], EKF variants [8], [9], particle filtering methods, or optimization-based smoothing techniques to find a maximum likelihood estimate as in Graph SLAM [10], the probability distribution of the error associated to each sensor is required.
Since these errors are typically assumed to be zero-mean and normally distributed, only a covariance matrix is needed. Contrarily to conventional localization sensors, the covariance of relative roto-translation estimates will not only depend on sensor noise characteristics, but also on the geometry of the environment. Indeed, when using ICP for scan matching, several sources of errors come into play:

1) the presence of geometry in one scan not observed in the subsequent one(s), that is, lack of overlap;
2) mismatching of points, that is, if scans start far from each other the ICP may fall into a local (not global) minimum, yielding an erroneous roto-translation estimate;
3) even if 1) and 2) do not occur, the computed estimate will still possess uncertainty due to sensor noise and possibly underconstrained environments, such as a long featureless corridor.

Silvère Bonnabel and François Goulette are with MINES ParisTech, PSL Research University, Centre de robotique, 60 Bd St Michel 75006 Paris, France, {firstname.lastname}@mines-paristech.fr. Martin Barczyk is with the Department of Mechanical Engineering, University of Alberta, Edmonton AB, T6G 1H9, Canada, martin.barczyk@ualberta.ca.

In practice, the first problem can be addressed by rejecting point pairs with excessive distance metrics or located close to the scanning boundaries [11], and the second by using dead reckoning estimates to pre-align scans or employing a sufficiently fast sampling rate. We will thus focus on the third source of error.

The covariance of estimates obtained from a scan matching algorithm (such as ICP) can be obtained as follows [12], [13]. The estimated transformation x̂ output by the algorithm is defined as a local argmin of a cost function J(x, z) of the transformation x and the data z (scanned point clouds). As a result we always have ∂J/∂x (x̂, z) = 0. By the implicit function theorem, x̂ is a function of the data z around this minimum.
Because of the above identity, a small variation δz in the data will imply a small variation δx in the estimate through

∂²J/∂x² δx + ∂²J/∂z∂x δz = 0,   so that   δx = −(∂²J/∂x²)⁻¹ (∂²J/∂z∂x) δz.

As a result, if δz denotes the (random) discrepancy in the measurement due to sensor noise, the corresponding variability in the estimates δx = x̂ − x gives E(δx δxᵀ) = cov(x̂) as

cov(x̂) := (∂²J/∂x²)⁻¹ (∂²J/∂z∂x) cov(z) (∂²J/∂z∂x)ᵀ (∂²J/∂x²)⁻¹    (1)

Our goal is to point out the potential lack of validity of this formula for ICP covariance computation, but also to characterize situations where it can safely be used. Specifically, the problem with (1) is that it relies on δx = −(∂²J/∂x²)⁻¹ (∂²J/∂z∂x) δz, which is based on the local implicit function theorem, and which only holds for infinitesimal variations δx, δz. In the case of ICP, infinitesimal means sub-pixel displacements. Indeed, when matching scans, the rematching step performed by the ICP makes the cost function far from smooth, so that the Taylor expansion

∂J/∂x (x̂ + Δx, z) = ∂J/∂x (x̂, z) + ∂²J/∂x² (x̂, z) Δx + O(Δx²)    (2)

which is true in the limit Δx → 0, may turn out to be completely wrong for displacements Δx larger than only a few pixels. An example of this will be given in Section II-A.2. On the other hand, if the registration errors are projected onto a reference surface as in point-to-plane ICP [2], Equation (1) will provide valid results. This will be formally proven in Section III.

Our paper is an extension and rigorous justification of the results of [13] and [14]. Our main contributions are to point out the potential shortcomings of a blind application of (1) to point-to-point ICP in Section II, and then to provide a formal mathematical proof based on geometry arguments in Section III for the validity of (1) for point-to-plane ICP. Finally the results are illustrated on a simple 3D example in Section IV.

[arXiv:1410.7632v3 [cs.CV] 16 Mar 2016]
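To make formula (1) concrete, the following sketch (ours, not from the paper; all names are illustrative) evaluates it by finite differences for a smooth toy cost J(x, z) = Σᵢ(zᵢ − x)², whose argmin is the sample mean of the data, so the predicted covariance should be σ²/N:

```python
import numpy as np

# Illustrative sketch (not from the paper): evaluate the closed-form
# covariance (1) for the smooth toy cost J(x, z) = sum_i (z_i - x)^2,
# where the scalar x estimates the mean of the data vector z.

def J(x, z):
    return np.sum((z - x) ** 2)

def cov_from_formula(x_hat, z, cov_z, eps=1e-5):
    # Finite-difference Hessian d^2J/dx^2 (x is scalar here).
    d2J_dx2 = (J(x_hat + eps, z) - 2 * J(x_hat, z) + J(x_hat - eps, z)) / eps**2
    # Mixed derivatives d^2J/(dz_i dx), one entry per data point.
    d2J_dzdx = np.array([
        (J(x_hat + eps, z + eps * e) - J(x_hat - eps, z + eps * e)
         - J(x_hat + eps, z - eps * e) + J(x_hat - eps, z - eps * e)) / (4 * eps**2)
        for e in np.eye(len(z))])
    Hinv = 1.0 / d2J_dx2
    return Hinv * (d2J_dzdx @ cov_z @ d2J_dzdx) * Hinv  # Eq. (1)

z = np.array([1.0, 2.0, 3.0, 4.0])
x_hat = z.mean()                  # argmin of J for this toy cost
sigma2 = 0.5 ** 2                 # assumed sensor noise variance
cov = cov_from_formula(x_hat, z, sigma2 * np.eye(len(z)))
print(cov)                        # close to sigma2 / N = 0.0625
```

For scan matching, x would be the 6-DOF transformation and z the stacked point coordinates; the point of this paper is precisely that such a computation is only trustworthy when the cost stays smooth under rematching.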
II. MATHEMATICAL FRAMEWORK

Consider using ICP for scan matching (either in 2D or 3D). We seek to find the transformation between two clouds of points {p_k}, 1 ≤ k ≤ N, and {q_i}, 1 ≤ i ≤ M. This transformation is a roto-translation X = (R, T) whose action on a vector p is Xp := Rp + T. Let π(k, X) denote the label j of the point q_j in the second cloud which is the closest to Xp_k. The basic point-to-point ICP [1] consists of the following steps:

1) initialize X_old = (I, 0);
2) choose a set of N indices k and define π(k, X_old) such that q_{π(k,X_old)} is the point in the second cloud which is the closest to X_old p_k;
3) find X_new as the argmin of Σ_k ‖X_new p_k − q_{π(k,X_old)}‖²;
4) if X_new has converged to X_old then quit, else set X_old = X_new and go to 2).

We see that the goal pursued by the ICP is to minimize the function

J({p_i}, {q_i}; σ, X) := Σ_k ‖Xp_k − q_{σ(k)}‖²    (3)

over the roto-translation X and the matching σ : {1, ···, N} → {1, ···, M}. As a result, ICP acts as a coordinate descent which alternately updates X as the argmin of J for a fixed matching, and the matching as the argmin of J for a fixed X (the latter being true only for point-to-point ICP). This is what allows one to prove local convergence of point-to-point ICP as done in [1]. Because of the huge combinatorial problem underlying the optimization task of jointly minimizing J over the transformation and the matching, ICP provides a simple and tractable (although computationally heavy) approach to estimate X.

The ICP algorithm possesses many variants, such as the point-to-plane version where the cost function is replaced by

J({p_i}, {q_i}; σ, X) := Σ_k ‖(Xp_k − q_{σ(k)}) · n_{σ(k)}‖²    (4)

that is, the registration error is projected onto the surface unit normal vector of the second cloud at q_{σ(k)}.
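As a minimal illustration of steps 1)–4), here is a 2-D point-to-point ICP sketch of our own (not the paper's code); step 3's argmin over roto-translations is solved in closed form by the standard SVD (Kabsch) method:

```python
import numpy as np

def best_rigid_transform(P, Q):
    # Closed-form argmin_{R,T} sum_k ||R p_k + T - q_k||^2 (Kabsch/SVD).
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # keep det(R) = +1
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def icp_point_to_point(P, Q, iters=50, tol=1e-10):
    R, T = np.eye(2), np.zeros(2)                  # step 1: X_old = (I, 0)
    for _ in range(iters):
        moved = P @ R.T + T
        # Step 2: closest-point matching pi(k, X_old).
        match = np.array([np.argmin(np.linalg.norm(Q - m, axis=1)) for m in moved])
        R_new, T_new = best_rigid_transform(P, Q[match])   # step 3
        done = np.linalg.norm(R_new - R) + np.linalg.norm(T_new - T) < tol
        R, T = R_new, T_new                        # step 4
        if done:
            break
    return R, T

# Recover a small known motion of a noise-free L-shaped cloud.
P = np.vstack([np.c_[np.linspace(0, 1, 20), np.zeros(20)],
               np.c_[np.zeros(20), np.linspace(0, 1, 20)]])
th = 0.05
R_true = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
T_true = np.array([0.02, -0.01])
Q = P @ R_true.T + T_true
R_est, T_est = icp_point_to_point(P, Q)
print(R_est, T_est)
```

With this small, noise-free motion the matching quickly settles to the true correspondence and the exact transformation is recovered; the L-shape is what constrains all three degrees of freedom, unlike the featureless wall discussed next.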
An alternative to this in 2D, exploited in [13], consists of creating a reference surface S_r in R² by connecting adjacent points in the second cloud {q_i} with segments, and then employing the cost function

J(S_r, {p_k}; X) := Σ_k ‖Xp_k − Π(S_r, Xp_k)‖²    (5)

where Π(S_r, ·) is the projection onto the surface S_r.

Definition 1: We define the ICP cost function as J({p_i}, {q_i}; π(X, ·), X), that is, the error function J({p_i}, {q_i}; σ, X) with closest-neighbor matching.

Definition 2: The stability of the ICP in the sense of [14] is defined as the variation of the ICP cost function when X moves a little away from the argmin X̂.

Definition 2's terminology comes from the theory of dynamical systems. Indeed, the changes in the ICP cost indicate the ability (and speed) of the algorithm to return to its minimum when it is initialized close to it. If the argmin X̂ is changed to X̂ + δX and the cost does not change, then the ICP algorithm output will remain at X̂ + δX and will not return to its original value X̂. Meanwhile, although closely related, the covariance is rooted in statistical considerations and not in the dynamical behavior of the algorithm:

Definition 3: The covariance of the ICP algorithm is defined as the statistical dispersion (or variability), due to sensor noise, of the transformation X̂ computed by the algorithm over a large number of experiments.

A. Potential lack of validity of (1)

1) Mathematical insight: Consider the ICP cost function J({p_i}, {q_i}; π(X, ·), X). To simplify notation we omit the point clouds and we let F(X) be the function X ↦ J(π(X, ·), X). At the argmin we have, by construction of the ICP algorithm, ∂₂J(π(X̂, ·), X̂) = 0, where ∂₂ denotes the derivative with respect to the second argument.
As the closest-point matching X ↦ π(X, ·) is locally constant (except on a set of measure zero), since an infinitesimal change of each point does not change the nearest neighbors unless a point is exactly equidistant to two distinct points, we have

d²F/dX² (X̂) = ∂₂²J(π(X̂, ·), X̂)

that is, the matching can be considered as fixed when computing the Hessian of X ↦ J(π(X, ·), X) at the argmin. But the first-order approximation

∂₂J(π(X̂ + δX, ·), X̂ + δX) ≈ ∂₂J(π(X̂, ·), X̂) + ∂₂²J(π(X̂, ·), X̂) δX

can turn out to model the stability of the algorithm very poorly at the scale δX = ΔX of interest to us, precisely because when moving away from the current argmin, rematching occurs and thus π(X̂ + ΔX, ·) ≠ π(X̂, ·). Whereas a closed-form estimate of ∂₂²J(π(X̂, ·), X̂) is easy to calculate, obtaining a Taylor expansion around the minimum which accounts for rematching would require sampling the error function all around its minimum, leading to a high computational cost [15].

2) Illustration: Consider a simple 2D example of a scanner moving parallel to a flat wall using point-to-point ICP. Figures 1 and 2 illustrate the fallacy of considering a second-order Taylor expansion of the cost, i.e. computing the cost with fixed matching. Indeed, Fig. 2 displays the discrepancy between the true ICP cost and its second-order approximation around the minimum when moving along a 2D wall. We see that rematching with closest points correctly reflects the underconstrained (unobservable) nature of the environment, since the cost function is nearly constant as we move along the featureless wall. The second-order approximation, on the other hand, does not. This proves that the Hessian ∂²J/∂X² of the cost at the minimum does not correctly reflect the change in the cost function value, and thus the stability of the algorithm.

Fig. 1. Point-to-point ICP illustration.
(a) The first cloud is made of 10 equally spaced collinear points (e.g., scans of a flat wall); the second cloud is obtained by duplication. To ensure overlap we focus on the central points. (b) While translating the second cloud to the right, no re-matching occurs. (c) Translation and re-matching with closest points. The costs J corresponding to cases (b) and (c) are shown in Fig. 2.

Fig. 2. Point-to-point ICP results from Fig. 1. Dashed line: plot of the second-order approximation to the cost, J(π(X̂, ·), X̂) + 0 + ∂²/∂X² J(π(X̂, ·), X̂) (X − X̂)², versus translations. Due to the quadratic form of the cost (3), the second-order approximation is also equal to J(π(X̂, ·), X), i.e. the cost with matching held fixed (Fig. 1, case b). Solid line: plot of the true ICP cost J(π(X, ·), X), i.e. accounting for re-matching with closest points (Fig. 1, case c).

Regarding covariance, it is easy to see that Equation (1) will not reflect the true covariance of the ICP either, as the true covariance should be very large (ideally infinite) along the wall's direction, which can only happen if ∂²J/∂X² is very small, which is not the case here.

B. Covariance of linear least-squares

Consider the linear least-squares minimization problem with cost function

J(x) = Σ_i ‖d_i − B_i x‖²    (6)

The solution is of course

x̂ = (Σ_i B_iᵀ B_i)⁻¹ (Σ_i B_iᵀ d_i)    (7)

Let A := Σ_i B_iᵀ B_i, which represents the (half) Hessian ½ ∂²J/∂x² of the cost function J. Note that Aᵀ = A. If the measurement d_i satisfies d_i = B_i x + w_i, where x is the true parameter and w_i a noise, the covariance of the least-squares estimate over a great number of experiments is

cov(x̂) = E[(x̂ − x)(x̂ − x)ᵀ]
        = E[ A⁻¹ (Σ_i B_iᵀ w_i) (A⁻¹ (Σ_j B_jᵀ w_j))ᵀ ]
        = A⁻¹ Σ_i Σ_j ( B_iᵀ E(w_i w_jᵀ) B_j ) A⁻¹    (8)

which indeed agrees with (1).
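The agreement between (7) and (8) is easy to check numerically. The following Monte Carlo sketch (ours; the matrices are random stand-ins) stacks the rows B_i into a single matrix and compares the empirical covariance of x̂ over many noisy trials against the prediction (8):

```python
import numpy as np

# Sketch (ours, not from the paper): Monte Carlo check of (7)-(8).
# Stacking the rows B_i into a matrix B gives A = B^T B; for i.i.d.
# noise of variance sigma^2, (8) reduces to sigma^2 * A^{-1}.

rng = np.random.default_rng(1)
n, m, trials, sigma = 3, 40, 20000, 0.1
B = rng.standard_normal((m, n))          # row i plays the role of B_i
x_true = np.array([0.5, -1.0, 2.0])
A = B.T @ B                              # (half) Hessian of J

est = np.empty((trials, n))
for t in range(trials):
    w = sigma * rng.standard_normal(m)   # i.i.d. sensor noise
    d = B @ x_true + w                   # d_i = B_i x + w_i
    est[t] = np.linalg.solve(A, B.T @ d) # closed form (7)

emp_cov = np.cov(est.T)
pred_cov = sigma**2 * np.linalg.inv(A)   # prediction from (8), i.i.d. case
print(np.abs(emp_cov - pred_cov).max())
```

With 20000 trials the empirical and predicted covariances agree to within Monte Carlo error of a few percent.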
Furthermore, if the w_i's are identically distributed independent noises with covariance matrix E(w_i w_jᵀ) = σ²I δ_ij, we recover the well-known result [16, Thm. 4.1] that

cov(x̂) = σ² A⁻¹ (Σ_i B_iᵀ B_i) A⁻¹ = σ² A⁻¹

meaning the (half) Hessian A of the cost function encodes the covariance of the estimate.

C. Application to point-to-point ICP

The application of the least-squares covariance formulas to point-to-point ICP can be done as follows [17]. Note that in the 3D case the roto-translation X is a member of SE(3), the Special Euclidean Lie group, with associated Lie algebra se(3) ∋ ξ. Using homogeneous coordinates this writes

X = [ R  T ; 0  1 ],  R ∈ SO(3),  T ∈ R³
ξ = [ S(x_R)  x_T ; 0  0 ],  x_R ∈ R³,  x_T ∈ R³

where S(·) is the 3 × 3 skew-symmetric matrix, S(a)ᵀ = −S(a), such that S(a)b = a × b for a, b ∈ R³. The map exp : se(3) → SE(3) is the matrix exponential e^ξ := I + ξ + (1/2!)ξ² + ···. As explained in Section I, we can assume the scans to be aligned by ICP start out close to each other. This means X ∈ SE(3) is close to the identity and ξ ∈ se(3) is close to zero, such that

X = e^ξ ≈ I + ξ   ⟹   Xp ≈ p + x_R × p + x_T.    (9)

We can thus consider the ICP estimate X̂ as parameterized by x̂ = (x̂_R, x̂_T) ∈ R⁶. Specifically we define the linear map L : R⁶ → se(3), L(x̂) = ξ̂, such that I + L(x̂) ≈ X̂ ∈ SE(3) for X̂ close to the identity. The output model Y_m of the ICP is then written as

Y_m = X̂ = I + L(x̂) = I + L(x) + L(δx) = X + L(δx)

where δx = x̂ − x and L(δx) can be viewed as a zero-mean noise term with associated covariance E(δx δxᵀ) = cov(x̂) by (1).
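The linearization (9) can be sanity-checked numerically. The sketch below (ours) compares the exact exponential of a small twist, computed with a truncated power series, against the first-order approximation:

```python
import numpy as np

def S(a):
    # Skew-symmetric matrix with S(a) b = a x b.
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def expm(M, terms=30):
    # Truncated matrix exponential series; accurate for small 4x4 twists.
    out, term = np.eye(4), np.eye(4)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

x_R = np.array([0.01, -0.02, 0.015])    # small rotation vector
x_T = np.array([0.03, 0.01, -0.02])     # small translation
xi = np.zeros((4, 4))                   # twist in se(3)
xi[:3, :3] = S(x_R)
xi[:3, 3] = x_T

p = np.array([1.0, 2.0, 0.5])
X = expm(xi)                            # exact exp: se(3) -> SE(3)
exact = X[:3, :3] @ p + X[:3, 3]
approx = p + np.cross(x_R, p) + x_T     # linearization (9)
err = np.linalg.norm(exact - approx)
print(err)                              # second order in the twist magnitude
```

For twists of magnitude ~0.03 the discrepancy is of order 10⁻³, i.e. quadratically small, as the expansion predicts.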
Using (9) in the point-to-point ICP, the cost function (3) with matching fixed at its convergence value x̂ writes

J(π(x̂, ·), x) = Σ_i ‖p_i + S(x_R)p_i + x_T − q_{π(x̂,i)}‖²

which can be rewritten in the sum-of-squares form (6) with

d_i = p_i − q_{π(x̂,i)},   B_i = [ S(p_i)  −I ]

The Hessian A = ½ ∂₂²J = Σ_i B_iᵀ B_i is then equal to

Σ_i [ −S(p_i)²   S(p_i) ;  −S(p_i)   I ]

D. Application to point-to-plane ICP

For point-to-plane ICP, the cost function (4) with matching fixed at its convergence value x̂ and approximation (9) is given by

J(π(x̂, ·), x) = Σ_i [ (x_R × p_i + x_T + p_i − q_{π(x̂,i)}) · n_{π(x̂,i)} ]²

Using the circular property of the scalar triple product, (a × b) · c = (b × c) · a, this can also be rewritten in form (6) with

d_i = n_{π(x̂,i)}ᵀ (p_i − q_{π(x̂,i)}),   B_i = [ −(p_i × n_{π(x̂,i)})ᵀ   −n_{π(x̂,i)}ᵀ ]

and the Hessian A = ½ ∂₂²J = Σ_i B_iᵀ B_i is equal to

Σ_i [ (p_i × n_{π(x̂,i)})(p_i × n_{π(x̂,i)})ᵀ   (p_i × n_{π(x̂,i)}) n_{π(x̂,i)}ᵀ ;  n_{π(x̂,i)} (p_i × n_{π(x̂,i)})ᵀ   n_{π(x̂,i)} n_{π(x̂,i)}ᵀ ]

The above expression models the Hessian of J({p_i}, {q_i}; π(x̂, ·), x) and was given in [14], whose authors argue by intuition that it models the stability of the point-to-plane ICP algorithm. We will formally prove this fact, namely that the point-to-plane Hessian correctly captures the behavior of the true ICP cost function J({p_i}, {q_i}; π(x, ·), x) around x̂, in Section III.

III. A RIGOROUS MATHEMATICAL RESULT FOR POINT-TO-PLANE ICP

The present section is devoted to proving that as far as point-to-plane ICP is concerned, and unlike the point-to-point case, Equation (2) and hence (1) is indeed valid, even for large Δx. In fact, a bound on Δx depending on the curvature of the scanned surface is given, allowing one to characterize the domain of validity of the formula. This result is novel and provides a rigorous framework to justify the intuitive arguments in [14].
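Before turning to the proof, note that the point-to-plane Hessian of Section II-D is straightforward to assemble in code. This sketch (ours, with an illustrative room-corner cloud) shows that three mutually orthogonal planes constrain all six degrees of freedom, while a single plane constrains only three:

```python
import numpy as np

# Our sketch: assemble the point-to-plane Hessian A = sum_i B_i^T B_i of
# Section II-D, with B_i = [-(p_i x n_i)^T  -n_i^T], from points and normals.

def point_to_plane_hessian(points, normals):
    A = np.zeros((6, 6))
    for p, n in zip(points, normals):
        B = -np.concatenate([np.cross(p, n), n]).reshape(1, 6)
        A += B.T @ B
    return A

grid = np.linspace(-1.0, 1.0, 5)
wall_pts = np.array([[u, v, 0.0] for u in grid for v in grid])
wall_nrm = np.tile([0.0, 0.0, 1.0], (len(wall_pts), 1))

# A single plane constrains only two rotations and one translation.
A_wall = point_to_plane_hessian(wall_pts, wall_nrm)
print(np.linalg.matrix_rank(A_wall))    # 3

# A corner made of three orthogonal planes constrains all 6 DOFs.
pts = np.vstack([wall_pts,
                 wall_pts[:, [2, 0, 1]],   # plane x = 0 (cycled coordinates)
                 wall_pts[:, [1, 2, 0]]])  # plane y = 0
nrm = np.vstack([wall_nrm,
                 wall_nrm[:, [2, 0, 1]],
                 wall_nrm[:, [1, 2, 0]]])
A_corner = point_to_plane_hessian(pts, nrm)
print(np.linalg.matrix_rank(A_corner))  # 6
```

The rank of A thus directly reflects which motion parameters the environment's geometry renders observable, which is the property established rigorously below.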
Theorem 1: Consider a 2D environment made of (an ensemble of disjoint) smooth surface(s) S_r having maximum curvature κ. Consider a cloud of points {a_i} obtained by scanning the environment. Consider the cost J(π(x, ·), x) obtained by matching the cloud {a_i} with the displaced cloud {a_i + x_R × a_i + x_T}, where x := (x_R, x_T) are the motion parameters. As 0 is a global minimum, the gradient vanishes at x = 0. The following second-order Taylor expansion

J(π(x + Δx, ·), x + Δx) = J(π(0, ·), 0) + ½ Δxᵀ ∂₂²J(π(0, ·), 0) Δx + O(κ‖Δx‖³)    (10)

is valid for Δx sufficiently small, but large enough to let rematching occur.

Note that if the environment is made of disjoint planes, we have κ = 0 and both cost functions agree exactly. The remainder of this section is devoted to the proof of the theorem, and to a corollary proving the result remains true in 3D.

A. Details of result

The proof of the previous theorem is based on the following.

Proposition: Consider the assumptions of Theorem 1. Around the minimum x = 0, the cost with fixed matching

J(π(0, ·), x) = Σ_i [ (a_i + x_R × a_i + x_T − a_{π(0,i)}) · n_i ]²

differs from the true ICP cost in the following way:

J(π(x, ·), x) = Σ_i [ (a_i + x_R × a_i + x_T − a_{π(0,i)}) · n_i + ψ_i ]²

where the approximation error ψ_i is already second order in the function arguments, as

|ψ_i| ≤ 8κ ‖x_R × a_i + x_T‖²

as long as κ|s_i − s_{π(x,i)}| ≤ 1, where s_i and s_{π(x,i)} denote the curvilinear abscissae of the points a_i and a_{π(x,i)} along the surface S_r.

To begin with, note that the condition κ|s_i − s_{π(x,i)}| ≤ 1 is independent of the chosen units, as κ|s_i − s_{π(x,i)}| is dimensionless. To fix ideas about the validity condition, assume the environment is circular with an arbitrary radius.
The above condition means that the Taylor expansion is proved valid as long as the displacement yields a rematching with a nearest neighbor at most 1 rad (57.3°) along the circle from the initial point. We see this indicates a large domain of validity. Note that in the case where the environment is a line, both cost functions coincide exactly.

Fig. 3. Illustration for the proof.

To prove the result, assume the surface where point i lies is parameterized by γ(s) with curvilinear abscissa s, and with maximum curvature κ. Such a curve has tangent vector γ′(s) with ‖γ′(s)‖ = 1 and normal vector γ″(s) with ‖γ″(s)‖ ≤ κ, the curvature. The point cloud {a_i} ∈ R² is obtained by scanning this environment at discrete points γ(s₁), γ(s₂), ···. We assume the surfaces (here curves) are sufficiently disjoint so that under the assumptions of the Proposition, a_i and its closest point a_{π(x,i)} lie on the same curve of maximum curvature κ. By writing

x_R × a_i + x_T + a_i − a_{π(x,i)} = (x_R × a_i + x_T + a_i − a_{π(0,i)}) + (a_{π(0,i)} − a_{π(x,i)})

we see the error made for each term i is

ψ_i := (a_{π(0,i)} − a_{π(x,i)}) · n_i = (a_i − a_j) · n_i

where we let j := π(x, i) and we used the obvious fact that π(0, i) = i. To study ψ_i, expand γ(s) about s = s_i using Taylor's theorem with integral remainder:

γ(s) = γ(s_i) + γ′(s_i)(s − s_i) + ∫_{s_i}^{s} γ″(u)(s − u) du

Take s = s_{π(x,i)} := s_j and project along the normal n_i:

(γ(s_j) − γ(s_i)) · n_i = (s_j − s_i) γ′(s_i) · n_i + ∫_{s_i}^{s_j} γ″(u) · n_i (s_j − u) du

Note γ′(s_i) · n_i = 0 since γ′(s_i) is the (unit) tangent vector to the curve at s = s_i. Taking absolute values of both sides,

|ψ_i| = |(γ(s_j) − γ(s_i)) · n_i| ≤ ½ [max_u ‖γ″(u)‖] |s_j − s_i|² ≤ ½ κ |s_j − s_i|² ≤ ½ κ (4‖x_R × a_i + x_T‖)²

as claimed. Only the last inequality needs to be justified.
It stems from the following result:

Lemma: If no rematching occurs, i.e. i = j, then ψ_i = 0. If i ≠ j, we have for κ|s_i − s_j| ≤ 1 the inequality |s_i − s_j| ≤ 4‖x_R × a_i + x_T‖.

Indeed, rematching occurs only if the displaced point a_i + x_R × a_i + x_T is closer to a_j than to a_i, as illustrated in Fig. 3. But this implies the distance between the displaced point and a_i is greater than half the distance between a_i and a_j (see Fig. 3), that is,

‖a_j − a_i‖ ≤ 2‖(a_i + x_R × a_i + x_T) − a_i‖ = 2‖x_R × a_i + x_T‖.

Now, another Taylor expansion yields

γ(s_j) − γ(s_i) = γ′(s_i)(s_j − s_i) + ∫_{s_i}^{s_j} γ″(u)(s_j − u) du.

Using ‖γ′(s)‖ = 1 and ‖γ″(s)‖ ≤ κ we get

‖γ(s_j) − γ(s_i)‖ ≥ |s_j − s_i| − ½ κ (s_j − s_i)² ≥ ½ |s_j − s_i|,

the latter inequality stemming from the assumption that κ|s_j − s_i| ≤ 1. Gathering those results we have thus proved

2‖x_R × a_i + x_T‖ ≥ ‖a_j − a_i‖ := ‖γ(s_j) − γ(s_i)‖ ≥ ½ |s_j − s_i|

which proves the Lemma, and in turn the Proposition.

B. Extension to the 3D case

Corollary: The results hold in 3D, where κ denotes the maximum of the principal curvatures.

The corollary can be proved in exactly the same way as the theorem, by studying the discrepancy between both cost functions term by term. The idea is then merely to consider the plane spanned by the unit normal n_i and the segment joining a_i and a_j. This plane intersects the surface S_r along a curve, and the same process can be applied as in the 2D case. The curvature of this curve is by definition less than the maximum principal curvature of the surface.

IV. ILLUSTRATION OF THE RESULTS IN 3D

The covariance of scan matching estimates is computed using Equation (8), which requires a model of the measurement noise w_i via its covariance E(w_i w_jᵀ). Modeling the noise of depth sensors is a separate topic and will not be considered in the present paper.
Regardless, it is clear from (8) that the Hessian A of the cost function plays a key role in this computation. We now demonstrate, using a very simple numerical example in 3D, that the Hessian of the point-to-plane cost correctly models the behavior of the ICP algorithm.

Consider a 3D scan {p_i} of a plane wall by a depth camera located perpendicularly d units away, as shown in Figure 4. A depth image of N_H by N_V pixels (a function of the hardware) captures a surface measuring H by V units (a function of the optical field of view and the distance d), such that

p_i = [x_i  y_i  d]ᵀ,   −H/2 ≤ x_i ≤ H/2,   −V/2 ≤ y_i ≤ V/2.

Fig. 4. Scan of 3D plane wall with N_H horizontal and N_V vertical points distributed symmetrically about the origin.

Assume a previous scan {q_i} with associated surface normals {n_i} was captured with the same camera orientation at distance d′, such that

q_i = [x′_i  y′_i  d′]ᵀ,   n_i = [0  0  −1]ᵀ,   −H′/2 ≤ x′_i ≤ H′/2,   −V′/2 ≤ y′_i ≤ V′/2.

From Section II-D we have

A = Σ_i
[   y_i²      −x_i y_i   0   0   0    y_i
   −x_i y_i    x_i²      0   0   0   −x_i
    0          0         0   0   0    0
    0          0         0   0   0    0
    0          0         0   0   0    0
    y_i       −x_i       0   0   0    1   ]
=
[ Ψ  0  0  0  0  0
  0  Ξ  0  0  0  0
  0  0  0  0  0  0
  0  0  0  0  0  0
  0  0  0  0  0  0
  0  0  0  0  0  N ]

where Ξ := Σ x_i², Ψ := Σ y_i² and N := Σ 1 are non-zero. By inspection this A possesses three zero eigenvalues with associated eigenvectors (e₃, e₄, e₅) ∈ R⁶, indicating that in this case rotations about the z axis and translations along the x and y axes are unobservable to scan matching, which agrees with physical intuition about Figure 4. Although A is singular, (8) can still be computed by removing x₃, x₄ and x₅ from the state vector x = [x_R  x_T], thus deleting the third, fourth and fifth columns of B_i, or equivalently the corresponding rows and columns of A. In this way only the covariance of the observable parameters will be estimated.
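Our numerical sketch below reproduces this wall example: it assembles the point-to-plane Hessian for a symmetric grid scan with n_i = [0 0 −1]ᵀ and confirms the three zero eigenvalues (the values of H, V, d and the grid sizes are illustrative):

```python
import numpy as np

# Point-to-plane Hessian for a symmetric N_H x N_V grid scan of the wall
# z = d with constant normals n_i = (0, 0, -1); parameters are illustrative.
H, V, d, NH, NV = 2.0, 1.5, 3.0, 11, 9
xs = np.linspace(-H / 2, H / 2, NH)
ys = np.linspace(-V / 2, V / 2, NV)
n = np.array([0.0, 0.0, -1.0])

A = np.zeros((6, 6))
for x in xs:
    for y in ys:
        p = np.array([x, y, d])
        B = -np.concatenate([np.cross(p, n), n]).reshape(1, 6)  # Section II-D
        A += B.T @ B

evals = np.sort(np.linalg.eigvalsh(A))
print(np.round(evals, 6))
# Three eigenvalues vanish: rotation about z and translations along
# x and y are unobservable, as read off from the empty rows of A.
```

The three non-zero eigenvalues are Ψ, Ξ and N as in the closed-form matrix above, up to floating-point round-off from the symmetric grid sums.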
Now consider using the point-to-point Hessian given in Section II-C. In this case we have

A = Σ_i
[ d² + y_i²   −x_i y_i    −d x_i        0    −d    y_i
  −x_i y_i    d² + x_i²   −d y_i        d     0   −x_i
  −d x_i      −d y_i      x_i² + y_i²  −y_i   x_i   0
   0           d          −y_i          1     0     0
  −d           0           x_i          0     1     0
   y_i        −x_i         0            0     0     1  ]
=
[ N d² + Ψ   0          0       0     −N d   0
  0          N d² + Ξ   0       N d    0     0
  0          0          Ξ + Ψ   0      0     0
  0          N d        0       N      0     0
  −N d       0          0       0      N     0
  0          0          0       0      0     N ]

By inspection this A is full rank and so does not have zero eigenvalues. Since we know there are three unobservable directions, the point-to-point ICP Hessian provides a completely wrong model of the scan matching observability (and in turn covariances), exactly as predicted.

V. CONCLUSION

In this paper we have provided a rigorous mathematical proof (a novel result to the best of our knowledge) of why the closed-form formula (1) and its linearized version (8) provide correct roto-translation estimate covariances only in the point-to-plane variant of ICP, but not in point-to-point.

This paper has not investigated the modeling of the noise term w_i which appears in the linearized covariance formula (8). We know that assuming this term to be independent and identically distributed Gaussian noise will lead to erroneous (overly optimistic) estimates of covariance, as noted in [17] for instance. We are currently investigating how to rigorously derive a closed-form expression to obtain a valid and realistic covariance matrix for 3D depth sensor-based scan matching.

ACKNOWLEDGMENTS

The work reported in this paper was partly supported by the Cap Digital Business Cluster TerraMobilita Project.

REFERENCES

[1] P. J. Besl and N. D. McKay, “A method for registration of 3-D shapes,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239–256, February 1992.
[2] Y. Chen and G. Medioni, “Object modelling by registration of multiple range images,” Image and Vision Computing, vol. 10, no. 3, pp.
145–155, April 1992.
[3] F. Lu and E. E. Milios, “Robot pose estimation in unknown environments by matching 2D range scans,” in Proceedings of the 1994 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Seattle, WA, June 1994, pp. 935–938.
[4] A. V. Segal, D. Haehnel, and S. Thrun, “Generalized-ICP,” in Robotics: Science and Systems V, J. Trinkle, Y. Matsuoka, and J. Castellanos, Eds. MIT Press, 2009, pp. 161–168.
[5] M. Jaimez and J. Gonzalez-Jimenez, “Fast visual odometry for 3-D range sensors,” IEEE Transactions on Robotics, vol. 31, no. 4, pp. 809–822, August 2015.
[6] J. Nieto, T. Bailey, and E. Nebot, “Recursive scan-matching SLAM,” Robotics and Autonomous Systems, vol. 55, no. 1, pp. 39–49, January 2007.
[7] A. Mallios, P. Ridao, D. Ribas, F. Maurelli, and Y. Petillot, “EKF-SLAM for AUV navigation under probabilistic sonar scan-matching,” in Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, October 2010, pp. 4404–4411.
[8] T. Hervier, S. Bonnabel, and F. Goulette, “Accurate 3D maps from depth images and motion sensors via nonlinear Kalman filtering,” in Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura, Algarve, Portugal, October 2012, pp. 5291–5297.
[9] M. Barczyk, S. Bonnabel, J.-E. Deschaud, and F. Goulette, “Invariant EKF design for scan matching-aided localization,” IEEE Transactions on Control Systems Technology, vol. 23, no. 6, pp. 2440–2448, November 2015.
[10] G. Grisetti, R. Kümmerle, C. Stachniss, and W. Burgard, “A tutorial on graph-based SLAM,” IEEE Intelligent Transportation Systems Magazine, vol. 2, no. 4, pp. 31–43, 2010.
[11] S. Rusinkiewicz and M. Levoy, “Efficient variants of the ICP algorithm,” in Proceedings of the Third International Conference on 3-D Digital Imaging and Modeling, Quebec City, Canada, May 2001, pp. 145–152.
[12] A. K. R. Chowdhury and R.
Chellappa, “Stochastic approximation and rate-distortion analysis for robust structure and motion estimation,” International Journal of Computer Vision, vol. 55, no. 1, pp. 27–53, 2003.
[13] A. Censi, “An accurate closed-form estimate of ICP’s covariance,” in Proceedings of the 2007 IEEE International Conference on Robotics and Automation, Roma, Italy, April 2007, pp. 3167–3172.
[14] N. Gelfand, L. Ikemoto, S. Rusinkiewicz, and M. Levoy, “Geometrically stable sampling for the ICP algorithm,” in Proceedings of the Fourth International Conference on 3-D Digital Imaging and Modeling, Banff, Canada, October 2003, pp. 260–267.
[15] O. Bengtsson and A.-J. Baerveldt, “Robot localization based on scan-matching — estimating the covariance matrix for the IDC algorithm,” Robotics and Autonomous Systems, vol. 44, no. 1, pp. 29–40, July 2003.
[16] S. M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory. Prentice Hall, 1993.
[17] O. Bengtsson and A.-J. Baerveldt, “Localization in changing environments — estimation of a covariance matrix for the IDC algorithm,” in Proceedings of the 2001 IEEE/RSJ International Conference on Intelligent Robots and Systems, Maui, Hawaii, USA, October 2001, pp. 1931–1937.