Recursive Bayesian Filtering in Circular State Spaces

Gerhard Kurz, Igor Gilitschenski, Uwe D. Hanebeck
Intelligent Sensor-Actuator-Systems Laboratory (ISAS), Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology (KIT), Germany

Abstract

For recursive circular filtering based on circular statistics, we introduce a general framework for estimation of a circular state based on different circular distributions, specifically the wrapped normal distribution and the von Mises distribution. We propose an estimation method for circular systems with nonlinear system and measurement functions. This is achieved by relying on efficient deterministic sampling techniques. Furthermore, we show how the calculations can be simplified in a variety of important special cases, such as systems with additive noise as well as identity system or measurement functions. We introduce several novel key components, particularly a distribution-free prediction algorithm, a new and superior formula for the multiplication of wrapped normal densities, and the ability to deal with non-additive system noise. All proposed methods are thoroughly evaluated and compared to several state-of-the-art solutions.

1. Introduction

Estimation of circular quantities is an omnipresent issue, be it the wind direction, the angle of a robotic revolute joint, the orientation of a turntable, or the direction a vehicle is facing. Circular estimation is not limited to applications involving angles, however, and can be applied to a variety of periodic phenomena. For example, phase estimation is a common issue in signal processing, and tracking objects that periodically move along a certain trajectory is also of interest.

Standard approaches to circular estimation are typically based on estimation techniques designed for linear scenarios that are tweaked to deal with some of the issues arising in the presence of circular quantities. However, modifying linear methods can not only be tedious and error-prone, but also yields suboptimal results, because certain assumptions of these methods are violated. For example, solutions based on Kalman filters [1] or nonlinear versions thereof [2] fundamentally neglect the true topology of the underlying manifold and assume a Gaussian distribution, which is only defined on $\mathbb{R}^n$. In the linear case, the use of a Gaussian distribution is frequently justified by the central limit theorem. This justification no longer holds in a circular setting, as the Gaussian is not a limit distribution on the circle.

In order to properly deal with circular estimation problems, we rely on circular statistics [3], [4], a subfield of statistics that deals with circular quantities. More broadly, the field of directional statistics [5] considers a variety of manifolds, such as the circle, the hypersphere, or the torus. Unlike standard approaches that assume linear state spaces, methods based on circular statistics correctly use the proper manifold and rely on probability distributions defined on this manifold. Circular statistics has been applied in a variety of sciences, such as biology [4], bioinformatics [6], meteorology [7], and geosciences [8].

Email addresses: gerhard.kurz@kit.edu (Gerhard Kurz), gilitschenski@kit.edu (Igor Gilitschenski), uwe.hanebeck@ieee.org (Uwe D. Hanebeck)
Table 1: Circular filters based on directional statistics.

publication                                | distribution             | system model | system noise | measurement model | measurement noise
Azmani, Reboul, Choquel, Benjelloun [9]    | von Mises                | identity     | additive     | identity          | additive
Markovic, Chaumette, Petrovic [14]         | von Mises-Fisher         | identity     | additive     | identity          | additive
Kurz, Gilitschenski, Julier, Hanebeck [21] | Bingham                  | identity     | additive     | identity          | additive
Kurz, Gilitschenski, Hanebeck [15]         | wrapped normal/von Mises | nonlinear    | additive     | identity          | additive
Kurz, Gilitschenski, Hanebeck [16]         | wrapped normal           | nonlinear    | additive     | nonlinear         | any
this paper                                 | wrapped normal/von Mises | nonlinear    | any          | nonlinear         | any

There has been some work on filtering algorithms based on circular statistics by Azmani et al. [9], which was further investigated by Stienne et al. [10]. Their work is based on the von Mises distribution and allows for recursive filtering of systems with a circular state space. However, it is limited to identity functions with additive noise for both the system equation and the measurement equation. The filter from [9] has been applied to phase estimation of GPS signals [11], [12] as well as map matching [13]. Markovic et al. have published a similar filter [14] based on the von Mises-Fisher distribution, a generalization of the von Mises distribution to the hypersphere.

We have previously published a recursive filter based on the wrapped normal distribution allowing for a nonlinear system equation [15]. The paper [16] extends this approach to make a nonlinear measurement update possible. Both papers rely on a deterministic sampling scheme based on the first circular moment. This kind of sampling is reminiscent of the well-known unscented Kalman filter (UKF) [2]. We have extended this sampling scheme to the first two circular moments in [17], so the proposed filters are, in a sense, circular versions of the UKF. The developed filters have been applied in the context of constrained tracking [18], bearings-only sensor scheduling [19], as well as circular model predictive control [20]. Furthermore, we proposed a recursive filter based on the circular Bingham distribution in [21]. The Bingham distribution is closely related to the von Mises distribution, but has a bimodal density, which makes it interesting for axial estimation problems (i.e., problems with 180° symmetry). An overview of all of these filters and the considered distributions as well as system and measurement models is given in Table 1.

This paper summarizes and combines our results as well as extends the previous work by a number of additional contributions. First of all, we propose a general filtering framework that can be used in conjunction with a variety of system and measurement equations, different types of noise, and both the wrapped normal and the von Mises distributions. Our previous publications [15], [16] as well as the work by Azmani et al. [9] can be seen as special cases of the proposed framework. Furthermore, we introduce a new multiplication formula for wrapped normal distributions that outperforms the solution proposed in [15]. We generalize the prediction step from [15] to a purely moment-based solution that does not need to assume any kind of distribution. Compared to [16], we add the ability to deal with non-additive noise not only in the measurement update but also in the prediction step. Finally, we perform a thorough evaluation, where we compare the proposed techniques to several state-of-the-art approaches.
This paper is structured as follows. First, we formulate the problem in Sec. 2. Then, we introduce the necessary fundamentals from circular statistics in Sec. 3. Based on these fundamentals, we propose deterministic sampling schemes in Sec. 4 and derive the operations on circular densities required for the circular filter in Sec. 5. These results are used to introduce circular filtering algorithms in Sec. 6. An evaluation of the proposed algorithms can be found in Sec. 7. Finally, conclusions are given in Sec. 8.

2. Problem Formulation

In this section, we formulate the problems under consideration and summarize some standard approaches that have been used to address the issues associated with periodicity.

2.1. Circular Filtering

Circular filtering considers estimation problems on the unit circle, which is commonly parameterized as the set of complex numbers with unit length, $\{x \in \mathbb{C} : |x| = 1\}$. To allow for a more convenient one-dimensional notation, we identify $S^1$ with the half-open interval $[0, 2\pi) \subset \mathbb{R}$, while keeping the topology of the circle. Together with the operation

$+ : S^1 \times S^1 \to S^1, \quad x + y := x +_{\mathbb{R}} y \mod 2\pi \quad \text{for all } x, y \in [0, 2\pi),$

where $+_{\mathbb{R}}$ denotes standard addition on $\mathbb{R}$, the circle $S^1$ forms an Abelian group. Because $S^1$ with the topology given above has a manifold structure, $(S^1, +)$ is a Lie group.

We consider a system whose state $x_k$ at time step $k$ is a value on the unit circle $S^1$. System and measurement models are assumed to be given. In this paper, we propose several methods to deal with different types of models. More complex models necessitate the use of more sophisticated algorithms; conversely, simpler models allow the use of computationally less expensive algorithms.

2.1.1. System Model

In this work, we consider a system model whose state evolves according to the general system equation

$x_{k+1} = a_k(x_k, w_k)$

with (nonlinear) system function $a_k : S^1 \times W \to S^1$ and noise $w_k \in W$ stemming from some noise space $W$. Note that $W$ is not necessarily $S^1$, but may be an arbitrary set, for example the real vector space $\mathbb{R}^n$, some manifold, or even a discrete set. An interesting and practically relevant special case is a (nonlinear) system with additive noise

$x_{k+1} = a_k(x_k) + w_k \mod 2\pi$

with $a_k : S^1 \to S^1$ and $w_k \in S^1$. In particular, we also consider the special case where $a_k$ is the identity, i.e.,

$x_{k+1} = x_k + w_k \mod 2\pi .$

2.1.2. Measurement Model

The system state cannot be observed directly, but may only be estimated based on measurements that are disturbed by noise. A general measurement function is given by

$\hat{z}_k = h_k(x_k, v_k) ,$

where $\hat{z}_k \in Z$ is the measurement in the measurement space $Z$, $h_k : S^1 \times V \to Z$ is the measurement function, and $v_k \in V$ is arbitrary measurement noise in a certain noise space $V$. Note that the measurement and noise spaces can be arbitrary sets in general. An interesting special case are measurement functions where the measurement noise is additive, i.e.,

$\hat{z}_k = h_k(x_k) + v_k$

with measurement function $h_k : S^1 \to Z$ and $v_k \in Z$. In this case, we require $Z$ to have a group structure with $+$ as the operation. Additionally, we consider the more specific case where $h_k$ is the identity, i.e.,

$\hat{z}_k = x_k + v_k \mod 2\pi$

with $\hat{z}_k, v_k \in S^1$.

Remark 1. We do not consider linear system models because linearity is a concept of vector spaces, not manifolds [15]. For this reason, there are no linear functions on the circle.
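To make the modulo arithmetic of these models concrete, the following minimal Python sketch implements one step of the additive-noise and identity system models; the particular system function and noise value are illustrative assumptions, not part of the models above.

```python
import numpy as np

TWO_PI = 2.0 * np.pi

def step_additive(x, a, w):
    """One step of the additive-noise model: x_{k+1} = (a(x_k) + w_k) mod 2*pi."""
    return np.mod(a(x) + w, TWO_PI)

def step_identity(x, w):
    """One step of the identity model: x_{k+1} = (x_k + w_k) mod 2*pi."""
    return np.mod(x + w, TWO_PI)

# Example with a toy system function (an assumption for illustration only).
x_next = step_additive(0.3, lambda x: x + 0.1 * np.sin(x), 0.1)
```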
[Figure 1: This figure illustrates that repositioning of the measurement is necessary to obtain satisfactory performance when using classical filters on circular problems. (a) Incorrect fusion. (b) Correct fusion with repositioned measurement.]

2.2. Standard Approaches

As circular estimation problems are wide-spread in a variety of applications, a number of standard approaches have been employed. We introduce three of the most common methods and explain their strengths and weaknesses.

2.2.1. Gaussian-based Approaches

Gaussian-based methods (wrongly) assume a Gaussian distribution and use standard filtering techniques for Gaussians in conjunction with certain modifications to allow their application to circular problems.

One-Dimensional Methods. One common approach is the use of a standard Kalman filter [1] or, in case of nonlinear system or measurement functions, the unscented Kalman filter (UKF) [2] with a scalar state $x_k$ containing the angle $\theta_k$, i.e., $x_k = \theta_k$. However, two modifications are necessary before this approach can be used in practice. First, the estimate has to be forced to stay inside the interval $[0, 2\pi)$ by performing a modulo operation after every prediction and/or update step. Second, if the measurement space is periodic, the measurement needs to be repositioned to be closer to the state mean in certain cases. This problem occurs whenever the measurement and the current state mean are more than $\pi$ apart. In this case, the measurement needs to be moved by $\pm 2\pi k$ to an equivalent measurement that deviates at most $\pi$ from the state mean. An illustration of this problem is given in Fig. 1. This type of filter is used as a comparison in [15]. When the uncertainty is small, this kind of approach works fairly well, but it tends to produce unsatisfactory results if the uncertainty is high.

Two-Dimensional Methods. Another common approach is based on the Kalman filter or the UKF with a two-dimensional state subject to a nonlinear constraint. More specifically, an angle $\theta_k$ is represented by a state vector $x_k = [\cos(\theta_k), \sin(\theta_k)]^T$ and the constraint is $\|x_k\| = 1$ to enforce that $x_k$ is on the unit circle. In order to enforce this constraint, $x_k$ is projected to the unit circle after each prediction and/or update step. More sophisticated approaches increase the covariance to reflect the fact that the projection operation constitutes an increase in uncertainty [22]. This approach has been used in [18], but did not perform as well as the filter based on circular statistics. One of the issues of this approach is the fact that the system and measurement model sometimes become more complicated when the angle $\theta_k$ is transformed to a two-dimensional vector.

2.2.2. Particle Filters

Another method that can be applied is particle filtering [23]. Particle filters on nonlinear manifolds are fairly straightforward to implement because each particle can be treated separately. For the particle filter to work, the system function and the measurement likelihood both need to respect
the underlying topology. The reweighting step as well as the commonly used sequential importance resampling (SIR) are independent of the underlying manifold and can be used in a circular setting as well. However, issues that are typically associated with particle filters arise. If the measurement likelihood function is very narrow, particle degeneration can occur, i.e., (almost) all particles have zero weight after the reweighting step. Furthermore, a large number of particles is required to obtain stable results. Even though these problems are less critical in a one-dimensional setting, there can still be issues if measurements with high certainty occur in areas with few particles. This can, for example, occur when information from sensors with very different degrees of accuracy is fused. It should also be noted that sampling from certain circular distributions can be somewhat involved and require the use of the Metropolis-Hastings algorithm [24] or similar approaches, e.g., in case of the von Mises distribution.

[Figure 2: Relation between WN and VM distributions and the Gaussian distribution. (a) The wrapped normal distribution (red) is obtained by wrapping a Gaussian distribution (blue) around the circle; the parameters used for this example are $\mu = 0$, $\sigma = 2$. (b) The von Mises distribution (red) arises when restricting a two-dimensional Gaussian with mean $(\cos\mu, \sin\mu)^T$ and covariance $\kappa^{-1} \cdot I_{2\times2}$ to the unit circle; the parameters for this example are $\mu = 0$, $\kappa = 1$.]

3. Circular Statistics

Because of the drawbacks of the approaches discussed above, we propose a filtering scheme based on circular statistics [3], [5]. In the following, we introduce the required fundamentals from the field of circular statistics.

3.1. Circular Distributions

Circular statistics considers probability distributions defined on the unit circle. A variety of distributions has been proposed [4]. We give definitions of all distributions that are required for the proposed filtering scheme.

Definition 1 (Wrapped Normal Distribution). The wrapped normal (WN) distribution is given by the probability density function (pdf)

$f(x; \mu, \sigma) = \frac{1}{\sqrt{2\pi}\,\sigma} \sum_{k=-\infty}^{\infty} \exp\left(-\frac{(x - \mu + 2\pi k)^2}{2\sigma^2}\right)$

with $x \in S^1$, location parameter $\mu \in S^1$, and concentration parameter $\sigma > 0$.

The WN distribution is obtained by wrapping a one-dimensional Gaussian distribution around the unit circle and adding up all probability mass that is wrapped to the same point (see Fig. 2a). It appears as a limit distribution on the circle [15] in the following sense: a summation scheme of random variables that converges to the Gaussian distribution in the linear case will converge to the WN distribution if taken modulo $2\pi$. Because of its close relation to the Gaussian distribution, the WN distribution inherits a variety of its properties, for example its normalization constant as well as the formula for the convolution of densities. Even though there is an infinite sum involved, evaluation of the pdf of a WN distribution can be performed efficiently, because only a few summands need to be considered [25].

Definition 2 (Von Mises Distribution). The von Mises (VM) distribution is given by the pdf

$f(x; \mu, \kappa) = \frac{1}{2\pi I_0(\kappa)} \exp(\kappa \cos(x - \mu))$

with $x \in S^1$, location parameter $\mu \in S^1$, and concentration parameter $\kappa > 0$, where $I_0(\cdot)$ is the modified Bessel function [26] of order 0.

The VM distribution, sometimes also referred to as the circular normal (CN) distribution, is obtained by restricting a two-dimensional Gaussian with mean $\mu_0$, $\|\mu_0\| = 1$, and covariance $\kappa^{-1} \cdot I_{2\times2}$ (where $I_{2\times2}$ is the identity matrix) to the unit circle and reparameterizing to $[0, 2\pi)$, as can be seen in Fig. 2b. For this reason, it also inherits some of the properties of the Gaussian distribution; most importantly, it is closed under Bayesian inference. The VM distribution has been used as a foundation for a circular filter by Azmani et al. [9]. It is closely related to the Bingham distribution, and conversion between the two is effectively a matter of shrinking the interval $[0, 2\pi)$ by a factor of two, i.e., $f(2x; \mu, \kappa)$ is a Bingham distribution [27, p. 4].
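Both densities are easy to evaluate numerically. The following Python sketch (assuming NumPy and SciPy are available) truncates the WN series to a few summands, in the spirit of [25], and evaluates the VM density directly.

```python
import numpy as np
from scipy.special import i0  # modified Bessel function of order 0

def wn_pdf(x, mu, sigma, n_terms=5):
    """Wrapped normal pdf via a truncated series; a few summands suffice
    unless sigma is very large (cf. [25])."""
    x = np.asarray(x, dtype=float)
    total = np.zeros_like(x)
    for k in range(-n_terms, n_terms + 1):
        total += np.exp(-(x - mu + 2.0 * np.pi * k) ** 2 / (2.0 * sigma ** 2))
    return total / (np.sqrt(2.0 * np.pi) * sigma)

def vm_pdf(x, mu, kappa):
    """Von Mises pdf."""
    return np.exp(kappa * np.cos(np.asarray(x) - mu)) / (2.0 * np.pi * i0(kappa))
```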
Definition 3 (Wrapped Dirac Mixture Distribution). The wrapped Dirac mixture (WD) distribution with $L$ components is given by

$f(x; \gamma_1, \ldots, \gamma_L, \beta_1, \ldots, \beta_L) = \sum_{j=1}^{L} \gamma_j \delta(x - \beta_j)$

with Dirac delta distribution $\delta(\cdot)$, Dirac positions $\beta_1, \ldots, \beta_L \in S^1$, and weights $\gamma_1, \ldots, \gamma_L > 0$ where $\sum_{j=1}^{L} \gamma_j = 1$.

Unlike the WN and VM distributions, the WD distribution is a discrete probability distribution on a continuous domain. It can be obtained by wrapping a Dirac mixture in $\mathbb{R}$ around the unit circle. WD distributions can be used as discrete approximations of continuous distributions with a finite set of samples.

In this paper, we use the following notation. We denote the density function of a WN distribution with parameters $\mu$ and $\sigma$ by $WN(\mu, \sigma)$. If a random variable $x$ is distributed according to this WN distribution, we write $x \sim WN(\mu, \sigma)$. The terms $VM(\mu, \kappa)$ and $WD(\gamma_1, \ldots, \gamma_L, \beta_1, \ldots, \beta_L)$ are used analogously to describe the density functions of VM and WD distributions with parameters $\mu, \kappa$ and $\gamma_1, \ldots, \gamma_L, \beta_1, \ldots, \beta_L$, respectively.

3.2. Circular Moments

In circular statistics, there is a concept called circular (or trigonometric) moments.

Definition 4 (Circular Moments). For a random variable $x \sim f(x)$ defined on the circle, its $n$-th circular moment is given by

$m_n = E(\exp(ix)^n) = E(\exp(inx)) = \int_0^{2\pi} \exp(inx) \cdot f(x)\, dx \in \mathbb{C}$

with imaginary unit $i$.

The $n$-th moment is a complex number and, hence, has two degrees of freedom, the real and the imaginary part. For this reason, the first moment includes information about the circular mean $\arg m_1 = \operatorname{atan2}(\operatorname{Im} m_1, \operatorname{Re} m_1)$ as well as the concentration $|m_1| = \sqrt{(\operatorname{Re} m_1)^2 + (\operatorname{Im} m_1)^2}$ of the distribution (the term $1 - |m_1|$ is sometimes referred to as the circular variance), similar to the first two real-valued moments. It can be shown that WN and VM distributions are uniquely defined by their first circular moment [3].

Remark 2. Any continuous and piecewise continuously differentiable $2\pi$-periodic function $f : \mathbb{R} \to \mathbb{R}$ can be written as a Fourier series $f(x) = \sum_{k=-\infty}^{\infty} c_k \exp(ikx)$, where $c_k = \frac{1}{2\pi} \int_0^{2\pi} f(x) \exp(-ikx)\, dx$. If $f(\cdot)$ is the pdf of a circular probability distribution, the Fourier coefficients are closely related to the circular moments according to $c_k = \frac{1}{2\pi} m_{-k}$.

Lemma 1 (Moments for WN, VM, and WD Distributions). For WN, VM, and WD distributions with given parameters, the $n$-th circular moment can be calculated according to

$m_n^{WN} = \exp\left(in\mu - \frac{n^2\sigma^2}{2}\right), \quad m_n^{VM} = \exp(in\mu)\,\frac{I_{|n|}(\kappa)}{I_0(\kappa)}, \quad m_n^{WD} = \sum_{j=1}^{L} \gamma_j \exp(in\beta_j).$

A proof is given in [5].

3.3. Circular Moment Matching

As both WN and VM distributions are uniquely defined by their first moment, it is possible to convert between them by matching the first circular moment.

Lemma 2 (Circular Moment Matching). We define $A(x) = \frac{I_1(x)}{I_0(x)}$ as given in [5].

1. For a given first moment $m_1$, the WN distribution with this first moment has the density $WN\left(\operatorname{atan2}(\operatorname{Im} m_1, \operatorname{Re} m_1),\ \sqrt{-2\log(|m_1|)}\right)$.
2. For a given first moment $m_1$, the VM distribution with this first moment has the density $VM\left(\operatorname{atan2}(\operatorname{Im} m_1, \operatorname{Re} m_1),\ A^{-1}(|m_1|)\right)$.

3. For a given VM distribution with density $VM(\mu, \kappa)$, the WN distribution with identical first moment has the density $WN\left(\mu,\ \sqrt{-2\log\left(\frac{I_1(\kappa)}{I_0(\kappa)}\right)}\right)$.

4. For a given WN distribution with density $WN(\mu, \sigma)$, the VM distribution with identical first moment has the density $VM\left(\mu,\ A^{-1}\left(\exp\left(-\frac{\sigma^2}{2}\right)\right)\right)$.

The proof is given in [15]. Calculation of the function $A^{-1}(\cdot)$ is somewhat tricky. In [15], we use the algorithm by Amos [28] to calculate $A(\cdot)$ (pseudocode of this algorithm is given in [15]) and MATLAB's fsolve to invert this function. Stienne et al. have proposed closed-form approximations, which can be calculated very easily but have a large approximation error [12], [10]. A more detailed discussion on approximations of $A^{-1}(\cdot)$ can be found in [29] and [30, Sec. 2.3].
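The conversions of Lemma 2 translate directly into code. The sketch below (assuming NumPy/SciPy) inverts $A(\cdot)$ by simple root bracketing; this is a rough stand-in for the Amos-based scheme and fsolve mentioned above, not the implementation from [15].

```python
import numpy as np
from scipy.special import i0, i1
from scipy.optimize import brentq

def A(kappa):
    """A(x) = I_1(x) / I_0(x), cf. Lemma 2."""
    return i1(kappa) / i0(kappa)

def A_inv(r, kappa_max=1e4):
    """Invert A by bracketing; valid for 0 < r < A(kappa_max)."""
    return brentq(lambda k: A(k) - r, 1e-12, kappa_max)

def moment_to_wn(m1):
    """Case 1: WN(mu, sigma) with given first moment m1."""
    return np.mod(np.angle(m1), 2 * np.pi), np.sqrt(-2.0 * np.log(np.abs(m1)))

def moment_to_vm(m1):
    """Case 2: VM(mu, kappa) with given first moment m1."""
    return np.mod(np.angle(m1), 2 * np.pi), A_inv(np.abs(m1))

def vm_to_wn(mu, kappa):
    """Case 3: WN with the same first moment as VM(mu, kappa)."""
    return mu, np.sqrt(-2.0 * np.log(A(kappa)))

def wn_to_vm(mu, sigma):
    """Case 4: VM with the same first moment as WN(mu, sigma)."""
    return mu, A_inv(np.exp(-sigma ** 2 / 2.0))
```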
4. Deterministic Sampling

In order to propagate continuous probability densities through nonlinear functions, it is a common technique to use discrete sample-based approximations of the continuous densities. A set of samples can easily be propagated by applying the nonlinear function to each sample individually. This approach can be used for both the prediction and the measurement update step.

We distinguish between deterministic and nondeterministic sampling. Nondeterministic sampling relies on a randomized algorithm to stochastically obtain samples of a density. Typical examples include the samplers used by the particle filter [23] or the Gaussian particle filter [31]. Deterministic sampling selects samples in a deterministic way, for example to fit certain moments (the sampler used by the UKF [2]), or to optimally approximate the shape of the density (published in [32], this sampler is used by the S2KF [33]). Deterministic sampling schemes have the advantage of requiring a significantly smaller number of samples, which is why we will focus on this type of solution.

A naïve solution for approximating a WN density may be the application of a deterministic sampling scheme for the Gaussian distribution (such as the samplers used in [2], [33]) and subsequently wrapping the samples. Even though this technique is valid for stochastic samples, it does not provide satisfactory results for deterministic samples. In extreme cases, wrapping can cause different samples to be wrapped to the same point, grossly misrepresenting the original density. This problem is illustrated in Fig. 3. In the case of UKF samples (Fig. 3a), it can be seen that for $\sigma \approx 2.5$, one sample is placed at $\mu$ and two samples are placed on the opposite side of the circle, i.e., the mode of the approximation is opposite to the true mode. Furthermore, for $\sigma \approx 5$ all three UKF samples are wrapped to the same position, i.e., the sample-based approximation degenerates to a distribution with a single Dirac component even though the true distribution is nearly uniform. Similar issues arise in the case of S2KF samples (see Fig. 3b).

4.1. Analytic Solutions

First of all, we consider analytic solutions to obtain deterministic samples. These solutions are based on circular moment matching and only provide a small, fixed number of Dirac delta components, but are extremely fast to calculate, making them a good choice for real-time applications.

In [15], we presented a method to obtain a WD approximation with three equally weighted components, which is based on matching the first circular moment (see Algorithm 1). We further extended this scheme to obtain a WD with five components by matching the first as well as the second circular moment (see Algorithm 2), which, as we proved in [17], necessitates the use of different weights. Both approaches can approximate arbitrary symmetric circular densities with given circular moments.

Algorithm 1: Deterministic approximation with L = 3 components.
Input: first circular moment $m_1$
Output: $WD(\gamma_1, \ldots, \gamma_3, \beta_1, \ldots, \beta_3)$
  /* extract mu */
  $\mu \leftarrow \operatorname{atan2}(\operatorname{Im} m_1, \operatorname{Re} m_1)$;
  /* obtain Dirac positions */
  $\alpha \leftarrow \arccos\left(\frac{3}{2}|m_1| - \frac{1}{2}\right)$;
  $\beta_1 \leftarrow \mu - \alpha \mod 2\pi$;  $\beta_2 \leftarrow \mu \mod 2\pi$;  $\beta_3 \leftarrow \mu + \alpha \mod 2\pi$;
  /* obtain weights */
  $\gamma_1, \gamma_2, \gamma_3 \leftarrow \frac{1}{3}$;

Algorithm 2: Deterministic approximation with L = 5 components.
Input: first circular moment $m_1$, second circular moment $m_2$, parameter $\lambda \in [0, 1]$ with default $\lambda = 0.5$
Output: $WD(\gamma_1, \ldots, \gamma_5, \beta_1, \ldots, \beta_5)$
  /* extract mu */
  $\mu \leftarrow \operatorname{atan2}(\operatorname{Im} m_1, \operatorname{Re} m_1)$;  $m_1 \leftarrow |m_1|$;  $m_2 \leftarrow |m_2|$;
  /* obtain weights */
  $\gamma_5^{\min} \leftarrow (4m_1^2 - 4m_1 - m_2 + 1)/(4m_1 - m_2 - 3)$;
  $\gamma_5^{\max} \leftarrow (2m_1^2 - m_2 - 1)/(4m_1 - m_2 - 3)$;
  $\gamma_5 \leftarrow \gamma_5^{\min} + \lambda(\gamma_5^{\max} - \gamma_5^{\min})$;
  $\gamma_1, \gamma_2, \gamma_3, \gamma_4 \leftarrow (1 - \gamma_5)/4$;
  /* obtain Dirac positions */
  $c_1 \leftarrow \frac{2}{1 - \gamma_5}(m_1 - \gamma_5)$;  $c_2 \leftarrow \frac{1}{1 - \gamma_5}(m_2 - \gamma_5) + 1$;
  $x_2 \leftarrow \left(2c_1 + \sqrt{4c_1^2 - 8(c_1^2 - c_2)}\right)/4$;  $x_1 \leftarrow c_1 - x_2$;
  $\varphi_1 \leftarrow \arccos(x_1)$;  $\varphi_2 \leftarrow \arccos(x_2)$;
  $(\beta_1, \ldots, \beta_5) \leftarrow \mu + (-\varphi_1, +\varphi_1, -\varphi_2, +\varphi_2, 0) \mod 2\pi$;

[Figure 3: Proposed approaches for generating samples of WN distributions with different concentration parameters $\sigma$, compared to the naïve approach of wrapping samples of a Gaussian with identical $\sigma$. (a) Analytic solution from Algorithm 1 and wrapped UKF [2] samples. (b) Analytic solution from Algorithm 2 with $\lambda = 0.8$ and wrapped S2KF [33] samples. It can be seen that the UKF and S2KF samples are eventually wrapped to the same location, which produces an extremely poor approximation.]

[Figure 4: Example of the deterministic sampling of a VM and a WN distribution with equal first circular moment, here VM(π, 1) and WN(π, 1.27). From top to bottom: a) original densities, b) result of Algorithm 1, c) result of Algorithm 2, d) approach based on VM kernels from [34] for 10 components, e) quantization approach from [35] for 10 components. Note that the result of Algorithm 1 is identical for both densities because only the first circular moment is considered.]
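For reference, a direct Python transcription of Algorithms 1 and 2 (assuming NumPy) might look as follows; this sketch omits the validity checks a robust implementation would need, e.g., that the arccos arguments lie in [-1, 1].

```python
import numpy as np

def sample_wd3(m1):
    """Algorithm 1: three equally weighted components matching m1."""
    mu = np.angle(m1)
    alpha = np.arccos(1.5 * np.abs(m1) - 0.5)
    beta = np.mod(mu + np.array([-alpha, 0.0, alpha]), 2 * np.pi)
    gamma = np.full(3, 1.0 / 3.0)
    return gamma, beta

def sample_wd5(m1, m2, lam=0.5):
    """Algorithm 2: five components matching m1 and m2."""
    mu = np.angle(m1)
    m1a, m2a = np.abs(m1), np.abs(m2)
    g5min = (4 * m1a**2 - 4 * m1a - m2a + 1) / (4 * m1a - m2a - 3)
    g5max = (2 * m1a**2 - m2a - 1) / (4 * m1a - m2a - 3)
    g5 = g5min + lam * (g5max - g5min)
    c1 = 2.0 / (1 - g5) * (m1a - g5)
    c2 = 1.0 / (1 - g5) * (m2a - g5) + 1
    x2 = (2 * c1 + np.sqrt(4 * c1**2 - 8 * (c1**2 - c2))) / 4
    x1 = c1 - x2
    phi1, phi2 = np.arccos(x1), np.arccos(x2)
    beta = np.mod(mu + np.array([-phi1, phi1, -phi2, phi2, 0.0]), 2 * np.pi)
    gamma = np.array(4 * [(1 - g5) / 4] + [g5])
    return gamma, beta
```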
4.2. Optimization-based Solutions

If a larger number of samples is desired and there are more degrees of freedom in the samples than constraints (such as the preservation of circular moments), optimization-based solutions can be used. The number of samples can be adjusted by the user, and an optimal approximation is derived by minimizing a distance measure.

In order to simultaneously calculate optimal locations and weights for the samples, a systematic approach based on VM kernels has been proposed in [34]. For a WD mixture, an induced VM mixture is compared to the true distribution with a quadratic integral distance. A specific kernel width is considered for each component, which depends on the weight of the component and the value of the true distribution at the location of the component. Both the weights and the locations of a fixed even number of WD components are optimized to obtain an optimal symmetric approximation. Constraints in the optimization algorithm are used to maintain a predefined number of circular moments. This approach results in well-distributed Dirac mixtures that fulfill the moment constraints.

A quantization approach is discussed in [35]. It is based on computing optimal Voronoi quantizers; in this approach, optimality refers to minimal quadratic distortion. The resulting Voronoi quantizer gives rise to a circular discrete probability distribution on a continuous domain that approximates the original continuous distribution. Use of this approximation is particularly beneficial in the prediction step of stochastic filters, because an error bound for propagation through a non-trivial system function can be obtained without actually knowing that function; it is sufficient to require it to be Lipschitz and to know an upper bound for the Lipschitz constant. Furthermore, circular moment constraints can be introduced in the optimization procedure of the quantization approach.

Examples from all discussed methods for deterministic sampling are depicted in Fig. 4.

5. Operations on Densities

In order to derive a circular filtering algorithm, we need to be able to perform certain operations on the involved probability densities.

5.1. Shifting and Mirroring

For a given density $f(x)$, we want to obtain the density $f(c - x)$ for a constant $c \in S^1$. This operation is necessary in certain cases of the update step. We can split this operation into two steps: mirroring to obtain $f(-x)$, and subsequent shifting by $c$ to obtain $f(c + (-x))$. Mirroring $WN(\mu, \sigma)$ and $VM(\mu, \kappa)$ yields $WN(2\pi - \mu, \sigma)$ and $VM(2\pi - \mu, \kappa)$, because the distributions are symmetric around their mean. Shifting $WN(\mu, \sigma)$ and $VM(\mu, \kappa)$ by $c$ yields $WN(\mu + c \mod 2\pi, \sigma)$ and $VM(\mu + c \mod 2\pi, \kappa)$, so the combined operation results in

$WN((2\pi - \mu) + c \mod 2\pi, \sigma)$ and $VM((2\pi - \mu) + c \mod 2\pi, \kappa)$.

5.2. Circular Convolution

Given two independent circular random variables $x_1 \sim f_1(x_1)$, $x_2 \sim f_2(x_2)$, the sum $x_1 + x_2$ is distributed according to

$(f_1 * f_2)(x) = \int_0^{2\pi} f_1(t)\, f_2(x - t)\, dt ,$

where $*$ denotes the convolution. This operation is necessary in the prediction step to incorporate additive noise.

WN distributions are closed under convolution and the new pdf can be obtained just as in the Gaussian case [15], i.e., $\mu = \mu_1 + \mu_2 \mod 2\pi$, $\sigma^2 = \sigma_1^2 + \sigma_2^2$. VM distributions are not closed under convolution. For this reason, Azmani et al. [9] used the approximation from [5], which is given by $\mu = \mu_1 + \mu_2$, $\kappa = A^{-1}(A(\kappa_1) \cdot A(\kappa_2))$, where the function $A(\cdot)$ is the same as defined in Lemma 2. This approximation can be derived from an intermediate WN representation [10]. A similar approximation has been used by Markovic et al. for the von Mises-Fisher case [14, (7)].

In this paper, we present a more general result that calculates the convolution based on circular moments.

Lemma 3 (Moments After Addition of Random Variables). Assume independent random variables $x_1 \sim f_1$, $x_2 \sim f_2$ defined on the circle. For the sum $x = x_1 + x_2$, it holds that

$E(\exp(inx)) = E(\exp(inx_1)) \cdot E(\exp(inx_2)) .$
Proof.

$m_n = E(\exp(inx)) = \int_0^{2\pi} \exp(inx)\, f(x)\, dx$
$= \int_0^{2\pi} \int_0^{2\pi} \exp(inx)\, f_1(y)\, f_2(x - y)\, dy\, dx$
$= \int_0^{2\pi} \int_0^{2\pi} \exp(in(x_1 + x_2))\, f_1(x_1)\, f_2(x_2)\, dx_1\, dx_2$
$= \int_0^{2\pi} \exp(inx_1)\, f_1(x_1)\, dx_1 \cdot \int_0^{2\pi} \exp(inx_2)\, f_2(x_2)\, dx_2$
$= E(\exp(inx_1)) \cdot E(\exp(inx_2)) ,$

where we substituted $x_1 = y$ and $x_2 = x - y$ in the third step.

If moment matching of the first circular moment is used to fit a WN or VM to the density that results from convolution, the solutions for WN and VM distributions from [9] and [15] arise as special cases of Lemma 3.

Remark 3. In fact, Lemma 3 allows us to calculate the convolution purely moment-based. For this reason, we do not need to assume any particular distribution, but can just calculate the moments of the convolved density based on the products of the original moments. (In general, for example in the case of mixture densities, the complexity of the density increases with each successive convolution, so considering only a finite number of moments constitutes an approximation.)
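As an illustration of Lemma 3 and Remark 3, the following sketch (assuming NumPy) convolves two WN densities purely via their first moments and recovers the closed-form result $\mu = \mu_1 + \mu_2$, $\sigma^2 = \sigma_1^2 + \sigma_2^2$ by moment matching (Lemma 2).

```python
import numpy as np

def wn_moment(mu, sigma, n):
    """n-th circular moment of WN(mu, sigma), cf. Lemma 1."""
    return np.exp(1j * n * mu - n ** 2 * sigma ** 2 / 2.0)

def convolve_wn(mu1, sigma1, mu2, sigma2):
    """Lemma 3: moments of a sum multiply; fit a WN to the product moment."""
    m1 = wn_moment(mu1, sigma1, 1) * wn_moment(mu2, sigma2, 1)
    return np.mod(np.angle(m1), 2 * np.pi), np.sqrt(-2.0 * np.log(np.abs(m1)))

# convolve_wn(1.0, 0.5, 2.0, 0.8) yields (3.0, sqrt(0.25 + 0.64)).
```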
[Figure 5: Multiplication of two WN densities with parameters $\mu_1 = 2$, $\sigma_1 = 0.7$ and $\mu_2 = 4.95$, $\sigma_2 = 1.3$. The true product and the results of both proposed approximation methods (VM and moment-based) are depicted. Note that the true product is not a WN density.]

5.3. Multiplication

Multiplication of pdfs is an important operation for filtering algorithms, because it is required for Bayesian inference. In general, the product of two pdfs is not normalized and is thus not a pdf. For this reason, we consider the renormalized product, which is a valid pdf.

5.3.1. VM

Von Mises densities are closed under multiplication [9]. It holds that $VM(\mu_1, \kappa_1) \cdot VM(\mu_2, \kappa_2) \propto VM(\mu, \kappa)$, where

$\mu = \operatorname{atan2}(\operatorname{Im} m_1, \operatorname{Re} m_1), \quad \kappa = |m_1|, \quad \text{with } m_1 = \kappa_1 \exp(i\mu_1) + \kappa_2 \exp(i\mu_2) .$

5.3.2. WN

WN densities are not closed under multiplication. In the following, we consider two different methods to approximate the density of the product with a WN density.

WN via VM. In [15], we proposed a method that uses the VM distribution in order to approximate the product of two WN densities. More specifically, we convert the WN densities to VM densities using Lemma 2, multiply according to the VM multiplication formula, and convert back to a WN distribution by applying Lemma 2 again. This method has the disadvantage that, in general, the first circular moment of the resulting WN does not match the first circular moment of the true product. An example can be seen in Fig. 5.

WN via Moment Matching. In this paper, we present a new method for approximating the product of WN distributions. This method is based on directly approximating the true posterior moments.

Theorem 1. The first circular moment of $WN(\mu_1, \sigma_1) \cdot WN(\mu_2, \sigma_2)$ after renormalization is given by

$m_1 = \frac{\sum_{j,k=-\infty}^{\infty} w(j,k) \int_0^{2\pi} \exp(ix)\, \mathcal{N}(x; \mu(j,k), \sigma(j,k))\, dx}{\sum_{j,k=-\infty}^{\infty} w(j,k) \int_0^{2\pi} \mathcal{N}(x; \mu(j,k), \sigma(j,k))\, dx} ,$

where $\mathcal{N}(x; \mu, \sigma)$ is a one-dimensional Gaussian density with mean $\mu$ and standard deviation $\sigma$, and

$\mu(j,k) = \frac{(\mu_1 + 2\pi j)\sigma_2^2 + (\mu_2 + 2\pi k)\sigma_1^2}{\sigma_1^2 + \sigma_2^2}, \quad \sigma(j,k) = \sqrt{\frac{\sigma_1^2 \sigma_2^2}{\sigma_1^2 + \sigma_2^2}}, \quad w(j,k) = \frac{\exp\left(-\frac{1}{2}\frac{((\mu_1 + 2\pi j) - (\mu_2 + 2\pi k))^2}{\sigma_1^2 + \sigma_2^2}\right)}{\sqrt{2\pi(\sigma_1^2 + \sigma_2^2)}} .$

Proof. The true renormalized product is given by $f(x) = c \cdot f(x; \mu_1, \sigma_1) \cdot f(x; \mu_2, \sigma_2)$, where $c$ renormalizes the product, i.e.,

$c = \left(\int_0^{2\pi} f(x; \mu_1, \sigma_1) \cdot f(x; \mu_2, \sigma_2)\, dx\right)^{-1} .$

We calculate

$m_1 = c \int_0^{2\pi} \exp(ix)\, f(x; \mu_1, \sigma_1)\, f(x; \mu_2, \sigma_2)\, dx$
$= c \int_0^{2\pi} \exp(ix) \sum_{j=-\infty}^{\infty} \mathcal{N}(x; \mu_1 + 2\pi j, \sigma_1) \sum_{k=-\infty}^{\infty} \mathcal{N}(x; \mu_2 + 2\pi k, \sigma_2)\, dx$
$= c \sum_{j=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} \int_0^{2\pi} \exp(ix)\, \mathcal{N}(x; \mu_1 + 2\pi j, \sigma_1)\, \mathcal{N}(x; \mu_2 + 2\pi k, \sigma_2)\, dx$
$= c \sum_{j=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} w(j,k) \int_0^{2\pi} \exp(ix)\, \mathcal{N}(x; \mu(j,k), \sigma(j,k))\, dx ,$

where we use the dominated convergence theorem to interchange summation and integration, and where $\mu(j,k)$, $\sigma(j,k)$, and $w(j,k)$ are as defined above, based on the multiplication formula for Gaussian densities given in [36, 8.1.8]. To obtain the renormalization factor $c^{-1}$, we use a similar derivation:

$c^{-1} = \int_0^{2\pi} f(x; \mu_1, \sigma_1)\, f(x; \mu_2, \sigma_2)\, dx = \sum_{j=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} w(j,k) \int_0^{2\pi} \mathcal{N}(x; \mu(j,k), \sigma(j,k))\, dx .$

The involved integrals can be reduced to evaluations of the complex error function erf [26, 7.1]. This yields

$\int_0^{2\pi} \exp(ix)\, \mathcal{N}(x; \mu(j,k), \sigma(j,k))\, dx = \frac{1}{2} \exp\left(i\mu(j,k) - \frac{\sigma(j,k)^2}{2}\right) \left(\operatorname{erf}\left(\frac{\mu(j,k) + i\sigma(j,k)^2}{\sqrt{2}\,\sigma(j,k)}\right) - \operatorname{erf}\left(\frac{\mu(j,k) - 2\pi + i\sigma(j,k)^2}{\sqrt{2}\,\sigma(j,k)}\right)\right)$

and

$\int_0^{2\pi} \mathcal{N}(x; \mu(j,k), \sigma(j,k))\, dx = \frac{1}{2} \left(\operatorname{erf}\left(\frac{\mu(j,k)}{\sigma(j,k)\sqrt{2}}\right) - \operatorname{erf}\left(\frac{\mu(j,k) - 2\pi}{\sigma(j,k)\sqrt{2}}\right)\right) .$

There are efficient implementations of the complex error function by means of the related Faddeeva function [37]. Furthermore, the infinite sums can be truncated to just a few summands without a significant loss in accuracy. For example, the multiplication in Fig. 5 requires 5 × 5 summands for an error smaller than the accuracy of the IEEE 754 64-bit double data type [38]. Consequently, the proposed method allows for efficient calculation of the approximate multiplication of WN densities.
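For a quick sanity check of Theorem 1 without complex erf evaluations, the first moment of the renormalized product can also be computed by brute-force quadrature. The grid-based sketch below (assuming NumPy) is far slower than the closed form but easy to verify against Fig. 5.

```python
import numpy as np

def wn_pdf(x, mu, sigma, n=5):
    """Truncated wrapped normal pdf (vectorized over x)."""
    k = np.arange(-n, n + 1)[:, None]
    s = np.sum(np.exp(-(x - mu + 2 * np.pi * k) ** 2 / (2 * sigma ** 2)), axis=0)
    return s / (np.sqrt(2 * np.pi) * sigma)

def wn_product_moment_matching(mu1, s1, mu2, s2, n_grid=10000):
    """Fit WN(mu, sigma) to the renormalized product of two WN densities by
    matching the first circular moment, computed here by quadrature."""
    x = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)
    dx = 2.0 * np.pi / n_grid
    p = wn_pdf(x, mu1, s1) * wn_pdf(x, mu2, s2)
    p /= np.sum(p) * dx                    # renormalize the product
    m1 = np.sum(np.exp(1j * x) * p) * dx   # first circular moment
    return np.mod(np.angle(m1), 2.0 * np.pi), np.sqrt(-2.0 * np.log(np.abs(m1)))

# Example from Fig. 5: wn_product_moment_matching(2.0, 0.7, 4.95, 1.3)
```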
6. Circular Filtering

Based on the results in the previous section, we derive circular filtering algorithms for the scenarios described in Sec. 2.1. All proposed algorithms follow the recursive filtering concept and consist of prediction and measurement update steps. We formulate the necessary steps without requiring a particular density whenever possible, such that most methods can be directly applied to WN as well as VM distributions, and might even be generalized to other continuous circular distributions. An overview of all considered prediction and measurement update algorithms is given in Table 2.

Table 2: Prediction and measurement update algorithms. Entries marked "contribution" are contributions of this paper.

method       | system: identity    | system: additive noise | system: arbitrary noise | measurement: identity | measurement: additive noise | measurement: arbitrary noise
WN           | [15] (special case) | [15]                   | contribution            | [15]                  | [16]                        | [16]
VM           | [9]                 | contribution           | contribution            | [9]                   | contribution                | contribution
moment-based | contribution        | contribution           | contribution            | -                     | -                           | -

6.1. Prediction

The prediction step is used to propagate the estimate through time.

6.1.1. Identity System Model

The transition density is given according to

$f(x_{k+1} \mid x_k) = \int_0^{2\pi} f(x_{k+1}, w_k \mid x_k)\, dw_k = \int_0^{2\pi} f(x_{k+1} \mid x_k, w_k)\, f^w(w_k)\, dw_k = \int_0^{2\pi} \delta(x_{k+1} - (x_k + w_k))\, f^w(w_k)\, dw_k = f^w(x_{k+1} - x_k) ,$

where $f^w(\cdot)$ is the density of the system noise. For the predicted density, according to the Chapman-Kolmogorov equation we obtain

$f^p(x_{k+1}) = \int_0^{2\pi} f(x_{k+1} \mid x_k)\, f^e(x_k)\, dx_k .$  (1)

In the special case of an identity system model, this yields

$f^p(x_{k+1}) = \int_0^{2\pi} f^w(x_{k+1} - x_k)\, f^e(x_k)\, dx_k = (f^w * f^e)(x_{k+1}) ,$

where $*$ denotes convolution as defined in Sec. 5.2. For the VM distribution, this system model has been considered in [9]. If a WN distribution is assumed, (1) is a special case of [15] where we omit the propagation through the nonlinear function.

6.1.2. Nonlinear System Model with Additive Noise

Similar to the previous case, the transition density is given by

$f(x_{k+1} \mid x_k) = \int_0^{2\pi} \delta(x_{k+1} - (a_k(x_k) + w_k))\, f^w(w_k)\, dw_k .$

We approximate the prior density $f^e(x_k) \approx \sum_{j=1}^{L} \gamma_j \delta(x_k - \beta_j)$ using, for example, Algorithm 1 or Algorithm 2. Then, the predicted density can be approximated according to

$f^p(x_{k+1}) = \int_0^{2\pi} f(x_{k+1} \mid x_k)\, f^e(x_k)\, dx_k$
$\approx \int_0^{2\pi} \left(\int_0^{2\pi} \delta(x_{k+1} - (a_k(x_k) + w_k))\, f^w(w_k)\, dw_k\right) \cdot \left(\sum_{j=1}^{L} \gamma_j \delta(x_k - \beta_j)\right) dx_k$
$= \int_0^{2\pi} f^w(w_k) \sum_{j=1}^{L} \gamma_j \int_0^{2\pi} \delta(x_{k+1} - (a_k(x_k) + w_k))\, \delta(x_k - \beta_j)\, dx_k\, dw_k$
$= \int_0^{2\pi} f^w(w_k) \underbrace{\sum_{j=1}^{L} \gamma_j \delta(x_{k+1} - (a_k(\beta_j) + w_k))}_{\approx \tilde{f}(x_{k+1} - w_k)}\, dw_k = (f^w * \tilde{f})(x_{k+1}) ,$

where $\tilde{f}$ is obtained by moment matching of the WD $\sum_{j=1}^{L} \gamma_j \delta(x - a_k(\beta_j))$ according to Lemma 2. The convolution can be calculated as described in Sec. 5.2.

6.1.3. Nonlinear System Model with Arbitrary Noise

In this paper, we extend the previous results to deal with arbitrary noise in the prediction step. For arbitrary noise, the transition density is given by

$f(x_{k+1} \mid x_k) = \int_W \delta(x_{k+1} - a_k(x_k, w_k))\, f^w(w_k)\, dw_k .$

We approximate the prior density $f^e(x_k) \approx \sum_{j=1}^{L} \gamma_j \delta(x_k - \beta_j)$ as well as the noise density $f^w(w_k) \approx \sum_{l=1}^{L_w} \gamma_l^w \delta(w_k - \beta_l^w)$. It should be noted that the noise is not necessarily a circular quantity and different approximation techniques may be required; if $W = \mathbb{R}^n$, the techniques presented in [32] may be applied. Then, the predicted density can be approximated according to

$f^p(x_{k+1}) = \int_0^{2\pi} f(x_{k+1} \mid x_k)\, f^e(x_k)\, dx_k$
$= \int_0^{2\pi} \int_W \delta(x_{k+1} - a_k(x_k, w_k))\, f^w(w_k)\, dw_k\, f^e(x_k)\, dx_k$
$\approx \int_0^{2\pi} \int_W \delta(x_{k+1} - a_k(x_k, w_k)) \left(\sum_{l=1}^{L_w} \gamma_l^w \delta(w_k - \beta_l^w)\right) dw_k\, f^e(x_k)\, dx_k$
$= \sum_{l=1}^{L_w} \gamma_l^w \int_0^{2\pi} \delta(x_{k+1} - a_k(x_k, \beta_l^w))\, f^e(x_k)\, dx_k$
$\approx \sum_{l=1}^{L_w} \gamma_l^w \int_0^{2\pi} \delta(x_{k+1} - a_k(x_k, \beta_l^w)) \left(\sum_{j=1}^{L} \gamma_j \delta(x_k - \beta_j)\right) dx_k$
$= \sum_{j=1}^{L} \sum_{l=1}^{L_w} \gamma_j \gamma_l^w\, \delta(x_{k+1} - a_k(\beta_j, \beta_l^w)) .$

A continuous density can be fitted to this result by circular moment matching. The algorithm is given in Algorithm 3.
It is worth noting that this algorithm can be executed based purely on the circular moments of $f(x_k)$ and $f^w(w_k)$ if the deterministic sampling scheme only depends on these moments. In that case, we do not necessarily need to fit a distribution $f(x_{k+1})$ to the resulting circular moments, but can just store the estimate by retaining those circular moments.

Algorithm 3: Prediction with arbitrary noise.
Input: prior density $f(x_k)$, system noise density $f^w(w_k)$, system function $a_k(\cdot, \cdot)$
Output: predicted density $f(x_{k+1})$
  /* sample prior density and noise density */
  $(\gamma_1, \ldots, \gamma_L, \beta_1, \ldots, \beta_L) \leftarrow$ sampleDeterm($f(x_k)$);
  $(\gamma_1^w, \ldots, \gamma_{L_w}^w, \beta_1^w, \ldots, \beta_{L_w}^w) \leftarrow$ sampleDeterm($f^w(w_k)$);
  /* obtain Cartesian product and propagate */
  for $j \leftarrow 1$ to $L$ do
    for $l \leftarrow 1$ to $L_w$ do
      $\gamma_{j+L(l-1)}^p \leftarrow \gamma_j \cdot \gamma_l^w$;
      $\beta_{j+L(l-1)}^p \leftarrow a_k(\beta_j, \beta_l^w)$;
    end
  end
  /* obtain predicted density */
  $f(x_{k+1}) \leftarrow$ momentMatching($\gamma_1^p, \ldots, \gamma_{L \cdot L_w}^p, \beta_1^p, \ldots, \beta_{L \cdot L_w}^p$);
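A compact Python sketch of Algorithm 3 follows (assuming NumPy, and that the sampled representations are given as weight/position arrays); it returns the first two circular moments, which can be fitted to a WN or VM as in Lemma 2, or simply stored, as noted above.

```python
import numpy as np

def predict_arbitrary_noise(gamma_x, beta_x, gamma_w, beta_w, a_k):
    """Algorithm 3: propagate the Cartesian product of state and noise
    samples through a_k and return the first two circular moments."""
    # All weight combinations gamma_j * gamma_l^w ...
    G = np.outer(gamma_x, gamma_w).ravel()
    # ... and all propagated positions a_k(beta_j, beta_l^w), in matching order.
    B = np.array([a_k(bx, bw) for bx in beta_x for bw in beta_w])
    m1 = np.sum(G * np.exp(1j * B))
    m2 = np.sum(G * np.exp(2j * B))
    return m1, m2
```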
6.2. Measurement Update

The measurement update step fuses the current estimate with a measurement that was obtained according to the measurement equation.

6.2.1. Identity Measurement Model

In the case of the identity measurement model and additive noise, the measurement likelihood is given by $f(z_k \mid x_k) = f^v(z_k - x_k)$. For the posterior density, application of Bayes' theorem yields

$f(x_k \mid z_k) = \frac{f(z_k \mid x_k)\, f(x_k)}{f(z_k)} \propto f(z_k \mid x_k)\, f(x_k) ,$

where $f(x_k)$ is the prior density. Thus, we obtain the posterior density

$f(x_k \mid z_k) \propto f^v(z_k - x_k)\, f(x_k)$

as the product of the prior density and $f^v(z_k - x_k)$, which can be obtained as described in Sec. 5.1. The multiplication depends on the assumed probability density and can be performed using the multiplication formulas given in Sec. 5.3. For the VM case, this is equivalent to the measurement update from [9], and for the WN case, this is equivalent to the measurement update from [15].

6.2.2. Nonlinear Model with Additive Noise

For a nonlinear measurement function with additive noise, the measurement likelihood is calculated according to $f(z_k \mid x_k) = f^v(z_k - h_k(x_k))$, as given in [16]. The remainder of the measurement update step is identical to the case of arbitrary noise as described in the following section.

6.2.3. Nonlinear Model with Arbitrary Noise

For a nonlinear update with arbitrary noise, we assume that the likelihood is given. The key idea is to approximate the prior density $f(x_k)$ with a WD mixture and reweight the components according to the likelihood $f(z_k \mid x_k)$. However, this can lead to degenerate solutions, i.e., most or all weights are close to zero, if the likelihood function is narrow. We have shown in [16] that a progressive solution as introduced in [39] can be used to avoid this issue. For this purpose, we formulate the likelihood as a product of likelihoods

$f(z_k \mid x_k) = f(z_k \mid x_k)^{\lambda_1} \cdot \ldots \cdot f(z_k \mid x_k)^{\lambda_s} ,$

where $\lambda_1, \ldots, \lambda_s > 0$ and $\sum_{n=1}^{s} \lambda_n = 1$. This decomposition of the likelihood allows us to perform the measurement update step gradually by performing $s$ partial update steps. Each update step is small enough to prevent degeneration, and we obtain a new sample set after each step to ensure that the differences between the sample weights stay small. In order to determine $\lambda_1, \ldots, \lambda_s$ and $s$, we require that the quotient between the smallest weight $\gamma_{\min}$ and the largest weight $\gamma_{\max}$ after reweighting is not below a certain threshold $R \in (0, 1)$, i.e., $\frac{\gamma_{\min}}{\gamma_{\max}} \geq R$. Using the conservative bounds

$\gamma_{\min} \geq \min_{j=1,\ldots,L}(\gamma_j^n) \cdot \min_{j=1,\ldots,L} f(z_k \mid \beta_j^n)^{\lambda_n} , \quad \gamma_{\max} \leq \max_{j=1,\ldots,L}(\gamma_j^n) \cdot \max_{j=1,\ldots,L} f(z_k \mid \beta_j^n)^{\lambda_n} ,$

this leads to the condition

$\lambda_n \leq \frac{\log\left(R \cdot \frac{\max_{j=1,\ldots,L}(\gamma_j^n)}{\min_{j=1,\ldots,L}(\gamma_j^n)}\right)}{\log\left(\frac{\min_{j=1,\ldots,L} f(z_k \mid \beta_j^n)}{\max_{j=1,\ldots,L} f(z_k \mid \beta_j^n)}\right)} ,$

where $WD(\gamma_1^n, \ldots, \gamma_L^n, \beta_1^n, \ldots, \beta_L^n)$ is the deterministic approximation at the $n$-th progression step. (Compared to [16], we extend the progressive scheme to handle discrete approximations with non-equally weighted components.) The progression continues until $\sum_n \lambda_n = 1$. This method can be applied in conjunction with WN as well as VM distributions (see Algorithm 4).

Algorithm 4: Progressive measurement update for arbitrary noise.
Input: measurement $\hat{z}_k$, likelihood $f(z_k \mid x_k)$, predicted density $f^p(x_k)$ as WN or VM, threshold parameter $R$
Output: estimated density as WN or VM
  $s \leftarrow 0$;  $f(x_k) \leftarrow f^p(x_k)$;
  while $\sum_{n=1}^{s} \lambda_n < 1$ do
    $s \leftarrow s + 1$;
    /* deterministic sampling from WN or VM density (Sec. 4) */
    $(\gamma_1, \ldots, \gamma_L, \beta_1, \ldots, \beta_L) \leftarrow$ sampleDeterm($f(x_k)$);
    /* calculate size of progression step */
    $\lambda_s \leftarrow \min\left(1 - \sum_{n=1}^{s-1} \lambda_n,\ \log\left(R \cdot \frac{\max_j(\gamma_j)}{\min_j(\gamma_j)}\right) \Big/ \log\left(\frac{\min_j f(\hat{z}_k \mid \beta_j)}{\max_j f(\hat{z}_k \mid \beta_j)}\right)\right)$;
    /* execute progression step */
    for $j \leftarrow 1$ to $L$ do
      $\gamma_j \leftarrow \gamma_j \cdot f(\hat{z}_k \mid \beta_j)^{\lambda_s}$;
    end
    /* use moment matching to obtain WN or VM density */
    $f(x_k) \leftarrow$ momentMatching($\gamma_1, \ldots, \gamma_L, \beta_1, \ldots, \beta_L$);
  end
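A Python sketch of Algorithm 4 follows (assuming NumPy). Here sample_determ and fit_moment_matching stand for the deterministic sampler (e.g., Algorithm 2) and the moment-matching fit of a WN or VM (Lemma 2); these interfaces are assumptions of this sketch, not fixed APIs from the paper.

```python
import numpy as np

def progressive_update(f_pred, z, likelihood, sample_determ,
                       fit_moment_matching, R=0.2, max_steps=1000):
    """Algorithm 4: progressive measurement update for arbitrary noise.
    likelihood(z, beta) must return f(z | beta) for an array of positions."""
    f, lam_sum = f_pred, 0.0
    for _ in range(max_steps):  # guard against non-terminating progressions
        if lam_sum >= 1.0:
            break
        gamma, beta = sample_determ(f)  # deterministic WD approximation
        lik = likelihood(z, beta)
        ratio = np.min(lik) / np.max(lik)
        if ratio >= 1.0:                # flat likelihood: finish in one step
            lam = 1.0 - lam_sum
        else:
            lam = min(1.0 - lam_sum,
                      np.log(R * np.max(gamma) / np.min(gamma)) / np.log(ratio))
        gamma = gamma * lik ** lam      # partial Bayesian update
        f = fit_moment_matching(gamma / np.sum(gamma), beta)
        lam_sum += lam
    return f
```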
7. Evaluation

7.1. Propagation Accuracy

In order to evaluate the deterministic sampling introduced in Sec. 4, we investigate the accuracy of propagation through the nonlinear function

$g : S^1 \to S^1, \quad g(x) = x + c \times_{\mathbb{R}} \sin(x) \mod 2\pi ,$

where $c \in [0, 1)$ is a parameter controlling the strength of the nonlinearity and $\times_{\mathbb{R}}$ refers to multiplication in the field of real numbers $\mathbb{R}$. Furthermore, we consider the density $WN(0, \sigma)$ that we want to propagate through $g(\cdot)$. For this purpose, we sample $WN(0, \sigma)$ deterministically using the methods described in Sec. 4 and obtain $WD(\gamma_1, \ldots, \gamma_L, \beta_1, \ldots, \beta_L)$. Then, we apply $g(\cdot)$ componentwise, which yields $WD(\gamma_1, \ldots, \gamma_L, g(\beta_1), \ldots, g(\beta_L))$. The true posterior is given by

$f_{\text{true}}(x) = \frac{f(g^{-1}(x); \mu, \sigma)}{g'(g^{-1}(x))}$

and can only be calculated numerically. We evaluate the first and the second circular moments $m_i^{WD}$, $i = 1, 2$, of the resulting WD distribution and compare them to the first and the second circular moments $m_i^{\text{true}}$, $i = 1, 2$, of the true posterior, which are obtained by numerical integration. (Numerical integration produces very accurate results in this case, but is too slow for use in practical filtering applications.) The considered error measure is given by $|m_i^{WD} - m_i^{\text{true}}|$, $i = 1, 2$, where $|\cdot|$ is the Euclidean norm in the complex plane. Additionally, we fit a WN density to the posterior WD by circular moment matching and numerically calculate the Kullback-Leibler divergence

$D_{KL}(f_{\text{true}} \,\|\, f_{\text{fitted}}) = \int_0^{2\pi} f_{\text{true}}(x) \log\left(\frac{f_{\text{true}}(x)}{f_{\text{fitted}}(x)}\right) dx$  (2)

between the true posterior and the fitted WN. The Kullback-Leibler divergence is an information-theoretic measure that quantifies the information loss when approximating $f_{\text{true}}$ with $f_{\text{fitted}}$. This concept is illustrated in Fig. 6.

[Figure 6: Propagation of a WN distribution with parameters $\mu = 0.1$, $\sigma = 1$ through the nonlinear function $g(x) = x + c\sin(x) \mod 2\pi$ by means of a deterministic WD approximation with five components; in this example, $c = 0.7$. Shown are the prior density with its WD approximation, the function $g$, and the true posterior with the propagated WD mixture and the fitted WN.]

The results for different values of $\sigma$ are depicted in Fig. 7. We compare several samplers: the analytic methods with L = 3 components (Algorithm 1) and L = 5 components (Algorithm 2, parameter $\lambda = 0.5$) from Sec. 4.1, as well as the quantization approach discussed in Sec. 4.2. It can be seen that the analytic solution with L = 5 components is significantly better than the solution with L = 3 components. The quantization-based solution is computationally quite demanding but gives almost optimal results. However, the analytic solution with L = 5 components has comparable performance in terms of the Kullback-Leibler divergence even though the posterior moments are not calculated as accurately.

[Figure 7: Propagation of $WN(0, \sigma)$ through a nonlinear function with nonlinearity parameter $c$, for $\sigma \in \{0.5, 1.0, 1.5\}$: first and second circular moment errors and KLD for L = 3, L = 5, L = 50, and the quantization approach.]

7.2. Moment-Based WN Multiplication

In this evaluation, we compare the two methods for WN multiplication given in Sec. 5.3.2, i.e., multiplication via VM densities and via moment matching. For two WN densities $WN(\mu_1, \sigma_1)$ and $WN(\mu_2, \sigma_2)$, we calculate the true product $f_{\text{true}} = WN(\mu_1, \sigma_1) \cdot WN(\mu_2, \sigma_2)$ and compare it to the WN approximation $f_{\text{fitted}}$. In order to determine the approximation quality, we compute the Kullback-Leibler divergence (2). Furthermore, we consider the L2 distance

$D_{L^2}(f_{\text{true}}, f_{\text{fitted}}) = \int_0^{2\pi} \left(f_{\text{true}}(x) - f_{\text{fitted}}(x)\right)^2 dx .$

The results for different values of $\sigma_1$, $\mu_2$, and $\sigma_2$ are depicted in Fig. 8 and Fig. 9. We keep $\mu_1$ fixed because only the difference between $\mu_2$ and $\mu_1$ affects the result. Multiplication is commutative, so we consider different sets of values for $\sigma_1$ and $\sigma_2$ to avoid redundant plots. As can be seen, the moment-based approach significantly outperforms the VM-based approach in almost all cases according to both distance measures. The new approach is particularly superior for small uncertainties.

[Figure 8: Kullback-Leibler divergence between the true product of WN densities and the proposed approximations (VM and moment-based) for $\sigma_1 \in \{0.1, 0.4, 1.0\}$ and $\sigma_2 \in \{0.2, 0.5, 1.0\}$.]

[Figure 9: L2 distance between the true product of WN densities and the proposed approximations for the same parameter values.]

7.3. Filtering

Table 3: Evaluation scenarios.

scenario       | system function | measurement noise covariance $C^v$
s              | (3)             | $0.01 \cdot I_{2\times2}$
m              | (3)             | $0.1 \cdot I_{2\times2}$
l              | (3)             | $3 \cdot I_{2\times2}$
s-non-additive | (4)             | $0.01 \cdot I_{2\times2}$
m-non-additive | (4)             | $0.1 \cdot I_{2\times2}$
l-non-additive | (4)             | $3 \cdot I_{2\times2}$

In order to evaluate the proposed filtering algorithms, we simulated several scenarios. First of all, we distinguish between models with additive noise and models with a more complex noise structure. In the case of additive noise, we consider the system function

$x_{k+1} = x_k + c_1 \times_{\mathbb{R}} \sin(x_k) + c_2 + w_k ,$  (3)

with parameters $c_1 = 0.1$, $c_2 = 0.15$, noise $w_k \sim WN(0, 0.2)$, and $\times_{\mathbb{R}}$ denoting multiplication in the field of real numbers $\mathbb{R}$. Intuitively, $c_1$ determines the degree of nonlinearity and $c_2$ is a constant angular velocity that is added at each time step. For the case of arbitrary noise, the system function is given by

$x_{k+1} = x_k + c_1 \times_{\mathbb{R}} \sin(x_k + w_k) + c_2 ,$  (4)

with the same $c_1$, $c_2$, and $w_k$ as above.
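For reproducibility, a minimal simulation of the ground truth for systems (3) and (4) can be written as follows (assuming NumPy); sampling from a WN is simple because wrapping Gaussian samples is exact.

```python
import numpy as np

rng = np.random.default_rng(0)
TWO_PI = 2.0 * np.pi
c1, c2 = 0.1, 0.15

def wn_sample(mu, sigma):
    """Sample WN(mu, sigma) by wrapping a Gaussian sample."""
    return np.mod(rng.normal(mu, sigma), TWO_PI)

def step_system_3(x):
    """System (3): additive noise."""
    return np.mod(x + c1 * np.sin(x) + c2 + wn_sample(0.0, 0.2), TWO_PI)

def step_system_4(x):
    """System (4): noise inside the nonlinearity."""
    return np.mod(x + c1 * np.sin(x + wn_sample(0.0, 0.2)) + c2, TWO_PI)
```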
In both cases, the nonlinear measurement function is given by

$\hat{z}_k = [\cos(x_k), \sin(x_k)]^T + v_k \in \mathbb{R}^2$

with additive noise $v_k \sim \mathcal{N}(0, \eta \cdot I_{2\times2})$, $\eta \in \{3, 0.1, 0.01\}$. An overview of all considered scenarios is given in Table 3.

In the scenarios with additive system noise, we compare the proposed filter to all standard approaches described in Sec. 2.2: a UKF with one-dimensional state vector, a UKF with two-dimensional state vector, and particle filters with 10 and 100 particles. In order to handle non-additive noise with a UKF, typically state augmentation is used, which is not applicable to arbitrary noise. For this reason, we only compare the proposed approach to the particle filters in the non-additive noise case. The initial estimate is given by $x_0 \sim WN(0, 1)$, whereas the true initial state is on the opposite side of the circle, $x_0^{\text{true}} = \pi$; i.e., the initial estimate is poor, which is difficult to handle for noncircular filters. For the circular filtering algorithm, we use the deterministic sampling method given in Algorithm 2 with parameter $\lambda = 0.5$. The progression threshold is chosen as $R = 0.2$.

In order to evaluate the performance of the different filters, we consider a specific error measure that takes periodicity into account. The angular error is defined as the shortest distance on the circle,

$d : S^1 \times S^1 \to [0, \pi], \quad d(a, b) = \min(|a - b|,\ 2\pi - |a - b|) .$

This leads to an angular version of the commonly used root mean square error (RMSE),

$\sqrt{\frac{1}{k_{\max}} \sum_{k=1}^{k_{\max}} d(x_k, x_k^{\text{true}})^2} ,$

between estimates $x_k$ and true state variables $x_k^{\text{true}}$. We simulated the system for $k_{\max} = 100$ time steps and compared the angular RMSE of all estimators.
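The angular error and the resulting RMSE are straightforward to implement; a short sketch (assuming NumPy):

```python
import numpy as np

def angular_dist(a, b):
    """Shortest distance on the circle, d(a, b) in [0, pi]."""
    d = np.abs(np.mod(a - b, 2.0 * np.pi))
    return np.minimum(d, 2.0 * np.pi - d)

def angular_rmse(estimates, truth):
    """Angular RMSE over a trajectory of k_max estimates."""
    e = angular_dist(np.asarray(estimates), np.asarray(truth))
    return np.sqrt(np.mean(e ** 2))
```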
The results from 100 Monte Carlo runs are depicted in Fig. 10.

[Figure 10: RMSE (in radians) for different filters obtained from 100 Monte Carlo runs for additive noise (top) and non-additive noise (bottom), for small noise η = 0.01, medium noise η = 0.1, and large noise η = 3.]

In the scenarios with additive noise, it can be seen that the proposed filter performs very well regardless of the amount of noise. Only the particle filter with 100 particles is able to produce similar results. However, it should be noted that the proposed filter uses just five samples. The particle filter with 10 particles performs a lot worse and fails completely for small noise as a result of particle degeneration issues. Both variants of the UKF perform worse than the proposed filter. Particularly the UKF with two-dimensional state does not work very well, which can be explained by the inaccuracies in the conversion of the one-dimensional noise into two-dimensional noise. When non-additive noise is considered, the proposed filter even significantly outperforms the particle filter with 100 particles. As a result of the low number of particles and the associated issues regarding particle degeneration, the particle filter with 10 particles has the worst performance.

8. Conclusion

In this paper, we presented a framework for recursive filtering on the circle. The proposed filtering algorithms can deal with arbitrary nonlinear system and measurement functions. Furthermore, they can be used in conjunction with different circular probability distributions. We have shown that the prediction step can be performed based on circular moments only, without ever assuming a particular distribution.

For the purpose of evaluation, we have considered several aspects of the proposed methods. First of all, the accuracy of deterministic approximations was evaluated by considering the error when using them to propagate a continuous distribution through a nonlinear function. We have found that the proposed deterministic approximation with five samples yields good results for most practical scenarios. Second, we evaluated the novel moment-based WN multiplication method and showed that it is superior to the previously published method based on fitting a VM distribution. Finally, we evaluated the proposed filtering algorithms in several scenarios and compared them to standard approaches. These simulations show the advantages of using a circular filtering scheme compared to traditional methods intended for the linear case.

Future work may include extensions of the proposed methods to other manifolds such as the torus or the hypersphere. Additionally, consideration of multimodal circular distributions may be of interest, for example by means of WN or VM mixtures.

Acknowledgment

This work was partially supported by grants from the German Research Foundation (DFG) within the Research Training Groups RTG 1126 "Soft-tissue Surgery: New Computer-based Methods for the Future Workplace" and RTG 1194 "Self-organizing Sensor-Actuator-Networks".

References

[1] R. E. Kalman, "A New Approach to Linear Filtering and Prediction Problems," Transactions of the ASME Journal of Basic Engineering, vol. 82, pp. 35-45, 1960.
[2] S. J. Julier and J. K. Uhlmann, "Unscented Filtering and Nonlinear Estimation," Proceedings of the IEEE, vol. 92, no. 3, pp. 401-422, Mar. 2004.
[3] S. R. Jammalamadaka and A. Sengupta, Topics in Circular Statistics. World Scientific, 2001.
[4] E. Batschelet, Circular Statistics in Biology, ser. Mathematics in Biology. London: Academic Press, 1981.
[5] K. V. Mardia and P. E. Jupp, Directional Statistics, 1st ed. Wiley, 1999.
[6] K. V. Mardia, G.
Acknowledgment

This work was partially supported by grants from the German Research Foundation (DFG) within the Research Training Groups RTG 1126 "Soft-tissue Surgery: New Computer-based Methods for the Future Workplace" and RTG 1194 "Self-organizing Sensor-Actuator-Networks".

References

[1] R. E. Kalman, "A New Approach to Linear Filtering and Prediction Problems," Transactions of the ASME, Journal of Basic Engineering, vol. 82, pp. 35–45, 1960.
[2] S. J. Julier and J. K. Uhlmann, "Unscented Filtering and Nonlinear Estimation," Proceedings of the IEEE, vol. 92, no. 3, pp. 401–422, Mar. 2004.
[3] S. R. Jammalamadaka and A. Sengupta, Topics in Circular Statistics. World Scientific, 2001.
[4] E. Batschelet, Circular Statistics in Biology, ser. Mathematics in Biology. London: Academic Press, 1981.
[5] K. V. Mardia and P. E. Jupp, Directional Statistics, 1st ed. Wiley, 1999.
[6] K. V. Mardia, G. Hughes, C. C. Taylor, and H. Singh, "A Multivariate von Mises Distribution with Applications to Bioinformatics," Canadian Journal of Statistics, vol. 36, no. 1, pp. 99–109, 2008.
[7] N. I. Fisher, "Problems with the Current Definitions of the Standard Deviation of Wind Direction," Journal of Climate and Applied Meteorology, vol. 26, no. 11, pp. 1522–1529, 1987.
[8] K. V. Mardia, "Directional Statistics in Geosciences," Communications in Statistics – Theory and Methods, vol. 10, no. 15, pp. 1523–1543, 1981.
[9] M. Azmani, S. Reboul, J.-B. Choquel, and M. Benjelloun, "A Recursive Fusion Filter for Angular Data," in IEEE International Conference on Robotics and Biomimetics (ROBIO 2009), 2009, pp. 882–887.
[10] G. Stienne, S. Reboul, M. Azmani, J. Choquel, and M. Benjelloun, "A Multi-temporal Multi-sensor Circular Fusion Filter," Information Fusion, vol. 18, pp. 86–100, Jul. 2013.
[11] G. Stienne, S. Reboul, J. Choquel, and M. Benjelloun, "Cycle Slip Detection and Repair with a Circular On-line Change-Point Detector," Signal Processing, 2014.
[12] ——, "Circular Data Processing Tools Applied to a Phase Open Loop Architecture for Multi-Channels Signals Tracking," in IEEE/ION Position Location and Navigation Symposium (PLANS 2012), 2012, pp. 633–642.
[13] K. E. Mokhtari, S. Reboul, M. Azmani, J.-B. Choquel, H. Salaheddine, B. Amami, and M. Benjelloun, "A Circular Interacting Multi-Model Filter Applied to Map Matching," in Proceedings of the 16th International Conference on Information Fusion (Fusion 2013), 2013.
[14] I. Markovic, F. Chaumette, and I. Petrovic, "Moving Object Detection, Tracking and Following Using an Omnidirectional Camera on a Mobile Robot," in Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA 2014), Hong Kong, Jun. 2014.
[15] G. Kurz, I. Gilitschenski, and U. D. Hanebeck, "Recursive Nonlinear Filtering for Angular Data Based on Circular Distributions," in Proceedings of the 2013 American Control Conference (ACC 2013), Washington, D.C., USA, Jun. 2013.
[16] ——, "Nonlinear Measurement Update for Estimation of Angular Systems Based on Circular Distributions," in Proceedings of the 2014 American Control Conference (ACC 2014), Portland, Oregon, USA, Jun. 2014.
[17] ——, "Deterministic Approximation of Circular Densities with Symmetric Dirac Mixtures Based on Two Circular Moments," in Proceedings of the 17th International Conference on Information Fusion (Fusion 2014), Salamanca, Spain, Jul. 2014.
[18] G. Kurz, F. Faion, and U. D. Hanebeck, "Constrained Object Tracking on Compact One-dimensional Manifolds Based on Directional Statistics," in Proceedings of the Fourth IEEE GRSS International Conference on Indoor Positioning and Indoor Navigation (IPIN 2013), Montbéliard, France, Oct. 2013.
[19] I. Gilitschenski, G. Kurz, and U. D. Hanebeck, "Bearings-Only Sensor Scheduling Using Circular Statistics," in Proceedings of the 16th International Conference on Information Fusion (Fusion 2013), Istanbul, Turkey, Jul. 2013.
[20] G. Kurz, M. Dolgov, and U. D. Hanebeck, "Nonlinear Stochastic Model Predictive Control in the Circular Domain (to appear)," in Proceedings of the 2015 American Control Conference (ACC 2015), Chicago, Illinois, USA, Jul. 2015.
[21] G. Kurz, I. Gilitschenski, S. J. Julier, and U. D. Hanebeck, "Recursive Estimation of Orientation Based on the Bingham Distribution," in Proceedings of the 16th International Conference on Information Fusion (Fusion 2013), Istanbul, Turkey, Jul. 2013.
[22] S. J. Julier and J. J. LaViola, "On Kalman Filtering With Nonlinear Equality Constraints," IEEE Transactions on Signal Processing, vol. 55, no. 6, pp. 2774–2784, 2007.
[23] M. Arulampalam, S. Maskell, N. Gordon, and T. Clapp, "A Tutorial on Particle Filters for Online Nonlinear/Non-Gaussian Bayesian Tracking," IEEE Transactions on Signal Processing, vol. 50, no. 2, pp. 174–188, 2002.
[24] W. K. Hastings, "Monte Carlo Sampling Methods Using Markov Chains and Their Applications," Biometrika, vol. 57, no. 1, pp. 97–109, 1970.
[25] G. Kurz, I. Gilitschenski, and U. D. Hanebeck, "Efficient Evaluation of the Probability Density Function of a Wrapped Normal Distribution," in Proceedings of the IEEE ISIF Workshop on Sensor Data Fusion: Trends, Solutions, Applications (SDF 2014), Bonn, Germany, Oct. 2014.
[26] M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, 10th ed. New York: Dover, 1972.
[27] K. Mardia, "Characterizations of Directional Distributions," in A Modern Course on Statistical Distributions in Scientific Work. Springer Netherlands, 1975, vol. 17, pp. 365–385.
[28] D. E. Amos, "Computation of Modified Bessel Functions and Their Ratios," Mathematics of Computation, vol. 28, no. 125, pp. 239–251, 1974.
[29] S. Sra, "A Short Note on Parameter Approximation for von Mises–Fisher Distributions: And a Fast Implementation of $I_s(x)$," Computational Statistics, vol. 27, no. 1, pp. 177–190, 2012.
[30] G. Stienne, "Traitements des signaux circulaires appliqués à l'altimétrie par la phase des signaux GNSS," Ph.D. dissertation, Université du Littoral Côte d'Opale, Dec. 2013.
[31] J. H. Kotecha and P. Djuric, "Gaussian Particle Filtering," IEEE Transactions on Signal Processing, vol. 51, no. 10, pp. 2592–2601, 2003.
[32] U. D. Hanebeck and V. Klumpp, "Localized Cumulative Distributions and a Multivariate Generalization of the Cramér–von Mises Distance," in Proceedings of the 2008 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI 2008), Seoul, Republic of Korea, Aug. 2008, pp. 33–39.
[33] J. Steinbring and U. D. Hanebeck, "S2KF: The Smart Sampling Kalman Filter," in Proceedings of the 16th International Conference on Information Fusion (Fusion 2013), Istanbul, Turkey, Jul. 2013.
[34] U. D. Hanebeck and A. Lindquist, "Moment-based Dirac Mixture Approximation of Circular Densities," in Proceedings of the 19th IFAC World Congress (IFAC 2014), Cape Town, South Africa, Aug. 2014.
[35] I. Gilitschenski, G. Kurz, and U. D. Hanebeck, "Optimal Quantization of Circular Distributions (submitted)," in Proceedings of the 53rd IEEE Conference on Decision and Control (CDC 2014), 2014.
[36] K. B. Petersen and M. S. Pedersen, "The Matrix Cookbook," Nov. 2012.
[37] S. G. Johnson, "Faddeeva Package," http://ab-initio.mit.edu/wiki/index.php/Faddeeva_Package, Dec. 2012.
[38] D. Goldberg, "What Every Computer Scientist Should Know About Floating-point Arithmetic," ACM Computing Surveys, vol. 23, no. 1, pp. 5–48, Mar. 1991.
[39] U. D. Hanebeck, "PGF 42: Progressive Gaussian Filtering with a Twist," in Proceedings of the 16th International Conference on Information Fusion (Fusion 2013), Istanbul, Turkey, Jul. 2013.