Recursive Bayesian Filtering
in Circular State Spaces
Gerhard Kurza, Igor Gilitschenskia, Uwe D. Hanebecka
aIntelligent Sensor-Actuator-Systems Laboratory (ISAS)
Institute for Anthropomatics and Robotics
Karlsruhe Institute of Technology (KIT), Germany
Abstract
For recursive circular filtering based on circular statistics, we introduce a general framework for
estimation of a circular state based on different circular distributions, specifically the wrapped normal
distribution and the von Mises distribution. We propose an estimation method for circular systems
with nonlinear system and measurement functions. This is achieved by relying on efficient deterministic
sampling techniques. Furthermore, we show how the calculations can be simplified in a variety of
important special cases, such as systems with additive noise as well as identity system or measurement
functions. We introduce several novel key components, particularly a distribution-free prediction
algorithm, a new and superior formula for the multiplication of wrapped normal densities, and the
ability to deal with non-additive system noise. All proposed methods are thoroughly evaluated and
compared to several state-of-the-art solutions.
1. Introduction
Estimation of circular quantities is an omnipresent issue, be it the wind direction, the angle of
a robotic revolute joint, the orientation of a turntable, or the direction a vehicle is facing. Circular
estimation is not limited to applications involving angles, however, and can be applied to a variety
of periodic phenomena. For example, phase estimation is a common issue in signal processing, and
tracking objects that periodically move along a certain trajectory is also of interest.
Standard approaches to circular estimation are typically based on estimation techniques designed
for linear scenarios that are tweaked to deal with some of the issues arising in the presence of circular
quantities. However, modifying linear methods is not only tedious and error-prone, but also yields
suboptimal results, because certain assumptions of these methods are violated. For example, solutions
based on Kalman filters [1] or nonlinear versions thereof [2] fundamentally neglect the true topology
of the underlying manifold and assume a Gaussian distribution, which is only defined on Rn. In the
linear case, the use of a Gaussian distribution is frequently justified by the central limit theorem. This
justification no longer holds in a circular setting, as the Gaussian is not a limit distribution on the
circle.
In order to properly deal with circular estimation problems, we rely on circular statistics [3], [4], a
subfield of statistics that deals with circular quantities. More broadly, the field of directional statistics
[5] considers a variety of manifolds, such as the circle, the hypersphere, or the torus. Unlike standard
approaches that assume linear state spaces, methods based on circular statistics correctly use the
proper manifold and rely on probability distributions defined on this manifold. Circular statistics
has been applied in a variety of sciences, such as biology [4], bioinformatics [6], meteorology [7], and
geosciences [8].
Email addresses: gerhard.kurz@kit.edu (Gerhard Kurz), gilitschenski@kit.edu (Igor Gilitschenski),
uwe.hanebeck@ieee.org (Uwe D. Hanebeck)
publication                                | distribution             | system model | system noise | measurement model | measurement noise
Azmani, Reboul, Choquel, Benjelloun [9]    | von Mises                | identity     | additive     | identity          | additive
Markovic, Chaumette, Petrovic [14]         | von Mises-Fisher         | identity     | additive     | identity          | additive
Kurz, Gilitschenski, Julier, Hanebeck [21] | Bingham                  | identity     | additive     | identity          | additive
Kurz, Gilitschenski, Hanebeck [15]         | wrapped normal/von Mises | nonlinear    | additive     | identity          | additive
Kurz, Gilitschenski, Hanebeck [16]         | wrapped normal           | nonlinear    | additive     | nonlinear         | any
this paper                                 | wrapped normal/von Mises | nonlinear    | any          | nonlinear         | any

Table 1: Circular filters based on directional statistics.
There has been some work on filtering algorithms based on circular statistics by Azmani et al. [9],
which was further investigated by Stienne et al. [10]. Their work is based on the von Mises distribution
and allows for recursive filtering of systems with a circular state space. However, it is limited to
identity system and measurement equations with additive noise. The filter from [9]
has been applied to phase estimation of GPS signals [11], [12] as well as map matching [13]. Markovic
et al. have published a similar filter [14] based on the von Mises-Fisher distribution, a generalization
of the von Mises distribution to the hypersphere.
We have previously published a recursive filter based on the wrapped normal distribution allowing
for a nonlinear system equation [15]. The paper [16] extends this approach to make a nonlinear
measurement update possible. Both papers rely on a deterministic sampling scheme, based on the
first circular moment. This kind of sampling is reminiscent of the well-known unscented Kalman filter
(UKF) [2]. We have extended this sampling scheme to the first two circular moments in [17], so the
proposed filters are, in a sense, circular versions of the UKF.
The developed filters have been applied in the context of constrained tracking [18], bearings-only
sensor scheduling [19], as well as circular model predictive control [20].
Furthermore, we proposed a recursive filter based on the circular Bingham distribution in [21].
The Bingham distribution is closely related to the von Mises distribution, but has a bimodal density,
which makes it interesting for axial estimation problems (i.e., problems with 180° symmetry).
An overview of all of these filters and the considered distributions as well as system and measurement
models is given in Table 1.
This paper summarizes and combines our results as well as extends the previous work by a number
of additional contributions. First of all, we propose a general filtering framework that can be used in
conjunction with a variety of system and measurement equations, different types of noise, and both
the wrapped normal and the von Mises distributions. Our previous publications [15], [16] as well as
the work by Azmani et al. [9] can be seen as special cases of the proposed framework.
Furthermore, we introduce a new multiplication formula for wrapped normal distributions that
outperforms the solution proposed in [15]. We generalize the prediction step from [15] to a purely
moment-based solution that does not need to assume any kind of distribution. Compared to [16],
we add the ability to deal with non-additive noise not only in the measurement update but also in
the prediction step. Finally, we perform a thorough evaluation, where we compare the proposed
techniques to several state-of-the-art approaches.
This paper is structured as follows. First, we formulate the problem in Sec. 2. Then, we introduce
the necessary fundamentals from circular statistics in Sec. 3. Based on these fundamentals, we propose
deterministic sampling schemes in Sec. 4 and derive the operations on circular densities required for
the circular filter in Sec. 5. These results are used to introduce circular filtering algorithms in Sec. 6.
An evaluation of the proposed algorithms can be found in Sec. 7. Finally, conclusions are given in
Sec. 8.
2. Problem Formulation
In this section, we formulate the problems under consideration and summarize some standard
approaches that have been used to address the issues associated with periodicity.
2.1. Circular Filtering
Circular filtering considers estimation problems on the unit circle, which is commonly parameterized
as the set of complex numbers with unit length {x ∈C : |x| = 1}. To allow for a more convenient
one-dimensional notation, we identify S1 with the half-open interval [0, 2π) ⊂R, while keeping the
topology of the circle. Together with the operation
+ : S1 × S1 → S1,   x + y := (x +R y) mod 2π,
for all x, y ∈[0, 2π) with standard addition +R on R, the circle S1 forms an Abelian group. Because
S1 with the topology given above has a manifold structure, (S1, +) is a Lie group.
We consider a system whose state xk at time step k is a value on the unit circle S1. System and
measurement models are assumed to be given. In this paper, we propose several methods to deal with
different types of models. More complex models necessitate the use of more sophisticated algorithms
and conversely, simpler models allow the use of computationally less expensive algorithms.
2.1.1. System Model
In this work, we consider a system model whose state evolves according to the general system
equation
xk+1 = ak(xk, wk)
with (nonlinear) system function ak : S1 × W →S1 and noise wk ∈W stemming from some noise
space W. Note that W is not necessarily S1, but may be an arbitrary set, for example the real vector
space Rn, some manifold, or even a discrete set. An interesting and practically relevant special case is
a (nonlinear) system with additive noise
xk+1 = (ak(xk) + wk) mod 2π
with ak : S1 → S1 and wk ∈ S1. In particular, we also consider the special case where ak is the
identity, i.e.,
xk+1 = (xk + wk) mod 2π.
2.1.2. Measurement Model
The system state cannot be observed directly, but may only be estimated based on measurements
that are disturbed by noise. A general measurement function is given by
ˆzk = hk(xk, vk) ,
where ˆzk ∈Z is the measurement in the measurement space Z, hk : S1 × V →Z is the measurement
function and vk ∈ V is arbitrary measurement noise in a certain noise space V. Note that the
measurement and noise spaces can be arbitrary sets in general. An interesting special case is
measurement functions where the measurement noise is additive, i.e.,
ˆzk = hk(xk) + vk
with measurement function hk : S1 →Z and vk ∈Z. In this case, we require Z to have a group
structure with + as the operation. Additionally, we consider the more specific case where hk is the
identity, i.e.,
ˆzk = (xk + vk) mod 2π
with ˆzk, vk ∈S1.
Remark 1. We do not consider linear system models because linearity is a concept of vector spaces,
not manifolds [15]. For this reason, there are no linear functions on the circle.
[Figure 1: densities f(φ) over φ ∈ [0, 2π). (a) Incorrect fusion: Gaussian prior, measurement, and incorrect fusion result. (b) Correct fusion: Gaussian prior, measurement, repositioned measurement, and correct fusion result.]
Figure 1: This figure illustrates that repositioning of the measurement is necessary to obtain satisfactory
performance when using classical filters on circular problems.
2.2. Standard Approaches
As circular estimation problems are widespread in a variety of applications, a number of standard
approaches have been employed. We introduce three of the most common methods and explain their
strengths and weaknesses.
2.2.1. Gaussian-based approaches
Gaussian-based methods (wrongly) assume a Gaussian distribution and use standard filtering
techniques for Gaussians in conjunction with certain modifications to allow their application to circular
problems.
One-Dimensional Methods. One common approach is the use of a standard Kalman filter [1] or, in
case of nonlinear system or measurement functions, unscented Kalman filter (UKF) [2] with a scalar
state xk containing the angle θk, i.e., xk = θk. However, two modifications are necessary before this
approach can be used in practice. First, the estimate has to be forced to stay inside the interval [0, 2π)
by performing a modulo operation after every prediction and/or update step.
Second, if the measurement space is periodic, the measurement needs to be repositioned to be
closer to the state mean in certain cases. This problem occurs whenever the measurement and the
current state mean are more than π apart. In this case, the measurement needs to be moved by an
integer multiple of 2π to an equivalent measurement that deviates at most π from the state mean.
problem is given in Fig. 1.
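To make the repositioning step concrete, the following minimal Python sketch (our own illustration; the function name and setup are hypothetical, not from the paper) wraps the difference between measurement and state mean to [−π, π) before adding it back:

    import numpy as np

    def reposition_measurement(z, state_mean):
        # wrap z - state_mean to [-pi, pi) and move z to the
        # equivalent angle closest to the state mean
        diff = np.mod(z - state_mean + np.pi, 2 * np.pi) - np.pi
        return state_mean + diff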
This type of filter is used as a comparison in [15]. When the uncertainty is small, this kind of
approach works fairly well, but it tends to produce unsatisfactory results if the uncertainty is high.
Two-Dimensional Methods. Another common approach is based on the Kalman filter or the UKF with
two-dimensional state subject to a nonlinear constraint. More specifically, an angle θk is represented
by a state vector xk = [cos(θk), sin(θk)]T and the constraint is ||xk|| = 1 to enforce that xk is on the
unit circle. In order to enforce this constraint, xk is projected to the unit circle after each prediction
and/or update step. More sophisticated approaches increase the covariance to reflect the fact that the
projection operation constitutes an increase in uncertainty [22].
This approach has been used in [18], but did not perform as well as the filter based on circular
statistics. One of the issues of this approach is the fact that the system and measurement model
sometimes become more complicated when the angle θk is transformed to a two-dimensional vector.
2.2.2. Particle Filters
Another method that can be applied is particle filtering [23]. Particle filters on nonlinear manifolds
are fairly straightforward to implement because each particle can be treated separately. For the
particle filter to work, the system function and the measurement likelihood both need to respect
[Figure 2: surface plots of f(x1, x2) over x1 = cos(φ), x2 = sin(φ).]
(a) The wrapped normal distribution (red) is obtained by wrapping a Gaussian distribution (blue) around the circle. The parameters used for this example are µ = 0, σ = 2.
(b) The von Mises distribution (red) arises when restricting a two-dimensional Gaussian with mean (cos µ, sin µ)T and covariance κ⁻¹ · I2×2 to the unit circle. The parameters for this example are µ = 0, κ = 1.
Figure 2: Relation between WN and VM distributions and the Gaussian distribution.
the underlying topology. The reweighting step as well as the commonly used sequential importance
resampling (SIR) are independent of the underlying manifold and can be used in a circular setting as
well.
However, issues that are typically associated with particle filters arise. If the measurement
likelihood function is very narrow, particle degeneration can occur, i.e., (almost) all particles have
zero weight after the reweighting step. Furthermore, a large number of particles is required to obtain
stable results. Even though these problems are less critical in a one-dimensional setting, there can
still be issues if measurements with high certainty occur in areas with few particles. This can, for
example, occur when information from sensors with very different degrees of accuracy is fused. It
should also be noted that sampling from certain circular distributions can be somewhat involved and
require the use of the Metropolis-Hastings algorithm [24] or similar approaches, e.g., in case of the
von Mises distribution.
3. Circular Statistics
Because of the drawbacks of the approaches discussed above, we propose a filtering scheme based
on circular statistics [3], [5]. In the following, we introduce the required fundamentals from the field
of circular statistics.
3.1. Circular Distributions
Circular statistics considers probability distributions defined on the unit circle. A variety of
distributions has been proposed [4]. We give definitions of all distributions that are required for the
proposed filtering scheme.
Definition 1 (Wrapped Normal Distribution). The wrapped normal (WN) distribution is given by
the probability density function (pdf)
f(x; \mu, \sigma) = \frac{1}{\sqrt{2\pi}\,\sigma} \sum_{k=-\infty}^{\infty} \exp\left( -\frac{(x - \mu + 2\pi k)^2}{2\sigma^2} \right)
with x ∈S1, location parameter µ ∈S1, and concentration parameter σ > 0.
The WN distribution is obtained by wrapping a one-dimensional Gaussian distribution around
the unit circle and adding up all probability mass that is wrapped to the same point (see Fig. 2a).
It appears as a limit distribution on the circle [15] in the following sense. A summation scheme of
random variables that converges to the Gaussian distribution in the linear case will converge to the
WN distribution if taken modulo 2π. Because of its close relation to the Gaussian distribution, the
WN distribution inherits a variety of its properties, for example its normalization constant as well as
the formula for the convolution of densities. Even though there is an infinite sum involved, evaluation
of the pdf of a WN distribution can be performed efficiently, because only a few summands need to be
considered [25].
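As an illustration, such a truncated evaluation might look as follows; this is our own sketch (not the implementation from [25]), and the truncation bound k_max is our choice, which should grow with σ:

    import numpy as np

    def wn_pdf(x, mu, sigma, k_max=10):
        # evaluate the WN pdf at a scalar x via the truncated series;
        # the summands decay rapidly, so a small k_max suffices
        k = np.arange(-k_max, k_max + 1)
        d = x - mu + 2 * np.pi * k
        return np.sum(np.exp(-d**2 / (2 * sigma**2))) / (np.sqrt(2 * np.pi) * sigma)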
Definition 2 (Von Mises Distribution). The von Mises (VM) distribution is given by the pdf
f(x; \mu, \kappa) = \frac{1}{2\pi I_0(\kappa)} \exp(\kappa \cos(x - \mu))
with x ∈S1, location parameter µ ∈S1, and concentration parameter κ > 0. I0(·) is the modified
Bessel function [26] of order 0.
The VM distribution, sometimes also referred to as the circular normal (CN) distribution, is
obtained by restricting a two-dimensional Gaussian with mean µ, ||µ|| = 1, and covariance κ⁻¹ · I2×2
(where I2×2 is the identity matrix) to the unit circle and reparameterizing to [0, 2π), as can be seen in Fig. 2b.
For this reason, it also inherits some of the properties of the Gaussian distribution; most importantly,
it is closed under Bayesian inference. The VM distribution has been used as a foundation for a circular
filter by Azmani et al. [9]. It is closely related to the Bingham distribution; conversion between the
two is effectively a matter of shrinking the interval [0, 2π) by a factor of two, i.e., f(2x; µ, κ)
is a Bingham distribution [27, p. 4].
Definition 3 (Wrapped Dirac Mixture Distribution). The wrapped Dirac mixture (WD) distribution
with L components is given by
f(x; \gamma_1, \ldots, \gamma_L, \beta_1, \ldots, \beta_L) = \sum_{j=1}^{L} \gamma_j \, \delta(x - \beta_j)
with Dirac delta distribution δ(·), Dirac positions β1, . . . , βL ∈ S1, and weights γ1, . . . , γL > 0 where
\sum_{j=1}^{L} \gamma_j = 1.
Unlike the WN and VM distributions, the WD distribution is a discrete probability distribution
on a continuous domain. It can be obtained by wrapping a Dirac mixture in R around the unit circle.
WD distributions can be used as discrete approximations of continuous distributions with a finite set
of samples.
In this paper, we use the following notation. We denote the density function of a WN distribution
with parameters µ and σ with WN(µ, σ). If a random variable x is distributed according to this WN
distribution, we write x ∼WN(µ, σ). The terms VM(µ, κ) and WD(γ1, . . . , γL, β1, . . . , βL) are used
analogously to describe the density functions of VM and WD distributions with parameters µ, κ and
γ1, . . . , γL, β1, . . . , βL, respectively.
3.2. Circular Moments
In circular statistics, there is a concept called circular (or trigonometric) moment.
Definition 4 (Circular Moments). For a random variable x ∼f(x) defined on the circle, its n-th
circular moment is given by
m_n = \mathrm{E}(\exp(ix)^n) = \mathrm{E}(\exp(inx)) = \int_0^{2\pi} \exp(inx) \cdot f(x) \, dx \in \mathbb{C}
with imaginary unit i.
The n-th moment is a complex number and, hence, has two degrees of freedom, the real and
the imaginary part. For this reason, the first moment includes information about the circular mean
arg m1 = atan2(Im m1, Re m1) as well as the concentration |m1| = √((Re m1)² + (Im m1)²) of the
distribution¹, similar to the first two real-valued moments. It can be shown that WN and VM
distributions are uniquely defined by their first circular moment [3].
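For example, the empirical first moment of a set of angular samples directly yields estimates of the circular mean and the concentration; a minimal sketch (the helper name is ours):

    import numpy as np

    def first_circular_moment(angles):
        # empirical first circular moment m1 = E(exp(i*x))
        m1 = np.mean(np.exp(1j * np.asarray(angles)))
        return np.arctan2(m1.imag, m1.real), np.abs(m1)  # circular mean, |m1|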
Remark 2. Any continuous and piecewise continuously differentiable 2π-periodic function f : R →R
can be written as a Fourier series
f(x) = \sum_{k=-\infty}^{\infty} c_k \exp(ikx) ,
where
c_k = \frac{1}{2\pi} \int_0^{2\pi} f(x) \exp(-ikx) \, dx .
If f(·) is the pdf of a circular probability distribution, the Fourier coefficients are closely related to the
circular moments according to c_k = \frac{1}{2\pi} m_{-k}.
Lemma 1 (Moments for WN, VM, and WD Distributions). For WN, VM, and WD distributions
with given parameters, the n-th circular moment can be calculated according to
m_n^{\mathrm{WN}} = \exp\left( in\mu - \frac{n^2 \sigma^2}{2} \right) ,
m_n^{\mathrm{VM}} = \exp(in\mu) \, \frac{I_{|n|}(\kappa)}{I_0(\kappa)} ,
m_n^{\mathrm{WD}} = \sum_{j=1}^{L} \gamma_j \exp(in\beta_j) .
A proof is given in [5].
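For illustration, the three formulas of Lemma 1 translate directly into code; the following sketch uses SciPy's modified Bessel function iv, and the helper names are our own:

    import numpy as np
    from scipy.special import iv

    def moment_wn(n, mu, sigma):
        # n-th circular moment of WN(mu, sigma)
        return np.exp(1j * n * mu - n**2 * sigma**2 / 2)

    def moment_vm(n, mu, kappa):
        # n-th circular moment of VM(mu, kappa)
        return np.exp(1j * n * mu) * iv(abs(n), kappa) / iv(0, kappa)

    def moment_wd(n, gamma, beta):
        # n-th circular moment of WD(gamma_1..L, beta_1..L)
        return np.sum(np.asarray(gamma) * np.exp(1j * n * np.asarray(beta)))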
3.3. Circular Moment Matching
As both WN and VM distributions are uniquely defined by their first moment, it is possible to
convert between them by matching the first circular moment.
Lemma 2 (Circular Moment Matching). We define A(x) = I_1(x)/I_0(x) as given in [5].
1. For a given first moment m1, the WN distribution with this first moment has the density
WN\left( \mathrm{atan2}(\mathrm{Im}\, m_1, \mathrm{Re}\, m_1), \sqrt{-2 \log |m_1|} \right) .
2. For a given first moment m1, the VM distribution with this first moment has the density
VM\left( \mathrm{atan2}(\mathrm{Im}\, m_1, \mathrm{Re}\, m_1), A^{-1}(|m_1|) \right) .
3. For a given VM distribution with density VM(µ, κ), the WN distribution with identical first
moment has the density WN\left( \mu, \sqrt{-2 \log\left( I_1(\kappa)/I_0(\kappa) \right)} \right) .
4. For a given WN distribution with density WN(µ, σ), the VM distribution with identical first
moment has the density VM\left( \mu, A^{-1}\left( \exp\left( -\sigma^2/2 \right) \right) \right) .
The proof is given in [15]. Calculation of the function A−1(·) is somewhat tricky. In [15], we use
the algorithm by Amos [28]² to calculate A(·) and MATLAB's fsolve to invert this function. Stienne
et al. have proposed closed-form approximations, which can be calculated very easily but have a large
approximation error [12], [10]. A more detailed discussion on approximations of A−1(·) can be found
in [29] and [30, Sec. 2.3].
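As one possible implementation (a sketch under our own choices, not the Amos/fsolve combination used in [15]), A(·) can be evaluated with exponentially scaled Bessel functions to avoid overflow, and inverted by a bracketing root search, since A is strictly increasing on (0, ∞) with range (0, 1):

    from scipy.special import ive
    from scipy.optimize import brentq

    def A(kappa):
        # I1(kappa)/I0(kappa); the exponential scaling of ive cancels in the ratio
        return ive(1, kappa) / ive(0, kappa)

    def A_inv(r, kappa_max=1e6):
        # bracketing root search for A(kappa) = r, assuming 0 < r < A(kappa_max)
        return brentq(lambda k: A(k) - r, 1e-12, kappa_max)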
¹ The term 1 − |m1| is sometimes referred to as the circular variance.
² Pseudocode of this algorithm is given in [15].
4. Deterministic Sampling
In order to propagate continuous probability densities through nonlinear functions, it is a common
technique to use discrete sample-based approximations of the continuous densities. A set of samples can
easily be propagated by applying the nonlinear function to each sample individually. This approach
can be used for both the prediction and the measurement update step.
We distinguish between deterministic and nondeterministic sampling. Nondeterministic sampling
relies on a randomized algorithm to stochastically obtain samples of a density. Typical examples
include the samplers used by the particle filter [23] or the Gaussian particle filter [31]. Deterministic
sampling selects samples in a deterministic way, for example to fit certain moments (the sampler
used by the UKF, [2]), or to optimally approximate the shape of the density (published in [32], this
sampler is used by the S2KF, [33]). Deterministic sampling schemes have the advantage of requiring a
significantly smaller number of samples, which is why we will focus on this type of solution.
A naïve solution for approximating a WN density may be the application of a deterministic
sampling scheme for the Gaussian distribution (such as the samplers used in [2], [33]) and subsequently
wrapping the samples. Even though this technique is valid for stochastic samples, it does not provide
satisfactory results for deterministic samples. In extreme cases, wrapping can cause different samples
to be wrapped to the same point, grossly misrepresenting the original density. This problem is
illustrated in Fig. 3. In the case of UKF samples (Fig. 3a), it can be seen that for σ ≈2.5, one
sample is placed at µ and two samples are placed on the opposite side of the circle, i.e., the mode
of the approximation is opposite to the true mode. Furthermore, for σ ≈5 all three UKF samples
are wrapped to the same position, i.e., the sample-based approximation degenerates to a distribution
with single Dirac component even though the true distribution is nearly uniform. Similar issues arise
in the case of S2KF samples (see Fig. 3b).
4.1. Analytic Solutions
First of all, we consider analytic solutions to obtain deterministic samples. These solutions are
based on circular moment-matching and only provide a small, fixed number of Dirac delta components,
but are extremely fast to calculate, making them a good choice for real-time applications.
In [15], we presented a method to obtain a WD approximation with three equally weighted
components, which is based on matching the first circular moment (see Algorithm 1). We further
extended this scheme to obtain a WD with five components by matching the first as well as second
circular moment (see Algorithm 2), which, as we proved in [17], necessitates the use of different
weights. Both approaches can approximate arbitrary symmetric circular densities with given circular
moments.
Algorithm 1: Deterministic approximation with L = 3 components.
Input: first circular moment m1
Output: WD(γ1, . . . , γ3, β1, . . . , β3)
/* extract µ */
µ ← atan2(Im m1, Re m1);
/* obtain Dirac positions */
α ← arccos( (3/2)|m1| − 1/2 );
β1 ← (µ − α) mod 2π;
β2 ← µ mod 2π;
β3 ← (µ + α) mod 2π;
/* obtain weights */
γ1, γ2, γ3 ← 1/3;
Algorithm 2: Deterministic approximation with L = 5 components.
Input: first circular moment m1, second circular moment m2,
parameter λ ∈ [0, 1] with default λ = 0.5
Output: WD(γ1, . . . , γ5, β1, . . . , β5)
/* extract µ */
µ ← atan2(Im m1, Re m1);
m1 ← |m1|;
m2 ← |m2|;
/* obtain weights */
γ5min ← (4m1² − 4m1 − m2 + 1) / (4m1 − m2 − 3);
γ5max ← (2m1² − m2 − 1) / (4m1 − m2 − 3);
γ5 ← γ5min + λ (γ5max − γ5min);
γ1, γ2, γ3, γ4 ← (1 − γ5)/4;
/* obtain Dirac positions */
c1 ← 2(m1 − γ5) / (1 − γ5);
c2 ← (m2 − γ5) / (1 − γ5) + 1;
x2 ← (2c1 + √(4c1² − 8(c1² − c2))) / 4;
x1 ← c1 − x2;
φ1 ← arccos(x1);
φ2 ← arccos(x2);
(β1, . . . , β5) ← µ + (−φ1, +φ1, −φ2, +φ2, 0) mod 2π;
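Analogously, a direct transcription of Algorithm 2 (again our own sketch; it assumes m1 and m2 are admissible complex moments of a symmetric density, so that the arccos arguments lie in [−1, 1]):

    import numpy as np

    def wd5_approximation(m1, m2, lam=0.5):
        # five components matching the first two circular moments
        mu = np.arctan2(m1.imag, m1.real)
        m1a, m2a = np.abs(m1), np.abs(m2)
        g5_min = (4 * m1a**2 - 4 * m1a - m2a + 1) / (4 * m1a - m2a - 3)
        g5_max = (2 * m1a**2 - m2a - 1) / (4 * m1a - m2a - 3)
        g5 = g5_min + lam * (g5_max - g5_min)
        c1 = 2 / (1 - g5) * (m1a - g5)
        c2 = 1 / (1 - g5) * (m2a - g5) + 1
        x2 = (2 * c1 + np.sqrt(4 * c1**2 - 8 * (c1**2 - c2))) / 4
        x1 = c1 - x2
        phi1, phi2 = np.arccos(x1), np.arccos(x2)
        beta = np.mod(mu + np.array([-phi1, phi1, -phi2, phi2, 0.0]), 2 * np.pi)
        gamma = np.array([(1 - g5) / 4] * 4 + [g5])
        return gamma, beta  # weights, positions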
[Figure 3: two surface plots of densities f(x) over x ∈ [0, 2π) and concentration parameter σ ∈ [0, 5], each showing WN(π, σ), the proposed samples, and the wrapped samples. (a) Analytic solution from Algorithm 1 and wrapped UKF [2] samples. (b) Analytic solution from Algorithm 2 with λ = 0.8 and wrapped S2KF [33] samples.]
Figure 3: Proposed approaches for generating samples of WN distributions with different concentration
parameters σ, compared to the naïve approach of wrapping samples of a Gaussian with identical σ.
It can be seen that the UKF and S2KF samples are eventually wrapped to the same location, which
produces an extremely poor approximation.
[Figure 4: five stacked density plots a)–e) over [0, 2π]; legend: VM(π, 1) and WN(π, 1.27). Panel contents are listed in the caption below.]
Figure 4: Example of the deterministic sampling of a VM and WN distribution with equal first circular
moment. From top to bottom: a) original densities, b) result of Algorithm 1, c) result of Algorithm 2,
d) approach based on VM kernels from [34] for 10 components, e) quantization approach from [35] for
10 components. Note that the result of Algorithm 1 is identical for both densities because only the
first circular moment is considered.
4.2. Optimization-based Solutions
If a larger number of samples is desired and there are more degrees of freedom in the samples
than constraints (such as preservation of circular moments), optimization-based solutions can be used.
The number of samples can be adjusted by the user and an optimal approximation is derived by
minimizing a distance measure.
In order to simultaneously calculate optimal locations and weights for the samples, a systematic
approach based on VM kernels has been proposed in [34]. For a WD mixture, an induced VM mixture
is compared to the true distribution with a quadratic integral distance. A specific kernel width is
considered for each component, which depends on the weight of the component and the value of the
true distribution at the location of the component. Both the weights and the locations of a fixed even
number of WD components are optimized to obtain an optimal symmetric approximation. Constraints
in the optimization algorithms are used to maintain a predefined number of circular moments. This
approach results in well-distributed Dirac mixtures that fulfill the moment constraints.
A quantization approach is discussed in [35]. It is based on computing optimal Voronoi quantizers.
In this approach, optimality refers to minimal quadratic distortion. The resulting Voronoi quantizer
gives rise to a circular discrete probability distribution on a continuous domain that approximates the
original continuous distribution. Use of this approximation is particularly beneficial in the prediction
step of stochastic filters, because an error bound for propagation through a non-trivial system function
can be obtained without actually knowing that function. It is sufficient to require it to be Lipschitz
and to know an upper bound for the Lipschitz constant. Furthermore, circular moment constraints
can be introduced in the optimization procedure of the quantization approach.
Examples from all discussed methods for deterministic sampling are depicted in Fig. 4.
5. Operations on Densities
In order to derive a circular filtering algorithm, we need to be able to perform certain operations
on the involved probability densities.
5.1. Shifting and Mirroring
For a given density f(x), we want to obtain the density f(c −x) for a constant c ∈S1. This
operation is necessary in certain cases of the update step. We can split this operation into two steps:
mirroring to obtain f(−x), and subsequently shifting the mirrored density by c to obtain f(−(x − c)) = f(c − x).
Mirroring WN(µ, σ) and VM(µ, κ) yields WN(2π − µ, σ) and VM(2π − µ, κ) because the distributions
are symmetric around their mean. Shifting WN(µ, σ) and VM(µ, κ) by c, i.e., substituting x − c for x, yields
WN((µ + c) mod 2π, σ) and VM((µ + c) mod 2π, κ) ,
so the combined operation results in
WN((2π − µ + c) mod 2π, σ) and VM((2π − µ + c) mod 2π, κ) .
5.2. Circular Convolution
Given two independent circular random variables x1 ∼f1(x1), x2 ∼f2(x2), the sum x1 + x2 is
distributed according to
(f_1 \ast f_2)(x) = \int_0^{2\pi} f_1(t) \, f_2(x - t) \, dt ,
where ∗ denotes the convolution. This operation is necessary in the prediction step to incorporate
additive noise.
WN distributions are closed under convolution, and the new pdf can be obtained just as in the
Gaussian case [15], i.e., µ = (µ1 + µ2) mod 2π, σ² = σ1² + σ2². VM distributions are not closed under
convolution. For this reason, Azmani et al. [9] used the approximation from [5], which is given by
µ = µ1 + µ2, κ = A⁻¹(A(κ1) · A(κ2)). The function A(·) is the same as defined in Lemma 2. This
approximation can be derived from an intermediate WN representation [10]. A similar approximation
has been used by Markovic et al. for the von Mises-Fisher case [14, (7)].
In this paper, we present a more general result that calculates the convolution based on circular
moments.
Lemma 3 (Moments After Addition of Random Variables). Assume independent random variables
x1 ∼ f1, x2 ∼ f2 defined on the circle. For the sum x = x1 + x2, it holds that
E(exp(inx)) = E(exp(inx1))E(exp(inx2)) .
Proof.
m_n = \mathrm{E}(\exp(inx)) = \int_0^{2\pi} \exp(inx) f(x) \, dx
= \int_0^{2\pi} \int_0^{2\pi} \exp(inx) f_1(y) f_2(x - y) \, dy \, dx
= \int_0^{2\pi} \int_0^{2\pi} \exp(in(x_1 + x_2)) f_1(x_1) f_2(x_2) \, dx_1 \, dx_2
= \int_0^{2\pi} \exp(inx_1) f_1(x_1) \, dx_1 \cdot \int_0^{2\pi} \exp(inx_2) f_2(x_2) \, dx_2
= \mathrm{E}(\exp(inx_1)) \, \mathrm{E}(\exp(inx_2)) ,
where the substitution x_1 = y, x_2 = (x − y) mod 2π was used.
If moment matching of the first circular moment is used to fit a WN or VM to the density that
results from convolution, the solutions for WN and VM distributions from [9] and [15] arise as special
cases of Lemma 3.
Remark 3. In fact, Lemma 3 allows us to calculate the convolution in a purely moment-based way. For
this reason, we do not need to assume any particular distribution, but can simply calculate the moments
of the convolved density as products of the original moments.³
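To illustrate this for the WN case, the convolution reduces to one complex multiplication of first moments (Lemma 3) followed by moment matching (Lemma 2); a minimal sketch with our own function name:

    import numpy as np

    def convolve_wn(mu1, sigma1, mu2, sigma2):
        # multiply the first circular moments (Lemma 3) ...
        m1 = np.exp(1j * mu1 - sigma1**2 / 2) * np.exp(1j * mu2 - sigma2**2 / 2)
        # ... and match a WN to the resulting moment (Lemma 2)
        mu = np.mod(np.arctan2(m1.imag, m1.real), 2 * np.pi)
        sigma = np.sqrt(-2 * np.log(np.abs(m1)))
        return mu, sigma  # equals (mu1 + mu2) mod 2pi, sqrt(sigma1^2 + sigma2^2)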
³ In general, for example in the case of mixture densities, the complexity of the density increases with each successive
convolution, so considering only a finite number of moments constitutes an approximation.
Recursive Bayesian Filtering
in Circular State Spaces
Gerhard Kurza, Igor Gilitschenskia, Uwe D. Hanebecka
aIntelligent Sensor-Actuator-Systems Laboratory (ISAS)
Institute for Anthropomatics and Robotics
Karlsruhe Institute of Technology (KIT), Germany
Abstract
For recursive circular filtering based on circular statistics, we introduce a general framework for
estimation of a circular state based on different circular distributions, specifically the wrapped normal
distribution and the von Mises distribution. We propose an estimation method for circular systems
with nonlinear system and measurement functions. This is achieved by relying on efficient deterministic
sampling techniques. Furthermore, we show how the calculations can be simplified in a variety of
important special cases, such as systems with additive noise as well as identity system or measurement
functions. We introduce several novel key components, particularly a distribution-free prediction
algorithm, a new and superior formula for the multiplication of wrapped normal densities, and the
ability to deal with non-additive system noise. All proposed methods are thoroughly evaluated and
compared to several state-of-the-art solutions.
1. Introduction
Estimation of circular quantities is an omnipresent issue, be it the wind direction, the angle of
a robotic revolute joint, the orientation of a turntable, or the direction a vehicle is facing. Circular
estimation is not limited to applications involving angles, however, and can be applied to a variety
of periodic phenomena. For example phase estimation is a common issue in signal processing, and
tracking objects that periodically move along a certain trajectory is also of interest.
Standard approaches to circular estimation are typically based on estimation techniques designed
for linear scenarios that are tweaked to deal with some of the issues arising in the presence of circular
quantities. However, modifying linear methods cannot only be tedious and error-prone, but also yields
suboptimal results, because certain assumptions of these methods are violated. For example, solutions
based on Kalman filters [1] or nonlinear versions thereof [2] fundamentally neglect the true topology
of the underlying manifold and assume a Gaussian distribution, which is only defined on Rn. In the
linear case, the use of a Gaussian distribution is frequently justified by the central limit theorem. This
justification no longer holds in a circular setting, as the Gaussian is not a limit distribution on the
circle.
In order to properly deal with circular estimation problems, we rely on circular statistics [3], [4], a
subfield of statistics that deals with circular quantities. More broadly, the field of directional statistics
[5] considers a variety of manifolds, such as the circle, the hypersphere, or the torus. Unlike standard
approaches that assume linear state spaces, methods based on circular statistics correctly use the
proper manifold and rely on probability distributions defined on this manifold. Circular statistics
has been applied in a variety of sciences, such as biology [4], bioinformatics [6], meteorology [7], and
geosciences [8].
Email addresses: gerhard.kurz@kit.edu (Gerhard Kurz), gilitschenski@kit.edu (Igor Gilitschenski),
uwe.hanebeck@ieee.org (Uwe D. Hanebeck)
arXiv:1501.05151v1  [cs.SY]  21 Jan 2015
system
measurement
publication
distribution
model
noise
model
noise
Azmani, Reboul, Choquel, Benjelloun [9]
von Mises
identity
additive
identity
additive
Markovic, Chaumette, Petrovic [14]
von Mises-Fisher
identity
additive
identity
additive
Kurz, Gilitschenski, Julier, Hanebeck [21]
Bingham
identity
additive
identity
additive
Kurz, Gilitschenski, Hanebeck [15]
wrapped normal/von Mises
nonlinear
additive
identity
additive
Kurz, Gilitschenski, Hanebeck [16]
wrapped normal
nonlinear
additive
nonlinear
any
this paper
wrapped normal/von Mises
nonlinear
any
nonlinear
any
Table 1: Circular filters based on directional statistics.
There has been some work on filtering algorithms based on circular statistics by Azmani et al. [9],
which was further investigated by Stienne et al. [10]. Their work is based on the von Mises distribution
and allows for recursive filtering of systems with a circular state space. However, it is limited to the
identity with additive noise as the system equation and the measurement equation. The filter from [9]
has been applied to phase estimation of GPS signals [11], [12] as well as map matching [13]. Markovic
et al. have published a similar filter [14] based on the von Mises-Fisher distribution, a generalization
of the von Mises distribution to the hypersphere.
We have previously published a recursive filter based on the wrapped normal distribution allowing
for a nonlinear system equation [15]. The paper [16] extends this approach to make a nonlinear
measurement update possible. Both papers rely on a deterministic sampling scheme, based on the
first circular moment. This kind of sampling is reminiscent of the well-known unscented Kalman filter
(UKF) [2]. We have extended this sampling scheme to the first two circular moments in [17], so the
proposed filters are, in a sense, circular versions of the UKF.
The developed filters have been applied in the context of constrained tracking [18], bearings-only
sensor scheduling [19], as well as circular model predictive control [20].
Furthermore, we proposed a recursive filter based on the circular Bingham distribution in [21].
The Bingham distribution is closely related to the von Mises distribution, but has a bimodal density,
which makes it interesting for axial estimation problems (i.e., problems with 180◦symmetry).
An overview of all of these filters and the considered distributions as well as system and measurement
models is given in Table 1.
This paper summarizes and combines our results as well as extends the previous work by a number
of additional contributions. First of all, we propose a general filtering framework that can be used in
conjunction with a variety of system and measurement equations, different types of noise, and both
the wrapped normal and the von Mises distributions. Our previous publications [15], [16] as well as
the work by Azmani et al. [9] can be seen as special cases of the proposed framework.
Furthermore, we introduce a new multiplication formula for wrapped normal distributions that
outperforms the solution proposed in [15]. We generalize the prediction step from [15] to a purely
moment-based solution that does not need to assume any kind of distribution. Compared to [16],
we add the ability to deal with non-additive noise not only in the measurement update but also in
the prediction step. Finally, we perform a thorough evaluation, where we compare the proposed
techniques to several state-of-the-art approaches.
This paper is structured as follows. First, we formulate the problem in Sec. 2. Then, we introduce
the necessary fundamentals from circular statistics in Sec. 3. Based on these fundamentals, we propose
deterministic sampling schemes in Sec. 4 and derive the operations on circular densities required for
the circular filter in Sec. 5. These results are used to introduce circular filtering algorithms in Sec. 6.
An evaluation of the proposed algorithms can be found in Sec. 7. Finally, conclusions are given in
Sec. 8.
2. Problem Formulation
In this section, we formulate the problems under consideration and summarize some standard
approaches that have been used to address the issues associated with periodicity.
2
2.1. Circular Filtering
Circular filtering considers estimation problems on the unit circle, which is commonly parameterized
as the set of complex numbers with unit length {x ∈C : |x| = 1}. To allow for a more convenient
one-dimensional notation, we identify S1 with the half-open interval [0, 2π) ⊂R, while keeping the
topology of the circle. Together with the operation
+ : S1 × S1 →S1,
x + y := x +R y
mod 2π,
for all x, y ∈[0, 2π) with standard addition +R on R, the circle S1 forms an Abelian group. Because
S1 with the topology given above has a manifold structure, (S1, +) is a Lie group.
We consider a system whose state xk at time step k is a value on the unit circle S1. System and
measurement models are assumed to be given. In this paper, we propose several methods to deal with
different types of models. More complex models necessitate the use of more sophisticated algorithms
and conversely, simpler models allow the use of computationally less expensive algorithms.
2.1.1. System Model
In this work, we consider a system model whose state evolves according to the general system
equation
xk+1 = ak(xk, wk)
with (nonlinear) system function ak : S1 × W →S1 and noise wk ∈W stemming from some noise
space W. Note that W is not necessarily S1, but may be an arbitrary set, for example the real-vector
space Rn, some manifold, or even a discrete set. An interesting and practically relevant special case is
a (nonlinear) system with additive noise
xk+1 = ak(xk) + wk
mod 2π
with ak : S1 →S1 and wk ∈S1. More particular, we also consider the special case, where ak is the
identity, i.e.,
xk+1 = xk + wk
mod 2π.
2.1.2. Measurement Model
The system state cannot be observed directly, but may only be estimated based on measurements
that are disturbed by noise. A general measurement function is given by
ˆzk = hk(xk, vk) ,
where ˆzk ∈Z is the measurement in the measurement space Z, hk : S1 × V →Z is the measurement
function and vk ∈V is arbitrary measurement noise in a certain noise space V . Note that the
measurement and noise space can be arbitrary sets in general.
An interesting special case are
measurement functions where the measurement noise is additive, i.e.,
ˆzk = hk(xk) + vk
with measurement function hk : S1 →Z and vk ∈Z. In this case, we require Z to have a group
structure with + as the operation. Additionally, we consider the more specific case where hk is the
identity, i.e.,
ˆzk = xk + vk
mod 2π
with ˆzk, vk ∈S1.
Remark 1. We do not consider linear system models because linearity is a concept of vector spaces,
not manifolds [15]. For this reason, there are no linear functions on the circle.
3
0 
 pi/2 
 pi 
 3pi/2 
 2pi
0
0.5
1
1.5
φ
f(φ)
 
 
Gaussian prior
measurement
incorrect fusion
(a) Incorrect fusion.
0 
 pi/2 
 pi 
 3pi/2 
 2pi
0
0.5
1
1.5
φ
f(φ)
 
 
Gaussian prior
measurement
repositioned meas.
correct fusion
(b) Correct fusion.
Figure 1: This figure illustrates that repositioning of the measurement is necessary to obtain satisfactory
performance when using classical filters on circular problems.
2.2. Standard Approaches
As circular estimation problems are wide-spread in a variety of applications, a number of standard
approaches have been employed. We introduce three of the most common methods and explain their
strengths and weaknesses.
2.2.1. Gaussian-based approaches
Gaussian-based methods (wrongly) assume a Gaussian distribution and use standard filtering
techniques for Gaussians in conjunction with certain modifications to allow their application to circular
problems.
One-Dimensional Methods. One common approach is the use of a standard Kalman filter [1] or, in
case of nonlinear system or measurement functions, unscented Kalman filter (UKF) [2] with a scalar
state xk containing the angle θk, i.e., xk = θk. However, two modifications are necessary before this
approach can be used in practice. First, the estimate has to be forced to stay inside the interval [0, 2π)
by performing a modulo operation after every prediction and/or update step.
Second, if the measurement space is periodic, the measurement needs to be repositioned to be
closer to the state mean in certain cases. This problem occurs whenever the measurement and the
current state mean are more than π apart. In this case, the measurement needs to be moved by ±2πk
to an equivalent measurement that deviates at most π from the state mean. An illustration of this
problem is given in Fig. 1.
This type of filter is used as a comparison in [15]. When the uncertainty is small, this kind of
approach works fairly well, but it tends to produce unsatisfactory results if the uncertainty is high.
Two-Dimensional Methods. Another common approach is based on the Kalman filter or the UKF with
two-dimensional state subject to a nonlinear constraint. More specifically, an angle θk is represented
by a state vector xk = [cos(θk), sin(θ)k]T and the constraint is ||xk|| = 1 to enforce that xk is on the
unit circle. In order to enforce this constraint, xk is projected to the unit circle after each prediction
and/or update step. More sophisticated approaches increase the covariance to reflect the fact that the
projection operation constitutes an increase in uncertainty [22].
This approach has been used in [18], but did not perform as well as the filter based on circular
statistics. One of the issues of this approach is the fact that the system and measurement model
sometimes become more complicated when the angle θk is transformed to a two-dimensional vector.
2.2.2. Particle Filters
Another method that can be applied is particle filtering [23]. Particle filters on nonlinear manifolds
are fairly straightforward to implement because each particle can be treated separately. For the
particle filter to work, the system function and the measurement likelihood both need to respect
4
−2
−1
0
1
2
−2
−1
0
1
2
0
0.05
0.1
0.15
0.2
0.25
x1=cos(φ)
x2=sin(φ)
f(x1, x2)
(a) The wrapped normal distribution (red) is ob-
tained by wrapping a Gaussian distribution (blue)
around the circle. The parameters we used for this
example are µ = 0, σ = 2.
(b) The von Mises distribution (red) arises when
restricting a two-dimensional Gaussian with µ =
(cos µ, sin µ)T and covariance κ · I2×2 to the unit
circle. The parameters for this example are µ =
0, κ = 1.
Figure 2: Relation between WN and VM distributions and the Gaussian distribution.
the underlying topology. The reweighting step as well as the commonly used sequential importance
resampling (SIR) are independent of the underlying manifold and can be used in a circular setting as
well.
However, issues that are typically associated with particle filters arise.
If the measurement
likelihood function is very narrow, particle degeneration can occur, i.e., (almost) all particles have
zero weight after the reweighting step. Furthermore, a large number of particles is required to obtain
stable results. Even though these problems are less critical in a one-dimensional setting, there can
still be issues if measurements with high certainty occur in areas with few particles. This can, for
example, occur when information from sensors with very different degrees of accuracy is fused. It
should also be noted that sampling from certain circular distributions can be somewhat involved and
require the use of the Metropolis Hastings algorithm [24] or similar approaches, e.g., in case of the
von Mises distribution.
3. Circular Statistics
Because of the drawbacks of the approaches discussed above, we propose a filtering scheme based
on circular statistics [3], [5]. In the following, we introduce the required fundamentals from the field
of circular statistics.
3.1. Circular Distributions
Circular statistics considers probability distributions defined on the unit circle. A variety of
distributions has been proposed [4]. We give definitions of all distributions that are required for the
proposed filtering scheme.
Definition 1 (Wrapped Normal Distribution). The wrapped normal (WN) distribution is given by
the probability density function (pdf)
f(x; µ, σ) =
1
√
2πσ
∞
X
k=−∞
exp

−(x −µ + 2πk)2
2σ2

with x ∈S1, location parameter µ ∈S1, and concentration parameter σ > 0.
5
The WN distribution is obtained by wrapping a one-dimensional Gaussian distribution around
the unit circle and adding up all probability mass that is wrapped to the same point (see Fig. 2a).
It appears as a limit distribution on the circle [15] in the following sense. A summation scheme of
random variables that converges to the Gaussian distribution in the linear case, will converge to the
WN distribution if taken modulo 2π. Because of its close relation to the Gaussian distribution, the
WN distribution inherits a variety of its properties, for example its normalization constant as well as
the formula for the convolution of densities. Even though there is an infinite sum involved, evaluation
of the pdf of a WN distribution can be performed efficiently, because only few summands need to be
considered [25].
Definition 2 (Von Mises Distribution). The von Mises (VM) distribution is given by the pdf
f(x; µ, κ) =
1
2πI0(κ) exp(κ cos(x −µ))
with x ∈S1, location parameter µ ∈S1, and concentration parameter κ > 0. I0(·) is the modified
Bessel function [26] of order 0.
The VM distribution, sometimes also referred to as the circular normal (CN) distribution, is
obtained by restricting a two-dimensional Gaussian with mean ||µ|| = 1 and covariance κ · I2×2 (where
I2×2 is the identity matrix) to the unit circle and reparameterizing to [0, 2π) as can be seen in Fig. 2b.
For this reason, it also inherits some of the properties of the Gaussian distribution; most importantly,
it is closed under Bayesian inference. The VM distribution has been used as a foundation for a circular
filter by Azmani et al. [9]. It is closely related to the Bingham distribution and conversion between the
two is effectively a matter of shrinking the the interval [0, 2π) by a factor of two two, i.e., f(2x; µ, κ)
is a Bingham distribution [27, p. 4].
Definition 3 (Wrapped Dirac Mixture Distribution). The wrapped Dirac mixture (WD) distribution
with L components is given by
f(x; γ1, . . . , γL, β1, . . . , βL) =
L
X
j=1
γjδ(x −βj)
with Dirac delta distribution δ(·), Dirac positions β1, . . . , βL ∈S1, and weights γ1, . . . , γL > 0 where
PL
j=1 γj = 1.
Unlike the WN and VM distributions, the WD distribution is a discrete probability distribution
on a continous domain. It can be obtained by wrapping a Dirac mixture in R around the unit circle.
WD distributions can be used as discrete approximations of continuous distributions with a finite set
of samples.
In this paper, we use the following notation. We denote the density function of a WN distribution
with parameters µ and σ with WN(µ, σ). If a random variable x is distributed according to this WN
distribution, we write x ∼WN(µ, σ). The terms VM(µ, κ) and WD(γ1, . . . , γL, β1, . . . , βL) are used
analogously to describe the density functions of VM and WD distributions with parameters µ, κ and
γ1, . . . , γL, β1, . . . , βL, respectively.
3.2. Circular Moments
In circular statistics, there is a concept called circular (or trigonometric) moment.
Definition 4 (Circular Moments). For a random variable x ∼f(x) defined on the circle, its n-th
circular moment is given by
mn = E(exp(ix)n) = E(exp(inx))
=
Z 2π
0
exp(inx) · f(x) dx ∈C
with imaginary unit i.
6
The n-th moment is a complex number and, hence, has two degrees of freedom, the real and
the imaginary part. For this reason, the first moment includes information about the circular mean
arg m1 = atan2(Im m1, Re m1) as well as the concentration |m1| =
p
(Re m1)2 + (Im m1)2 of the
distribution1, similar to the first two real-valued moments. It can be shown that WN and VM
distributions are uniquely defined by their first circular moment [3].
Remark 2. Any continuous and piecewise continuously differentiable 2π-periodic function f : R →R
can be written as a Fourier series
f(x) =
∞
X
k=−∞
ck exp(ikx)
where
ck = 1
2π
Z 2π
0
f(x) exp(−ikx)dx .
If f(·) is the pdf of a circular probability distribution, the Fourier coefficients are closely related to the
circular moments according to ck =
1
2πm−k.
Lemma 1 (Moments for WN, VM, and WD Distributions). For WN, VM, and WD distributions
with given parameters, the n-th circular moment can be calculated according to
mWN
n
= exp

inµ −n2σ2
2

,
mV M
n
= exp(inµ)I|n|(κ)
I0(κ) ,
mWD
n
=
L
X
j=1
γj exp(inβj) .
A proof is given in [5].
3.3. Circular Moment Matching
As both WN and VM distributions are uniquely defined by their first moment, it is possible to
convert between them by matching the first circular moment.
Lemma 2 (Circular Moment Matching). We define A(x) = I1(x)
I0(x) as given in [5].
1. For a given first moment m1, the WN distribution with this first moment has the density
WN

atan2(Im m1, Re m1),
p
−2 log (|m1|)

.
2. For a given first moment m1, the VM distribution with this first moment has the density
VM
�atan2(Im m1, Re m1), A−1(|m1|)

.
3. For a given VM distribution with density VM(µ, κ), the WN distribution with identical first
moment has the density WN

µ,
r
−2 log

I1(κ)
I0(κ)

.
4. For a given WN distribution with density WN(µ, σ), the VM distribution with identical first
moment has the density VM

µ, A−1 
exp

−σ2
2

The proof is given in [15]. Calculation of the function A−1(·) is somewhat tricky. In [15], we use
the algorithm by Amos [28]2 to calculate A(·) and MATLAB’s fsolve to invert this function. Stienne
et al. have proposed closed-form approximations, which can be calculated very easily but have a large
approximation error [12], [10]. A more detailed discussion on approximations of A−1(·) can be found
in [29] and [30, Sec. 2.3].
1The term 1 −|m1| is sometimes referred to as circular variance.
2Pseudocode of this algorithm is given in [15].
7
4. Deterministic Sampling
In order to propagate continous probability densities through nonlinear functions, it is a common
technique to use discrete sample-based approximations of the continous densities. A set of samples can
easily be propagated by applying the nonlinear function to each sample individually. This approach
can be used for both the prediction and the measurement update step.
We distinguish between deterministic and nondeterministic sampling. Nondeterministic sampling
relies on a randomized algorithm to stochastically obtain samples of a density. Typical examples
include the samplers used by the particle filter [23] or the Gaussian particle filter [31]. Deterministic
sampling selects samples in a deterministic way, for example to fit certain moments (the sampler
used by the UKF, [2]), or to optimally approximate the shape of the density (published in [32], this
sampler is used by the S2KF, [33]). Deterministic sampling schemes have the advantage of requiring a
significantly smaller number of samples, which is why we will focus on this type of solution.
A na¨ıve solution for approximating a WN density may be the application of a deterministic
sampling scheme for the Gaussian distribution (such as the samplers used in [2], [33]) and subsequently
wrapping the samples. Even though this technique is valid for stochastic samples, it does not provide
satisfactory results for deterministic samples. In extreme cases, wrapping can cause different samples
to be wrapped to the same point, grossly misrepresenting the original density. This problem is
illustrated in Fig. 3. In the case of UKF samples (Fig. 3a), it can be seen that for σ ≈2.5, one
sample is placed at µ and two samples are placed on the opposite side of the circle, i.e., the mode
of the approximation is opposite to the true mode. Furthermore, for σ ≈5 all three UKF samples
are wrapped to the same position, i.e., the sample-based approximation degenerates to a distribution
with single Dirac component even though the true distribution is nearly uniform. Similar issues arise
in the case of S2KF samples (see Fig. 3b).
4.1. Analytic Solutions
First of all, we consider analytic solutions to obtain deterministic samples. These solutions are
based on circular moment-matching and only provide a small, fixed number of Dirac delta components,
but are extremely fast to calculate, making them a good choice for real-time applications.
In [15], we presented a method to obtain a WD approximation with three equally weighted
components, which is based on matching the first circular moment (see Algorithm 1). We further
extended this scheme to obtain a WD with five components by matching the first as well as second
circular moment (see Algorithm 2), which, as we proved in [17], necessitates the use of different
weights. Both approaches can approximate arbitrary symmetric circular densities with given circular
moments.
Algorithm 1: Deterministic approximation with L = 3 components.
Input: first circular moment m1
Output: WD(γ1, . . . , γ3, β1, . . . , β3)
/* extract µ
*/
µ ←atan2(Im m1, Re m1);
/* obtain Dirac positions
*/
α ←arccos
� 3
2|m1| −1
2

;
β1 ←µ −α mod 2π;
β2 ←µ mod 2π;
β3 ←µ + α mod 2π;
/* obtain weights
*/
γ1, γ2, γ3 ←1
3;
8
Algorithm 2: Deterministic approximation with L = 5 components.
Input: first circular moment m1, second circular moment m2,
parameter λ ∈[0, 1] with default λ = 0.5
Output: WD(γ1, . . . , γ5, β1, . . . , β5)
/* extract µ
*/
µ ←atan2(Im m1, Re m1);
m1 ←|m1|;
m2 ←|m2|;
/* obtain weights
*/
γmin
5
←(4m2
1 −4m1 −m2 + 1)/(4m1 −m2 −3);
γmax
5
←(2m2
1 −m2 −1)/(4m1 −m2 −3);
γ5 ←γmin
5
+ λ(γmax
5
−γmin
5
);
γ1, γ2, γ3, γ4 ←(1 −γ5)/4;
/* obtain Dirac positions
*/
c1 ←
2
1−γ5 (m1 −γ5);
c2 ←
1
1−γ5 (m2 −γ5) + 1;
x2 ←(2c1 +
p
4c2
1 −8(c2
1 −c2))/4;
x1 ←c1 −x2;
φ1 ←arccos(x1);
φ2 ←arccos(x2);
(β1, . . . , β5) ←µ + (−φ1, +φ1, −φ2, +φ2, 0) mod 2π;
[Figure 3: plots of f(x) over x and σ; curves: WN(π, σ), proposed samples, wrapped UKF/S2KF samples]
(a) Analytic solution from Algorithm 1 and wrapped UKF [2] samples.
(b) Analytic solution from Algorithm 2 with λ = 0.8 and wrapped S2KF [33] samples.
Figure 3: Proposed approaches for generating samples of WN distributions with different concentration parameters σ, compared to the naïve approach of wrapping samples of a Gaussian with identical σ. It can be seen that the UKF and S2KF samples are eventually wrapped to the same location, which produces an extremely poor approximation.
[Figure 4: five stacked panels a)–e) over [0, 2π]; curves: VM(π, 1), WN(π, 1.27)]
Figure 4: Example of the deterministic sampling of a VM and a WN distribution with equal first circular moment. From top to bottom: a) original densities, b) result of Algorithm 1, c) result of Algorithm 2, d) approach based on VM kernels from [34] with 10 components, e) quantization approach from [35] with 10 components. Note that the result of Algorithm 1 is identical for both densities because only the first circular moment is considered.
4.2. Optimization-based Solutions
If a larger number of samples is desired and there are more degrees of freedom in the samples
than constraints (such as preservation of circular moments), optimization-based solutions can be used.
The number of samples can be adjusted by the user and an optimal approximation is derived by
minimizing a distance measure.
In order to simultaneously calculate optimal locations and weights for the samples, a systematic
approach based on VM kernels has been proposed in [34]. For a WD mixture, an induced VM mixture
is compared to the true distribution with a quadratic integral distance. A specific kernel width is
considered for each component, which depends on the weight of the component and the value of the
true distribution at the location of the component. Both the weights and the locations of a fixed even
number of WD components are optimized to obtain an optimal symmetric approximation. Constraints
in the optimization algorithms are used to maintain a predefined number of circular moments. This
approach results in well-distributed Dirac mixtures that fulfill the moment constraints.
A quantization approach is discussed in [35]. It is based on computing optimal Voronoi quantizers.
In this approach, optimality refers to minimal quadratic distortion. The resulting Voronoi quantizer
gives rise to a circular discrete probability distribution on a continuous domain that approximates the
original continuous distribution. Use of this approximation is particularly beneficial in the prediction
step of stochastic filters, because an error bound for propagation through a non-trivial system function
can be obtained without actually knowing that function. It is sufficient to require it to be Lipschitz
and to know an upper bound for the Lipschitz constant. Furthermore, circular moment constraints
can be introduced in the optimization procedure of the quantization approach.
Examples of all discussed deterministic sampling methods are depicted in Fig. 4.
5. Operations on Densities
In order to derive a circular filtering algorithm, we need to be able to perform certain operations
on the involved probability densities.
5.1. Shifting and Mirroring
For a given density f(x), we want to obtain the density f(c − x) for a constant c ∈ S1. This operation is necessary in certain cases of the update step. We can split it into two steps: mirroring to obtain f(−x), and subsequent shifting by c to obtain f(c + (−x)). Mirroring WN(µ, σ) and VM(µ, κ) yields WN(2π − µ, σ) and VM(2π − µ, κ), respectively, because the distributions are symmetric around their mean. Shifting WN(µ, σ) and VM(µ, κ) by c yields

WN(µ + c mod 2π, σ) and VM(µ + c mod 2π, κ) ,

so the combined operation results in

WN((2π − µ) + c mod 2π, σ) and VM((2π − µ) + c mod 2π, κ) ,

i.e., the resulting density has its mode at c − µ mod 2π.
5.2. Circular Convolution
Given two independent circular random variables x1 ∼ f1(x1), x2 ∼ f2(x2), the sum x1 + x2 is distributed according to

(f1 ∗ f2)(x) = ∫₀^{2π} f1(t) f2(x − t) dt ,

where ∗ denotes the convolution. This operation is necessary in the prediction step to incorporate additive noise.
WN distributions are closed under convolution, and the new parameters can be obtained just as in the Gaussian case [15], i.e., µ = µ1 + µ2 mod 2π, σ² = σ1² + σ2². VM distributions are not closed under convolution. For this reason, Azmani et al. [9] used the approximation from [5], which is given by µ = µ1 + µ2, κ = A⁻¹(A(κ1) · A(κ2)). The function A(·) is the same as defined in Lemma 2. This approximation can be derived from an intermediate WN representation [10]. A similar approximation has been used by Markovic et al. for the von Mises-Fisher case [14, (7)].
In this paper, we present a more general result that calculates the convolution based on circular
moments.
Lemma 3 (Moments After Addition of Random Variables). Assume independent random variables x1 ∼ f1, x2 ∼ f2 defined on the circle. For the sum x = x1 + x2, it holds that

E(exp(inx)) = E(exp(inx1)) E(exp(inx2)) .

Proof.

mn = E(exp(inx)) = ∫₀^{2π} exp(inx) f(x) dx
   = ∫₀^{2π} ∫₀^{2π} exp(inx) f1(y) f2(x − y) dy dx
   = ∫₀^{2π} ∫₀^{2π} exp(in(x1 + x2)) f1(x1) f2(x2) dx1 dx2
   = ( ∫₀^{2π} exp(inx1) f1(x1) dx1 ) ( ∫₀^{2π} exp(inx2) f2(x2) dx2 )
   = E(exp(inx1)) E(exp(inx2)) .
If moment matching of the first circular moment is used to fit a WN or VM to the density that
results from convolution, the solutions for WN and VM distributions from [9] and [15] arise as special
cases of Lemma 3.
Remark 3. In fact, Lemma 3 allows us to calculate the convolution in a purely moment-based fashion. For this reason, we do not need to assume any particular distribution, but can simply calculate the moments of the convolved density as the products of the original moments.3
3In general, for example in the case of mixture densities, the complexity of the density increases with each successive
convolution, so considering only a finite number of moments constitutes an approximation.
[Figure 5: f(x) over x; curves: original WN densities, VM approximation, moment-based approximation, true product]
Figure 5: Multiplication of two WN densities with parameters µ1 = 2, σ1 = 0.7 and µ2 = 4.95, σ2 = 1.3. The true product and the results of both proposed approximation methods (VM and moment-based) are depicted. Note that the true product is not a WN density.
5.3. Multiplication
Multiplication of pdfs is an important operation for filtering algorithms because it is required for Bayesian inference. In general, the product of two pdfs is not normalized and thus not a pdf. For this reason, we consider the renormalized product, which is a valid pdf.
5.3.1. VM
Von Mises densities are closed under multiplication [9]. It holds that VM(µ1, κ1) · VM(µ2, κ2) ∝ VM(µ, κ), where

µ = atan2(Im m1, Re m1) ,  κ = |m1| ,  with m1 = κ1 exp(iµ1) + κ2 exp(iµ2) .
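In code, this is a one-line operation on the complex plane; a brief Python sketch (naming ours):

import numpy as np

def vm_multiply(mu1, kappa1, mu2, kappa2):
    """Parameters of VM(mu, kappa) proportional to VM(mu1, kappa1) * VM(mu2, kappa2)."""
    m = kappa1 * np.exp(1j * mu1) + kappa2 * np.exp(1j * mu2)
    return np.mod(np.angle(m), 2 * np.pi), np.abs(m)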
5.3.2. WN
WN densities are not closed under multiplication. In the following, we consider two different methods to approximate the renormalized product with a WN density.
WN via VM. In [15], we proposed a method that uses the VM distribution to approximate the product of two WN densities. More specifically, we convert the WN densities to VM densities using Lemma 2, multiply according to the VM multiplication formula, and convert back to a WN distribution by applying Lemma 2 again. This method has the disadvantage that, in general, the first circular moment of the resulting WN does not match the first circular moment of the true product. An example can be seen in Fig. 5.
WN via Moment Matching. In this paper, we present a new method for approximating the product of
WN distributions. This method is based on directly approximating the true posterior moments.
Theorem 1. The first circular moment of WN(µ1, σ1) · WN(µ2, σ2) after renormalization is given by

m1 = ( Σ_{j,k=−∞}^{∞} w(j, k) ∫₀^{2π} exp(ix) N(x; µ(j, k), σ(j, k)) dx ) / ( Σ_{j,k=−∞}^{∞} w(j, k) ∫₀^{2π} N(x; µ(j, k), σ(j, k)) dx ) ,

where N(x; µ, σ) is a one-dimensional Gaussian density with mean µ and standard deviation σ, and

µ(j, k) = ( (µ1 + 2πj) σ2² + (µ2 + 2πk) σ1² ) / ( σ1² + σ2² ) ,
σ(j, k) = √( σ1² σ2² / (σ1² + σ2²) ) ,
w(j, k) = exp( −((µ1 + 2πj) − (µ2 + 2πk))² / (2 (σ1² + σ2²)) ) / √( 2π (σ1² + σ2²) ) .
Proof. The true renormalized product is given by f(x) = c · f(x; µ1, σ1) · f(x; µ2, σ2), where c renormalizes the product, i.e.,

c = ( ∫₀^{2π} f(x; µ1, σ1) · f(x; µ2, σ2) dx )⁻¹ .

We calculate

m1 = c · ∫₀^{2π} exp(ix) · f(x; µ1, σ1) · f(x; µ2, σ2) dx
   = c · ∫₀^{2π} exp(ix) · ( Σ_{j=−∞}^{∞} N(x; µ1 + 2πj, σ1) ) · ( Σ_{k=−∞}^{∞} N(x; µ2 + 2πk, σ2) ) dx
   = c · Σ_{j=−∞}^{∞} Σ_{k=−∞}^{∞} ∫₀^{2π} exp(ix) · N(x; µ1 + 2πj, σ1) · N(x; µ2 + 2πk, σ2) dx
   = c · Σ_{j=−∞}^{∞} Σ_{k=−∞}^{∞} ∫₀^{2π} exp(ix) · w(j, k) · N(x; µ(j, k), σ(j, k)) dx
   = c · Σ_{j=−∞}^{∞} Σ_{k=−∞}^{∞} w(j, k) · ∫₀^{2π} exp(ix) · N(x; µ(j, k), σ(j, k)) dx ,

where we use the dominated convergence theorem to interchange summation and integration, and the abbreviations

µ(j, k) = ( (µ1 + 2πj) σ2² + (µ2 + 2πk) σ1² ) / ( σ1² + σ2² ) ,
σ(j, k) = √( σ1² σ2² / (σ1² + σ2²) ) ,
w(j, k) = exp( −((µ1 + 2πj) − (µ2 + 2πk))² / (2 (σ1² + σ2²)) ) / √( 2π (σ1² + σ2²) )

based on the multiplication formula for Gaussian densities given in [36, 8.1.8].
To obtain the renormalization factor c⁻¹, we use a similar derivation:

c⁻¹ = ∫₀^{2π} f(x; µ1, σ1) · f(x; µ2, σ2) dx
    = ∫₀^{2π} ( Σ_{j=−∞}^{∞} N(x; µ1 + 2πj, σ1) ) · ( Σ_{k=−∞}^{∞} N(x; µ2 + 2πk, σ2) ) dx
    = Σ_{j=−∞}^{∞} Σ_{k=−∞}^{∞} ∫₀^{2π} N(x; µ1 + 2πj, σ1) · N(x; µ2 + 2πk, σ2) dx
    = Σ_{j=−∞}^{∞} Σ_{k=−∞}^{∞} w(j, k) · ∫₀^{2π} N(x; µ(j, k), σ(j, k)) dx .
The involved integrals can be reduced to evaluations of the complex error function erf [26, 7.1]. This yields

∫₀^{2π} exp(ix) · N(x; µ(j, k), σ(j, k)) dx
  = (1/2) exp( iµ(j, k) − σ(j, k)²/2 ) · [ erf( (µ(j, k) + iσ(j, k)²) / (√2 σ(j, k)) ) − erf( (µ(j, k) − 2π + iσ(j, k)²) / (√2 σ(j, k)) ) ]

and

∫₀^{2π} N(x; µ(j, k), σ(j, k)) dx = (1/2) [ erf( µ(j, k) / (√2 σ(j, k)) ) − erf( (µ(j, k) − 2π) / (√2 σ(j, k)) ) ] .
There are efficient implementations of the complex error function by means of the related Faddeeva
function [37]. Furthermore, the infinite sums can be truncated to just a few summands without a
significant loss in accuracy. For example, the multiplication in Fig. 5 requires 5 × 5 summands for
an error smaller than the accuracy of the IEEE 754 64 bit double data type [38]. Consequently, the
proposed method allows for efficient calculation of the approximate multiplication of WN densities.
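The following Python sketch (the function name and the truncation to j, k ∈ {−n, . . . , n} are our choices) computes the first circular moment of the renormalized product according to Theorem 1; scipy.special.erf accepts complex arguments, which it evaluates via the Faddeeva function [37]:

import numpy as np
from scipy.special import erf  # supports complex arguments

def wn_product_m1(mu1, s1, mu2, s2, n=2):
    """First circular moment of the renormalized product WN(mu1, s1) * WN(mu2, s2)
    (Theorem 1), with the infinite sums truncated to j, k in [-n, n]."""
    s2sum = s1**2 + s2**2
    sig = np.sqrt(s1**2 * s2**2 / s2sum)   # sigma(j, k) is independent of j, k
    num, den = 0.0 + 0.0j, 0.0
    for j in range(-n, n + 1):
        for k in range(-n, n + 1):
            a, b = mu1 + 2 * np.pi * j, mu2 + 2 * np.pi * k
            mu_jk = (a * s2**2 + b * s1**2) / s2sum
            w = np.exp(-0.5 * (a - b)**2 / s2sum) / np.sqrt(2 * np.pi * s2sum)
            # integral of exp(ix) N(x; mu_jk, sig) over [0, 2*pi]
            i1 = 0.5 * np.exp(1j * mu_jk - sig**2 / 2) * (
                erf((mu_jk + 1j * sig**2) / (np.sqrt(2) * sig))
                - erf((mu_jk - 2 * np.pi + 1j * sig**2) / (np.sqrt(2) * sig)))
            # integral of N(x; mu_jk, sig) over [0, 2*pi]
            i0 = 0.5 * (erf(mu_jk / (np.sqrt(2) * sig))
                        - erf((mu_jk - 2 * np.pi) / (np.sqrt(2) * sig)))
            num += w * i1
            den += w * i0
    return num / den

A WN can then be fitted by moment matching, i.e., µ = atan2(Im m1, Re m1) and σ = √(−2 log |m1|).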
6. Circular Filtering
Based on the results in the previous section, we derive circular filtering algorithms for the scenarios
described in Sec. 2.1. All proposed algorithms follow the recursive filtering concept and consist of
prediction and measurement update steps. We formulate the necessary steps without assuming a particular density whenever possible, such that most methods can be directly applied to WN as well as VM distributions and may even be generalized to other continuous circular distributions. An overview of all considered prediction and measurement update algorithms is given in Table 2.
method       | system model                                          | measurement model
             | identity            | additive noise | arbitrary noise | identity | additive noise | arbitrary noise
WN           | [15] (special case) | [15]           | contribution    | [15]     | [16]           | [16]
VM           | [9]                 | contribution   | contribution    | [9]      | contribution   | contribution
moment-based | contribution        | contribution   | contribution    | -        | -              | -

Table 2: Prediction and measurement update algorithms. Entries marked with contribution are contributions of this paper.
6.1. Prediction
The prediction step is used to propagate the estimate through time.
6.1.1. Identity System Model
The transition density is given according to

f(xk+1|xk) = ∫₀^{2π} f(xk+1, wk|xk) dwk
           = ∫₀^{2π} f(xk+1|xk, wk) fw(wk) dwk
           = ∫₀^{2π} δ(xk+1 − (xk + wk)) fw(wk) dwk
           = fw(xk+1 − xk) ,

where fw(·) is the density of the system noise. For the predicted density, according to the Chapman-Kolmogorov equation we obtain

fp(xk+1) = ∫₀^{2π} f(xk+1|xk) fe(xk) dxk .   (1)

In the special case of an identity system model, this yields

fp(xk+1) = ∫₀^{2π} fw(xk+1 − xk) fe(xk) dxk = (fw ∗ fe)(xk+1) ,

where ∗ denotes convolution as defined in Sec. 5.2. For the VM distribution, this system model has been considered in [9]. If a WN distribution is assumed, (1) is a special case of [15] where we omit the propagation through the nonlinear function.
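For WN densities, this prediction step reduces to two parameter updates; a minimal Python sketch (naming ours):

import numpy as np

def predict_identity_wn(mu_e, sigma_e, mu_w, sigma_w):
    """Prediction for the identity system model with WN estimate WN(mu_e, sigma_e)
    and additive WN noise WN(mu_w, sigma_w): a circular convolution (Sec. 5.2)."""
    return np.mod(mu_e + mu_w, 2 * np.pi), np.sqrt(sigma_e**2 + sigma_w**2)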
6.1.2. Nonlinear System Model with Additive Noise
Similar to the previous case, the transition density is given by

f(xk+1|xk) = ∫₀^{2π} δ(xk+1 − (ak(xk) + wk)) fw(wk) dwk .

We approximate the prior density fe(xk) ≈ Σ_{j=1}^{L} γj δ(xk − βj) using, for example, Algorithm 1 or Algorithm 2. Then, the prediction density can be approximated according to

fp(xk+1) = ∫₀^{2π} f(xk+1|xk) fe(xk) dxk
 ≈ ∫₀^{2π} ( ∫₀^{2π} δ(xk+1 − (ak(xk) + wk)) fw(wk) dwk ) · ( Σ_{j=1}^{L} γj δ(xk − βj) ) dxk
 = ∫₀^{2π} fw(wk) Σ_{j=1}^{L} γj ∫₀^{2π} δ(xk+1 − (ak(xk) + wk)) δ(xk − βj) dxk dwk
 = ∫₀^{2π} fw(wk) Σ_{j=1}^{L} γj δ(xk+1 − (ak(βj) + wk)) dwk
 ≈ (fw ∗ f̃)(xk+1) ,

where in the last step f̃ is obtained by moment matching of the WD Σ_{j=1}^{L} γj δ(x − ak(βj)) according to Lemma 2. The convolution can be calculated as described in Sec. 5.2.
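A sketch of this prediction step for WN densities in Python, assuming a deterministic sampler such as Algorithm 1 or Algorithm 2 is available as sample_determ (all names are ours, and a_k is assumed to accept arrays):

import numpy as np

def predict_additive_wn(mu_e, sigma_e, a_k, mu_w, sigma_w, sample_determ):
    """Prediction for x_{k+1} = a_k(x_k) + w_k with WN estimate and WN noise."""
    gammas, betas = sample_determ(mu_e, sigma_e)   # WD approximation of the prior
    m1 = np.sum(gammas * np.exp(1j * a_k(betas)))  # moment of the propagated WD
    mu_t = np.mod(np.angle(m1), 2 * np.pi)         # moment matching yields f~
    sigma_t = np.sqrt(-2 * np.log(np.abs(m1)))
    # convolve f~ with the additive noise (Sec. 5.2)
    return np.mod(mu_t + mu_w, 2 * np.pi), np.sqrt(sigma_t**2 + sigma_w**2)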
6.1.3. Nonlinear System Model with Arbitrary Noise
In this paper, we extend the previous results to deal with arbitrary noise in the prediction step. For arbitrary noise, the transition density is given by

f(xk+1|xk) = ∫_W δ(xk+1 − ak(xk, wk)) fw(wk) dwk .

We approximate the prior density fe(xk) ≈ Σ_{j=1}^{L} γj δ(xk − βj) as well as the noise density fw(wk) ≈ Σ_{l=1}^{Lw} γw_l δ(wk − βw_l). It should be noted that the noise is not necessarily a circular quantity, and different approximation techniques may be required; if W = Rn, the techniques presented in [32] may be applied. Then, the prediction density can be approximated according to

fp(xk+1) = ∫₀^{2π} f(xk+1|xk) fe(xk) dxk
 = ∫₀^{2π} ∫_W δ(xk+1 − ak(xk, wk)) fw(wk) dwk fe(xk) dxk
 ≈ ∫₀^{2π} ∫_W δ(xk+1 − ak(xk, wk)) ( Σ_{l=1}^{Lw} γw_l δ(wk − βw_l) ) dwk fe(xk) dxk
 = Σ_{l=1}^{Lw} γw_l ∫₀^{2π} δ(xk+1 − ak(xk, βw_l)) fe(xk) dxk
 ≈ Σ_{l=1}^{Lw} γw_l ∫₀^{2π} δ(xk+1 − ak(xk, βw_l)) ( Σ_{j=1}^{L} γj δ(xk − βj) ) dxk
 = Σ_{j=1}^{L} Σ_{l=1}^{Lw} γj γw_l δ(xk+1 − ak(βj, βw_l)) .
A continuous density can be fitted to this result by circular moment matching. The algorithm is given
in Algorithm 3. It is worth noting that this algorithm can be executed based purely on the circular
moments of f(xk) and fw(wk) if the deterministic sampling scheme only depends on these moments.
In that case, we do not necessarily need to fit a distribution f(xk+1) to the resulting circular moments,
but can just store the estimate by retaining those circular moments.
Algorithm 3: Prediction with arbitrary noise.
Input: prior density f(xk), system noise density fw(wk), system function ak(·, ·)
Output: predicted density f(xk+1)
/* sample prior density and noise density */
(γ1, . . . , γL, β1, . . . , βL) ← sampleDeterm(f(xk));
(γw_1, . . . , γw_Lw, βw_1, . . . , βw_Lw) ← sampleDeterm(fw(wk));
/* obtain Cartesian product and propagate */
for j ← 1 to L do
    for l ← 1 to Lw do
        γp_{j+L(l−1)} ← γj · γw_l;
        βp_{j+L(l−1)} ← ak(βj, βw_l);
    end
end
/* obtain posterior density */
f(xk+1) ← momentMatching(γp_1, . . . , γp_{L·Lw}, βp_1, . . . , βp_{L·Lw});
6.2. Measurement Update
The measurement update step fuses the current estimate with a measurement that was obtained
according to the measurement equation.
6.2.1. Identity Measurement Model
In the case of the identity measurement model and additive noise, the measurement likelihood is given by

f(zk|xk) = fv(zk − xk) .

For the posterior density, application of Bayes' theorem yields

f(xk|zk) = f(zk|xk) f(xk) / f(zk) ∝ f(zk|xk) f(xk) ,

where f(xk) is the prior density. Thus, we obtain the posterior density

f(xk|zk) ∝ fv(zk − xk) f(xk)

as the product of the prior density and fv(zk − xk), which can be obtained as described in Sec. 5.1. The multiplication depends on the assumed probability density and can be performed using the multiplication formulas given in Sec. 5.3. For the VM case, this is equivalent to the measurement update from [9], and for the WN case, it is equivalent to the measurement update from [15].
6.2.2. Nonlinear Model with Additive Noise
For a nonlinear measurement function with additive noise, the measurement likelihood is calculated
according to f(zk|xk) = fv(zk −hk(xk)), as given in [16]. The remainder of the measurement update
step is identical to the case of arbitrary noise as described in the following section.
6.2.3. Nonlinear Model with Arbitrary Noise
For a nonlinear update with arbitrary noise, we assume that the likelihood is given. The key idea is to approximate the prior density f(xk) with a WD mixture and to reweight the components according to the likelihood f(zk|xk). However, if the likelihood function is narrow, this can lead to degenerate solutions, i.e., most or all weights are close to zero.
We have shown in [16] that a progressive solution as introduced in [39] can be used to avoid this
issue. For this purpose, we formulate the likelihood as a product of likelihoods

f(zk|xk) = f(zk|xk)^{λ1} · . . . · f(zk|xk)^{λs} ,

where λ1, . . . , λs > 0 and Σ_{j=1}^{s} λj = 1. This decomposition of the likelihood allows us to perform the measurement update step gradually by performing s partial update steps. Each update step is small enough to prevent degeneration, and we obtain a new sample set after each step to ensure that the differences between the sample weights stay small. In order to determine λ1, . . . , λs and s, we require that the quotient between the smallest weight γmin and the largest weight γmax after reweighting does not fall below a certain threshold R ∈ (0, 1), i.e., γmin/γmax ≥ R. Using the conservative bounds

γmin ≥ min_{j=1,...,L}(γn_j) · min_{j=1,...,L} f(zk|βn_j) ,
γmax ≤ max_{j=1,...,L}(γn_j) · max_{j=1,...,L} f(zk|βn_j) ,

this leads to the condition

λn ≤ log( R · max_{j=1,...,L}(γn_j) / min_{j=1,...,L}(γn_j) ) / log( min_{j=1,...,L} f(zk|βn_j) / max_{j=1,...,L} f(zk|βn_j) ) ,

where WD(γn_1, . . . , γn_L, βn_1, . . . , βn_L) is the deterministic approximation at the n-th progression step4. The progression continues until Σ_n λn = 1. This method can be applied in conjunction with WN as well as VM distributions (see Algorithm 4).
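The step size computation is the core of the progression. A small Python sketch under the bounds above (naming ours; a flat likelihood is handled by finishing in a single step):

import numpy as np

def progression_step_size(gammas, likelihoods, remaining, R=0.2):
    """Step size lambda_n for one progressive update step (Sec. 6.2.3)."""
    denom = np.log(likelihoods.min() / likelihoods.max())
    if denom == 0.0:              # likelihood is flat over the samples
        return remaining
    lam = np.log(R * gammas.max() / gammas.min()) / denom
    return min(remaining, lam)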
7. Evaluation
7.1. Propagation Accuracy
In order to evaluate the deterministic sampling introduced in Sec. 4, we investigate the accuracy of propagation through the nonlinear function g : S1 → S1,

g(x) = x + c ×_R sin(x) mod 2π ,

where c ∈ [0, 1) is a parameter controlling the strength of the nonlinearity and ×_R refers to multiplication in the field of real numbers R. Furthermore, we consider the density WN(0, σ) that we want to propagate through g(·). For this purpose, we sample WN(0, σ) deterministically using the methods described in Sec. 4 and obtain WD(γ1, . . . , γL, β1, . . . , βL). Then, we apply g(·) componentwise, which yields WD(γ1, . . . , γL, g(β1), . . . , g(βL)).
4Compared to [16], we extend the progressive scheme to handle discrete approximations with non-equally weighted components.
Algorithm 4: Progressive measurement update for arbitrary noise.
Input: measurement ẑk, likelihood f(zk|xk), predicted density fp(xk) as WN or VM, threshold parameter R
Output: estimated density as WN or VM
s ← 0;
f(xk) ← fp(xk);
while Σ_{n=1}^{s} λn < 1 do
    s ← s + 1;
    /* deterministic sampling from WN or VM density (Sec. 4) */
    (γ1, . . . , γL, β1, . . . , βL) ← sampleDeterm(f(xk));
    /* calculate size of progression step */
    λs ← min( 1 − Σ_{n=1}^{s−1} λn ,  log( R · max_{j=1,...,L}(γj) / min_{j=1,...,L}(γj) ) / log( min_{j=1,...,L} f(zk|βj) / max_{j=1,...,L} f(zk|βj) ) );
    /* execute progression step */
    for j ← 1 to L do
        γj ← γj · f(ẑk|βj)^{λs};
    end
    /* use moment matching to obtain WN or VM density */
    f(xk) ← momentMatching(γ1, . . . , γL, β1, . . . , βL);
end
The true posterior is given by

ftrue(x) = f(g⁻¹(x); µ, σ) / g′(g⁻¹(x))

and can only be calculated numerically. We evaluate the first and the second circular moment mWD_i, i = 1, 2, of the resulting WD distribution and compare them to the first and the second circular moment mtrue_i, i = 1, 2, of the true posterior, which is obtained by numerical integration5. The considered error measure is given by |mWD_i − mtrue_i|, i = 1, 2, where |·| is the Euclidean norm in the complex plane. Additionally, we fit a WN density to the posterior WD by circular moment matching and numerically calculate the Kullback-Leibler divergence

DKL(ftrue || ffitted) = ∫₀^{2π} ftrue(x) log( ftrue(x) / ffitted(x) ) dx ,   (2)

between the true posterior and the fitted WN. The Kullback-Leibler divergence is an information-theoretic measure that quantifies the information loss when approximating ftrue with ffitted. This concept is illustrated in Fig. 6.
The results for different values of σ are depicted in Fig. 7. We compare several samplers: the analytic methods with L = 3 components (Algorithm 1) and L = 5 components (Algorithm 2, parameter λ = 0.5) from Sec. 4.1, as well as the quantization approach discussed in Sec. 4.2. It can be seen that the analytic solution with L = 5 components is significantly better than the solution with L = 3 components. The quantization-based solution is computationally quite demanding but gives almost optimal results. However, the analytic solution with L = 5 components achieves comparable performance in terms of the Kullback-Leibler divergence even though the posterior moments are not calculated as accurately.
7.2. Moment-Based WN Multiplication
In this evaluation, we compare the two methods for WN multiplication given in Sec. 5.3.2, i.e., the VM-based and the moment-based approximation. For two WN densities WN(µ1, σ1) and WN(µ2, σ2), we calculate the true product ftrue = WN(µ1, σ1) · WN(µ2, σ2) and compare it to the WN approximation ffitted.
5Numerical integration produces very accurate results in this case, but is too slow for use in practical filtering
applications.
[Figure 6: prior WN with WD mixture (left), propagation through g(x) = x + c sin(x) mod 2π (middle), true posterior with propagated WD mixture and fitted WN (right)]
Figure 6: Propagation of a WN distribution with parameters µ = 0.1, σ = 1 through a nonlinear function g by means of a deterministic WD approximation with five components. In this example, we use c = 0.7.
In order to determine the approximation quality, we compute the Kullback-Leibler divergence (2). Furthermore, we consider the L2 distance

DL2(ftrue, ffitted) = ∫₀^{2π} ( ftrue(x) − ffitted(x) )² dx .
The results for different values of σ1, µ2, and σ2 are depicted in Fig. 8 and Fig. 9. We keep µ1 fixed because only the difference between µ2 and µ1 affects the result. Multiplication is commutative, so we consider different sets of values for σ1 and σ2 to avoid redundant plots.
As can be seen, the moment-based approach significantly outperforms the VM-based approach in almost all cases according to both distance measures. The new approach is particularly superior for small uncertainties.
7.3. Filtering

Scenario       | System Function | Measurement Noise Cv
s              | (3)             | 0.01 · I_{2×2}
m              | (3)             | 0.1 · I_{2×2}
l              | (3)             | 3 · I_{2×2}
s-non-additive | (4)             | 0.01 · I_{2×2}
m-non-additive | (4)             | 0.1 · I_{2×2}
l-non-additive | (4)             | 3 · I_{2×2}

Table 3: Evaluation scenarios.
In order to evaluate the proposed filtering algorithms, we simulated several scenarios. First of all, we distinguish between models with additive noise and models with a more complex noise structure. In the case of additive noise, we consider the system function

xk+1 = xk + c1 ×_R sin(xk) + c2 + wk ,   (3)

with the two parameters c1 = 0.1, c2 = 0.15, noise wk ∼ WN(0, 0.2), where ×_R is multiplication in the field of real numbers R. Intuitively, c1 determines the degree of nonlinearity and c2 is a constant angular velocity that is added at each time step. For the case of arbitrary noise, the system function is given by

xk+1 = xk + c1 ×_R sin(xk + wk) + c2 ,   (4)
[Figure 7: 3 × 3 grid of plots over the nonlinearity parameter c for σ ∈ {0.5, 1.0, 1.5} (columns); rows show the first circular moment error, the second circular moment error, and the KLD; curves: L=3, L=5, L=50 quantization]
Figure 7: Propagation of WN(0, σ) through a nonlinear function with nonlinearity parameter c.
with the same c1, c2, and wk as above. In both cases, the nonlinear measurement function is given by

ẑk = [cos(xk), sin(xk)]^T + vk ∈ R² ,

with additive noise vk ∼ N(0, η · I_{2×2}), η ∈ {3, 0.1, 0.01}. An overview of all considered scenarios is given in Table 3.
In the scenarios with additive system noise, we compare the proposed filter to the standard approaches described in Sec. 2.2: a UKF with a one-dimensional state vector, a UKF with a two-dimensional state vector, and particle filters with 10 and 100 particles. In order to handle non-additive noise with a UKF, state augmentation is typically used, which is not applicable to arbitrary noise. For this reason, we only compare the proposed approach to the particle filters in the non-additive noise case.
The initial estimate is given by x0 ∼ WN(0, 1), whereas the true initial state lies on the opposite side of the circle, x0^true = π, i.e., the initial estimate is poor, which is difficult to handle for noncircular filters.
For the circular filtering algorithm, we use the deterministic sampling method given in Algorithm 2
with parameter λ = 0.5. The progression threshold is chosen as R = 0.2.
In order to evaluate the performance of the different filters, we consider an error measure that takes periodicity into account. The angular error is defined as the shortest distance on the circle

d : S1 × S1 → [0, π] ,  d(a, b) = min(|a − b|, 2π − |a − b|) .

This leads to an angular version of the commonly used root mean square error (RMSE)

√( (1/kmax) Σ_{k=1}^{kmax} d(xk, xk^true)² )

between the estimates xk and the true states xk^true. We simulated the system for kmax = 100 time steps and compared the angular RMSE of all estimators. The results from 100 Monte Carlo runs are depicted in Fig. 10.
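For reference, the angular distance and the resulting RMSE are easily computed; a Python sketch (naming ours):

import numpy as np

def angular_rmse(estimates, truth):
    """Angular RMSE based on the shortest distance d(a, b) on the circle."""
    diff = np.abs(np.asarray(estimates) - np.asarray(truth)) % (2 * np.pi)
    d = np.minimum(diff, 2 * np.pi - diff)
    return np.sqrt(np.mean(d**2))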
[Figure 8: 3 × 3 grid of KLD-over-µ plots for σ1 ∈ {0.1, 0.4, 1.0} (columns) and σ2 ∈ {0.2, 0.5, 1.0} (rows); curves: VM, moment-based]
Figure 8: Kullback-Leibler divergence between the true product of WN densities and the proposed approximations.
In the scenarios with additive noise, it can be seen that the proposed filter performs very well regardless of the amount of noise. Only the particle filter with 100 particles is able to produce similar results. However, it should be noted that the proposed filter uses just five samples. The particle filter with 10 particles performs considerably worse and fails completely for small noise as a result of particle degeneration issues. Both variants of the UKF perform worse than the proposed filter. In particular, the UKF with a two-dimensional state does not work very well, which can be explained by the inaccuracies in the conversion of one-dimensional into two-dimensional noise.
When non-additive noise is considered, the proposed filter even significantly outperforms the particle filter with 100 particles. As a result of the low number of particles and the associated particle degeneration issues, the particle filter with 10 particles has the worst performance.
8. Conclusion
In this paper, we presented a framework for recursive filtering on the circle. The proposed filtering
algorithms can deal with arbitrary nonlinear system and measurement functions. Furthermore, they
can be used in conjunction with different circular probability distributions. We have shown that the
prediction step can be performed based on circular moments only, without ever assuming a particular
distribution.
For the purpose of evaluation, we have considered several aspects of the proposed methods. First of all, the accuracy of the deterministic approximations was evaluated by considering the error that occurs when they are used to propagate a continuous distribution through a nonlinear function. We have found that the proposed deterministic approximation with five samples yields good results for most practical scenarios.
[Figure 9: 3 × 3 grid of squared-L2-distance-over-µ plots for σ1 ∈ {0.1, 0.4, 1.0} (columns) and σ2 ∈ {0.2, 0.5, 1.0} (rows); curves: VM, moment-based]
Figure 9: L2 distance between the true product of WN densities and the proposed approximations.
Second, we evaluated the novel moment-based WN multiplication method and showed that it is superior to the previously published method based on fitting a VM distribution. Finally, we evaluated the proposed filtering algorithms in several scenarios and compared them to standard approaches. These simulations show the advantages of using a circular filtering scheme over traditional methods intended for the linear case.
Future work may include extensions of the proposed methods to other manifolds such as the torus
or the hypersphere. Additionally, consideration of multimodal circular distributions may be of interest,
for example by means of WN or VM mixtures.
Acknowledgment
This work was partially supported by grants from the German Research Foundation (DFG) within
the Research Training Groups RTG 1126 “Soft-tissue Surgery: New Computer-based Methods for the
Future Workplace” and RTG 1194 “Self-organizing Sensor-Actuator-Networks”.
References
[1] R. E. Kalman, “A New Approach to Linear Filtering and Prediction Problems,” Transactions of the ASME Journal
of Basic Engineering, vol. 82, pp. 35–45, 1960.
[2] S. J. Julier and J. K. Uhlmann, “Unscented Filtering and Nonlinear Estimation,” Proceedings of the IEEE, vol. 92,
no. 3, pp. 401–422, Mar. 2004.
[3] S. R. Jammalamadaka and A. Sengupta, Topics in Circular Statistics. World Scientific, 2001.
[4] E. Batschelet, Circular Statistics in Biology, ser. Mathematics in Biology. London: Academic Press, 1981.
[5] K. V. Mardia and P. E. Jupp, Directional Statistics, 1st ed. Wiley, 1999.
[6] K. V. Mardia, G. Hughes, C. C. Taylor, and H. Singh, “A Multivariate von Mises Distribution with Applications to
Bioinformatics,” Canadian Journal of Statistics, vol. 36, no. 1, pp. 99–109, 2008.
[7] N. I. Fisher, “Problems with the Current Definitions of the Standard Deviation of Wind Direction,” Journal of
Climate and Applied Meteorology, vol. 26, no. 11, pp. 1522–1529, 1987.
[Figure 10: box plots of the angular RMSE for small (η = 0.01), medium (η = 0.1), and large (η = 3) measurement noise; top row compares proposed, UKF 1D, UKF 2D, PF10, PF100 for additive noise; bottom row compares proposed, PF10, PF100 for non-additive noise]
Figure 10: RMSE (in radians) for different filters obtained from 100 Monte Carlo runs for additive noise (top) and non-additive noise (bottom).
[8] K. V. Mardia, “Directional Statistics in Geosciences,” Communications in Statistics - Theory and Methods, vol. 10,
no. 15, pp. 1523–1543, 1981.
[9] M. Azmani, S. Reboul, J.-B. Choquel, and M. Benjelloun, “A Recursive Fusion Filter for Angular Data,” in IEEE
International Conference on Robotics and Biomimetics (ROBIO 2009), 2009, pp. 882–887.
[10] G. Stienne, S. Reboul, M. Azmani, J. Choquel, and M. Benjelloun, “A Multi-temporal Multi-sensor Circular Fusion
Filter,” Information Fusion, vol. 18, pp. 86–100, Jul. 2013.
[11] G. Stienne, S. Reboul, J. Choquel, and M. Benjelloun, “Cycle Slip Detection And Repair with a Circular On-line
Change-Point Detector,” Signal Processing, 2014.
[12] ——, “Circular Data Processing Tools Applied to a Phase Open Loop Architecture for Multi-Channels Signals
Tracking,” in IEEE/ION Position Location and Navigation Symposium (PLANS 2012), 2012, pp. 633–642.
[13] K. E. Mokhtari, S. Reboul, M. Azmani, J.-B. Choquel, H. Salaheddine, B. Amami, and M. Benjelloun, “A Circular
Interacting Multi-Model Filter Applied to Map Matching,” in Proceedings of the 16th International Conference on
Information Fusion (Fusion 2013), 2013.
[14] I. Markovic, F. Chaumette, and I. Petrovic, “Moving Object Detection, Tracking and Following Using an Omnidi-
rectional Camera on a Mobile Robot,” in Proceedings of the 2014 IEEE International Conference on Robotics and
Automation (ICRA 2014), Hong-Kong, Jun. 2014.
[15] G. Kurz, I. Gilitschenski, and U. D. Hanebeck, “Recursive Nonlinear Filtering for Angular Data Based on Circular
Distributions,” in Proceedings of the 2013 American Control Conference (ACC 2013), Washington D. C., USA,
Jun. 2013.
[16] ——, “Nonlinear Measurement Update for Estimation of Angular Systems Based on Circular Distributions,” in
Proceedings of the 2014 American Control Conference (ACC 2014), Portland, Oregon, USA, Jun. 2014.
[17] ——, “Deterministic Approximation of Circular Densities with Symmetric Dirac Mixtures Based on Two Circular
Moments,” in Proceedings of the 17th International Conference on Information Fusion (Fusion 2014), Salamanca,
Spain, Jul. 2014.
[18] G. Kurz, F. Faion, and U. D. Hanebeck, “Constrained Object Tracking on Compact One-dimensional Manifolds
Based on Directional Statistics,” in Proceedings of the Fourth IEEE GRSS International Conference on Indoor
Positioning and Indoor Navigation (IPIN 2013), Montbeliard, France, Oct. 2013.
[19] I. Gilitschenski, G. Kurz, and U. D. Hanebeck, “Bearings-Only Sensor Scheduling Using Circular Statistics,” in
Proceedings of the 16th International Conference on Information Fusion (Fusion 2013), Istanbul, Turkey, Jul. 2013.
[20] G. Kurz, M. Dolgov, and U. D. Hanebeck, “Nonlinear Stochastic Model Predictive Control in the Circular Domain
(to appear),” in Proceedings of the 2015 American Control Conference (ACC 2015), Chicago, Illinois, USA, Jul.
2015.
[21] G. Kurz, I. Gilitschenski, S. J. Julier, and U. D. Hanebeck, “Recursive Estimation of Orientation Based on the
Bingham Distribution,” in Proceedings of the 16th International Conference on Information Fusion (Fusion 2013),
Istanbul, Turkey, Jul. 2013.
[22] S. J. Julier and J. J. LaViola, “On Kalman Filtering With Nonlinear Equality Constraints,” IEEE Transactions on
Signal Processing, vol. 55, no. 6, pp. 2774–2784, 2007.
[23] M. Arulampalam, S. Maskell, N. Gordon, and T. Clapp, “A Tutorial on Particle Filters for Online Nonlinear/Non-
Gaussian Bayesian Tracking,” IEEE Transactions on Signal Processing, vol. 50, no. 2, pp. 174–188, 2002.
[24] W. K. Hastings, “Monte Carlo Sampling Methods Using Markov Chains and Their Applications,” Biometrika,
vol. 57, no. 1, pp. 97–109, 1970.
[25] G. Kurz, I. Gilitschenski, and U. D. Hanebeck, “Efficient Evaluation of the Probability Density Function of a
Wrapped Normal Distribution,” in Proceedings of the IEEE ISIF Workshop on Sensor Data Fusion: Trends,
Solutions, Applications (SDF 2014), Bonn, Germany, Oct. 2014.
[26] M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, 10th ed. New York: Dover, 1972.
[27] K. Mardia, “Characterizations of Directional Distributions,” in A Modern Course on Statistical Distributions in Scientific Work. Springer Netherlands, 1975, vol. 17, pp. 365–385.
[28] D. E. Amos, “Computation of Modified Bessel Functions and Their Ratios,” Mathematics of Computation, vol. 28,
no. 125, pp. 239–251, 1974.
[29] S. Sra, “A Short Note on Parameter Approximation for von Mises-Fisher Distributions: And a Fast Implementation of I_s(x),” Computational Statistics, vol. 27, no. 1, pp. 177–190, 2012.
[30] G. Stienne, “Traitements des signaux circulaires appliqués à l’altimétrie par la phase des signaux GNSS,” Ph.D. dissertation, Université du Littoral Côte d’Opale, Dec. 2013.
[31] J. H. Kotecha and P. Djuric, “Gaussian Particle Filtering,” IEEE Transactions on Signal Processing, vol. 51, no. 10,
pp. 2592–2601, 2003.
[32] U. D. Hanebeck and V. Klumpp, “Localized Cumulative Distributions and a Multivariate Generalization of the Cramér-von Mises Distance,” in Proceedings of the 2008 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI 2008), Seoul, Republic of Korea, Aug. 2008, pp. 33–39.
[33] J. Steinbring and U. D. Hanebeck, “S2KF: The Smart Sampling Kalman Filter,” in Proceedings of the 16th
International Conference on Information Fusion (Fusion 2013), Istanbul, Turkey, Jul. 2013.
[34] U. D. Hanebeck and A. Lindquist, “Moment-based Dirac Mixture Approximation of Circular Densities,” in
Proceedings of the 19th IFAC World Congress (IFAC 2014), Cape Town, South Africa, Aug. 2014.
[35] I. Gilitschenski, G. Kurz, and U. D. Hanebeck, “Optimal Quantization of Circular Distributions (submitted),” in
Proceedings of the 53rd IEEE Conference on Decision and Control (CDC 2014), 2014.
[36] K. B. Petersen and M. S. Pedersen, “The Matrix Cookbook,” Nov. 2012.
[37] S. G. Johnson, “Faddeeva Package,” http://ab-initio.mit.edu/wiki/index.php/Faddeeva_Package, Dec. 2012.
[38] D. Goldberg, “What Every Computer Scientist Should Know About Floating-point Arithmetic,” ACM Computing
Surveys, vol. 23, no. 1, pp. 5–48, Mar. 1991.
[39] U. D. Hanebeck, “PGF 42: Progressive Gaussian Filtering with a Twist,” in Proceedings of the 16th International
Conference on Information Fusion (Fusion 2013), Istanbul, Turkey, Jul. 2013.