JOURNAL OF COMPUTING, VOLUME 2, ISSUE 3, MARCH 2010, ISSN 2151-9617
HTTPS://SITES.GOOGLE.COM/SITE/JOURNALOFCOMPUTING/

A Neuro-Fuzzy Multi Swarm FastSLAM Framework

R. Havangi, M. Teshnehlab and M.A. Nekoui

Abstract— FastSLAM is a framework for simultaneous localization and mapping using a Rao-Blackwellized particle filter. In FastSLAM, a particle filter is used to estimate the mobile robot pose (position and orientation), and an Extended Kalman Filter (EKF) is used to estimate the feature locations. However, FastSLAM degenerates over time. This degeneracy is due to the fact that the particle set estimating the pose of the robot loses its diversity. One of the main reasons for losing particle diversity in FastSLAM is sample impoverishment. It occurs when the likelihood lies in the tail of the proposal distribution; in this case, most of the particle weights are insignificant. Another problem of FastSLAM relates to the design of the extended Kalman filter for landmark position estimation. The performance of the EKF and the quality of the estimation depend heavily on correct a priori knowledge of the process and measurement noise covariance matrices (Q and R), which are unknown in most applications, and an incorrect a priori knowledge of Q and R may seriously degrade the performance of the Kalman filter. This paper presents a Neuro-Fuzzy Multi Swarm FastSLAM framework. In the proposed method, a Neuro-Fuzzy extended Kalman filter is used for landmark feature estimation, and a particle filter based on particle swarm optimization is presented to overcome the impoverishment of FastSLAM. Experimental results demonstrate the effectiveness of the proposed algorithm.

Index Terms— SLAM, Mobile robot, Particle filter, Particle Swarm Optimization, Neuro-Fuzzy, Extended Kalman Filter

1 INTRODUCTION

The simultaneous localization and mapping (SLAM) problem is a fundamental problem a robot must solve to perform autonomous tasks, such as exploration, in an unknown environment. It plays an important role in the autonomy of a mobile robot. The two key computational solutions to SLAM are the extended Kalman filter (EKF-SLAM) and the Rao-Blackwellized particle filter (FastSLAM). EKF-SLAM is the oldest and most popular approach to solving SLAM, and extensive research employing the EKF for the SLAM problem has been reported [1], [2], [3]. Applications of EKF-SLAM have been developed for indoor [2], [4], outdoor [5], underwater [6] and underground environments [7]. However, EKF-SLAM suffers from two major problems: computational complexity and data association [12]. Recently, the FastSLAM algorithm has been proposed as an alternative solution to the SLAM problem. FastSLAM is an instance of the Rao-Blackwellized particle filter, which partitions the SLAM posterior into a localization problem and an independent landmark position estimation problem. In FastSLAM, a particle filter is used for mobile robot position estimation and an EKF is used for feature location estimation. The key feature of FastSLAM, unlike EKF-SLAM, is that data association decisions can be made on a per-particle basis, so different particles can be associated with different landmarks. Each particle in FastSLAM may even have a different number of landmarks in its map. This characteristic gives FastSLAM the ability to deal with the multi-hypothesis data association problem.
The ability to simultaneously pursue multiple data associations makes FastSLAM significantly more robust to data association problems than algorithms based on incremental maximum likelihood data association such as EKF-SLAM. The other advantage of FastSLAM over EKF-SLAM arises from the fact that particle filters can cope with nonlinear and non-Gaussian robot motion models, whereas EKF approaches approximate such models with linearized functions. However, FastSLAM also has some drawbacks. In [13], [14], [22], [27], it has been noted that FastSLAM degenerates over time. This degeneracy is due to the fact that the particle set estimating the pose of the robot loses its diversity. One of the main reasons for losing particle diversity in FastSLAM is sample impoverishment, which occurs when the likelihood lies in the tail of the proposal distribution [25]. In addition, FastSLAM relies heavily on the number of particles used to approximate the distribution density. Researchers have tried to solve these problems in [16], [18], [25]. Another problem of FastSLAM relates to the design of the EKF for landmark position estimation. A significant difficulty in designing an EKF can often be traced to incomplete a priori knowledge of the process covariance matrix Q and the measurement noise covariance matrix R. In most applications these matrices are unknown, and an incorrect a priori knowledge of Q and R may seriously degrade the Kalman filter performance. To solve these problems, in this paper a particle filter based on Particle Swarm Optimization (PSO) is proposed to overcome the impoverishment of FastSLAM, and a Neuro-Fuzzy extended Kalman filter is used for landmark feature estimation.

————————————————
R. Havangi is with the Faculty of Electrical Engineering, K.N. Toosi University of Technology, Tehran, Iran.
M. Teshnehlab is with the Faculty of Electrical Engineering, K.N. Toosi University of Technology, Tehran, Iran.
M.A. Nekoui is with the Faculty of Electrical Engineering, K.N. Toosi University of Technology, Tehran, Iran.

2 THE SLAM PROBLEM

The goal of SLAM is to simultaneously localize a robot and determine an accurate map of the environment. To describe SLAM, let us denote the map by $\Theta$ and the pose of the robot at time $t$ by $s_t$. The map consists of a collection of features, each denoted by $\theta_n$, and the total number of stationary features is $N$. The SLAM problem can then be formulated in a Bayesian probabilistic framework by representing the robot's path and the map locations as a probability density function:

$$p(s^t, \Theta \mid z^t, u^t, n^t) \qquad (1)$$

In essence, it is necessary to estimate the posterior density of the map $\Theta$ and the path $s^t$ given the observations $z^t = \{z_1, \ldots, z_t\}$, the control inputs $u^t = \{u_1, \ldots, u_t\}$ and the data associations $n^t$. Here, the data association $n_t$ represents the mapping between map features in $\Theta$ and the observation $z_t$.
The SLAM posterior is computed recursively by applying Bayes filtering:

$$p(s^t, \Theta \mid z^t, u^t, n^t) = \eta\, p(z_t \mid s_t, \Theta, n_t)\, p(s^t, \Theta \mid z^{t-1}, u^t, n^{t-1}) \qquad (2)$$

with

$$p(s^t, \Theta \mid z^{t-1}, u^t, n^{t-1}) = \int p(s_t \mid s_{t-1}, u_t)\, p(s^{t-1}, \Theta \mid z^{t-1}, u^{t-1}, n^{t-1})\, ds_{t-1} \qquad (3)$$

where $p(s_t \mid s_{t-1}, u_t)$ is the motion model and $p(z_t \mid s_t, \Theta, n_t)$ is the measurement model. The standard Bayesian solution (2)-(3) can be extremely expensive for a high-dimensional map, and equation (3) cannot be computed in closed form. FastSLAM is an efficient algorithm for the SLAM problem that is based on the following factorization:

$$p(s^t, \Theta \mid z^t, u^t, n^t) = p(s^t \mid z^t, u^t, n^t)\, \prod_{n=1}^{N} p(\theta_n \mid s^t, z^t, u^t, n^t) \qquad (4)$$

where $s^t = \{s_1, \ldots, s_t\}$ is the robot path or trajectory. This factorization states that the SLAM problem can be decomposed into estimating a posterior over the robot path and $N$ landmark posteriors conditioned on the robot's path. The FastSLAM algorithm implements the path estimator $p(s^t \mid z^t, u^t, n^t)$ using a particle filter, while the landmark posteriors $p(\theta_n \mid s^t, z^t, u^t, n^t)$ are realized by EKFs, using a separate filter for each landmark. Fig. 1 shows the structure of the $M$ particles in FastSLAM. Each particle has the following form [22], [26]:

$$S_t^{[m]} = \left\langle s^{t,[m]},\ \mu_{1,t}^{[m]}, \Sigma_{1,t}^{[m]}, \ldots, \mu_{N,t}^{[m]}, \Sigma_{N,t}^{[m]} \right\rangle \qquad (5)$$

where $[m]$ is the index of the particle, $s^{t,[m]}$ is the $m$-th particle's path estimate, and $\mu_{n,t}^{[m]}$, $\Sigma_{n,t}^{[m]}$ are the mean and covariance of the Gaussian distribution representing the $n$-th feature location conditioned on the path $s^{t,[m]}$.

Fig. 1. The structure of particles in FastSLAM.

The posterior update of FastSLAM can be described by three steps (sketched in code below):
1. Sample a new robot pose.
2. Update the estimate of the observed landmark with the EKF.
3. Calculate the importance weights and resample.
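To make the structure of (5) and the three update steps concrete, the following Python sketch (ours, not the authors' implementation) stores each particle as a pose plus one small Gaussian per landmark and wires the steps together through placeholder callables; names such as `Particle`, `sample_pose`, `update_landmark` and `likelihood` are illustrative assumptions.

```python
import numpy as np
from dataclasses import dataclass, field

# One particle in the sense of Eq. (5): a pose hypothesis plus an
# independent Gaussian (mean, covariance) for every landmark seen so far.
@dataclass
class Particle:
    pose: np.ndarray                                    # s_t = [x, y, phi]
    landmark_mu: dict = field(default_factory=dict)     # n -> mean (2,)
    landmark_sigma: dict = field(default_factory=dict)  # n -> covariance (2,2)
    weight: float = 1.0

def fastslam_step(particles, u, z, n, sample_pose, update_landmark, likelihood):
    """One FastSLAM update for an observation z of landmark n (assumed already
    initialised in every particle): (1) sample a new pose, (2) update the
    observed landmark EKF, (3) weight and resample. The three callables would
    implement the motion model, Eqs. (16)-(21) and Eq. (31), respectively."""
    M = len(particles)
    for p in particles:
        p.pose = sample_pose(p.pose, u)                                     # step 1
        mu, sigma = p.landmark_mu[n], p.landmark_sigma[n]
        p.landmark_mu[n], p.landmark_sigma[n], Z = update_landmark(mu, sigma, p.pose, z)  # step 2
        p.weight *= likelihood(z, p.pose, p.landmark_mu[n], Z)              # step 3
    w = np.array([p.weight for p in particles])
    w /= w.sum()
    if 1.0 / np.sum(w ** 2) < 0.75 * M:          # N_eff test, cf. Eqs. (23)-(24)
        idx = np.random.choice(M, size=M, p=w)
        particles = [Particle(pose=particles[i].pose.copy(),
                              landmark_mu=dict(particles[i].landmark_mu),
                              landmark_sigma=dict(particles[i].landmark_sigma),
                              weight=1.0 / M) for i in idx]
    else:
        for p, wi in zip(particles, w):
            p.weight = wi
    return particles
```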
2.1 SAMPLING A NEW POSE

As mentioned in the previous section, FastSLAM estimates the path posterior $p(s^t \mid z^t, u^t, n^t)$ with a particle filter. The particle set $S_t$ is calculated from the set $S_{t-1}$ at time $t-1$, the control $u_t$, and the observation $z_t$. In general, it is not possible to draw samples directly from the SLAM posterior. Instead, the samples are drawn from a simpler distribution called the proposal distribution $q(s^{t,[m]} \mid z^t, u^t, n^t)$. The mismatch between the SLAM posterior and the proposal distribution is corrected using a technique called importance sampling: in regions where the target distribution is larger than the proposal distribution the samples are assigned larger weights, and in regions where the target distribution is smaller than the proposal distribution the samples are given lower weights. For FastSLAM, the importance weight of each particle is

$$w_t^{[m]} = \frac{\text{target distribution}}{\text{proposal distribution}} = \frac{p(s^{t,[m]} \mid z^t, u^t, n^t)}{q(s^{t,[m]} \mid z^t, u^t, n^t)} \qquad (6)$$

As a result, the importance weight of each particle is equal to the ratio of the SLAM posterior and the proposal distribution. The proposal $q(s^{t,[m]} \mid z^t, u^t, n^t)$ can be written in recursive form as

$$q(s^{t,[m]} \mid z^t, u^t, n^t) = q(s_t \mid s^{t-1,[m]}, z^t, u^t, n^t)\, q(s^{t-1,[m]} \mid z^{t-1}, u^{t-1}, n^{t-1}) \qquad (7)$$

Similarly, the posterior can also be given in recursive form using Bayes' rule:

$$p(s^{t,[m]} \mid z^t, u^t, n^t) = \eta\, p(z_t \mid s^{t,[m]}, z^{t-1}, u^t, n^t)\, p(s_t^{[m]} \mid s_{t-1}^{[m]}, u_t)\, p(s^{t-1,[m]} \mid z^{t-1}, u^{t-1}, n^{t-1}) \qquad (8)$$

where $\eta$ is a normalizing constant. From (7) and (8), a sequential importance weight of the $m$-th particle can be obtained as

$$w_t^{[m]} = w_{t-1}^{[m]}\, \frac{p(z_t \mid s^{t,[m]}, z^{t-1}, u^t, n^t)\, p(s_t^{[m]} \mid s_{t-1}^{[m]}, u_t)}{q(s_t \mid s^{t-1,[m]}, z^t, u^t, n^t)} \qquad (9)$$

The choice of the proposal distribution $q(s_t \mid s^{t-1,[m]}, z^t, u^t, n^t)$ is one of the most critical issues in the design of FastSLAM, for two reasons: samples are drawn from the proposal distribution, and the proposal distribution is used to evaluate the importance weights. The optimal importance density, which minimizes the variance of the importance weights, is (FastSLAM 2.0 [9])

$$q(s_t \mid s^{t-1,[m]}, z^t, u^t, n^t) = p(s_t \mid s^{t-1,[m]}, z^t, u^t, n^t) \qquad (10)$$

However, the optimal importance density can only be used in some special cases. The most popular suboptimal choice is the transition prior (FastSLAM 1.0 [11])

$$q(s_t \mid s^{t-1,[m]}, z^t, u^t, n^t) = p(s_t \mid u_t, s_{t-1}^{[m]}) \qquad (11)$$

In this paper, FastSLAM 1.0 is used because of its simple computation. Hence, by substituting (11) into (9), the weight update equation becomes

$$w_t^{[m]} = w_{t-1}^{[m]}\, p(z_t \mid s^{t,[m]}, z^{t-1}, u^t, n^t) \qquad (12)$$

Note that $w_t^{[m]}$ is updated incrementally while considering the full trajectory of the robot state $s^{t,[m]}$.
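As an illustration of the FastSLAM 1.0 proposal (11) and weight (12), the sketch below draws the new pose by perturbing the control with Gaussian noise and propagating a simple velocity/steer model (a stand-in for the vehicle model of Eq. (45) used later), and evaluates the weight with the Gaussian likelihood of Eq. (31). The wheel-base value, the time step and the function names are illustrative assumptions.

```python
import numpy as np

def sample_motion_model(pose, u, Q, dt=0.025):
    """FastSLAM 1.0 proposal, Eq. (11): draw s_t ~ p(s_t | u_t, s_{t-1}) by
    perturbing the control u = [V, gamma] with Gaussian noise of covariance Q
    and propagating a simple velocity/steer model."""
    x, y, phi = pose
    V, gamma = u + np.random.multivariate_normal(np.zeros(2), Q)
    B = 4.0                                   # wheel base [m], illustrative value
    x += V * dt * np.cos(phi + gamma)
    y += V * dt * np.sin(phi + gamma)
    phi += V * dt * np.sin(gamma) / B
    return np.array([x, y, phi])

def measurement_likelihood(z, z_hat, Z):
    """Eq. (12)/(31): the weight is the Gaussian likelihood of the innovation
    z - z_hat with covariance Z (z = [range, bearing])."""
    nu = z - z_hat
    nu[1] = np.arctan2(np.sin(nu[1]), np.cos(nu[1]))   # wrap the bearing residual
    det = np.linalg.det(2.0 * np.pi * Z)
    return np.exp(-0.5 * nu @ np.linalg.solve(Z, nu)) / np.sqrt(det)
```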
2.2 UPDATING THE OBSERVED LANDMARK ESTIMATE

FastSLAM represents the posterior landmark estimates $p(\theta_n \mid s^t, z^t, u^t, n^t)$ using low-dimensional EKFs. In fact, FastSLAM 1.0 updates the posterior over the landmark estimates, represented by the mean $\mu_{n,t-1}^{[m]}$ and the covariance $\Sigma_{n,t-1}^{[m]}$; the updated values $\mu_{n,t}^{[m]}$ and $\Sigma_{n,t}^{[m]}$ are then added to the temporary particle set $S_t$, along with the new pose. The update depends on whether or not landmark $n$ was observed at time $t$. For $n \neq n_t$, the posterior over the landmark remains unchanged [26]:

$$\left\langle \mu_{n,t}^{[m]}, \Sigma_{n,t}^{[m]} \right\rangle = \left\langle \mu_{n,t-1}^{[m]}, \Sigma_{n,t-1}^{[m]} \right\rangle \qquad (13)$$

For the observed feature $n = n_t$, the update follows Bayes' rule [26]:

$$p(\theta_{n_t} \mid s^t, z^t, u^t, n^t) = \eta\, p(z_t \mid \theta_{n_t}, s_t, n_t)\, p(\theta_{n_t} \mid s^{t-1}, z^{t-1}, u^{t-1}, n^{t-1}) \qquad (14)$$

The probability $p(\theta_{n_t} \mid s^{t-1}, z^{t-1}, u^{t-1}, n^{t-1})$ at time $t-1$ is represented by a Gaussian distribution with mean $\mu_{n_t,t-1}^{[m]}$ and covariance $\Sigma_{n_t,t-1}^{[m]}$. For the new estimate at time $t$ to also be Gaussian, FastSLAM linearizes the perceptual model $p(z_t \mid \theta_{n_t}, s_t, n_t)$ in the same way as the EKF. Specifically, FastSLAM approximates the measurement function $g$ by the following first-order Taylor expansion [26]:

$$g(\theta_{n_t}, s_t) \approx g(\mu_{n_t,t-1}^{[m]}, s_t) + G_{n_t}\left(\theta_{n_t} - \mu_{n_t,t-1}^{[m]}\right) = \hat{z}_t + G_{n_t}\left(\theta_{n_t} - \mu_{n_t,t-1}^{[m]}\right) \qquad (15)$$

Under this approximation, the posterior of landmark $n_t$ is indeed Gaussian. Its mean and covariance are obtained using the following measurement update:

$$\hat{z}_t = g(\mu_{n_t,t-1}^{[m]}, s_t^{[m]}) \qquad (16)$$

$$G_{n_t} = \nabla_{\theta_{n_t}} g(\theta_{n_t}, s_t)\big|_{s_t = s_t^{[m]};\ \theta_{n_t} = \mu_{n_t,t-1}^{[m]}} \qquad (17)$$

$$Z_{n_t,t} = G_{n_t}\, \Sigma_{n_t,t-1}^{[m]}\, G_{n_t}^{T} + R_t \qquad (18)$$

$$K_t = \Sigma_{n_t,t-1}^{[m]}\, G_{n_t}^{T}\, Z_{n_t,t}^{-1} \qquad (19)$$

$$\mu_{n_t,t}^{[m]} = \mu_{n_t,t-1}^{[m]} + K_t\,(z_t - \hat{z}_t) \qquad (20)$$

$$\Sigma_{n_t,t}^{[m]} = \left(I - K_t\, G_{n_t}\right)\, \Sigma_{n_t,t-1}^{[m]} \qquad (21)$$
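A direct transcription of the per-landmark EKF update (16)-(21) is sketched below, with the measurement function and its Jacobian passed in as callables (they would implement the range-bearing model of Eq. (46)); this is our sketch, not the authors' code.

```python
import numpy as np

def ekf_landmark_update(mu, Sigma, pose, z, R, h, H_of):
    """Per-landmark EKF measurement update, Eqs. (16)-(21).
    mu, Sigma : prior landmark mean (2,) and covariance (2,2)
    pose      : current particle pose s_t
    z         : actual measurement [range, bearing]
    R         : measurement noise covariance
    h         : callable g(mu, pose) -> predicted measurement, Eq. (16)
    H_of      : callable returning the Jacobian G of g w.r.t. the landmark, Eq. (17)
    Returns the updated (mu, Sigma) and the innovation covariance Z used for weighting."""
    z_hat = h(mu, pose)                                  # (16)
    G = H_of(mu, pose)                                   # (17)
    Z = G @ Sigma @ G.T + R                              # (18)
    K = Sigma @ G.T @ np.linalg.inv(Z)                   # (19)
    nu = z - z_hat
    nu[1] = np.arctan2(np.sin(nu[1]), np.cos(nu[1]))     # wrap bearing residual
    mu_new = mu + K @ nu                                 # (20)
    Sigma_new = (np.eye(len(mu)) - K @ G) @ Sigma        # (21)
    return mu_new, Sigma_new, Z
```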
2.3 RESAMPLING

Since the variance of the importance weights increases over time [23], [24], resampling plays a vital role in FastSLAM. In the resampling process, particles with low importance weights are eliminated and particles with high weights are multiplied. After resampling, all particle weights are reset to

$$w_t^{i} = \frac{1}{N} \qquad (22)$$

This enables FastSLAM to estimate an increasingly large environmental state without growing the number of particles. However, resampling can also delete good samples from the sample set; in the worst case, the filter diverges. Deciding when to resample is therefore a fundamental issue. Liu introduced the so-called effective number of particles $N_{eff}$ to estimate how well the current particle set represents the true posterior. This quantity is computed as

$$N_{eff} = \frac{1}{\sum_{i=1}^{N} \left(w^{(i)}\right)^{2}} \qquad (23)$$

where $w^{(i)}$ is the normalized weight of particle $i$. Resampling is performed whenever $N_{eff}$ falls below a pre-defined threshold $N_{tf}$. Here $N_{tf}$ is usually a constant, taken as

$$N_{tf} = \frac{3}{4} M \qquad (24)$$

where $M$ is the number of particles.

3 A MODIFIED FASTSLAM FRAMEWORK

In this section, a modified FastSLAM framework is presented. For this purpose, a particle filter based on PSO is presented to overcome the impoverishment of the particle filter, and a Neuro-Fuzzy extended Kalman filter (EKF) for landmark feature estimation is proposed.

3.1 A MODIFIED SAMPLING OF POSE

Because the proposal function is suboptimal, there are two serious problems in FastSLAM. One problem is sample impoverishment, which occurs when the likelihood $p(z_t \mid s^{t,[m]}, z^{t-1}, u^t, n^t)$ is very narrow or lies in the tail of the proposal distribution $p(s_t \mid u_t, s_{t-1}^{[m]})$. The prior distribution is an effective proposal when the observation accuracy is low, but not when the prior distribution is much broader than the likelihood (as in Fig. 2). In that case, only a few particles receive significant importance weights in the update step.

Fig. 2. Prior and likelihood.

This means that a large computational effort is devoted to updating particles with negligible weight. The sample set then contains only a few dissimilar particles, and sometimes it collapses to a single sample after several iterations. As a result, important samples may be lost. Another problem of FastSLAM is that the estimate of the robot pose depends on the number of particles. If the number of particles is small, there may not be any particles distributed around the true pose of the robot, and after several iterations it becomes very difficult for the particles to converge to the true pose. For the standard particle filter, one remedy is to increase the number of particles, but this makes the computational complexity unacceptable. To solve these problems of FastSLAM, particle swarm optimization is used to optimize the sampling process of FastSLAM. In effect, PSO moves the particles towards the high-likelihood area before the sampling process.

3.1.1 PARTICLE SWARM OPTIMIZATION

James Kennedy and Russell C. Eberhart [17] originally proposed the PSO algorithm for optimization. PSO is a population-based search algorithm inspired by the social behavior of bird flocking. PSO is initialized with a group of random particles and then computes the fitness of each one; it finds the best solution in the problem space through repeated iterations. In each iteration, each particle keeps track of the coordinates associated with the best solution it has achieved so far (pbest) and the coordinates associated with the best solution achieved by any particle in its neighborhood (gbest). Supposing that the search-space dimension is $D$ and the number of particles is $N$, the position and velocity of the $i$-th particle are represented as $x_i = (x_{i1}, \ldots, x_{iD})$ and $v_i = (v_{i1}, \ldots, v_{iD})$, respectively. Let $P_{bi} = (p_{i1}, \ldots, p_{iD})$ denote the best position that particle $i$ has achieved so far, and $P_g$ the best of the $P_{bi}$ over all $i = 1, \ldots, N$. The PSO algorithm is performed with the following equations:

$$x_i(t) = x_i(t-1) + v_i(t) \qquad (25)$$

$$v_i(t) = w\, v_i(t-1) + c_1 r_1\,\left(P_{bi} - x_i(t-1)\right) + c_2 r_2\,\left(P_g - x_i(t-1)\right) \qquad (26)$$

where $t$ is the iteration number and $c_1$, $c_2$ are the learning factors, usually $c_1 = c_2 = 2$. $r_1$, $r_2$ are random numbers in the interval $(0,1)$, and $w$ is the inertia factor; the bigger the value of $w$, the wider the search range.
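The standard update (25)-(26) in vectorized form, together with a toy minimisation run, is sketched below; the learning factors follow the text ($c_1 = c_2 = 2$), while the inertia weight $w = 0.7$ and the swarm size are illustrative choices of ours.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0):
    """One iteration of Eqs. (25)-(26) for N particles in D dimensions.
    x, v  : (N, D) positions and velocities
    pbest : (N, D) best position found by each particle so far
    gbest : (D,)  best position found by the whole swarm"""
    N, D = x.shape
    r1 = np.random.rand(N, D)
    r2 = np.random.rand(N, D)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # (26)
    x = x + v                                                   # (25)
    return x, v

# toy usage: minimise f(x) = ||x||^2 with 30 particles in 3-D
f = lambda X: np.sum(X ** 2, axis=1)
X = np.random.uniform(-5, 5, (30, 3))
V = np.zeros_like(X)
P = X.copy()
g = X[np.argmin(f(X))]
for _ in range(100):
    X, V = pso_step(X, V, P, g)
    better = f(X) < f(P)
    P[better] = X[better]
    g = P[np.argmin(f(P))]
```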
3.1.2 A PARTICLE FILTER BASED ON PSO

As discussed in the previous section, impoverishment occurs when the number of particles in the high-likelihood area is low. In recent years, several researchers have studied this problem [16], [17], [18], [19]. The main disadvantage of these approaches is that they are more difficult to implement than FastSLAM and are more mathematically involved. Here, we use a simple idea to solve this problem: before the sampling process, the particles are encouraged to be at the right place (in the region of high likelihood) by incorporating the current observation. A simple and effective method for this purpose is PSO. By using PSO, we can move all the particles towards the region of high likelihood before the sampling process. For this purpose, we must first define a fitness function. The fitness function must take the newest observation into account, and particles with high likelihood must have small fitness values. We therefore consider the following fitness function:

$$f(x_t) = \left(z_t - \hat{z}_{n_t,t}\right)^{T}\, Z_{n_t,t}^{-1}\, \left(z_t - \hat{z}_{n_t,t}\right) \qquad (27)$$

Here, $Z_{n_t,t}$ is the innovation covariance matrix defined in (18), $\hat{z}_{n_t,t}$ is the predicted measurement and $z_t$ is the actual measurement. The particles are moved so that the fitness function is minimized, by tuning the positions and velocities of the PSO algorithm. The standard PSO algorithm has some parameters that must be specified before use. Most approaches use a uniform probability distribution to generate the random numbers; however, with a uniform distribution it is difficult to obtain fine tuning of the solution and to escape from local minima. Hence, we use velocity updates based on the Gaussian distribution. In this case, the learning factors $c_1$ and $c_2$ no longer need to be specified. Furthermore, in the Gaussian PSO the inertia factor is set to zero and an upper bound $v_{max}$ on the velocity is no longer necessary [21]. The only parameter to be specified by the user is the number of particles. The initial values of the particle filter are selected as the initial population of PSO, and the initial velocities of PSO are set to zero. The PSO algorithm updates the velocity and position of each particle with the following equations [21]:

$$x_i(t) = x_i(t-1) + v_i(t) \qquad (28)$$

$$v_i(t) = randn\,\left(P_{pbest} - x_i(t-1)\right) + randn\,\left(P_{gbest} - x_i(t-1)\right) \qquad (29)$$

where $randn$ denotes a random number drawn from the Gaussian distribution. PSO moves all particles towards the high-likelihood region, which is the global best of PSO. When the best fitness value reaches a certain threshold, the optimized sampling process is stopped. With this set of particles, the sampling process is then performed on the basis of the proposal distribution. The corresponding weights are

$$w_t^{i} = p(z_t \mid s^{t,[m]}, z^{t-1}, u^t, n^t) \qquad (30)$$

where

$$p(z_t \mid s^{t,[m]}, z^{t-1}, u^t, n^t) = \frac{1}{\sqrt{\left|2\pi Z_{n_t,t}\right|}} \exp\left\{-\frac{1}{2}\left(z_t - \hat{z}_{n_t,t}\right)^{T} Z_{n_t,t}^{-1} \left(z_t - \hat{z}_{n_t,t}\right)\right\} \qquad (31)$$

The flowchart of the proposed pose-estimation algorithm is shown in Fig. 3.

Fig. 3. Modified particle filter for FastSLAM (pose estimation).

The algorithm can be described by the following steps (a sketch in code is given after the list):
Step 1. Generate the initial population: randomly generate the initial pose of the mobile robot; initialize the particle velocities, positions and fitness values; initialize pbest and gbest.
Step 2. Improve the sampling by PSO.
Step 3. Sampling: obtain samples from the prior.
Step 4. Update: on receipt of the measurement $z_t$, evaluate the likelihood of each prior sample and obtain a normalized weight for each sample.
Step 5. Resampling: multiply/suppress samples with high/low importance weights.
Step 6. Prediction: pass each pose through the system model to obtain samples from the prior.
Step 7. Increase the time $t$ and iterate from Step 2 to Step 6.
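The sketch below illustrates the PSO-assisted sampling: `fit` evaluates the fitness (27) for a candidate pose (smaller is better), and the velocity update follows the Gaussian form (28)-(29) of [21]. Taking the absolute value of the Gaussian step sizes, the iteration limit and the stopping threshold are our assumptions, not values stated in the paper.

```python
import numpy as np

def gaussian_pso_refine(poses, fit, n_iter=10, eps=1e-3):
    """Move particle poses towards the high-likelihood region before sampling.
    poses : (N, 3) array of particle poses [x, y, phi]
    fit   : callable mapping one pose to the fitness of Eq. (27); smaller is better."""
    x = poses.copy()
    v = np.zeros_like(x)
    cost = np.array([fit(p) for p in x])
    pbest, pcost = x.copy(), cost.copy()
    g = x[np.argmin(cost)]
    for _ in range(n_iter):
        r1 = np.abs(np.random.randn(*x.shape))     # Gaussian step sizes, cf. (29)
        r2 = np.abs(np.random.randn(*x.shape))
        v = r1 * (pbest - x) + r2 * (g - x)        # (29): no inertia, no c1/c2
        x = x + v                                  # (28)
        cost = np.array([fit(p) for p in x])
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]
        g = pbest[np.argmin(pcost)]
        if pcost.min() < eps:                      # stop once near the likelihood peak
            break
    return x
```

The refined poses then replace the particle poses, and the ordinary sampling, weighting (30)-(31) and resampling of Section 2 proceed from there.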
3.2 FEATURE UPDATE BY ADAPTIVE NEURO-FUZZY INFERENCE SYSTEM (ANFIS)

As stated earlier, FastSLAM represents the conditional landmark estimates $p(\theta_n \mid s^t, z^t, u^t, n^t)$ by EKFs. The traditional EKF assumes complete a priori knowledge of the process and measurement noise statistics, i.e., the matrices Q and R. However, in most applications these matrices are unknown. An incorrect a priori knowledge of Q and R may lead to performance degradation [15] and can even lead to practical divergence [16]. One efficient way to overcome this weakness is to use an adaptive algorithm. Two major approaches that have been proposed for the adaptive EKF are multiple-model adaptive estimation (MMAE) and innovation-based adaptive estimation (IAE) [15]. Although the implementation of these approaches is different, they both share the same concept of utilizing new statistical information obtained from the residual (innovation) sequence. Since landmarks are static, we assume that the process noise covariance Q is completely known, and we derive an algorithm to estimate the measurement noise covariance R. The adaptation adjusts the measurement noise covariance matrix R by means of a Neuro-Fuzzy system. To this end, an innovation-based adaptive estimation (IAE) algorithm is derived using the technique known as covariance matching. The basic idea behind this technique is to make the actual value of the covariance of the residual consistent with its theoretical value. The innovation sequence $r_t = z_t - \hat{z}_{n_t,t}$ has a theoretical covariance obtained from the EKF algorithm:

$$S_k = G_{n_t}\, \Sigma_{n_t,t-1}^{[m]}\, G_{n_t}^{T} + R_t \qquad (32)$$

The actual residual covariance $\hat{C}_k$ can be approximated by its sample covariance, averaged inside a moving window of size $N$:

$$\hat{C}_k = \frac{1}{N} \sum_{i=i_0}^{k} r_i\, r_i^{T} \qquad (33)$$

where $i_0 = k - N + 1$ is the first sample inside the estimation window. If the actual covariance $\hat{C}_k$ shows discrepancies with its theoretical value, the diagonal elements of $R_k$ can be adjusted according to the size of this discrepancy, with the objective of correcting the mismatch as far as possible. The size of the discrepancy is given by a variable called the degree of mismatch ($DOM_k$), defined as

$$DOM_k = S_k - \hat{C}_k \qquad (34)$$

The basic idea used to adapt the matrix $R_k$ is as follows: from equation (32), an increment in $R_k$ increases $S_k$, and vice versa. Thus, $R_k$ can be used to vary $S_k$ in accordance with the value of $DOM_k$ in order to reduce the discrepancy between $S_k$ and $\hat{C}_k$. The adaptation of the $(i,i)$-th element of $R_k$ is made in accordance with the $(i,i)$-th element of $DOM_k$. The general rules of adaptation are as follows:
If $DOM_k(i,i) \cong 0$, then maintain $S_k$ unchanged.
If $DOM_k(i,i) > 0$, then decrease $S_k$.
If $DOM_k(i,i) < 0$, then increase $S_k$.
In this paper, the IAE adaptive scheme of the EKF is coupled with an adaptive Neuro-Fuzzy inference system (ANFIS) to adjust R. Since $DOM_k$ and R are of size two, two ANFIS systems are used to adjust the EKF. The structure of one of these systems is described in the next section; a simple covariance-matching sketch is given below.
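To make the covariance-matching quantities concrete, this sketch maintains the moving window of Eq. (33) and returns the DOM and DeltaDOM of Eqs. (34)-(35); the crisp adjustment function at the end is only a stand-in for the ANFIS of the next section, and the window size and gain are illustrative values of ours.

```python
import numpy as np
from collections import deque

class CovarianceMatcher:
    """Innovation-based (IAE) covariance matching, Eqs. (32)-(35).
    Keeps a moving window of innovations, compares their sample covariance
    C_hat with the theoretical innovation covariance S, and returns the
    degree of mismatch DOM (and its change) used to adjust R."""
    def __init__(self, window=20):
        self.residuals = deque(maxlen=window)
        self.prev_dom = None

    def update(self, r, S):
        """r : innovation vector z - z_hat;  S : theoretical covariance, Eq. (32)."""
        self.residuals.append(np.outer(r, r))
        C_hat = np.mean(self.residuals, axis=0)          # (33)
        dom = S - C_hat                                   # (34)
        delta = dom - self.prev_dom if self.prev_dom is not None else np.zeros_like(dom)
        self.prev_dom = dom                               # memory for (35)
        return dom, delta

def crisp_adjust_R(R, dom, gain=0.1):
    """A crude, non-fuzzy stand-in for the rules of Section 3.2: decrease the
    (i,i) element of R when DOM(i,i) > 0 and increase it when DOM(i,i) < 0."""
    return R - gain * np.diag(np.diag(dom))
```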
3.2.1 THE ANFIS ARCHITECTURE

The ANFIS model is a two-input, single-output system. The inputs of the ANFIS are $DOM_k$ and $DeltaDOM_k$, where $DeltaDOM_k$ is defined as

$$DeltaDOM_k = DOM_k - DOM_{k-1} \qquad (35)$$

Fig. 4 and Fig. 5 present the membership functions for $DOM_k(i,i)$ and $DeltaDOM_k$.

Fig. 4. Membership functions of $DOM_k$.
Fig. 5. Membership functions of $DeltaDOM_k$.
Fig. 6. Membership functions of $\Delta R_k$.

Finally, the adjustment of $R_k$ is performed using the relation $R_k = R_k + \Delta R_k$, where $\Delta R_k$ is the ANFIS output, whose membership functions are shown in Fig. 6. The ANFIS is a five-layer network, as shown in Fig. 7. Let $u_i^l$ and $o_i^l$ denote the input to and the output from the $i$-th node of the $l$-th layer, respectively. To provide a clear understanding of the ANFIS, the functions of layers 1 to 5 are defined as follows.

Fig. 7. The Neuro-Fuzzy system for the feature update.
Table 1. Rule table.

Layer 1: The nodes in this layer only transmit the input values to the next layer directly, i.e.

$$o_i^1 = u_i^1 \qquad (36)$$

Layer 2: Each node in this layer performs a membership function. Here, each input variable is fuzzified employing five membership functions (MFs). The output of the $ij$-th MF is given by

$$o_{ij}^2 = \exp\left(-\frac{\left(u_{ij}^2 - m_{ij}\right)^2}{\sigma_{ij}^2}\right) \qquad (37)$$

where $m_{ij}$ and $\sigma_{ij}$ are, respectively, the mean and width of the Gaussian membership function, and the subscript $ij$ indicates the $j$-th term of the $i$-th input. Each node in this layer has two adjustable parameters, $m_{ij}$ and $\sigma_{ij}$.

Layer 3: The nodes in this layer are rule nodes. Each rule node performs a fuzzy AND operation (product inference) to calculate its firing strength:

$$o_l^3 = \prod_i u_i^3 \qquad (38)$$

Layer 4: The nodes in this layer normalize the firing strengths from layer 3:

$$o_l^4 = \frac{u_l^4}{\sum_{l=1}^{25} u_l^4} \qquad (39)$$

Layer 5: This layer is the output layer. The link weights in this layer represent the singleton consequents $w_l$ of the output variable. The output node integrates all the normalized firing strengths from layer 4 with the corresponding singleton consequents and acts as a defuzzifier:

$$\Delta R = o^5 = \sum_{l=1}^{25} u_l^5\, w_l \qquad (40)$$

The fuzzy rules that complete the ANFIS rule base are listed in Table 1.

3.2.2 LEARNING ALGORITHM

The aim of the training algorithm is to adjust the network weights through the minimization of the following cost function:

$$E = \frac{1}{2} e_k^2 \qquad (41)$$

where

$$e_k = S_k - \hat{C}_k \qquad (42)$$

By using the back-propagation (BP) learning algorithm, the weighting vector of the ANFIS is adjusted such that the error defined in (41) becomes less than a desired threshold after a given number of training cycles. The well-known BP algorithm may be written as

$$W(k+1) = W(k) + \Delta W(k) = W(k) - \eta\, \frac{\partial E(k)}{\partial W(k)} \qquad (43)$$

where $\eta$ and $W$ represent the learning rate and the tuning parameters of the ANFIS, respectively. Let $W = [m, \sigma, w]^{T}$ be the weighting vector of the ANFIS. The gradient of $E$ with respect to an arbitrary weighting vector $W$ is

$$\frac{\partial E(k)}{\partial W(k)} = e(k)\, \frac{\partial \Delta R(k)}{\partial W(k)} \qquad (44)$$

By recursive application of the chain rule, the error term for each layer is first calculated, and then the parameters in the corresponding layers are adjusted.
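For illustration, the sketch below runs a forward pass through the five layers (36)-(40) for one of the two ANFIS blocks, assuming five Gaussian MFs per input and 25 product rules; the parameter values in the example call are arbitrary and would in practice be learned with the BP rule (43).

```python
import numpy as np

def anfis_forward(dom, ddom, m, sigma, w):
    """Forward pass of the five-layer ANFIS of Fig. 7, Eqs. (36)-(40).
    dom, ddom : the two crisp inputs DOM_k and DeltaDOM_k (scalars)
    m, sigma  : (2, 5) means and widths of the Gaussian MFs, Eq. (37)
    w         : (25,) singleton consequents of layer 5
    Returns the crisp output Delta_R of Eq. (40)."""
    x = np.array([dom, ddom])                                  # layer 1: pass-through
    mu = np.exp(-((x[:, None] - m) ** 2) / (sigma ** 2))       # layer 2: (37), shape (2, 5)
    firing = np.outer(mu[0], mu[1]).ravel()                    # layer 3: (38), 25 rules
    norm = firing / firing.sum()                               # layer 4: (39)
    return float(norm @ w)                                     # layer 5: (40)

# illustrative call with arbitrary (untrained) parameters
m = np.tile(np.linspace(-1.0, 1.0, 5), (2, 1))
sigma = np.full((2, 5), 0.5)
w = np.linspace(-0.2, 0.2, 25)
delta_R = anfis_forward(0.3, -0.05, m, sigma, w)
```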
4 IMPLEMENTATION AND RESULTS

Simulation experiments have been carried out to evaluate the performance of the proposed approach in comparison with the classical method. The proposed solution to the SLAM problem has been tested in the benchmark environment, with varied number and position of the landmarks, available in [20]. Fig. 8 shows the robot trajectory and the landmark locations; the star points (*) depict the locations of the landmarks, which are known and stationary in the environment. The state of the robot is modeled as $(x_v, y_v, \phi_v)$, where $(x_v, y_v)$ are the Cartesian coordinates and $\phi_v$ is the orientation with respect to the global frame.

Fig. 8. The experiment environment. The star points "*" denote the landmark positions and the blue line is the path of the robot (axes X(m), Y(m)).

The kinematic equations of the mobile robot have the following form:

$$\begin{bmatrix} \dot{x}_v \\ \dot{y}_v \\ \dot{\phi}_v \end{bmatrix} = \begin{bmatrix} (V + v_V)\cos(\phi_v + \gamma + v_\gamma) \\ (V + v_V)\sin(\phi_v + \gamma + v_\gamma) \\ \dfrac{V + v_V}{B}\sin(\gamma + v_\gamma) \end{bmatrix} \qquad (45)$$

where $B$ is the wheel base of the vehicle and $u = [V\ \ \gamma]^{T}$ is the control input at time $t$, consisting of a velocity input $V$ and a steer input $\gamma$. The process noise $v = [v_V\ \ v_\gamma]^{T}$ is assumed to be Gaussian. The vehicle is assumed to be equipped with a range-bearing sensor that provides a measurement of the range $r_i$ and bearing $\theta_i$ to an observed feature $i$ relative to the vehicle. The observation $z_i$ of feature $i$ in the map can be expressed as

$$z_i = \begin{bmatrix} r_i \\ \theta_i \end{bmatrix} = \begin{bmatrix} \sqrt{(x_i - x_v)^2 + (y_i - y_v)^2} \\ \tan^{-1}\dfrac{y_i - y_v}{x_i - x_v} - \phi_v \end{bmatrix} + w \qquad (46)$$

where $(x_i, y_i)$ is the landmark position in the map and $w = [w_r\ \ w_\theta]^{T}$ is the observation noise. The initial position of the robot is assumed to be $x_0 = 0$. The robot moves at a speed of 3 m/s with a maximum steering angle of 30 degrees. The robot has a 4-meter wheel base and is equipped with a range-bearing sensor with a maximum range of 20 meters and a 180-degree frontal field of view. The control noise is $\sigma_V = 0.3$ m/s and $\sigma_\gamma = 3^{\circ}$. The control frequency is 40 Hz and observation scans are obtained at 5 Hz. The measurement noise is 0.2 m in range and $1^{\circ}$ in bearing. Data association is assumed to be known.
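For reference, a discrete-time reading of the vehicle model (45) and the range-bearing model (46) is sketched below, together with the measurement Jacobian with respect to the landmark needed in Eq. (17); the Euler discretisation with step dt is our assumption.

```python
import numpy as np

def motion_model(state, u, dt, B=4.0):
    """Noise-free, discretised form of the kinematic model (45).
    state = [x_v, y_v, phi_v], u = [V, gamma] (velocity, steer), B = wheel base."""
    x, y, phi = state
    V, g = u
    return np.array([x + V * dt * np.cos(phi + g),
                     y + V * dt * np.sin(phi + g),
                     phi + V * dt * np.sin(g) / B])

def observe(state, landmark):
    """Noise-free range-bearing model of Eq. (46) for a landmark (x_i, y_i)."""
    dx, dy = landmark[0] - state[0], landmark[1] - state[1]
    r = np.hypot(dx, dy)
    theta = np.arctan2(dy, dx) - state[2]
    return np.array([r, np.arctan2(np.sin(theta), np.cos(theta))])

def observe_jacobian_landmark(state, landmark):
    """Jacobian G of Eq. (46) with respect to the landmark position,
    as required by the per-landmark EKF update of Eq. (17)."""
    dx, dy = landmark[0] - state[0], landmark[1] - state[1]
    q = dx * dx + dy * dy
    r = np.sqrt(q)
    return np.array([[dx / r,  dy / r],
                     [-dy / q, dx / q]])
```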
First, the performance of the two algorithms is compared by keeping the noise levels (process noise and measurement noise) fixed and varying the number of particles. Fig. 9 to Fig. 16 show the performance of the two algorithms for 20 and 5 particles. As observed, the modified FastSLAM is more accurate than FastSLAM. Moreover, the performance of the modified FastSLAM does not depend on the number of particles, while the performance of FastSLAM depends on it strongly. For very low numbers of particles FastSLAM diverges, while the modified FastSLAM remains completely robust. This is because the PSO step in the modified FastSLAM places the particles in the high-likelihood region. In addition, we observed that the modified FastSLAM requires fewer particles than FastSLAM to achieve a given level of accuracy in the state estimates.

Fig. 9. Estimated robot path and landmarks using the modified FastSLAM, together with the true robot path and true landmarks. The dotted line "..." is the estimated path and the "+" marks are the estimated landmark positions. In this experiment the control noise is $\sigma_V = 0.3$ m/s, $\sigma_\gamma = 3^{\circ}$, the measurement noise is $\sigma_r = 0.2$ m, $\sigma_\theta = 1^{\circ}$, and the number of particles is 20 (n = 20). The results are obtained over 50 Monte Carlo runs.

Fig. 10. Estimated pose error with $2\sigma$ bound using the modified FastSLAM, under the same conditions as Fig. 9 (n = 20, 50 Monte Carlo runs).

Fig. 11. Estimated robot path and landmarks using FastSLAM, together with the true robot path and true landmarks. The dotted line "..." is the estimated path and the "+" marks are the estimated landmark positions. In this experiment the control noise is $\sigma_V = 0.3$ m/s, $\sigma_\gamma = 3^{\circ}$, the measurement noise is $\sigma_r = 0.2$ m, $\sigma_\theta = 1^{\circ}$, and the number of particles is 20 (n = 20). The results are obtained over 50 Monte Carlo runs.

Fig. 12. Estimated pose error with $2\sigma$ bound using the modified FastSLAM, under the same conditions as Fig. 11 (n = 20, 50 Monte Carlo runs).

Fig. 13. Estimated robot path and landmarks using FastSLAM, together with the true robot path and true landmarks. The dotted line "..." is the estimated path and the "+" marks are the estimated landmark positions. In this experiment the control noise is $\sigma_V = 0.3$ m/s, $\sigma_\gamma = 3^{\circ}$, the measurement noise is $\sigma_r = 0.2$ m, $\sigma_\theta = 1^{\circ}$, and the number of particles is 5 (n = 5). The results are obtained over 50 Monte Carlo runs.

Fig. 14. Estimated pose error with $2\sigma$ bound using the modified FastSLAM, under the same conditions as Fig. 13 (n = 5, 50 Monte Carlo runs).

Fig. 15. Estimated robot path and landmarks using FastSLAM, together with the true robot path and true landmarks, under the same conditions as Fig. 13 (n = 5, 50 Monte Carlo runs).

Fig. 16. Estimated pose error with $2\sigma$ bound using the modified FastSLAM, under the same conditions as Fig. 13 (n = 5, 50 Monte Carlo runs).

Now, we compare the performance of the two algorithms while varying the level of measurement noise, taking $\sigma_r = 0.01$ m and $\sigma_\theta = 0.1^{\circ}$, and fixing the control noise at $\sigma_V = 0.3$ m/s, $\sigma_\gamma = 3^{\circ}$. All experiments in this case are run with 5 particles. From Fig. 17 to Fig. 20 we observe that the proposed method outperforms classical FastSLAM. This is because sample impoverishment occurs in FastSLAM: the proposal distribution is broader than the likelihood, so the weights of most of the particles are insignificant when their likelihoods are evaluated. As a result, the estimation accuracy of FastSLAM decreases when the robot's motion is noisier and the range-bearing sensor is more accurate. In this situation, the performance of FastSLAM degrades considerably while the performance of the modified FastSLAM remains almost unchanged. For very low values of measurement error FastSLAM clearly begins to diverge, while the proposed algorithm is robust.

Fig. 17. Estimated robot path and landmarks using the modified FastSLAM, together with the true robot path and true landmarks. The dotted line "..." is the estimated path and the "+" marks are the estimated landmark positions. In this experiment the control noise is $\sigma_V = 0.3$ m/s, $\sigma_\gamma = 3^{\circ}$, the measurement noise is $\sigma_r = 0.01$ m, $\sigma_\theta = 0.1^{\circ}$, and the number of particles is 5 (n = 5). The results are obtained over 50 Monte Carlo runs.

Fig. 18. Estimated pose error with $2\sigma$ bound using the modified FastSLAM, under the same conditions as Fig. 17 (n = 5, 50 Monte Carlo runs).
Fig. 19. Estimated robot path and landmarks using FastSLAM, together with the true robot path and true landmarks. The dotted line "..." is the estimated path and the "+" marks are the estimated landmark positions. In this experiment the control noise is $\sigma_V = 0.3$ m/s, $\sigma_\gamma = 3^{\circ}$, the measurement noise is $\sigma_r = 0.01$ m, $\sigma_\theta = 0.1^{\circ}$, and the number of particles is 5 (n = 5). The results are obtained over 50 Monte Carlo runs.

Fig. 20. Estimated pose error with $2\sigma$ bound using the modified FastSLAM, under the same conditions as Fig. 19 (n = 5, 50 Monte Carlo runs).

Finally, we study the effect of the designed ANFIS on the performance of FastSLAM. For this purpose, we consider the situation where the measurement noise is wrongly assumed to be $\sigma_r = 0.5$, $\sigma_\theta = 3.0^{\circ}$, while the control noise is correctly assumed to be $\sigma_V = 0.3$ m/s, $\sigma_\gamma = 3^{\circ}$. The performance of the modified FastSLAM is compared with classical FastSLAM, whose measurement covariance matrix R is kept static throughout the experiment. The proposed algorithm starts with wrongly known statistics, then adapts the $R_k$ matrix through the ANFIS and attempts to minimize the mismatch between the theoretical and actual values of the innovation covariance. The free parameters of the ANFIS are learned automatically during training by the steepest-descent (BP) algorithm of Section 3.2.2. Fig. 21-22 and Fig. 24-25 show the comparison between the proposed algorithm and FastSLAM when the particle number is 20. It can be seen clearly that the results of the proposed algorithm are better than those of classical FastSLAM. This is because the modified FastSLAM adaptively tunes the measurement covariance matrix R, while the measurement covariance matrix R in standard FastSLAM is kept fixed over time, as shown in Fig. 23 and Fig. 26. Fig. 23 and Fig. 26 also show that the estimated measurement covariance matrix R converges to the actual covariance matrix R in the proposed method.

Fig. 21. Estimated robot path and landmarks using the modified FastSLAM, together with the true robot path and true landmarks. The dotted line "..." is the estimated path and the "+" marks are the estimated landmark positions. In this experiment the measurement noise is wrongly assumed to be $\sigma_r = 0.5$, $\sigma_\theta = 3.0^{\circ}$ and the control noise is correctly assumed to be $\sigma_V = 0.3$ m/s, $\sigma_\gamma = 3^{\circ}$. The number of particles is 5 (n = 5) and the results are obtained over 50 Monte Carlo runs.

Fig. 22. Estimated pose error with $2\sigma$ bound using the modified FastSLAM, under the same conditions as Fig. 21 (n = 5, 50 Monte Carlo runs).

Fig. 23. The modified FastSLAM (n = 5): adaptation of the measurement noise covariance R.

Fig. 24. Estimated robot path and landmarks using FastSLAM, together with the true robot path and true landmarks, under the same conditions as Fig. 21 (wrongly assumed measurement noise, n = 5, 50 Monte Carlo runs).

Fig. 25. Estimated pose error with $2\sigma$ bound using FastSLAM. In this experiment the measurement noise is wrongly assumed to be $\sigma_r = 0.5$, $\sigma_\theta = 3.0^{\circ}$ and the control noise is correctly assumed to be $\sigma_V = 0.3$ m/s, $\sigma_\gamma = 3^{\circ}$. The number of particles is 5 (n = 5) and the results are obtained over 50 Monte Carlo runs.
Fig. 26. FastSLAM (n = 5): the measurement noise covariance R is kept fixed over time.

5 CONCLUSION

In this paper, two problems of FastSLAM are addressed. The first problem is that FastSLAM degenerates over time due to the loss of particle diversity. One of the main reasons for losing particle diversity is sample impoverishment, which occurs when the likelihood lies in the tail of the proposal distribution; in this case, most of the particle weights are insignificant. The second problem relates to the design of the EKF for landmark position estimation. A significant difficulty in designing the extended Kalman filter can often be traced to incomplete a priori knowledge of the process covariance matrix Q and the measurement noise covariance matrix R, and an incorrect a priori knowledge of Q and R may seriously degrade the Kalman filter performance. This paper presents a modified FastSLAM framework based on soft computing. In the proposed method, a Neuro-Fuzzy extended Kalman filter is used for landmark feature estimation, and a particle filter based on particle swarm optimization is presented to overcome the impoverishment of FastSLAM. Experimental results confirm the effectiveness of the proposed algorithm. The main advantage of the proposed method is that it is more consistent than classical FastSLAM, because the theoretical value of the innovation covariance matches its actual value. Also, when the motion model is noisier than the measurements, the proposed method outperforms the standard method.

REFERENCES
[1] M. W. M. G. Dissanayake, P. Newman, S. Clark, and H. F. Durrant-Whyte, "A solution to the simultaneous localization and map building (SLAM) problem," IEEE Trans. Robot. Automat., vol. 17, no. 3, pp. 229–241, Jun. 2001.
[2] S. Thrun, D. Fox, and W. Burgard, "A probabilistic approach to concurrent mapping and localization for mobile robots," Mach. Learn., vol. 31, pp. 29–53, 1998.
[3] R. Smith and P. Cheeseman, "On the representation and estimation of spatial uncertainty," Int. J. Robot. Res., vol. 5, no. 4, pp. 56–68, 1986.
[4] M. Bosse, J. Leonard, and S. Teller, "Large-scale CML using a network of multiple local maps," in Workshop Notes of the ICRA Workshop on Concurrent Mapping and Localization for Autonomous Mobile Robots (W4), Washington, D.C., 2002.
[5] T. Bailey, "Mobile robot localization and mapping in extensive outdoor environments," Ph.D. dissertation, Univ. Sydney, Sydney, NSW, Australia, 2002.
[6] S. Williams, G. Dissanayake, and H. F. Durrant-Whyte, "Towards terrain-aided navigation for underwater robotics," Adv. Robot., vol. 15, no. 5, pp. 533–550, 2001.
[7] S. Thrun, D. Hähnel, D. Ferguson, M. Montemerlo, R. Triebel, W. Burgard, C. Baker, Z. Omohundro, S. Thayer, and W. Whittaker, "A system for volumetric robotic mapping of abandoned mines," in Proc. IEEE Int. Conf. Robot. Automat., Taipei, Taiwan, May 2003, pp. 4270–4275.
[8] A. Eliazar and R. Parr, "DP-SLAM: Fast, robust simultaneous localization and mapping without predetermined landmarks," in Proc. Int. Joint Conf. on Artificial Intelligence, 2003.
[9] M. Montemerlo, S. Thrun, D. Koller, and B. Wegbreit, "FastSLAM 2.0: An improved particle filtering algorithm for simultaneous localization and mapping that provably converges," in Proc. Int. Joint Conf. on Artificial Intelligence, 2003.
[10] K. Murphy, "Bayesian map learning in dynamic environments," in Proc. Conf. on Neural Information Processing Systems (NIPS), Denver, CO, USA, 1999, pp. 1015–1021.
[11] M. Montemerlo, S. Thrun, D. Koller, and B. Wegbreit, "FastSLAM: A factored solution to the simultaneous localization and mapping problem," in Proc. AAAI National Conf. on Artificial Intelligence, Edmonton, AB, Canada, 2002.
[12] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics, MIT Press, Cambridge, 2005.
[13] T. Bailey, J. Nieto, and E. Nebot, "Consistency of the FastSLAM algorithm," in Proc. IEEE Int. Conf. on Robotics and Automation, 2006, pp. 424–429.
[14] M. Bolić, P. M. Djurić, and S. Hong, "Resampling algorithms for particle filters: A computational complexity perspective," Journal on Applied Signal Processing, no. 15, pp. 2267–2277, 2004.
[15] R. K. Mehra, "On the identification of variances and adaptive Kalman filtering," IEEE Trans. Autom. Control, vol. AC-15, no. 2, pp. 175–184, Apr. 1970.
[16] C. Kim, R. Sakthivel, and W. K. Chung, "Unscented FastSLAM: A robust and efficient solution to the SLAM problem," IEEE Trans. Robot., vol. 24, no. 4, 2008.
[17] M. Pitt and N. Shephard, "Filtering via simulation: Auxiliary particle filters," J. Amer. Statist. Assoc., vol. 94, no. 446, pp. 590–599, 1999.
[18] C. Kim, R. Sakthivel, and W. K. Chung, "Unscented FastSLAM: A robust algorithm for the simultaneous localization and mapping problem," in Proc. IEEE Int. Conf. Robot. Autom., 2007, pp. 2439–2445.
[19] X. Wang and H. Zhang, "A UPF-UKF framework for SLAM," in Proc. IEEE Int. Conf. on Robotics and Automation, 2007, pp. 1664–1669.
[20] Available: http://www.personal.acfr.usyd.edu.au/tbailey/software/index.html (2008, Jun.).
[21] R. A. Krohling, "Gaussian swarm: A novel particle swarm optimization algorithm," in Proc. IEEE Conf. on Cybernetics and Intelligent Systems (CIS), Singapore, 2004, pp. 372–376.
[22] M. Montemerlo, "FastSLAM: A Factored Solution to the Simultaneous Localization and Mapping Problem with Unknown Data Association," Ph.D. thesis, Carnegie Mellon University, 2003.
[23] G. Grisetti, C. Stachniss, and W. Burgard, "Improved techniques for grid mapping with Rao-Blackwellized particle filters," IEEE Trans. on Robotics, vol. 23, no. 1, pp. 34–46, 2007.
[24] G. Grisetti, C. Stachniss, and W. Burgard, "Improving grid-based SLAM with Rao-Blackwellized particle filters by adaptive proposals and selective resampling," in Proc. IEEE Int. Conf. on Robotics and Automation, 2005, pp. 2443–2448.
[25] N. Kwak, I. K. Kim, H. C. Lee, and B. H. Lee, "Adaptive prior boosting technique for the efficient sample size in FastSLAM," in Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, San Diego, CA, 2007, pp. 630–635.
[26] S. Thrun, M. Montemerlo, D. Koller, B. Wegbreit, J. Nieto, and E. Nebot, "FastSLAM: An efficient solution to the simultaneous localization and mapping problem with unknown data association," Journal of Machine Learning Research, 2004.
[27] L. Zhang, X. Meng, and Y. Chen, "Convergence and consistency analysis for FastSLAM," 2009 IEEE.