2005 IEEE/RSJ International Conference on Intelligent Robots and Systems

Kalman Filters Predictive Steps Comparison for Vehicle Localization
Benjamin Mourllion*, Dominique Gruyer*, Alain Lambert**, Sébastien Glaser*
* LIVIC INRETS/LCPC, Bât. 824, 14, route de la Minière, 78000 Versailles Satory - France

** IEF UMR 8622 CNRS/Université Paris Sud-XI, Bât. 220, Centre d'Orsay, 91405 Orsay cedex - France
[email protected]

Abstract— The aim of this paper is to compare several Kalman filter algorithms designed for nonlinear systems. After presenting the most popular of them and showing its limitations, we introduce some newer Kalman filters in order to compare them in the vehicle localization context. This comparison is based on the sole use of their predictive steps, which corresponds to the worst case that can occur in vehicle localization (corrective data are unavailable).
Index Terms— vehicle localization, EKF, UKF, DD1, DD2.

I. Introduction

In several research fields (mobile robotics [1], intelligent vehicles [14]), the localization step is a very important stage, needed by higher-level applications such as safe path planning [9], collision avoidance and collision mitigation [8]. Classical localization processes use proprioceptive and exteroceptive sensor data to estimate the position of a vehicle (x, y), its heading (θ) and also, sometimes, its velocity (v). The Kalman filter [7] (KF) is a well-known recursive state estimator for linear systems. However, in many applications, vehicle localization for instance, the process function (the vehicle model in this case) and/or the measurement function can be nonlinear. In order to solve this problem, the Kalman filter was adapted and named the Extended Kalman Filter (EKF) [11], [12]. In the EKF, the process and/or measurement functions are linearized with a first-order Taylor approximation, making it possible to apply the traditional KF. Let us recall that the Kalman filter is optimal for linear systems [18]. However, the approximations carried out through the EKF's linearization make it a suboptimal estimator. The EKF is the most popular KF variant and the most used as well. Nevertheless, since the middle of the 90's, the related literature has shown that a first-order Taylor linearization is too rough an approximation, and some other (derivativeless) Kalman filters adapted for nonlinear systems [5], [6], [13], [17] have been proposed. All these filters are considered as suboptimal state estimators. A theoretical comparison of their process and measurement updates has been carried out in [10]. In this paper, we determine the best filter for the vehicle prediction localization problem by analysing simulation results. After having presented the shortcomings of the EKF in section II, this paper introduces the different derivativeless KFs for nonlinear systems in section III. The comparison among these different filters applied to vehicle localization is then carried out in section IV. We focus on the worst case, which appears when corrective data are unavailable.

II. EKF Drawbacks

A. Kalman Filter and Notations Presentation

Let f be the evolution function and g the measurement function. v and n are respectively the process noise and the measurement noise; both are considered as white, gaussian and centered noises.

X_{k+1} = f(X_k, u*_k, v_k),   v ∼ N(0, R_v)
Z_k = g(X_k, n_k),   n ∼ N(0, R_n)        (1)

where u*_k is the noised command: u*_k ∼ N(u_k, R_γ). The Kalman filter is defined in two steps: the first one, called the predictive step, estimates at time k+1 the state of the system from the inputs at time k. The second step, called the corrective step, updates the result of the predictive step with a new measurement. The EKF follows exactly the same scheme as the KF; the difference lies in the computation of the matrices. In the KF, the process and measurement matrices are composed of the "true" linear functions whereas, in the EKF, these matrices (called jacobian matrices in this case) are composed of the first-order linearized functions. The following algorithm shows the implementation of the predictive step of the EKF:




• Filter Initialization:

X̂_0 = E[X_0]
P_0 = E[(X_0 − X̂_0)(X_0 − X̂_0)^T]
R_v = E[(v − v̄)(v − v̄)^T]
R_n = E[(n − n̄)(n − n̄)^T]        (2)

• ∀k ∈ [[1, . . . , ∞[[, Process Jacobian Matrices Computation:

A_{x_k} = ∇_x f(X, u*_k, v̄)|_{X = X̂_{k−1|k−1}}
D_v = ∇_v f(X̂_{k−1|k−1}, u*_k, v)|_{v = v̄}
B_{u_k} = ∇_u f(X̂_{k−1|k−1}, u, v̄)|_{u = u*_k}        (3)

• State and Covariance Prediction Computation:

X̂_{k|k−1} = f(X̂_{k−1|k−1}, u_k, v̄)
P_{x_{k|k−1}} = A_{x_k} P_{x_{k−1|k−1}} A_{x_k}^T + D_v R_v D_v^T + B_{u_k} R_γ B_{u_k}^T        (4)

Fig. 1. The probability density function ψ_mc of the random variable y is estimated by a Monte Carlo method.
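As an illustration of equations (2)-(4), a minimal numerical sketch of this predictive step could look as follows (Python/NumPy); the function and argument names (f, jac_x, jac_u, jac_v) are placeholders introduced here for the example, not notation from the paper or from a particular library.

```python
import numpy as np

def ekf_predict(x_est, P, u, f, jac_x, jac_u, jac_v, Rv, Rgamma, v_bar=None):
    """One EKF predictive step, following equations (2)-(4).

    x_est, P : previous estimate X_hat_{k-1|k-1} and covariance P_{k-1|k-1}
    u        : (noised) command u*_k
    f        : process function f(x, u, v)
    jac_x, jac_u, jac_v : functions returning the Jacobians A_x, B_u, D_v
    Rv, Rgamma : process-noise and command-noise covariances
    """
    if v_bar is None:
        v_bar = np.zeros(Rv.shape[0])          # centered process noise
    A = jac_x(x_est, u, v_bar)                 # A_xk = df/dx
    B = jac_u(x_est, u, v_bar)                 # B_uk = df/du
    D = jac_v(x_est, u, v_bar)                 # D_v  = df/dv
    x_pred = f(x_est, u, v_bar)                # X_hat_{k|k-1}
    P_pred = A @ P @ A.T + D @ Rv @ D.T + B @ Rgamma @ B.T
    return x_pred, P_pred
```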

B. First-Order Linearization Limitations

In order to perceive the limitations of the first-order linearization, we give a brief demonstration based on a simple example taken from [5]. Let x be a gaussian random variable (GRV) and consider the following relation between y and x:

y = h(x) = x²        (5)

The mean of the x distribution is x̄ = µ_x and its covariance is σ_x²: x ∼ ψ(x) = N(µ_x, σ_x²). The purpose of this example is to determine the mean ȳ = µ_y and the covariance σ_y² of the random variable y, i.e. to find the pdf (probability density function) of y: ψ(y). Let us consider the Monte Carlo (MC) situation (figure 1):

x_mc = µ_x + δx        (6)

with δx ∼ N(0, σ_x²). We expand the expression h(x_mc):

y = h(µ_x + δx) = µ_x² + 2µ_x δx + (δx)²        (7)

The y MC mean value is:

µ_ymc = µ_x² + σ_x²        (8)

The mean quadratic error is (from equations (7) and (8)):

(y − µ_ymc)² = (δx)⁴ + 4µ_x(δx)³ + (4µ_x² − 2σ_x²)(δx)² − 4σ_x²µ_x δx + σ_x⁴        (9)

As the skewness (third moment) of a gaussian distribution is zero, the computation of the covariance gives:

σ²_ymc = E[(δx)⁴] + 4µ_x²σ_x² − σ_x⁴        (10)

As the kurtosis (fourth moment) of a gaussian distribution is 3σ_x⁴, the covariance of y is:

σ²_ymc = 2σ_x⁴ + 4µ_x²σ_x²        (11)

Let us now compute the mean and the covariance of y with the linearized function, as done in the EKF design (figure 2, with y_lin = y_tay):

µ_ytay = µ_x²        (12)

σ²_ytay = 4µ_x²σ_x²        (13)

Fig. 2. The probability density function ψ_lin of the random variable y is estimated by a first-order Taylor method. This method is applied for x = µ_x.
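These closed-form expressions are easy to check numerically; the sketch below samples x and compares the Monte Carlo estimates with equations (8) and (11), and with the first-order values (12) and (13). The values chosen for µ_x and σ_x are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
mu_x, sigma_x = 1.0, 0.5                      # arbitrary example values
x = rng.normal(mu_x, sigma_x, 1_000_000)
y = x**2

# Monte Carlo estimates vs the closed forms (8) and (11)
print(y.mean(), mu_x**2 + sigma_x**2)                    # both ~1.25
print(y.var(),  2*sigma_x**4 + 4*mu_x**2*sigma_x**2)     # both ~1.125

# First-order (Taylor, EKF-like) values (12) and (13)
print(mu_x**2, 4*mu_x**2*sigma_x**2)                     # 1.0 and 1.0
```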

When we compare µ_ymc and σ²_ymc with µ_ytay and σ²_ytay, it appears that the linearization eliminates some terms which can be significant (see figure 3).

C. Conclusion

Several drawbacks appear:
• As pointed out in the previous subsection, the comparison of expressions (8) with (12) and (11) with (13) makes clear that a first-order linearization truncates some significant terms, which makes both the computed mean and covariance erroneous.
• The second drawback concerns the hypothesis related to the point of linearization. Theoretically, the function h is linearized around the value µ_x. But in the implementation of the EKF, the nonlinear function is linearized around the estimated value of x, leading to an additional error that we cannot quantify.
• The last limitation of the EKF concerns the computation of the jacobian matrix. Indeed, some systems have very complex jacobian matrix expressions or cannot be differentiated.

After having analyzed the drawbacks of the EKF, we focus in the next section on the newer filters.

Fig. 3. The comparison between the Monte Carlo method and the linearization method shows that the first-order approximation leads to a pdf quite different from the Monte Carlo one.

III. Derivativeless Nonlinear Kalman Filters Presentation

A. UKF

1) Presentation: The UKF is a Kalman filter based on the (scaled) unscented transformation [2], [16], [17]. This transformation computes the statistics of a random variable which is the argument of a nonlinear function. According to Julier, it is easier to approximate a gaussian distribution than to approximate a nonlinear function. To characterize the distribution, the unscented transformation uses a set of deterministic points, called sigma points, and a set of weights:

X_0 = X̄
X_i = X̄ + (√((n + κ)P_xx))_i
X_{i+n} = X̄ − (√((n + κ)P_xx))_i        (14)

where i ∈ [[1, n]] and n is the number of elements of the vector X. (√((n + κ)P_xx))_i is the i-th column of the Cholesky factor of the covariance matrix P_xx.

W_0^(m) = λ/(n + λ)
W_0^(c) = λ/(n + λ) + 1 − α² + β
W_i^(m) = W_i^(c) = 1/(2(n + λ))
W_{i+n}^(m) = W_{i+n}^(c) = 1/(2(n + λ))        (15)

where λ = α²(n + κ) − n is the scale parameter. α, 0 < α ≤ 1, determines the spread of the sigma points around X̄ and is usually set to a small positive value (e.g. 10⁻³). κ is a second scale parameter usually set to zero. For a gaussian distribution, β = 2 is the optimal value.

The transformation procedure is:
• To propagate each sigma point through the nonlinear function:

Y_i = f[X_i]        (16)

• To compute the mean:

Ȳ = Σ_{i=0}^{2n} W_i^(m) Y_i        (17)

• To compute the covariance:

P_yy = Σ_{i=0}^{2n} W_i^(c) {Y_i − Ȳ}{Y_i − Ȳ}^T        (18)

Consider again the example given in section II-B. We will compute the mean and variance of y with the unscented transform. We define the sigma points X_i as follows:

{X_0, X_1, X_2} = {µ_x, µ_x − σ, µ_x + σ}        (19)

with σ = √((n + κ)σ_x²) and n = 1. We compute the transformation through the same function h(·):

{Y_0, Y_1, Y_2} = {µ_x², µ_x² − 2µ_xσ + σ², µ_x² + 2µ_xσ + σ²}        (20)

The mean of the random variable y is:

µ_yut = 1/(2(1 + κ)) (2κµ_x² + 2µ_x² + 2(1 + κ)σ_x²)        (21)

µ_yut = µ_x² + σ_x²        (22)

The mean estimated by the unscented transformation is the same as the Monte Carlo mean (µ_yut = µ_ymc) and is independent of the κ parameter. The estimated variance is:

σ²_yut = 1/(2(1 + κ)) [ Σ_{i=1}^{2n} (Y_i − µ_yut)² + 2κσ_x⁴ ]        (23)

To find a solution, we must set the κ parameter. For a gaussian distribution, Julier sets (n + κ) = 3. Here n = 1, so κ = 2, and therefore the estimated variance is equal to the Monte Carlo variance:

σ²_yut = 2σ_x⁴ + 4µ_x²σ_x²        (24)
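The scalar example of equations (19)-(24) can be reproduced in a few lines with the original (unscaled) sigma points and weights, n = 1 and κ = 2 as in the text; the numerical values of µ_x and σ_x below are arbitrary.

```python
import numpy as np

mu_x, var_x = 1.0, 0.25       # arbitrary example values (sigma_x = 0.5)
n, kappa = 1, 2.0             # Julier's rule (n + kappa) = 3
h = lambda x: x**2

spread = np.sqrt((n + kappa) * var_x)
X = np.array([mu_x, mu_x - spread, mu_x + spread])    # sigma points (19)
W = np.array([kappa, 0.5, 0.5]) / (n + kappa)         # weights of the unscaled UT
Y = h(X)                                              # transformed points (20)

mu_y = W @ Y                                          # (22): mu_x^2 + var_x
var_y = W @ (Y - mu_y)**2                             # (24): 2 var_x^2 + 4 mu_x^2 var_x
print(mu_y, var_y)                                    # 1.25, 1.125
```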



2) Implementation:
• Filter Initialisation:

X̂_0 = E[X_0]
P_0 = E[(X_0 − X̂_0)(X_0 − X̂_0)^T]        (25)

• ∀k ∈ [[1, . . . , ∞[[, Sigma Points Computation:

X_{k−1|k−1} = [ X̂_{k−1|k−1}   X̂_{k−1|k−1} ± η√(P_{k−1|k−1}) ]        (26)

• State and Covariance Prediction Computation:

X_{k|k−1} = f[X_{k−1|k−1}, u_{k−1}]
X̂_{k|k−1} = Σ_{i=0}^{2L} W_i^(m) X_{k|k−1,i}
P_{k|k−1} = Σ_{i=0}^{2L} W_i^(c) Ξ Ξ^T + R_v        (27)

with Ξ = X_{k|k−1,i} − X̂_{k|k−1}, Υ = Z_{k,i} − Ẑ_k, η = √(L + λ), L the number of states, R_v the process noise covariance matrix and R_n the measurement noise covariance matrix.

3) Conclusion: The implementation of this filter is simpler than the EKF's one because no jacobian matrix has to be computed.
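A minimal sketch of this sigma-point predictive step (equations (26)-(27)) is given below, assuming the scaled-UT weights of equation (15); the interface (a process function f(x, u) and matrices passed as NumPy arrays) is our own convention for the example, not the paper's.

```python
import numpy as np

def ukf_predict(x_est, P, u, f, Rv, alpha=1e-3, beta=2.0, kappa=0.0):
    """Sigma-point prediction, equations (26)-(27); f(x, u) is a placeholder."""
    L = x_est.size
    lam = alpha**2 * (L + kappa) - L
    eta = np.sqrt(L + lam)

    S = np.linalg.cholesky(P)                        # Cholesky factor of P_{k-1|k-1}
    X = np.column_stack([x_est,
                         x_est[:, None] + eta * S,
                         x_est[:, None] - eta * S])  # 2L+1 sigma points (26)

    Wm = np.full(2 * L + 1, 1.0 / (2 * (L + lam)))   # weights of equation (15)
    Wc = Wm.copy()
    Wm[0] = lam / (L + lam)
    Wc[0] = lam / (L + lam) + 1 - alpha**2 + beta

    Y = np.column_stack([f(X[:, i], u) for i in range(2 * L + 1)])  # propagated points
    x_pred = Y @ Wm                                  # X_hat_{k|k-1}
    D = Y - x_pred[:, None]
    P_pred = D @ np.diag(Wc) @ D.T + Rv              # P_{k|k-1}, equation (27)
    return x_pred, P_pred
```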

B. DD1 & DD2

1) Presentation: In parallel to the development of the UKF, two different teams proposed another filter based on the Stirling interpolation. These formulations were proposed first in the works of Nørgaard et al. [13] with the divided difference filters (DD1 and DD2), and secondly in the works of Ito and Xiong [4] with the central difference filter (CDKF). These filters are very similar, so we decided to focus on DD1 and DD2 only. In one dimension, the second-order Stirling formula with interval length ξ is:

h(x) ≈ h(µ_x) + h'_DD(µ_x)(x − µ_x) + (h''_DD(µ_x)/2!)(x − µ_x)²        (28)

with:

h'_DD(µ_x) = [h(µ_x + ξ) − h(µ_x − ξ)] / (2ξ)
h''_DD(µ_x) = [h(µ_x + ξ) + h(µ_x − ξ) − 2h(µ_x)] / ξ²

According to Nørgaard et al., for the first-order approximation, the estimated mean is:

µ_yst1 = h(µ_x)        (29)

and the estimated covariance is:

P_yst1 = 1/(4ξ²) Σ_{p=1}^{n} Γ_p × Γ_p^T        (30)

with Γ_p = h(µ_x + ξS_x,p) − h(µ_x − ξS_x,p). When we apply these definitions to the example given in section II-B, we obtain:

µ_yst1 = µ_x²        (31)

σ²_yst1 = 4µ_x²σ_x²        (32)

We notice that µ_yst1 = µ_ytay and σ²_yst1 = σ²_ytay.

For the second-order approximation, the estimated mean is:

µ_yst2 = (ξ² − n)/ξ² h(µ_x) + 1/(2ξ²) Σ_{p=1}^{n} [h(µ_x + ξS_x,p) + h(µ_x − ξS_x,p)]        (33)

and the estimated covariance is:

P_yst2 = 1/(4ξ²) Σ_{p=1}^{n} Γ_p × Γ_p^T + (ξ² − 1)/(4ξ⁴) Σ_{p=1}^{n} Λ_p × Λ_p^T        (34)

with Λ_p = h(µ_x + ξS_x,p) + h(µ_x − ξS_x,p) − 2h(µ_x). By considering again the example given in section II-B, we obtain:

µ_yst2 = µ_x² + σ_x²        (35)

σ²_yst2 = 4µ_x²σ_x² + (ξ² − 1)σ_x⁴        (36)

For a gaussian distribution, Nørgaard et al. set ξ² = 3, so:

σ²_yst2 = 2σ_x⁴ + 4µ_x²σ_x²        (37)

We notice this time that µ_yst2 = µ_yut and σ²_yst2 = σ²_yut.
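The DD1 and DD2 estimates (29)-(37) can also be checked on the scalar example of section II-B; the sketch below uses ξ² = 3 and arbitrary values for µ_x and σ_x.

```python
import numpy as np

mu_x, sigma_x = 1.0, 0.5          # arbitrary example values
xi = np.sqrt(3.0)                 # Norgaard et al.: xi^2 = 3 for gaussian variables
h = lambda x: x**2

hp = h(mu_x + xi * sigma_x)       # h(mu_x + xi * S_x), with S_x = sigma_x in 1-D
hm = h(mu_x - xi * sigma_x)
gamma = hp - hm
lam = hp + hm - 2 * h(mu_x)

# DD1 (first order), equations (29)-(32)
mean_dd1 = h(mu_x)                               # mu_x^2
var_dd1 = gamma**2 / (4 * xi**2)                 # 4 mu_x^2 sigma_x^2

# DD2 (second order), equations (33)-(37), written for n = 1
mean_dd2 = (xi**2 - 1) / xi**2 * h(mu_x) + (hp + hm) / (2 * xi**2)
var_dd2 = gamma**2 / (4 * xi**2) + (xi**2 - 1) / (4 * xi**4) * lam**2

print(mean_dd1, var_dd1)    # 1.0, 1.0   (same as the Taylor/EKF values)
print(mean_dd2, var_dd2)    # 1.25, 1.125 (same as Monte Carlo and the UT)
```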

2) Implementation:
• Filter Initialisation:

X̂_0 = E[X_0]
P_0 = E[(X_0 − X̂_0)(X_0 − X̂_0)^T]        (38)

• ∀k ∈ [[1, . . . , ∞[[, State and Covariance Prediction Computation:
  – X̂_{k|k−1} = f[X̂_{k−1|k−1}, u_{k−1}, v̄_{k−1}].
  – S_xx̂^(1), S_xv^(1), S_xx̂^(2) and S_xv^(2) computation.
  – S_{x,k|k−1} computation with Householder triangularisation.

where S_xx̂^(1), S_xv^(1), S_xx̂^(2), S_xv^(2), S_zx^(1), S_zw^(1), S_zx^(2) and S_zw^(2) are the first- and second-order divided difference matrices defined in [13].

3) Conclusion: Through this short presentation, it appears that DD1 is similar to the EKF, with the advantage of requiring no Jacobian computation, while DD2 is similar to the UKF.

C. Conclusion

Concerning the implementation of the filters, the derivativeless filters avoid the computation of the jacobian or hessian matrices. For complex systems this is a strong advantage; indeed, these matrices may not always be defined. Concerning the tuning, DD1, DD2 and CDKF need only one parameter (ξ) to be set, against three (α, β, κ) for the UKF. It appears here that a compromise has to be made between the simplicity of the tuning and its precision.


Fig. 4. Overtaking Scenario.

IV. EKF, UKF, DD1 & DD2 Comparison

A. Simulation Presentation

For the simulations, two models are needed. The first one is used to provide the simulated proprioceptive data and the second one is used in the predictive step of the filters. Glaser [3] has developed a model which integrates a fine representation of the suspension/tyre and tyre/road interactions, which allows a very realistic behavior of the vehicle to be obtained. On the other hand, the model used in the Kalman filtering process is a classical bicycle model.

Presentation of the scenario: we have focused the simulations on an overtaking scenario (see figure 4), which contains some linear parts (straight lines) and a nonlinear portion (curved lines), in order to observe the longitudinal and lateral behaviours of the different filters. The command is given by (a, ϕ)^t, where a is the acceleration and ϕ the steering (wheel) angle. The experiment lasts 36 seconds. The vehicle velocity is 20 m.s⁻¹.
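For readers who want to reproduce the predictive step, a plausible form of such a "classical bicycle model" is sketched below; the paper does not give its exact equations, so the state ordering (x, y, θ, v), the kinematic form and the wheelbase value are assumptions made for the example.

```python
import numpy as np

def bicycle_predict(state, command, dt, wheelbase=2.7):
    """Propagate (x, y, theta, v) with a simple kinematic bicycle model.

    This is only a plausible form of the bicycle model mentioned in the text;
    the wheelbase value and the state ordering are assumptions.
    command = (a, phi): acceleration and steering (wheel) angle.
    """
    x, y, theta, v = state
    a, phi = command
    x_next = x + v * np.cos(theta) * dt
    y_next = y + v * np.sin(theta) * dt
    theta_next = theta + v / wheelbase * np.tan(phi) * dt
    v_next = v + a * dt
    return np.array([x_next, y_next, theta_next, v_next])
```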

B. Filtering

We proceed to a worst-case evaluation of the filters. Such situations can appear in two cases:
• In target tracking, during an occlusion the corrective step is not available (the target is hidden). Thus, the system has to predict the position of the tracked target. This situation corresponds to data alignment.
• In vehicle hybrid localization, the prediction is done with the inertial data and the steering wheel encoder. The corrective step is done with an exteroceptive sensor such as a GPS. There are many cases where the GPS data are not available (bad satellite configuration, road tunnel, trees, . . . ).

The command is assumed to be un-noised. So, during the simulation of the scenario, only the model noise is cumulated and propagated. The result of the Kalman filtering gives an a posteriori position with an uncertainty symbolized by an ellipse. In order to obtain the size of the axes of the ellipse, we compute the eigenvalues of the covariance matrix P(k + 1|k + 1) and we weight these values with the factor k = −2 log(1 − Pa), where Pa is the membership probability [15]. The following figures (5, 6) show the estimations of the position of the vehicle given by the four filters.
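The ellipse construction described above can be sketched as follows; reading the text, the eigenvalues of the 2x2 position covariance are weighted by k = −2 log(1 − Pa), and taking the square roots of the weighted eigenvalues as semi-axis lengths is our interpretation of that step, stated here as an assumption.

```python
import numpy as np

def uncertainty_ellipse(P_xy, Pa=0.99):
    """Semi-axes and orientation of the uncertainty ellipse for a 2x2 covariance.

    Eigenvalues of P_xy are weighted by k = -2 log(1 - Pa), as described in
    the text; the square root giving semi-axis lengths is our reading of it.
    """
    k = -2.0 * np.log(1.0 - Pa)
    eigvals, eigvecs = np.linalg.eigh(P_xy)             # symmetric covariance
    semi_axes = np.sqrt(k * eigvals)                    # semi-axis lengths
    angle = np.arctan2(eigvecs[1, -1], eigvecs[0, -1])  # orientation of major axis
    return semi_axes, angle

# Example with an elongated covariance
P = np.array([[4.0, 1.5],
              [1.5, 1.0]])
print(uncertainty_ellipse(P))
```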

Fig. 5. Estimated Trajectories with Uncertainty Ellipses. The x and y variables have a nonlinear evolution. First-order filters and second-order filters do not return the same results.

Fig. 6. Focus on the curved part of the trajectories.

On the one hand, we notice that the four filters give the same estimate for the heading, which has a linear evolution. On the other hand, it appears that EKF and DD1 (first-order filters) do not give the same estimate as UKF and DD2 (second-order filters) for the localization (x and y have a nonlinear evolution). So, the filters return the same results for the variables with a linear evolution and different results for the variables with a nonlinear evolution.

C. Bias

It appears that the bias between the first-order filter results and the second-order filter ones comes from the way the estimate is computed. The prediction is computed in the polar space (velocity and heading) and the result is returned in the cartesian space. So, our problem can be assimilated to a polar-to-cartesian conversion problem. Let us consider the example of the polar-to-cartesian transformation (figure 7) given by Julier in [6]. We consider a laserscanner positioned at (0, 0) and an object positioned at (0, 1) (in cartesian coordinates). The laserscanner provides noised polar data. The noise on the radius is centered and has a standard deviation σ_r; the noise on the angle is centered and has a standard deviation σ_θ. We take N = 5000 measurements. Thus, the laser returns a set of points with polar coordinates (r, θ)^t with r ∼ N(r̄, σ_r) and θ ∼ N(θ̄, σ_θ), where (r̄, θ̄) = (1, π/2).
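This banana-shape experiment is easy to reproduce numerically; the noise levels σ_r and σ_θ used below are arbitrary, since the paper does not list them, and the two means discussed next (µ_c and µ_p) are printed at the end.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5000
r_bar, theta_bar = 1.0, np.pi / 2
sigma_r, sigma_theta = 0.02, 0.35        # arbitrary values, not given in the paper

r = rng.normal(r_bar, sigma_r, N)
theta = rng.normal(theta_bar, sigma_theta, N)
x, y = r * np.cos(theta), r * np.sin(theta)

mu_c = np.array([x.mean(), y.mean()])               # mean computed in the cartesian space
mu_p = np.array([r.mean() * np.cos(theta.mean()),   # mean computed in the polar space,
                 r.mean() * np.sin(theta.mean())])  # then converted to cartesian
print(mu_c, mu_p)   # mu_c is pulled towards the sensor, mu_p stays near (0, 1)
```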


Fig. 7. The banana shape obtained with 5000 measurements. The black square is the Monte Carlo mean computed in the polar space; the black triangle is the Monte Carlo mean computed in the cartesian space.

Fig. 8. Variable distributions (r, θ, x and y). r and θ follow a gaussian distribution.



Let µ_c = (x̄, ȳ)^T be the mean of the set of points computed in the cartesian space, and µ_p = f((r̄, θ̄)^T) the mean computed in the polar space and mapped through the polar-to-cartesian conversion f(·). The larger the bearing standard deviation is, the nearer the mean µ_c is to the origin of the global frame (see figure 7), and thus µ_c lies outside the set of points. We are in possession of two means (µ_c and µ_p). Which one should be taken into account to obtain the best estimated state? To answer this question, let us recall the signification of the mean and the covariance. Considering n samples (x_1, . . . , x_n) of a (µ, σ²) gaussian law, their likelihood is:

L(x_1, . . . , x_n, µ, σ²) = Π_{i=1}^{n} 1/√(2πσ²) e^{−(x_i − µ)²/(2σ²)} = (1/(2πσ²))^{n/2} e^{−(1/(2σ²)) Σ_{i=1}^{n}(x_i − µ)²}        (39)

To find the maximum of this function, we use the logarithm:

log(L(x_1, . . . , x_n, µ, σ²)) = −(n/2) log(σ²) − (n/2) log(2π) − 1/(2σ²) Σ_{i=1}^{n} (x_i − µ)²

The partial derivatives with respect to the µ and σ² parameters are:

∂ log(L(x_1, . . . , x_n, µ, σ²))/∂µ = 1/σ² Σ_{i=1}^{n} (x_i − µ)

and

∂ log(L(x_1, . . . , x_n, µ, σ²))/∂(σ²) = −n/(2σ²) + 1/(2σ⁴) Σ_{i=1}^{n} (x_i − µ)²

The derivatives are equal to zero for:

µ̂ = (Σ_i x_i)/n   and   σ̂² = (Σ_i (x_i − µ̂)²)/n

To verify that this point is a maximum, we compute the second-order partial derivatives:


∂² log(L(x_1, . . . , x_n, µ, σ²))/∂µ² = −n/σ²

∂² log(L(x_1, . . . , x_n, µ, σ²))/∂µ∂(σ²) = −1/σ⁴ Σ_{i=1}^{n} (x_i − µ)

∂² log(L(x_1, . . . , x_n, µ, σ²))/∂(σ²)² = n/(2σ⁴) − 1/σ⁶ Σ_{i=1}^{n} (x_i − µ)²

The hessian matrix at the point (µ̂, σ̂²) is:

[ −n/σ̂²        0
     0     −n/(2σ̂⁴) ]        (40)

It is negative definite, therefore the point (µ̂, σ̂²) is a maximum of the likelihood function L. So, under a gaussian constraint, the mean and the covariance are maximum likelihood estimators. In our case, only r and θ are gaussian. Therefore, µ_p is the sole mean to consider. A priori, the means of x and y are not maximum likelihood estimators; these means have no particular signification.

D. Physical Signification

Smith and Cheeseman said in [15]: "By assuming the (multivariable) distribution is Gaussian, we can still make probabilistic estimates that agree with simulations very well, except when the angular errors are large." The first-order filters compute the transformation of the mean and the transformation of the variance with the linearized function. The state vector thus stays a gaussian multivariable. It is not the case for the second-order filters. We can see, in figure 8, that the multivariable (r, θ)^t is gaussian, but (x, y)^t is not gaussian anymore. The mathematical expectation of the estimated position is different from the most probable position when the variables are not gaussian. So, as we have a large standard deviation on the angle noise and according to [15], computing the transformation of the mean (µ_p) has a physical signification, but computing the mean


of the transformation is physically irrelevant (the mean position is not the most probable position). Let D = Σ_i v(i)·Δt(i) be the distance covered, computed with the estimated velocity, and D̃ = Σ_i Δs(i) the distance covered, computed as the sum of each segment between two successive points (Δs(i) is the curvilinear distance between points i − 1 and i). As the conversion from the polar space to the cartesian space appears in each prediction (x and y are computed from v and θ), the bias is cumulated for the second-order methods, so we obtain D̃ ≠ D, whereas we obtain D̃ = D with the first-order methods. Figure 6 shows the result of the prediction around the nonlinear portion. The difference between the solid line (the true trajectory) and the results of EKF and DD1 is due to the modelisation noise. For the second-order filters, the UKF and DD2 give the sensation that the vehicle turns earlier than it should. This result is due to the cumulated effect of the modelisation noise and the cumulated bias.

E. Conclusion

The advantage of the first-order methods is that the distribution of the variable always remains gaussian (a GRV passed through a linear function gives another GRV). So, the computed mean of the state still has a physical signification. The second-order methods give a more consistent statistic of the variable, but the computed mean has no physical signification.

V. Conclusion

We have provided in this paper a comparison between four filters of the family of Kalman filters for nonlinear systems. Our comparison focuses on the predictive step, which corresponds to the worst-case evaluation in intelligent vehicle localization. We first presented the drawbacks of the linearisation and then introduced some newer filters based on higher-order approximations. These filters should give better results than the EKF. Unfortunately, it seems that the second-order filters cannot be applied to our purpose. Indeed, second-order filters appear to be more consistent, but the computed mean has no physical signification (since it is not a maximum likelihood estimator) and a bias appears between the mean of the transformed variable and the transformed mean of the variable, as underlined in sections IV-C and IV-D. After having eliminated the second-order filters, we compared the first-order ones. According to our simulations and the tested tunings, the most adapted filter is DD1. It gives the same estimated state as the EKF, but its covariance is smaller than the EKF's one. In further work, we will compare these filters by using other mathematical tools which allow us to compute maximum likelihood estimators for generic noise distributions.

References

[1] J. L. Crowley. World modeling and position estimation for a mobile robot using ultrasonic ranging. In IEEE International Conference on Robotics and Automation, pages 674–680, Scottsdale, AZ, 1989.
[2] R. van der Merwe, A. Doucet, N. de Freitas, and E. Wan. The unscented particle filter. Technical Report CUED/F-INFENG/TR 380, Cambridge University Engineering Department, August 2000.
[3] S. Glaser, S. Mammar, and J. Sainte-Marie. Lateral driving assistance using embedded driver vehicle road model. In ESDA, Turkey, 2002.
[4] K. Ito and K. Xiong. Gaussian filters for nonlinear filtering problems. IEEE Transactions on Automatic Control, 45(5), May 2000.
[5] S. J. Julier and J. K. Uhlmann. A general method for approximating nonlinear transformations of probability distributions. Technical report, University of Oxford, November 1996.
[6] S. J. Julier and J. K. Uhlmann. A new extension of the Kalman filter to nonlinear systems. In Proceedings of the SPIE AeroSense International Symposium on Aerospace/Defense Sensing, Simulation and Controls, Orlando, Florida, September 2001.
[7] R. E. Kalman. A new approach to linear filtering and prediction problems. Transactions of the ASME-Journal of Basic Engineering, 82(D):35–45, 1960.
[8] R. Karlsson and J. Jansson. Model-based statistical tracking and decision making for collision avoidance application. In ACC, 2004.
[9] A. Lambert and N. Le Fort-Piat. Safe task planning integrating uncertainties and local maps federation. International Journal of Robotics Research, 19(6):597–611, 2000.
[10] T. Lefebvre, H. Bruyninckx, and J. De Schutter. Kalman filters for nonlinear systems: a comparison of performance. International Journal of Control, 77(7), May 2004.
[11] F. L. Lewis. Optimal Estimation. John Wiley & Sons, New York, 1986.
[12] P. S. Maybeck. Stochastic Models, Estimation and Control, volume 2. McGraw-Hill, Singapore, 1984.
[13] M. Nørgaard, N. K. Poulsen, and O. Ravn. Advances in derivative-free state estimation for nonlinear systems. Technical Report IMM-REP-1998-15, April 2000.
[14] S. Roumeliotis and G. Bekey. An extended Kalman filter for frequent local and infrequent global sensor data fusion. In SPIE International Symposium on Intelligent Systems and Advanced Manufacturing, 1997.
[15] R. C. Smith and P. Cheeseman. On the representation and estimation of spatial uncertainty. International Journal of Robotics Research, 5(4), Winter 1986.
[16] E. A. Wan and R. van der Merwe. Kalman Filtering and Neural Networks, chapter 7: The Unscented Kalman Filter. Wiley, edited by Simon Haykin, 2001.
[17] E. A. Wan and R. van der Merwe. The unscented Kalman filter for nonlinear estimation. In Adaptive Systems for Signal Processing, Communication and Control (AS-SPCC), 2000.
[18] G. Welch and G. Bishop. An introduction to the Kalman filter. Technical Report 95-041, University of North Carolina at Chapel Hill, 1995.
