Kalman Filter

January 21, 2005

1  Equations

The Kalman filter equations are:

$$X_{k+1} = A_k X_k + B_k u_k + \Gamma_k v_k \tag{1}$$

$$Z_k = C_k X_k + D_k u_k + w_k \tag{2}$$

where $k$ denotes the observation index ($k \geq 0$) and:

• $X_k$ is the non-measurable state vector ($n \times 1$),
• $u_k$ is the input vector ($m \times 1$), assumed known,
• $v_k$ is the random vector ($q \times 1$) representing the state noise, with known statistics,
• $Z_k$ is the measurement vector ($p \times 1$),
• $w_k$ is the measurement noise vector ($p \times 1$), with known statistics,
• $A_k$ is the transition matrix ($n \times n$) from state $k$ to $k+1$,
• $B_k$ is the known command matrix ($n \times m$),
• $\Gamma_k$ is a known matrix ($n \times q$),
• $C_k$ is the known measurement matrix ($p \times n$),
• $D_k$ is the known transmission matrix ($p \times m$).

The constraints are:

• $E[v_k] = 0$,
• $E[v_k v_{k'}^T] = Q_k \delta_{kk'}$, where $Q_k$ is a positive definite ($q \times q$) matrix,
• $E[w_k] = 0$,
• $E[w_k w_{k'}^T] = R_k \delta_{kk'}$, where $R_k$ is a positive definite ($p \times p$) matrix,
• $E[v_k w_{k'}^T] = 0$,
• $E[X_0 v_{k'}^T] = 0$,
• $E[X_0 w_{k'}^T] = 0$.

The aim of the Kalman filter is to find the best estimate $\hat{X}_{k'/k}$ of the state $X_{k'}$ at $k'$ when the measured observations are known up to $k$ ($Z_0, \ldots, Z_k$). The best estimate is the one that minimizes the error variance; it is called the mean square estimate:

$$E[(X_{k'} - \hat{X}_{k'/k})^T (X_{k'} - \hat{X}_{k'/k}) \,/\, Z_0, Z_1, \ldots, Z_k] \leq E[(X_{k'} - \hat{X})^T (X_{k'} - \hat{X}) \,/\, Z_0, Z_1, \ldots, Z_k] \quad \forall \hat{X} \text{ estimated using measurements } \{Z_0, Z_1, \ldots, Z_k\} \tag{3}$$

which is a conditional expectation:

$$\hat{X}_{k'/k} = E[X_{k'} \,/\, Z_0, Z_1, \ldots, Z_k] \tag{4}$$

Given $k'$, the Kalman filter is used for:

• $k' < k$: smoothing
• $k' = k$: filtering
• $k' > k$: prediction
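To make the model concrete, here is a minimal simulation sketch in Python; the dimensions, matrices, and noise levels are hypothetical choices for illustration, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical system: n = 2 states, m = 1 input, p = 1 measurement, q = 2 noises
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])        # A_k: transition matrix (taken constant here)
B = np.array([[0.5], [1.0]])      # B_k: command matrix
Gamma = np.eye(2)                 # Gamma_k
C = np.array([[1.0, 0.0]])        # C_k: measurement matrix
D = np.zeros((1, 1))              # D_k: transmission matrix
Q = 0.01 * np.eye(2)              # Q_k: state-noise covariance
R = np.array([[0.25]])            # R_k: measurement-noise covariance

def simulate(X0, inputs):
    """Generate states X_k and measurements Z_k following equations (1)-(2)."""
    X, states, measures = X0, [], []
    for u in inputs:
        w = rng.multivariate_normal(np.zeros(1), R)   # w_k, E[w w^T] = R
        measures.append(C @ X + D @ u + w)            # equation (2)
        states.append(X)
        v = rng.multivariate_normal(np.zeros(2), Q)   # v_k, E[v v^T] = Q
        X = A @ X + B @ u + Gamma @ v                 # equation (1)
    return np.array(states), np.array(measures)

states, measures = simulate(np.zeros(2), [np.array([0.1])] * 50)
```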

2  One-step prediction filter

In this case $k' = k + 1$, and the conditional expectation (4) gives:

$$\begin{aligned}
\hat{X}_{k+1/k} &= E[X_{k+1} \,/\, Z_0, Z_1, \ldots, Z_k] \\
&= A_k E[X_k \,/\, Z_0, \ldots, Z_k] + B_k E[u_k \,/\, Z_0, \ldots, Z_k] + E[v_k \,/\, Z_0, \ldots, Z_k] \\
&= A_k \hat{X}_{k/k} + B_k u_k
\end{aligned} \tag{5}$$

The estimation error at $k$ is:

$$\tilde{X}_{k/k} = X_k - \hat{X}_{k/k}$$

and the covariance matrix (filtered covariance) is:

$$\hat{P}_{k/k} = E[\tilde{X}_{k/k} \tilde{X}_{k/k}^T] \tag{6}$$

since the error mean is zero ($E[\tilde{X}_{k/k}] = 0$).

The prediction error is:

$$\begin{aligned}
\tilde{X}_{k+1/k} &= X_{k+1} - \hat{X}_{k+1/k} \\
&= A_k X_k + B_k u_k + \Gamma_k v_k - (A_k \hat{X}_{k/k} + B_k u_k) \\
&= A_k (X_k - \hat{X}_{k/k}) + \Gamma_k v_k \\
&= A_k \tilde{X}_{k/k} + \Gamma_k v_k
\end{aligned}$$

The predicted covariance matrix is:

$$\begin{aligned}
\hat{P}_{k+1/k} &= E[\tilde{X}_{k+1/k} \tilde{X}_{k+1/k}^T] \\
&= A_k E[\tilde{X}_{k/k} \tilde{X}_{k/k}^T] A_k^T + \Gamma_k E[v_k v_k^T] \Gamma_k^T \\
&= A_k \hat{P}_{k/k} A_k^T + \Gamma_k Q_k \Gamma_k^T
\end{aligned}$$

Using equation (2), we can give an estimate of the predicted measurement:

$$\begin{aligned}
\hat{Z}_{k+1/k} &= E[Z_{k+1} \,/\, Z_0, Z_1, \ldots, Z_k] \\
&= C_{k+1} E[X_{k+1} \,/\, Z_0, \ldots, Z_k] + D_{k+1} E[u_{k+1} \,/\, Z_0, \ldots, Z_k] + E[w_{k+1} \,/\, Z_0, \ldots, Z_k] \\
&= C_{k+1} \hat{X}_{k+1/k} + D_{k+1} u_{k+1}
\end{aligned}$$

The error between the observed measurement and the predicted measurement is:

$$\begin{aligned}
\tilde{Z}_{k+1/k} &= Z_{k+1} - \hat{Z}_{k+1/k} \\
&= C_{k+1} X_{k+1} + D_{k+1} u_{k+1} + w_{k+1} - (C_{k+1} \hat{X}_{k+1/k} + D_{k+1} u_{k+1}) \\
&= C_{k+1} (X_{k+1} - \hat{X}_{k+1/k}) + w_{k+1} \\
&= C_{k+1} \tilde{X}_{k+1/k} + w_{k+1}
\end{aligned}$$

This error, called the innovation, is used to update the filter. The predicted measurement error covariance is:

$$\begin{aligned}
\hat{\Sigma}_{k+1/k} &= E[\tilde{Z}_{k+1/k} \tilde{Z}_{k+1/k}^T] \\
&= C_{k+1} E[\tilde{X}_{k+1/k} \tilde{X}_{k+1/k}^T] C_{k+1}^T + E[w_{k+1} w_{k+1}^T] \\
&= C_{k+1} \hat{P}_{k+1/k} C_{k+1}^T + R_{k+1}
\end{aligned}$$
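As a hedged sketch, these prediction equations translate directly to code; the matrices below are the hypothetical ones defined in section 1:

```python
def predict(X_filt, P_filt, u):
    """One-step prediction: state, covariance, measurement, innovation covariance."""
    X_pred = A @ X_filt + B @ u                      # equation (5)
    P_pred = A @ P_filt @ A.T + Gamma @ Q @ Gamma.T  # predicted covariance
    Z_pred = C @ X_pred + D @ u                      # predicted measurement
    S = C @ P_pred @ C.T + R                         # Sigma^: innovation covariance
    return X_pred, P_pred, Z_pred, S
```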

We want to write $\hat{X}_{k/k}$ using the previous predicted state $\hat{X}_{k/k-1}$ and the innovation $\tilde{Z}_{k/k-1}$:

$$\begin{aligned}
\hat{X}_{k/k} &= E[X_k \,/\, Z_0, Z_1, \ldots, Z_k] \\
&= E[X_k \,/\, Z_0, Z_1, \ldots, Z_{k-1}] + E[X_k \,/\, \tilde{Z}_{k/k-1}] \\
&= \hat{X}_{k/k-1} + K_k \tilde{Z}_{k/k-1}
\end{aligned}$$

$\hat{X}_{k/k}$ is the best estimator in the minimum-covariance sense. The gain matrix $K_k$ is obtained by minimizing $\mathrm{Tr}(\hat{P}_{k/k})$. The filtered estimation error is:

$$\begin{aligned}
\tilde{X}_{k/k} &= X_k - \hat{X}_{k/k} \\
&= X_k - (\hat{X}_{k/k-1} + K_k \tilde{Z}_{k/k-1}) \\
&= (X_k - \hat{X}_{k/k-1}) - K_k (Z_k - \hat{Z}_{k/k-1}) \\
&= (X_k - \hat{X}_{k/k-1}) - K_k (C_k \tilde{X}_{k/k-1} + w_k) \\
&= (I - K_k C_k)(X_k - \hat{X}_{k/k-1}) - K_k w_k \\
&= (I - K_k C_k) \tilde{X}_{k/k-1} - K_k w_k
\end{aligned}$$

The filtered state covariance matrix is:

$$\begin{aligned}
\hat{P}_{k/k} &= E[\tilde{X}_{k/k} \tilde{X}_{k/k}^T] \qquad (7) \\
&= (I - K_k C_k) E[\tilde{X}_{k/k-1} \tilde{X}_{k/k-1}^T] (I - K_k C_k)^T + K_k E[w_k w_k^T] K_k^T \qquad (8) \\
&= (I - K_k C_k) \hat{P}_{k/k-1} (I - K_k C_k)^T + K_k R_k K_k^T \qquad (9)
\end{aligned}$$

Equation (9) is known as the Joseph formulation; it is a matrix polynomial of second order in $K_k$. Minimizing $\mathrm{Tr}(\hat{P}_{k/k})$ amounts to minimizing this quadratic form, whose gradient is zero at the optimal $K_k$ (the derivation is given in section 7):

$$K_k = \hat{P}_{k/k-1} C_k^T \hat{\Sigma}_{k/k-1}^{-1} \tag{10}$$

The optimal covariance matrix is:

$$\hat{P}_{k/k} = (I - K_k C_k) \hat{P}_{k/k-1} \tag{11}$$
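A quick numerical sanity check, as a sketch: at the optimal gain (10), the Joseph form (9) and the short form (11) coincide. The matrices are the hypothetical ones from section 1, plus an arbitrary symmetric $\hat{P}_{k/k-1}$:

```python
P_pred = np.array([[1.0, 0.2],
                   [0.2, 0.5]])              # arbitrary symmetric P^_{k/k-1}
S = C @ P_pred @ C.T + R                     # Sigma^_{k/k-1}
K = P_pred @ C.T @ np.linalg.inv(S)          # optimal gain, equation (10)

I2 = np.eye(2)
P_joseph = (I2 - K @ C) @ P_pred @ (I2 - K @ C).T + K @ R @ K.T  # equation (9)
P_short = (I2 - K @ C) @ P_pred                                  # equation (11)
assert np.allclose(P_joseph, P_short)        # they agree only at the optimal K_k
```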

3  Using the Kalman filter

1. Initialization: $\hat{X}_{0/0}$ and $\hat{P}_{0/0}$

2. Prediction:

• $\hat{X}_{k/k-1} = A_{k-1} \hat{X}_{k-1/k-1} + B_{k-1} u_{k-1}$
• $\hat{Z}_{k/k-1} = C_k \hat{X}_{k/k-1} + D_k u_k$
• $\hat{P}_{k/k-1} = A_{k-1} \hat{P}_{k-1/k-1} A_{k-1}^T + \Gamma_{k-1} Q_{k-1} \Gamma_{k-1}^T$

3. Innovation:

• $\tilde{Z}_{k/k-1} = Z_k - \hat{Z}_{k/k-1}$
• $\hat{\Sigma}_{k/k-1} = C_k \hat{P}_{k/k-1} C_k^T + R_k$

4. Filtering:

• $K_k = \hat{P}_{k/k-1} C_k^T \hat{\Sigma}_{k/k-1}^{-1}$
• $\hat{X}_{k/k} = \hat{X}_{k/k-1} + K_k \tilde{Z}_{k/k-1}$
• $\hat{P}_{k/k} = (I - K_k C_k) \hat{P}_{k/k-1}$
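Putting the four steps together, a minimal sketch of the complete filter loop, assuming the time-invariant hypothetical system from section 1 and a constant input:

```python
def kalman_step(X_filt, P_filt, u_prev, u, Z):
    """One prediction / innovation / filtering cycle of the algorithm above."""
    # 2. Prediction
    X_pred = A @ X_filt + B @ u_prev
    Z_pred = C @ X_pred + D @ u
    P_pred = A @ P_filt @ A.T + Gamma @ Q @ Gamma.T
    # 3. Innovation
    innovation = Z - Z_pred
    S = C @ P_pred @ C.T + R
    # 4. Filtering
    K = P_pred @ C.T @ np.linalg.inv(S)
    X_new = X_pred + K @ innovation
    P_new = (np.eye(len(X_pred)) - K @ C) @ P_pred
    return X_new, P_new

# 1. Initialization, then run over the measurements simulated in section 1
X_hat, P_hat = np.zeros(2), np.eye(2)
u = np.array([0.1])
for Z in measures:
    X_hat, P_hat = kalman_step(X_hat, P_hat, u, u, Z)
```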

4  Multi-step prediction

Multi-step prediction gives the state estimate $\hat{X}_{k+i/k}$ by repeating only the prediction step:

$$\hat{X}_{k+i/k} = A_{k+i-1} \cdots A_{k+1} \, \hat{X}_{k+1/k} \tag{12}$$
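For a time-invariant $A$, the product in (12) collapses to a matrix power; a minimal sketch under that assumption:

```python
from numpy.linalg import matrix_power

def predict_multi(X_pred_one, i):
    """Equation (12) with constant A: X^_{k+i/k} = A^(i-1) X^_{k+1/k}."""
    return matrix_power(A, i - 1) @ X_pred_one
```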

5  Extended Kalman filter

Here the process is non-linear: either the measurement equation or the model equation is non-linear. In this case, the equations become:

$$X_{k+1} = f_k(X_k) + B_k u_k + \Gamma_k v_k \tag{13}$$

$$Z_k = g_k(X_k) + D_k u_k + w_k \tag{14}$$

Both equations are linearized:

$$f_k(X_k) \approx f_k(X_{k_0}) + \left. \frac{\partial f_k(X)}{\partial X} \right|_{X = X_{k_0}} (X_k - X_{k_0}) \tag{15}$$

The new predicted state is:

$$\begin{aligned}
X_{k+1} &\approx f_k(\hat{X}_{k/k}) + \left. \frac{\partial f_k(X)}{\partial X} \right|_{X = \hat{X}_{k/k}} (X_k - \hat{X}_{k/k}) + B_k u_k + \Gamma_k v_k \qquad (16) \\
&= A_k X_k + f_k(\hat{X}_{k/k}) - A_k \hat{X}_{k/k} + B_k u_k + \Gamma_k v_k \qquad (17)
\end{aligned}$$

with $A_k = \left. \frac{\partial f_k(X)}{\partial X} \right|_{X = \hat{X}_{k/k}}$.

Doing the same with the measurement equation:

$$Z_k \approx C_k X_k + g_k(\hat{X}_{k/k-1}) - C_k \hat{X}_{k/k-1} + D_k u_k + w_k \tag{18}$$

where $C_k = \left. \frac{\partial g_k(X)}{\partial X} \right|_{X = \hat{X}_{k/k-1}}$.

To use the filter, we have to do:

1. Initialization: $\hat{X}_{0/0}$ and $\hat{P}_{0/0}$

2. Prediction:

• $\hat{X}_{k/k-1} = f_{k-1}(\hat{X}_{k-1/k-1}) + B_{k-1} u_{k-1}$
• $\hat{Z}_{k/k-1} = g_k(\hat{X}_{k/k-1}) + D_k u_k$
• $\hat{P}_{k/k-1} = A_{k-1} \hat{P}_{k-1/k-1} A_{k-1}^T + \Gamma_{k-1} Q_{k-1} \Gamma_{k-1}^T$
• $A_k = \left. \frac{\partial f_k(X)}{\partial X} \right|_{X = \hat{X}_{k/k}}$, $C_k = \left. \frac{\partial g_k(X)}{\partial X} \right|_{X = \hat{X}_{k/k-1}}$

3. Innovation:

• $\tilde{Z}_{k/k-1} = Z_k - \hat{Z}_{k/k-1}$
• $\hat{\Sigma}_{k/k-1} = C_k \hat{P}_{k/k-1} C_k^T + R_k$

4. Filtering:

• $K_k = \hat{P}_{k/k-1} C_k^T \hat{\Sigma}_{k/k-1}^{-1}$
• $\hat{X}_{k/k} = \hat{X}_{k/k-1} + K_k \tilde{Z}_{k/k-1}$
• $\hat{P}_{k/k} = (I - K_k C_k) \hat{P}_{k/k-1}$

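A minimal EKF sketch following the steps above; the non-linear $f$, $g$ and their Jacobians are hypothetical examples, and the inputs are omitted for brevity:

```python
def f(X):
    """Hypothetical non-linear state transition f_k."""
    return np.array([X[0] + np.sin(X[1]), 0.9 * X[1]])

def g(X):
    """Hypothetical non-linear measurement function g_k."""
    return np.array([X[0] ** 2])

def jac_f(X):
    """A_k: Jacobian of f, evaluated at X^_{k/k}."""
    return np.array([[1.0, np.cos(X[1])],
                     [0.0, 0.9]])

def jac_g(X):
    """C_k: Jacobian of g, evaluated at X^_{k/k-1}."""
    return np.array([[2.0 * X[0], 0.0]])

def ekf_step(X_filt, P_filt, Z):
    """One EKF cycle: linearize, predict, innovate, filter."""
    A_k = jac_f(X_filt)                                  # linearize around X^_{k/k}
    X_pred = f(X_filt)                                   # prediction
    P_pred = A_k @ P_filt @ A_k.T + Gamma @ Q @ Gamma.T
    C_k = jac_g(X_pred)                                  # linearize around X^_{k/k-1}
    innovation = Z - g(X_pred)                           # innovation
    S = C_k @ P_pred @ C_k.T + R
    K = P_pred @ C_k.T @ np.linalg.inv(S)                # filtering
    X_new = X_pred + K @ innovation
    P_new = (np.eye(2) - K @ C_k) @ P_pred
    return X_new, P_new
```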

6  Unscented Kalman filter

The unscented Kalman filter is a different way to extend the Kalman filter to non-linear cases. The main idea behind the unscented transformation is that it is easier to approximate the mean and the covariance matrix after a non-linear transformation than to approximate the non-linear function itself. The unscented transformation works as follows: given the mean $\bar{x}$ and the covariance matrix $P_x$ of a random variable $x$, we want to estimate the mean $\bar{y}$ and the covariance matrix $P_y$ after the transformation $y = f(x)$. From $\bar{x}$ and $P_x$, a set of $2n + 1$ points $X_i$ (the sigma points) is generated, with weights $W_i$:

$$\begin{aligned}
X_0 &= \bar{x}, & W_0 &= \kappa / (n + \kappa) \\
X_i &= \bar{x} + \left( \sqrt{(n + \kappa) P_x} \right)_i, & W_i &= 1 / 2(n + \kappa) \\
X_{i+n} &= \bar{x} - \left( \sqrt{(n + \kappa) P_x} \right)_i, & W_{i+n} &= 1 / 2(n + \kappa)
\end{aligned} \tag{19}$$

where $n$ is the dimension of $x$, $\left( \sqrt{(n + \kappa) P_x} \right)_i$ is the $i$-th row or column of a matrix square root, and $\kappa$ is a number to set; some experiments show that $\kappa$ can be chosen such that $n + \kappa = 3$. We can easily verify that $\sum_{i=0}^{2n} W_i = 1$.

Using the $X_i$, the points $Y_i$ are generated by the transformation $Y_i = f(X_i)$. Finally, $\bar{y}$ and $P_y$ are given by:

$$\bar{y} = \sum_i W_i Y_i \tag{20}$$

$$P_y = \sum_i W_i (Y_i - \bar{y})(Y_i - \bar{y})^T \tag{21}$$

The application is straightforward. For example, using Eq. (13), we can estimate $\hat{X}_{k+1/k}$ from $\hat{X}_{k/k}$ and Eq. (4).
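A sketch of the unscented transformation (19)-(21); using the Cholesky factor as the matrix square root is one common choice, and the function below is illustrative rather than a reference implementation:

```python
def unscented_transform(x_mean, P_x, f, kappa=None):
    """Propagate a mean and covariance through f via equations (19)-(21)."""
    n = len(x_mean)
    if kappa is None:
        kappa = 3.0 - n                              # heuristic n + kappa = 3
    S = np.linalg.cholesky((n + kappa) * P_x)        # one choice of matrix square root
    sigma = [x_mean] \
          + [x_mean + S[:, i] for i in range(n)] \
          + [x_mean - S[:, i] for i in range(n)]     # 2n + 1 sigma points, eq. (19)
    W = np.array([kappa / (n + kappa)] + [0.5 / (n + kappa)] * (2 * n))
    Y = np.array([f(s) for s in sigma])              # Y_i = f(X_i)
    y_mean = W @ Y                                   # equation (20)
    P_y = sum(w * np.outer(y - y_mean, y - y_mean)
              for w, y in zip(W, Y))                 # equation (21)
    return y_mean, P_y
```

In the filter itself, $\hat{X}_{k/k}$ and $\hat{P}_{k/k}$ would be propagated through $f_k$ this way to obtain $\hat{X}_{k+1/k}$ and $\hat{P}_{k+1/k}$ without computing any Jacobian.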

7  Obtaining the matrix $K_k$

The matrix $K_k$ has to minimize the covariance $\hat{P}_{k/k}$. This is achieved by introducing the matrix trace:

$$\mathrm{Tr}(C) = \sum_{i=1}^{n} c(i, i) \tag{22}$$

where $C$ is an $n \times n$ matrix with elements $c(i, j)$. We define the scalar product of two matrices $A$ and $B$, written $\langle A, B \rangle$:

$$\langle A, B \rangle = \sum_{i=1}^{n} \sum_{j=1}^{p} a(i, j) \, b(i, j) \tag{23}$$

The scalar product $\langle A, B \rangle$ has a direct relation with the matrix trace:

$$\langle A, B \rangle = \mathrm{Tr}(A B^T) = \mathrm{Tr}(B A^T) = \mathrm{Tr}(A^T B) = \mathrm{Tr}(B^T A) \tag{24}$$

$$\langle C, I \rangle = \mathrm{Tr}(C) = \langle C^T, I \rangle \tag{25}$$

The gain $K_k$ should minimize $\hat{P}_{k/k}$ and, consequently, it should also minimize $\mathrm{Tr}(\hat{P}_{k/k})$. The expression of $\hat{P}_{k/k}$ is:

$$\begin{aligned}
\hat{P}_{k/k} &= (I - K_k C_k) \hat{P}_{k/k-1} (I - K_k C_k)^T + K_k R_k K_k^T \\
&= \hat{P}_{k/k-1} - K_k C_k \hat{P}_{k/k-1} - \hat{P}_{k/k-1} (K_k C_k)^T + (K_k C_k) \hat{P}_{k/k-1} (K_k C_k)^T + K_k R_k K_k^T \\
&= \hat{P}_{k/k-1} - K_k C_k \hat{P}_{k/k-1} - (K_k C_k \hat{P}_{k/k-1})^T + (K_k C_k) \hat{P}_{k/k-1} (K_k C_k)^T + K_k R_k K_k^T \\
&= \hat{P}_{k/k-1} - K_k C_k \hat{P}_{k/k-1} - (K_k C_k \hat{P}_{k/k-1})^T + K_k (C_k \hat{P}_{k/k-1} C_k^T + R_k) K_k^T \\
&= \hat{P}_{k/k-1} - K_k C_k \hat{P}_{k/k-1} - (K_k C_k \hat{P}_{k/k-1})^T + K_k \hat{\Sigma}_{k/k-1} K_k^T
\end{aligned}$$

Now we can compute $\mathrm{Tr}(\hat{P}_{k/k})$:

$$\begin{aligned}
\mathrm{Tr}(\hat{P}_{k/k}) &= \langle \hat{P}_{k/k}, I \rangle \\
&= \langle \hat{P}_{k/k-1}, I \rangle - \langle K_k C_k \hat{P}_{k/k-1}, I \rangle - \langle (K_k C_k \hat{P}_{k/k-1})^T, I \rangle + \langle K_k \hat{\Sigma}_{k/k-1} K_k^T, I \rangle \\
&= \langle \hat{P}_{k/k-1}, I \rangle - 2 \langle K_k C_k \hat{P}_{k/k-1}, I \rangle + \langle K_k \hat{\Sigma}_{k/k-1} K_k^T, I \rangle
\end{aligned}$$

Moreover, using (24) and (25), we have:

$$\langle K_k \hat{\Sigma}_{k/k-1} K_k^T, I \rangle = \langle K_k \hat{\Sigma}_{k/k-1}, K_k \rangle$$

and

$$\langle K_k C_k \hat{P}_{k/k-1}, I \rangle = \langle (K_k C_k \hat{P}_{k/k-1})^T, I \rangle = \langle (C_k \hat{P}_{k/k-1})^T K_k^T, I \rangle = \langle (C_k \hat{P}_{k/k-1})^T, K_k \rangle$$

The expression of $\mathrm{Tr}(\hat{P}_{k/k})$ becomes:

$$\mathrm{Tr}(\hat{P}_{k/k}) = \langle K_k \hat{\Sigma}_{k/k-1}, K_k \rangle - 2 \langle (C_k \hat{P}_{k/k-1})^T, K_k \rangle + \langle \hat{P}_{k/k-1}, I \rangle \tag{26}$$

$K_k^*$ should minimize $\mathrm{Tr}(\hat{P}_{k/k})$, i.e.:

$$\begin{aligned}
K_k^* &= \arg\min_{K_k} \left( \langle K_k \hat{\Sigma}_{k/k-1}, K_k \rangle - 2 \langle (C_k \hat{P}_{k/k-1})^T, K_k \rangle + \langle \hat{P}_{k/k-1}, I \rangle \right) \\
&= \arg\min_{K_k} \left( \langle K_k \hat{\Sigma}_{k/k-1}, K_k \rangle - 2 \langle (C_k \hat{P}_{k/k-1})^T, K_k \rangle \right) \\
&= \arg\min_{K_k} F(K_k)
\end{aligned}$$

The function $F(K_k)$ is quadratic. Its gradient $\nabla F(K_k)$ is defined through the directional derivative:

$$\lim_{\lambda \to 0} \frac{F(K_k + \lambda \, \delta K_k) - F(K_k)}{\lambda} = \langle \nabla F(K_k), \delta K_k \rangle \tag{27-28}$$

Now we can compute $F(K_k + \lambda \, \delta K_k)$:

$$\begin{aligned}
F(K_k + \lambda \, \delta K_k) &= \langle (K_k + \lambda \, \delta K_k) \hat{\Sigma}_{k/k-1}, K_k + \lambda \, \delta K_k \rangle - 2 \langle (C_k \hat{P}_{k/k-1})^T, K_k + \lambda \, \delta K_k \rangle \\
&= \langle K_k \hat{\Sigma}_{k/k-1}, K_k \rangle + \lambda \langle K_k \hat{\Sigma}_{k/k-1}, \delta K_k \rangle + \lambda \langle \delta K_k \hat{\Sigma}_{k/k-1}, K_k \rangle + \lambda^2 \langle \delta K_k \hat{\Sigma}_{k/k-1}, \delta K_k \rangle \\
&\qquad - 2 \langle (C_k \hat{P}_{k/k-1})^T, K_k \rangle - 2 \lambda \langle (C_k \hat{P}_{k/k-1})^T, \delta K_k \rangle \\
&= \langle K_k \hat{\Sigma}_{k/k-1}, K_k \rangle + 2 \lambda \langle K_k \hat{\Sigma}_{k/k-1}, \delta K_k \rangle + \lambda^2 \langle \delta K_k \hat{\Sigma}_{k/k-1}, \delta K_k \rangle \\
&\qquad - 2 \langle (C_k \hat{P}_{k/k-1})^T, K_k \rangle - 2 \lambda \langle (C_k \hat{P}_{k/k-1})^T, \delta K_k \rangle
\end{aligned}$$

where the middle step uses the symmetry of $\hat{\Sigma}_{k/k-1}$, so that $\langle \delta K_k \hat{\Sigma}_{k/k-1}, K_k \rangle = \langle K_k \hat{\Sigma}_{k/k-1}, \delta K_k \rangle$. Hence:

$$F(K_k + \lambda \, \delta K_k) - F(K_k) = 2 \lambda \langle K_k \hat{\Sigma}_{k/k-1}, \delta K_k \rangle + \lambda^2 \langle \delta K_k \hat{\Sigma}_{k/k-1}, \delta K_k \rangle - 2 \lambda \langle (C_k \hat{P}_{k/k-1})^T, \delta K_k \rangle$$

Finally, we have:

$$\frac{F(K_k + \lambda \, \delta K_k) - F(K_k)}{\lambda} = 2 \langle K_k \hat{\Sigma}_{k/k-1}, \delta K_k \rangle + \lambda \langle \delta K_k \hat{\Sigma}_{k/k-1}, \delta K_k \rangle - 2 \langle (C_k \hat{P}_{k/k-1})^T, \delta K_k \rangle \tag{29}$$

When $\lambda$ tends to 0, the expression becomes:

$$\lim_{\lambda \to 0} \frac{F(K_k + \lambda \, \delta K_k) - F(K_k)}{\lambda} = 2 \langle K_k \hat{\Sigma}_{k/k-1}, \delta K_k \rangle - 2 \langle (C_k \hat{P}_{k/k-1})^T, \delta K_k \rangle \tag{30}$$

The condition to reach the optimum is $\langle \nabla F(K_k), \delta K_k \rangle = 0$ for every direction $\delta K_k$; consequently:

$$2 \langle K_k \hat{\Sigma}_{k/k-1}, \delta K_k \rangle - 2 \langle (C_k \hat{P}_{k/k-1})^T, \delta K_k \rangle = 0$$

$$\langle K_k \hat{\Sigma}_{k/k-1}, \delta K_k \rangle = \langle (C_k \hat{P}_{k/k-1})^T, \delta K_k \rangle$$

$$K_k \hat{\Sigma}_{k/k-1} = (C_k \hat{P}_{k/k-1})^T$$

The value $K_k^*$ solving this equation is:

$$K_k^* = (C_k \hat{P}_{k/k-1})^T \hat{\Sigma}_{k/k-1}^{-1} = \hat{P}_{k/k-1} C_k^T \hat{\Sigma}_{k/k-1}^{-1}$$

By substituting $K_k^*$ into the filtered covariance matrix expression $\hat{P}_{k/k}$, we recover:

$$\hat{P}_{k/k} = (I - K_k C_k) \hat{P}_{k/k-1} \tag{31}$$
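As a final hedged check in code: since $F$ is a strictly convex quadratic in $K_k$ (with $\hat{\Sigma}_{k/k-1}$ and $R_k$ positive definite), any perturbation of the optimal gain should increase $\mathrm{Tr}(\hat{P}_{k/k})$. Reusing the hypothetical matrices from the earlier sketches:

```python
def trace_P(K):
    """Tr of the Joseph form (9) as a function of the gain K."""
    IKC = np.eye(2) - K @ C
    return np.trace(IKC @ P_pred @ IKC.T + K @ R @ K.T)

K_opt = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R)   # equation (10)
rng_check = np.random.default_rng(1)
for _ in range(5):
    dK = 0.01 * rng_check.standard_normal(K_opt.shape)       # random direction
    assert trace_P(K_opt + dK) >= trace_P(K_opt)             # K* minimizes the trace
```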
