EKF (transcribed from Wikipedia)


Extended Kalman filter

In estimation theory, the extended Kalman filter (EKF) is the nonlinear version of the Kalman filter which linearizes about an estimate of the current mean and covariance. In the case of well-defined transition models, the EKF has been considered[1] the de facto standard in the theory of nonlinear state estimation, navigation systems, and GPS.[2]

History

The papers establishing the mathematical foundations of Kalman-type filters were published between 1959 and 1961.[3][4][5] The Kalman filter is the optimal linear estimator for linear system models with additive independent white noise in both the transition and the measurement systems. Unfortunately, in engineering, most systems are nonlinear, so attempts were made to apply this filtering method to nonlinear systems; most of this work was done at NASA Ames.[6][7] The EKF adapted techniques from calculus, namely multivariate Taylor series expansions, to linearize a model about a working point. If the system model (as described below) is not well known or is inaccurate, then Monte Carlo methods, especially particle filters, are employed for estimation. Monte Carlo techniques predate the existence of the EKF but are more computationally expensive for any moderately dimensioned state space.

Formulation

In the extended Kalman filter, the state transition and observation models don’t need to be linear functions of the state but may instead be differentiable functions.

$$\boldsymbol{x}_k = f(\boldsymbol{x}_{k-1}, \boldsymbol{u}_k) + \boldsymbol{w}_k$$
$$\boldsymbol{z}_k = h(\boldsymbol{x}_k) + \boldsymbol{v}_k$$
Here $\boldsymbol{w}_k$ and $\boldsymbol{v}_k$ are the process and observation noises, both assumed to be zero-mean multivariate Gaussian with covariances $\boldsymbol{Q}_k$ and $\boldsymbol{R}_k$ respectively, and $\boldsymbol{u}_k$ is the control vector.
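
For concreteness, here is a minimal hypothetical example of such a model: a planar unicycle robot whose state is position and heading, driven by speed and turn-rate inputs, observed by a range-bearing sensor at the origin. The state layout, time step, and function names are illustrative assumptions, not part of the article.

```python
import numpy as np

def f(x, u, dt=0.1):
    """Hypothetical state transition: unicycle model.
    x = [px, py, theta]; u = [v, omega] (speed, turn rate)."""
    px, py, theta = x
    v, omega = u
    return np.array([px + v * np.cos(theta) * dt,
                     py + v * np.sin(theta) * dt,
                     theta + omega * dt])

def h(x):
    """Hypothetical observation: range and bearing to the origin."""
    px, py, theta = x
    return np.array([np.hypot(px, py),
                     np.arctan2(py, px) - theta])
```

Both functions are nonlinear in the state, so a plain Kalman filter does not apply, but both are differentiable, which is all the EKF requires.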

The function $f$ can be used to compute the predicted state from the previous estimate and, similarly, the function $h$ can be used to compute the predicted measurement from the predicted state. However, $f$ and $h$ cannot be applied to the covariance directly. Instead, a matrix of partial derivatives (the Jacobian) is computed.

At each time step, the Jacobian is evaluated at the current predicted state. These matrices can then be used in the Kalman filter equations. This process essentially linearizes the nonlinear function around the current estimate.
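
When analytic Jacobians are tedious to derive, they are often approximated numerically. A minimal central-difference sketch follows; the helper name and step size are assumptions, not from the article, and `x` is expected to be a NumPy array.

```python
import numpy as np

def numerical_jacobian(func, x, eps=1e-6):
    """Approximate the Jacobian of func at x by central differences."""
    fx = np.asarray(func(x))          # evaluate once to get the output size
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x, dtype=float)
        dx[i] = eps
        J[:, i] = (np.asarray(func(x + dx)) -
                   np.asarray(func(x - dx))) / (2 * eps)
    return J
```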

See the Kalman Filter article for notational remarks.

Discrete-time predict and update equations

The notation $\hat{\mathbf{x}}_{n\mid m}$ represents the estimate of $\mathbf{x}$ at time $n$ given observations up to and including time $m \le n$.

Predict
Predicted state estimate
$$\hat{\boldsymbol{x}}_{k\mid k-1} = f(\hat{\boldsymbol{x}}_{k-1\mid k-1}, \boldsymbol{u}_k)$$
Predicted covariance estimate
$$\boldsymbol{P}_{k\mid k-1} = \boldsymbol{F}_k \boldsymbol{P}_{k-1\mid k-1} \boldsymbol{F}_k^\top + \boldsymbol{Q}_k$$
Update
Innovation or measurement residual
$$\tilde{\boldsymbol{y}}_k = \boldsymbol{z}_k - h(\hat{\boldsymbol{x}}_{k\mid k-1})$$
Innovation (or residual) covariance
$$\boldsymbol{S}_k = \boldsymbol{H}_k \boldsymbol{P}_{k\mid k-1} \boldsymbol{H}_k^\top + \boldsymbol{R}_k$$
Near-optimal Kalman gain
$$\boldsymbol{K}_k = \boldsymbol{P}_{k\mid k-1} \boldsymbol{H}_k^\top \boldsymbol{S}_k^{-1}$$
Updated state estimate
$$\hat{\boldsymbol{x}}_{k\mid k} = \hat{\boldsymbol{x}}_{k\mid k-1} + \boldsymbol{K}_k \tilde{\boldsymbol{y}}_k$$
Updated covariance estimate
$$\boldsymbol{P}_{k\mid k} = (\boldsymbol{I} - \boldsymbol{K}_k \boldsymbol{H}_k) \boldsymbol{P}_{k\mid k-1}$$
where the state transition and observation matrices are defined to be the following Jacobians

$$\boldsymbol{F}_k = \left.\frac{\partial f}{\partial \boldsymbol{x}}\right|_{\hat{\boldsymbol{x}}_{k-1\mid k-1},\,\boldsymbol{u}_k} \qquad \boldsymbol{H}_k = \left.\frac{\partial h}{\partial \boldsymbol{x}}\right|_{\hat{\boldsymbol{x}}_{k\mid k-1}}$$
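
Putting the predict and update equations together, the following is a minimal NumPy sketch of one EKF cycle. Every name here (`ekf_step`, `F_jac`, `H_jac`, and so on) is an illustrative assumption rather than a fixed API.

```python
import numpy as np

def ekf_step(x_hat, P, u, z, f, h, F_jac, H_jac, Q, R):
    """One EKF cycle: predict with f and F_k, update with h and H_k."""
    # --- Predict ---
    F = F_jac(x_hat, u)                   # F_k, Jacobian of f at previous estimate
    x_pred = f(x_hat, u)                  # x_{k|k-1}
    P_pred = F @ P @ F.T + Q              # P_{k|k-1}
    # --- Update ---
    H = H_jac(x_pred)                     # H_k, Jacobian of h at predicted state
    y = z - h(x_pred)                     # innovation
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = np.linalg.solve(S, H @ P_pred).T  # K_k = P_{k|k-1} H^T S^{-1}
    x_new = x_pred + K @ y                # x_{k|k}
    P_new = (np.eye(len(x_hat)) - K @ H) @ P_pred  # P_{k|k}
    return x_new, P_new
```

With the hypothetical `f`, `h`, and `numerical_jacobian` sketches above, the Jacobian callbacks could be supplied as `F_jac = lambda x, u: numerical_jacobian(lambda s: f(s, u), x)` and `H_jac = lambda x: numerical_jacobian(h, x)`. The gain is computed with a linear solve rather than an explicit matrix inverse, which is the numerically safer choice.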

Disadvantages

Unlike its linear counterpart, the extended Kalman filter in general is not an optimal estimator (it is optimal only if the measurement and state-transition models are both linear, in which case the extended Kalman filter is identical to the regular one). In addition, if the initial estimate of the state is wrong, or if the process is modeled incorrectly, the filter may quickly diverge owing to its linearization. Another problem is that the estimated covariance matrix tends to underestimate the true covariance matrix and therefore risks becoming inconsistent in the statistical sense unless “stabilising noise” is added.[8]
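
A minimal sketch of such stabilising noise, reusing the names from the EKF sketch above: an extra diagonal term is added to the process noise so the predicted covariance cannot collapse. The magnitude (1e-4 here) is a hand-tuned assumption.

```python
# Add illustrative "stabilising noise" to the covariance prediction step.
Q_stab = 1e-4 * np.eye(Q.shape[0])
P_pred = F @ P @ F.T + Q + Q_stab
```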

That said, the extended Kalman filter can give reasonable performance and is arguably the de facto standard in navigation systems and GPS.

References

https://en.wikipedia.org/wiki/Extended_Kalman_filter
[1]: Julier, S.J.; Uhlmann, J.K. (2004). “Unscented filtering and nonlinear estimation” (PDF). Proceedings of the IEEE. 92 (3): 401–422. doi:10.1109/jproc.2003.823141. S2CID 9614092.
[2]: Courses, E.; Surveys, T. (2006). Sigma-Point Filters: An Overview with Applications to Integrated Navigation and Vision Assisted Control. Nonlinear Statistical Signal Processing Workshop, 2006 IEEE. pp. 201–202. doi:10.1109/NSSPW.2006.4378854. ISBN 978-1-4244-0579-4. S2CID 18535558.
[3]: R.E. Kalman (1960). “Contributions to the theory of optimal control”. Bol. Soc. Mat. Mexicana: 102–119. CiteSeerX 10.1.1.26.4070.
[4]: R.E. Kalman (1960). “A New Approach to Linear Filtering and Prediction Problems” (PDF). Journal of Basic Engineering. 82: 35–45. doi:10.1115/1.3662552.
[5]: R.E. Kalman; R.S. Bucy (1961). “New results in linear filtering and prediction theory” (PDF). Journal of Basic Engineering. 83: 95–108. doi:10.1115/1.3658902.
[6]: Bruce A. McElhoe (1966). “An Assessment of the Navigation and Course Corrections for a Manned Flyby of Mars or Venus”. IEEE Transactions on Aerospace and Electronic Systems. 2 (4): 613–623. Bibcode:1966ITAES...2..613M. doi:10.1109/TAES.1966.4501892. S2CID 51649221.
[7]: G.L. Smith; S.F. Schmidt and L.A. McGee (1962). “Application of statistical filter theory to the optimal estimation of position and velocity on board a circumlunar vehicle”. National Aeronautics and Space Administration.
[8]: Huang, Guoquan P; Mourikis, Anastasios I; Roumeliotis, Stergios I (2008). “Analysis and improvement of the consistency of extended Kalman filter based SLAM”. Robotics and Automation, 2008. ICRA 2008. IEEE International Conference on. pp. 473–479.
