
Self-driving car: Tracking cars with Extended and Unscented Kalman Filter

In the previous article, we discussed how the Kalman Filter works. The mathematical solution for the Kalman Filter assumes the states are Gaussian distributed, but this is not always true. In this article, we extend our method to the Extended Kalman Filter and the Unscented Kalman Filter, which produce more accurate results than the Kalman Filter when the dynamic model is not linear.

This is part of a five-article series on self-driving cars.

Extended Kalman Filter

Recall the observer model from the Kalman Filter article: we predict the state with a dynamic model and correct the prediction with noisy measurements.


This assumes our dynamic model is linear:

x_t = A x_{t-1} + B u_t + w_t
z_t = C x_t + v_t

where A, B and C are matrices and w and v are the process and measurement noise. If the initial state is Gaussian distributed, the prediction is also Gaussian distributed, so the Kalman Filter works nicely. However, many dynamic models are non-linear. Such a model is written as:

x_t = f(x_{t-1}, u_t) + w_t
z_t = h(x_t) + v_t

If f or h is not linear, will the output still be Gaussian distributed? Suppose the probability distribution of our state x is Gaussian. When we apply a non-linear function f to x, the probability distribution of f(x) is, in general, no longer Gaussian. Hence, if the function f is not linear, our output will not be Gaussian distributed.

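As a quick sanity check, here is a small numpy sketch (the non-linear function below is just an arbitrary example, not one from this series) that pushes Gaussian samples through a non-linear function and measures the skewness of the result. A Gaussian has zero skewness, so a clearly non-zero value means f(x) is no longer Gaussian.

```python
import numpy as np

# Draw samples from a Gaussian state x ~ N(0, 1).
rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=100_000)

# An arbitrary non-linear function, purely for illustration.
f = np.exp
y = f(x)

def skewness(s):
    return np.mean((s - s.mean()) ** 3) / s.std() ** 3

print(f"skewness of x:    {skewness(x):.2f}")   # close to 0 (Gaussian)
print(f"skewness of f(x): {skewness(y):.2f}")   # far from 0 (not Gaussian)
```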

However, if we limit the range of x, the function f can be approximated by a linear function.

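Concretely, for a scalar state this is just a first-order Taylor expansion around the current estimate (the symbol μ for that estimate is my notation):

f(x) ≈ f(μ) + f'(μ) (x − μ)

Because this approximation is linear in x, a Gaussian pushed through it stays Gaussian.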

If the function f is close to linear, or close to linear within our target range of x, we can use the Extended Kalman Filter in place of the Kalman Filter.

In the Extended Kalman Filter, we approximate the function f by linearizing it with its Jacobian matrix, the matrix of first-order partial derivatives of f with respect to the state variables.
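For a state x with n components and f = (f_1, …, f_n), the Jacobian F is the n × n matrix of first-order partial derivatives, evaluated at the current state estimate (this is the standard definition):

F_{ij} = ∂f_i / ∂x_j,   for i, j = 1, …, n

The same construction applied to the measurement function h gives the Jacobian H.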

The Extended Kalman Filter equations have the same form as the original Kalman Filter equations; the difference is that we replace the matrix A by the Jacobian F of f, and the matrix C by the Jacobian H of h, both evaluated at the current state estimate.

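For reference, one common way to write the resulting Extended Kalman Filter cycle (P, Q, R and K below denote the state covariance, the process and measurement noise covariances, and the Kalman gain; this notation is an assumption on my part):

Predict:
x_{t|t-1} = f(x_{t-1}, u_t)
P_{t|t-1} = F P_{t-1} F^T + Q

Update:
K = P_{t|t-1} H^T (H P_{t|t-1} H^T + R)^{-1}
x_t = x_{t|t-1} + K (z_t − h(x_{t|t-1}))
P_t = (I − K H) P_{t|t-1}

Note that the prediction itself still uses the non-linear f and h; only the covariance propagation and the Kalman gain use the Jacobians F and H.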

So instead of using f(x), we use its linearization around the current estimate, f'(x_i). The Gaussian distribution computed from this linearization approximates the true (non-Gaussian) distribution of f(x).

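Below is a minimal sketch of one EKF predict/update cycle in Python, assuming the caller supplies f, h and functions F_jac and H_jac that return their Jacobians (these names are placeholders, not from the original series):

```python
import numpy as np

def ekf_step(x, P, u, z, f, F_jac, h, H_jac, Q, R):
    """One Extended Kalman Filter predict/update cycle.

    x, P         : previous state estimate and covariance
    u, z         : control input and new measurement
    f, h         : non-linear process and measurement models
    F_jac, H_jac : functions returning the Jacobians of f and h
    Q, R         : process and measurement noise covariances
    """
    # Predict: the state goes through the non-linear model,
    # the covariance through the linearization.
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q

    # Update: Kalman gain from the linearized measurement model.
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)

    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```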

Unscented Kalman Filter

The Extended Kalman Filter handles cases where f is close to linear, using f'(x_i) to approximate f(x). However, this does not work well if f is far from linear. In that case, we can sample values of x and compute f(x) for each sample, then fit a Gaussian distribution to the sampled outputs to approximate the probability distribution of f(x). Since x is Gaussian distributed, instead of sampling x randomly, we can simply propagate a small set of predefined sigma points.

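A common choice for an n-dimensional Gaussian with mean μ and covariance Σ is the scaled sigma-point set (the scaling parameter λ and these formulas are the usual convention, not necessarily the one used in this series):

χ_0 = μ
χ_i = μ + (sqrt((n + λ) Σ))_i,    i = 1, …, n
χ_{n+i} = μ − (sqrt((n + λ) Σ))_i,    i = 1, …, n

where (sqrt((n + λ) Σ))_i is the i-th column of a matrix square root of (n + λ) Σ, giving 2n + 1 sigma points in total.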

Then we assign a weight to each propagated sigma point, reflecting how probable that point is under the distribution of x, and compute a Gaussian distribution from the weighted outputs: their weighted mean and covariance. This becomes our state prediction.

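Here is a minimal numpy sketch of this unscented transform, using the scaled sigma points above with their standard weights (the parameters alpha, beta and kappa are conventional defaults, not values from the article):

```python
import numpy as np

def unscented_transform(mu, Sigma, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate a Gaussian N(mu, Sigma) through a non-linear f using
    sigma points, and return the mean and covariance of the output.

    f must map an n-vector to a 1-D output vector.
    """
    n = len(mu)
    lam = alpha**2 * (n + kappa) - n

    # 2n + 1 sigma points: the mean plus/minus columns of a matrix square root.
    sqrt_S = np.linalg.cholesky((n + lam) * Sigma)
    points = np.vstack([mu, mu + sqrt_S.T, mu - sqrt_S.T])   # shape (2n+1, n)

    # Standard weights for the mean and covariance estimates.
    w_m = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    w_c = w_m.copy()
    w_m[0] = lam / (n + lam)
    w_c[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)

    # Push every sigma point through the non-linear function.
    y = np.array([f(p) for p in points])

    # Weighted mean and covariance of the propagated points.
    y_mean = w_m @ y
    diff = y - y_mean
    y_cov = (w_c[:, None] * diff).T @ diff
    return y_mean, y_cov
```

In a full UKF prediction step, the process noise covariance Q would then be added to the returned covariance.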
