In my last post on the extended Kalman filter derivation, I talked about the linearization step: how linearization is performed so that the linear Kalman filter can be used to approximate the nonlinearity. I felt it might help a little to demonstrate what the linearization error can do, and how a bad linearization point can affect our estimates.

## Taylor Series

So, we've talked about how the linearization is done using a Taylor series. There are many other series expansions in mathematics, but the Taylor series is surely one of the most popular. As a reminder, the Taylor series expansion of a function $f$ about a point $a$ looks like this:

$$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}\,(x - a)^n$$

Well, let's first see if this holds true (and it should). Here's an example:

Since I cannot take an infinite number of derivatives myself, I took the liberty of keeping only up to the 4th-order terms.

Let’s evaluate this function at as well as take the Taylor series about :

Well, this is disappointing. That's quite a difference. What if I move the linearization point closer? It should help, because the dominant term becomes closer to the real value. If I take instead of , the 4th-order sum becomes 31.667438. If , it becomes 33.14626. If , it becomes 33.47607. So the sum does come closer as the linearization point, , comes closer to the true point, .
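The original function and evaluation points aren't shown above, but the convergence behavior is easy to reproduce with any smooth function. Here's a sketch using $f(x) = \ln(x)$ evaluated at $x = 2$ (my own hypothetical choice, not the post's original example):

```python
import math

def taylor4_log(x, a):
    # 4th-order Taylor expansion of ln(x) about the point a:
    # ln(a) + d/a - d^2/(2a^2) + d^3/(3a^3) - d^4/(4a^4), where d = x - a
    d = x - a
    return (math.log(a)
            + d / a
            - d**2 / (2 * a**2)
            + d**3 / (3 * a**3)
            - d**4 / (4 * a**4))

x = 2.0
for a in (1.0, 1.5, 1.9):
    print(f"a = {a}: approx = {taylor4_log(x, a):.6f}, true = {math.log(x):.6f}")
```

As the expansion point `a` moves toward `x`, the truncated sum approaches the true value, mirroring the progression of numbers in the text.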

But it doesn't convince us yet, does it? I want to know whether the Taylor series really holds. Let's take a conveniently simple function, $f(x) = e^x$,

whose derivative is also itself.

Same as above, I'm evaluating using a linearization point at . Of course, I cannot compute infinitely many terms, so I summed the first 100 terms. Writing a small Python script, I get the following:
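A sketch of such a script (assuming the expansion is about $a = 0$, where the series of $e^x$ is simply $\sum_n x^n / n!$; the post's original script and expansion point are not shown):

```python
import math

x = 2.0        # evaluate e^x at x = 2
n_terms = 100  # truncate the infinite series at 100 terms

# Taylor series of e^x about 0: sum of x^n / n!
approx = sum(x**n / math.factorial(n) for n in range(n_terms))

print(f"Real: {math.exp(x)} vs. linearized: {approx}")
```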

Real: 7.38905609893065 vs. linearized: 7.389056098930649

which is pretty good. It is only one sample function, but it does show that the Taylor series works, which is good to know 🙂

Let's move on. The Taylor series works, but in the EKF we take only the first-order approximation. Let's visualize how that affects the result.

## One-dimensional Linearization Example

I made a 1D example here. What I'm showing may be obvious and trivial, but I thought it would help to visualize it.

The plot above shows what happens when the linearization approximation is applied. The quadratic function is linearized about . Two points are evaluated: and . You can see how the first-order approximation has errors, and they grow as the points get further from the linearization point. The approximation does a simple thing: to the function value at the linearization point, it adds the slope times the distance from that point, $f'(a)(x - a)$, where the slope is the derivative evaluated at the linearization point.
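Since the plot itself isn't reproduced here, the error growth can be shown numerically. This sketch uses a hypothetical quadratic $f(x) = x^2$ linearized at $a = 1$ (the post's actual function and evaluation points are not shown):

```python
def f(x):
    # hypothetical quadratic
    return x**2

def f_lin(x, a=1.0):
    # first-order Taylor approximation: f(a) + f'(a) * (x - a), with f'(x) = 2x
    return f(a) + 2 * a * (x - a)

# the error grows as we move away from the linearization point a = 1
for x in (1.5, 3.0):
    print(f"x = {x}: true = {f(x)}, linearized = {f_lin(x)}, error = {f(x) - f_lin(x)}")
```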

## Two-dimensional Example

Let's also see how the covariance matrix changes when linearization happens. Again, the first-order Taylor series is an approximation of the nonlinear function , and it will never be exact. To minimize the linearization error, we want the linearization point to be very close to the true value.
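Written out, the first-order expansion of a vector function about the current estimate takes the standard form (the post's own equation is not shown, so this is the textbook version):

$$
f(\mathbf{x}) \approx f(\hat{\mathbf{x}}) + F(\hat{\mathbf{x}})\,(\mathbf{x} - \hat{\mathbf{x}}),
\qquad
F(\hat{\mathbf{x}}) = \left.\frac{\partial f}{\partial \mathbf{x}}\right|_{\mathbf{x} = \hat{\mathbf{x}}}
$$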

In the context of the extended Kalman filter, the linearization point is the current estimate. Let's look at the following example:

where the derivatives are:

Let’s say the current covariance matrix is:

and assume the process noise does not exist (). According to the EKF equation, the predicted covariance matrix becomes $P^- = F P F^\top$, where $F$ is the Jacobian evaluated at the current estimate. Let's also assume the current state to be: . Now, when we carry out the calculation, we get:
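The original function, Jacobian, and covariance values aren't reproduced above, so here is a sketch of the prediction step $P^- = F P F^\top$ using a hypothetical nonlinear function and made-up numbers (not the post's actual example):

```python
import numpy as np

def jacobian(state):
    # Jacobian of a hypothetical polar-to-Cartesian-style map
    # f(x, y) = (x * cos(y), x * sin(y))
    x, y = state
    return np.array([[np.cos(y), -x * np.sin(y)],
                     [np.sin(y),  x * np.cos(y)]])

x_hat = np.array([1.0, 0.5])   # current state estimate (assumed)
P = np.array([[0.1, 0.0],      # current covariance (assumed)
              [0.0, 0.2]])

F = jacobian(x_hat)
P_pred = F @ P @ F.T           # EKF covariance prediction with Q = 0

print(P_pred)
```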

Now, let's visualize what the true distribution would actually look like. For this, I drew 500 samples from the original covariance matrix (). Each point is propagated using the equations above to get the samples at time .
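A sketch of this Monte Carlo comparison, again with a hypothetical nonlinear transform and made-up numbers standing in for the post's actual equations:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(pts):
    # hypothetical nonlinear propagation: (x, y) -> (x cos y, x sin y)
    x, y = pts[:, 0], pts[:, 1]
    return np.column_stack([x * np.cos(y), x * np.sin(y)])

x_hat = np.array([1.0, 0.5])   # current state estimate (assumed)
P = np.array([[0.1, 0.0],      # current covariance (assumed)
              [0.0, 0.2]])

# draw 500 samples from the current estimate and push them through f
samples = rng.multivariate_normal(x_hat, P, size=500)
propagated = f(samples)

# "true" covariance measured directly from the propagated samples
P_true = np.cov(propagated, rowvar=False)

# linearized (EKF) prediction, for comparison
F = np.array([[np.cos(x_hat[1]), -x_hat[0] * np.sin(x_hat[1])],
              [np.sin(x_hat[1]),  x_hat[0] * np.cos(x_hat[1])]])
P_ekf = F @ P @ F.T

print("sample covariance:\n", P_true)
print("EKF covariance:\n", P_ekf)
```

The gap between `P_true` and `P_ekf` is exactly the linearization error the figure below illustrates.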

The figure above shows the 500 samples at two different times: at (in blue) and (in red). The ellipses represent the error bounds of the current estimates. Note that the blue ellipse captures the blue samples very well. These points are propagated to , and now the shape has drastically changed (in red). The red ellipse is the covariance measured directly from the 500 samples, which I consider the true covariance. Given how the points are distributed, a Gaussian is probably not the best representation, but the covariance value is true by definition. The green ellipse shows the error bound of the covariance predicted using the EKF equation above. It does a fair job of capturing the mean and most of the red samples, but it fails to capture the long spread of the samples.

Note that this isn't a case where the current estimate is far from the truth; it actually matches perfectly at time . However, given the nonlinearity of the function, the EKF's estimate doesn't perfectly capture the true distribution. The severity will depend on how nonlinear the function is, as well as how close the current estimate is to the truth.

Okay. So in this post, I tried to cover the consequences of linearization. The first-order approximation struggles to capture the true distribution depending on two things (among many others): 1) the nonlinearity of the function, and 2) how close the current estimate is to the true value. There are ways to handle 1), such as using different estimation methods or capturing higher-order terms. Better handling of the nonlinearity will reduce the inaccuracy/inconsistency of the filter and bring the current estimate closer to the truth, which also addresses 2). The bottom line is that if the function you're dealing with is too nonlinear, you'll need to use something better than the first-order approximation, or not approximate at all.

I hope this post is helpful to you. If you have any questions or comments, please leave them here! I'd love to know what your thoughts are.