Probabilistic Robotics Reading Note (1) - Bayes Filter



Introduction

This post is a reading note on the first four chapters of the book Probabilistic Robotics. It consists of two sections:

  1. Probability Review. This section briefly describes importance sampling. As we will see in later chapters, the importance sampling technique is used in the particle filter.
  2. Bayes Filter. The Bayes filter is the key to many probabilistic algorithms described in the book. There are two main approaches: (1) the parametric approach and (2) the non-parametric approach. In this post, we will mainly describe two classic Bayes filters: the Kalman filter and the particle filter.

Probability Review

Importance Sampling

Assume there is a random variable \(X\) and two probability distributions \(f\) and \(g\). The key idea of importance sampling is to calculate the expectation of the r.v. \(X\) under the distribution \(f\) while only drawing samples from the distribution \(g\).

$$ \begin{eqnarray} E_f[X] & = & \int x f(x) dx \\ & = & \int x \frac{f(x)}{g(x)} g(x) dx \\ & = & E_g[\frac{f(X)}{g(X)}X] \end{eqnarray} $$

The following condition needs to be satisfied in order for the above transformation to be correct:

$$ f(x) > 0 \Rightarrow g(x) > 0 $$

In some sense, \(f\) is the desired or target distribution and \(g\) is the distribution that is available when we do the calculation. The term \(\frac{f(x)}{g(x)}\), which is called the importance factor, is an adjustment for the fact that the same value \(x\) has different weights under the two distributions.

There is another view of this weight factor that is related to sampling. Suppose we have \(n\) samples of the random variable \(X\) under the distribution \(g\). Let us denote the samples by \(\{x_1, x_2, ..., x_n\}\). If we need to calculate the empirical expectation of \(X\) under \(g\), we would have

$$ E_g[X] = \frac{1}{n}\sum_i x_i $$

Now imagine there is another discrete distribution \(h\) which can only take values in \(\{x_1, x_2, ..., x_n\}\). In this case, the expectation of \(X\) under \(h\) is given by

$$ E_h[X] = \sum_i x_i p(x_i) $$

And in order to calculate an empirical expectation, we can generate samples from \(\{x_1, x_2, ..., x_n\}\) with \(x_i\) having probability \(p(x_i)\) of being chosen. Now if we set \(p(x_i) = \frac{1}{n}\frac{f(x_i)}{g(x_i)}\) (normalized so that the probabilities sum to one), then \(E_h[X]\) will be equal to the empirical expectation of \(X\) under the distribution \(f\).

In summary, if we have \(n\) samples of \(X\) under the distribution \(g\) and we re-sample them in a way that each \(x_i\) is chosen with probability proportional to \(\frac{f(x_i)}{g(x_i)}\), then the re-sampled set approximately follows \(f\).

Therefore, sampling \(X\) under the distribution \(g\) first and then re-sampling under \(h\) allows us to get an empirical estimate of \(E_f[X]\).
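To make this concrete, here is a minimal Python sketch; the choice of \(f\) and \(g\) is purely illustrative. It estimates \(E_f[X]\) from samples drawn under \(g\), both by direct weighting and by re-sampling.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Illustrative choice: target f = N(2, 1), proposal g = N(0, 2).
# Note that g(x) > 0 wherever f(x) > 0, as required.
n = 100_000
samples = rng.normal(loc=0.0, scale=2.0, size=n)  # draw from g

# Importance factors f(x_i) / g(x_i)
weights = (norm.pdf(samples, loc=2.0, scale=1.0)
           / norm.pdf(samples, loc=0.0, scale=2.0))

# Weighted estimate of E_f[X] = E_g[(f/g) X]
print(np.mean(weights * samples))   # ~ 2.0

# Equivalent view: re-sample each x_i with probability proportional to
# its weight; the re-sampled set approximately follows f.
resampled = rng.choice(samples, size=n, p=weights / weights.sum())
print(resampled.mean())             # ~ 2.0
```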

Bayes Filter

In the probabilistic robotics framework, the state of the robot is described by a probability distribution, and this distribution can change over time as more information becomes available (e.g., when more data is received from sensors).

The Bayes filter is a class of algorithms that models how this distribution of the robot state changes.

There are three main concepts in this section:

  1. State. The state is denoted by \(x_t\). It is never directly observable; the only information we have about the state is a probability distribution.
  2. Control data. The control data is denoted by \(u_t\). It represents the action applied to the robot between time \(t-1\) and time \(t\). When we apply an action to the robot, it usually changes the robot's state.
  3. Measurement data. The measurement from the sensors is denoted by \(z_t\). It is the source of new information and it changes our belief about the state variable.

Note: The belief about the state variable \(x_t\) means the probability distribution of \(x_t\).

General Model and Algorithm

Definition: The belief on the state variable is defined as

$$ bel(x_t) = p(x_t|z_{1:t}, u_{1:t}) $$

Definition: The prediction of the state variable is defined as

$$ \overline{bel}(x_t) = p(x_t|z_{1:t-1}, u_{1:t}) $$

The only difference between the prediction and the belief at time \(t\) is that belief takes the latest measurement \(z_t\) into account.

In the Bayes filter, we need to make two assumptions to simplify the math.

Assumption: State transition probability only depends on the previous state and the new action applied to it.

$$ p(x_t|x_{0:t-1}, z_{1:t-1}, u_{1:t}) = p(x_t|x_{t-1}, u_t) $$

Assumption: The measurement at time \(t\) only depends on the state at time \(t\).

$$ p(z_t|x_{0:t}, z_{1:t-1}, u_{1:t}) = p(z_t|x_t) $$

Now we are ready to present the Bayes filter algorithm:

Algo_Bayes_Filter(\(bel(x_{t-1})\), \(u_t\), \(z_t\)):
 for all \(x_t\) do
  \(\overline{bel}(x_t) = \int p(x_t|u_t, x_{t-1})bel(x_{t-1})dx_{t-1}\)
  \(bel(x_t) = \eta \cdot p(z_t|x_t)\overline{bel}(x_t)\)
 endFor
return \(bel(x_t)\)

The loop body has two steps. The first step is the prediction step: given the belief over the previous state and the latest action, we calculate the expected new distribution of the state. The second step is the correction step: in this step, we adjust our prediction with the newly available measurement data.
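As an illustration, here is a minimal Python sketch of the algorithm for a finite state space; transition_model and measurement_model are hypothetical callables standing in for \(p(x_t|u_t, x_{t-1})\) and \(p(z_t|x_t)\).

```python
import numpy as np

def bayes_filter(bel, u, z, transition_model, measurement_model, states):
    """One step of the discrete Bayes filter.

    bel: bel[i] is the probability of states[i] at time t-1
    transition_model(x_t, u, x_prev) -> p(x_t | u, x_prev)
    measurement_model(z, x_t)        -> p(z | x_t)
    """
    # Prediction step: bel_bar(x_t) = sum over x_prev of
    # p(x_t | u, x_prev) * bel(x_prev)
    bel_bar = np.array([
        sum(transition_model(x_t, u, x_prev) * bel[j]
            for j, x_prev in enumerate(states))
        for x_t in states
    ])
    # Correction step: bel(x_t) = eta * p(z | x_t) * bel_bar(x_t)
    bel_new = np.array([measurement_model(z, x_t) for x_t in states]) * bel_bar
    return bel_new / bel_new.sum()  # eta is just this normalizer
```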

Kalman Filter

The Kalman filter is a concrete "implementation" of the general Bayes filter algorithm. It specifies the form of the state transition probability and the measurement probability. More specifically, in the Kalman filter:

The state transition probability is modeled as

$$ x_t = A_tx_{t-1} + B_tu_t + \epsilon_t $$

where \(\epsilon_t\) is a zero-mean Gaussian noise term with covariance \(R_t\).

And the measurement probability is modeled as

$$ z_t = C_tx_t + \delta_t $$

where \(\delta_t\) is a zero-mean Gaussian noise term with covariance \(Q_t\).

We also need a prior belief distribution, which is defined as

$$ bel(x_0) = p(x_0) = \mathcal{N}(\mu_0, \Sigma_0) $$

These assumptions allow us to implement the general Bayes filter algorithm with closed-form formulas.
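Below is a minimal Python sketch of these closed-form prediction and correction steps, following the notation above, with \(R\) and \(Q\) the covariances of \(\epsilon_t\) and \(\delta_t\).

```python
import numpy as np

def kalman_filter(mu, Sigma, u, z, A, B, C, R, Q):
    """One step of the Kalman filter.

    mu, Sigma: mean and covariance of bel(x_{t-1})
    R: covariance of the motion noise epsilon_t
    Q: covariance of the measurement noise delta_t
    """
    # Prediction step: propagate the belief through the linear motion model
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + R

    # Correction step: the Kalman gain K weighs prediction vs. measurement
    K = Sigma_bar @ C.T @ np.linalg.inv(C @ Sigma_bar @ C.T + Q)
    mu_new = mu_bar + K @ (z - C @ mu_bar)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu_new, Sigma_new
```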

Extension: Non-linearity

As we can see, the Kalman filter makes linearity assumptions on the state transition probability and the measurement probability. However, most robotic systems in the real world exhibit non-linear behavior. In order to apply the Kalman filter to those problems, we need to "convert" the non-linear behavior to a linear one. A common approach is to use a Taylor expansion to obtain a local linear approximation of the non-linear function.
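As a small illustration of the idea (a sketch, not the book's algorithm), the Jacobian of a non-linear function can be approximated numerically; the extended Kalman filter uses such first-order Taylor expansions, evaluated at the current mean, in place of \(A_t\) and \(C_t\).

```python
import numpy as np

def numerical_jacobian(g, x, eps=1e-6):
    """Finite-difference Jacobian of g at x, i.e. the matrix G in the
    first-order Taylor expansion g(x + dx) ~= g(x) + G @ dx."""
    g0 = np.asarray(g(x))
    G = np.zeros((g0.size, x.size))
    for i in range(x.size):
        dx = np.zeros(x.size)
        dx[i] = eps
        G[:, i] = (np.asarray(g(x + dx)) - g0) / eps
    return G
```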

Particle Filter

The key idea of the particle filter is to represent the posterior \(bel(x_t)\) by a set of random state samples drawn from this posterior. Let \(M_t\) denote the collection of samples at time \(t\); the question is how we can update these samples after we apply actions to the robot and receive more sensor measurement data.

As usual, there are two steps in the algorithm. The prediction step becomes simple sampling: each sample is drawn from the state transition probability distribution. This gives us a collection of samples following \(\overline{bel}(x_t)\).

Let \(\overline{M}_t\) denote the collection of samples from the distribution \(\overline{bel}(x_t)\); the question is then how to construct \(M_t\) from \(\overline{M}_t\).

When we discussed importance sampling earlier, we had three different distributions:

  1. the desired but unknown distribution \(f\)
  2. the distribution \(g\) that is available for calculation
  3. the distribution \(h\), under which the expectation of \(X\) is equal to the empirical expectation of \(X\) under \(f\)

In the context of the particle filter, \(f\) is the belief distribution \(bel(x_t)\); \(g\) is the predicted distribution \(\overline{bel}(x_t)\); and the importance factor is the measurement probability \(p(z_t|x_t)\), which is an adjustment of the belief as the new measurement data \(z_t\) becomes available.

Note that the expectation under \(h\) is equal to the empirical expectation under \(f\). Therefore, we can use the empirical expectation under \(h\) as an approximation of the expectation under \(f\). To some extent, the distribution \(h\) is an approximation of the distribution \(f\). As the idea of the particle filter is to represent a distribution using samples, the remaining question is how to construct a collection of samples that follows the distribution \(h\).

As we have seen in the discussion of importance sampling, \(h\) is constructed by re-sampling the samples generated from the distribution \(g\), where the probability of being chosen in the re-sampling is proportional to the importance factor, which in the context of the particle filter is the measurement probability \(p(z_t|x_t)\).

Putting everything together, we get the full particle filter algorithm.
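The following is a minimal Python sketch of one step of this algorithm; sample_motion and measurement_prob are hypothetical callables standing in for \(p(x_t|u_t, x_{t-1})\) and \(p(z_t|x_t)\).

```python
import numpy as np

def particle_filter(particles, u, z, sample_motion, measurement_prob, rng):
    """One step of the particle filter.

    particles: array of state samples representing bel(x_{t-1})
    sample_motion(u, x_prev)  -> one sample from p(x_t | u, x_prev)
    measurement_prob(z, x_t)  -> p(z | x_t), the importance factor
    """
    # Prediction step: push every particle through the motion model.
    # The resulting set (M_bar_t) follows bel_bar(x_t).
    predicted = np.array([sample_motion(u, x) for x in particles])

    # Importance weights: one factor p(z | x_t) per particle.
    weights = np.array([measurement_prob(z, x) for x in predicted])
    weights /= weights.sum()

    # Re-sampling step: draw particles with probability proportional to
    # their weights, so the new set (M_t) approximately follows bel(x_t).
    idx = rng.choice(len(predicted), size=len(predicted), p=weights)
    return predicted[idx]
```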

----- END -----

If you have questions about this post, you can find me on Discord.
