# Probabilistic Robotics Reading Note - Perception

#### Introduction

There are two steps in the Bayes filter algorithm:

1. Make a guess
2. Update the belief

Robot perception is involved in the first step. When we need to make a guess, we usually need a model of the physical world. The problem can be summarized as follows:

Given the pose of the robot $$(x, y, \theta)$$, the position of the sensor, and the direction of the sensor beam, what is the probability of detecting an object? We need to make an additional assumption to reduce the computational complexity. This assumption is called the conditional independence assumption:

$$p(z_t|x_t, m) = \prod_{i} p(z_t^{(i)} | x_t, m)$$

Here $$z_t$$ denotes all measurements at time $$t$$ and $$z_t^{(i)}$$ is the $$i^{th}$$ individual measurement.
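Under this assumption, the full measurement likelihood is just the product of the per-beam likelihoods. A minimal sketch (the function name and the use of log space are my own choices; log space avoids numerical underflow when there are many beams):

```python
import math

def measurement_likelihood(beam_likelihoods):
    """Combine per-beam likelihoods p(z_t^(i) | x_t, m) into p(z_t | x_t, m)
    under the conditional independence assumption: the product of the factors.
    Summing logs instead of multiplying avoids underflow for many beams."""
    return math.exp(sum(math.log(p) for p in beam_likelihoods))

# e.g. three beams with individual likelihoods 0.8, 0.5, 0.9:
p = measurement_likelihood([0.8, 0.5, 0.9])  # 0.8 * 0.5 * 0.9 = 0.36
```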

#### Beam Model

The geometry of the beam model is illustrated by the figure below. Note that the sensor position and the direction of the beam are defined in the robot's local coordinates. The equation in the figure is a simple coordinate transformation. As we mentioned earlier, what we need is a model of the physical world, and in this particular case we need a model of the measurement.
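The transformation in the figure is the standard beam-endpoint projection: place the sensor in the global frame, then walk the measured distance along the beam direction. A sketch with assumed variable names:

```python
import math

def beam_endpoint(x, y, theta, x_sens, y_sens, theta_sens, z):
    """Project a range reading z into global coordinates.
    (x, y, theta): robot pose in the global frame.
    (x_sens, y_sens, theta_sens): sensor mount pose in the robot's local frame.
    Returns the global (x, y) of the point the beam hits."""
    # Rotate the sensor offset by the robot heading and translate by the pose,
    # giving the sensor's position in the global frame.
    gx = x + x_sens * math.cos(theta) - y_sens * math.sin(theta)
    gy = y + x_sens * math.sin(theta) + y_sens * math.cos(theta)
    # Walk distance z along the beam direction theta + theta_sens.
    return (gx + z * math.cos(theta + theta_sens),
            gy + z * math.sin(theta + theta_sens))
```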

The beam model consists of four parts:

• Correct range with local measurement noise. We assume the sensor reading follows a normal distribution centered around the true value $$z_t^*$$.
• Unexpected objects. There may be objects between the robot and the object to be detected. These make the sensor reading smaller than the true value, and this effect is modeled by an exponential distribution.
• Failure. We need to model sensor failures, which can happen for any reason and make the sensor simply return the maximum distance. Failures are modeled by a point-mass distribution.
• Random measurement. This component models random, unexplained noise in the measurement.
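The four components above can be sketched as follows. All hyper-parameter values (the mixture weights and distribution parameters) are placeholders; as noted below, in practice they are learned from data:

```python
import math

# Placeholder hyper-parameters: mixture weights (sum to 1) and
# component distribution parameters; in practice these are learned.
Z_HIT, Z_SHORT, Z_MAX, Z_RAND = 0.7, 0.1, 0.1, 0.1
SIGMA_HIT = 0.2      # std-dev of the local measurement noise
LAMBDA_SHORT = 0.5   # rate of the exponential for unexpected objects
MAX_RANGE = 10.0     # sensor's maximum reading

def p_hit(z, z_star):
    # Gaussian centered on the true range z* (truncation normalizer omitted)
    return math.exp(-0.5 * ((z - z_star) / SIGMA_HIT) ** 2) \
        / (SIGMA_HIT * math.sqrt(2 * math.pi))

def p_short(z, z_star):
    # Exponential truncated at z*: models objects in front of the expected one
    if 0 <= z <= z_star:
        eta = 1.0 / (1.0 - math.exp(-LAMBDA_SHORT * z_star))
        return eta * LAMBDA_SHORT * math.exp(-LAMBDA_SHORT * z)
    return 0.0

def p_max(z):
    # Point mass at the maximum range (sensor failure)
    return 1.0 if z == MAX_RANGE else 0.0

def p_rand(z):
    # Uniform over the sensor range (random measurements)
    return 1.0 / MAX_RANGE if 0 <= z < MAX_RANGE else 0.0

def beam_likelihood(z, z_star):
    """Weighted mixture of the four components."""
    return (Z_HIT * p_hit(z, z_star) + Z_SHORT * p_short(z, z_star)
            + Z_MAX * p_max(z) + Z_RAND * p_rand(z))
```

A reading near the true range should score much higher than one far from it, which is easy to check by evaluating `beam_likelihood` at and away from `z_star`.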

The distribution of the expected measurement is a weighted mixture of these four components. Note that this model has hyper-parameters, namely the weights of the components and the parameters of the distributions. We need to apply learning algorithms to learn these hyper-parameters from data.

#### Feature-Based Measurement Model

Some maps contain distinctive features. These features provide valuable information when the robot tries to localize itself. A detected feature in the environment can be represented by a triplet

$$\textrm{feature} = \begin{pmatrix} r \\ \phi \\ s \end{pmatrix}$$

where $$r$$ is the detected range, $$\phi$$ is the detected bearing of the object, and $$s$$ is the (numerical) signature of the object. The signature can be considered a numerical tag of the feature. For example, if the sensor measurement is an image, we could run a classification algorithm and determine that the object in the image is a house, which belongs to category 10. Here 10 can be used as the signature of this detected house.

For the same reason, we also need to make a conditional independence assumption for the feature-based measurement model.

$$p(f(z_t)|x_t, m) = \prod_i p(r_t^{(i)}, \phi_t^{(i)}, s_t^{(i)} | x_t, m)$$

The interpretation of $$p(f(z_t)|x_t, m)$$ is the probability of detecting the features on the map given the robot pose at time $$t$$. One of the challenges of the feature-based measurement model is the data association problem, which arises when a measured feature cannot be uniquely matched to a feature on the map. In this post (chapter), the data-feature association is assumed to be known. This association is characterized by the superscript. If we look at the expression

$$p(r_t^{\color{red}{(i)}}, \phi_t^{\color{red}{(i)}}, s_t^{\color{red}{(i)}} | x_t, m)$$

the superscript means that we know the measurement $$(r_t, \phi_t, s_t)$$ is associated with the $$i^{th}$$ feature.
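With the association known, each factor $$p(r_t^{(i)}, \phi_t^{(i)}, s_t^{(i)} | x_t, m)$$ can be evaluated by comparing the measured range, bearing, and signature against the values predicted from the pose and the map entry. A minimal sketch, assuming independent Gaussian noise on each component (the function names and noise values are my own):

```python
import math

def gaussian(x, mu, sigma):
    # 1-D Gaussian density
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) \
        / (sigma * math.sqrt(2 * math.pi))

def feature_likelihood(r, phi, s, pose, landmark,
                       sigma_r=0.5, sigma_phi=0.05, sigma_s=1.0):
    """p(r, phi, s | x_t, m) for a known association with one map feature.
    pose = (x, y, theta); landmark = (mx, my, ms) is the map entry:
    position plus stored signature. Noise std-devs are assumed values."""
    x, y, theta = pose
    mx, my, ms = landmark
    # Expected range and bearing of the landmark from the current pose
    r_hat = math.hypot(mx - x, my - y)
    phi_hat = math.atan2(my - y, mx - x) - theta
    # Independent Gaussian noise on range, bearing, and signature
    return (gaussian(r, r_hat, sigma_r)
            * gaussian(phi, phi_hat, sigma_phi)
            * gaussian(s, ms, sigma_s))
```

The per-feature factors computed this way are then multiplied together, per the conditional independence assumption above, to obtain $$p(f(z_t)|x_t, m)$$.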
