# Autonomous Driving: Modeling and Learning Behaviors

Motion planning for dynamic environments is clearly one of the main problems that must be solved for effective autonomous navigation. As shown by Reif and Sharir, the general problem is NP-hard, which explains the continued efforts to find algorithms able to cope with that complexity.

A much more overlooked but equally critical aspect of the problem is motion prediction. Motion planning algorithms assume prior knowledge of how every mobile object in the environment will move, an assumption that does not hold in a vast number of situations and environments. The alternative is to predict how objects will move and to use that prediction as the input of the motion planning algorithm.

### Motion Prediction and State Estimation

There is a strong link between motion prediction and state estimation. In order to predict the future state of a given object, it is necessary to have an estimate of its current state, where more accurate estimates will yield better predictions. Conversely, most state estimation techniques apply some kind of motion model to propagate the state into the future and then incorporate sensor data to correct the predicted state. This process is outlined in Fig. 51.1 and explained below:

1. Estimated state: It represents the current knowledge about the object's state estimated through a previous iteration of the process or from prior knowledge.
2. System model: The system model describes how the state evolves as time passes. It is used to propagate the last estimated state into the future and output a prediction of the state. The system model is often based on kinematic or dynamic properties, but other formulations can be found in literature.
3. Predicted state: It represents the best estimate of the object's state after some time has elapsed, in the absence of additional information about the object such as sensor readings.
4. Sensor model: Predictions generated by the system model have limited accuracy due to the limited precision of the state estimate and, much more importantly, to the inherent limitations of the model itself. The sensor model is used to correct the predicted estimate by taking into account external information about the object in the form of sensor data. The sensor model needs to account for limitations such as bounded precision and sensor noise. It should also consider that, in many cases, the state is only indirectly observed (e.g., angular or positional information instead of velocities).
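The predict-correct cycle described above can be sketched for a one-dimensional object moving at constant velocity. All function names and noise values below are illustrative assumptions, not part of any specific algorithm from the text:

```python
# Minimal predict-correct cycle for a 1-D constant-velocity object.
# Noise values and the observation sequence are illustrative assumptions.

def predict(mean, var, velocity, dt, process_noise):
    """System model: propagate the estimated state into the future."""
    return mean + velocity * dt, var + process_noise

def correct(mean, var, measurement, sensor_noise):
    """Sensor model: incorporate a noisy position reading."""
    gain = var / (var + sensor_noise)        # weight given to the reading
    new_mean = mean + gain * (measurement - mean)
    new_var = (1.0 - gain) * var
    return new_mean, new_var

mean, var = 0.0, 1.0                         # prior (estimated state)
for z in [1.1, 2.0, 2.9]:                    # simulated sensor readings
    mean, var = predict(mean, var, velocity=1.0, dt=1.0, process_noise=0.1)
    mean, var = correct(mean, var, z, sensor_noise=0.5)
print(mean, var)
```

Note how the variance grows during prediction (uncertainty accumulates while no data arrives) and shrinks during correction, exactly the two phases of Fig. 51.1.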

Due to the uncertainty involved in the state estimation and prediction, probabilistic approaches become a natural choice for addressing the problem. One of the most broadly used tools is the Bayes filter and its specializations such as the Kalman Filter, Hidden Markov Models, and particle filters.

## A Powerful Tool: The Bayes Filter

The objective of the Bayes filter is to find a probabilistic estimate of the current state of a dynamic system — which is assumed to be "hidden," that is, not directly observable — given a sequence of observations gathered at every time step up to the present moment.

The main advantage of using a probabilistic framework such as the Bayes filter is that it makes it possible to account for the different sources of uncertainty that participate in the process, such as:

• The limited precision of the sensors used to obtain observations
• The variability of observations due to unknown factors (observation noise)
• The incompleteness of the model

The Bayes filter works with three sets of variables:

• State ($S_t$), the state of the system at time $t$. The exact meaning of this variable depends on the particular application; in general, it may be seen as the set of system features which are relevant to the problem and have an influence on the future. In the context of autonomous navigation the state often includes kinodynamic variables (i.e., position, velocity, acceleration) but may also carry higher-level meanings (e.g., waiting, avoiding).
• Sensor readings or observations ($O_t$), the observation array at time $t$. Observations provide indirect indications about the state of the system. In this section, it will be assumed that observations come from sensors such as laser scanners and video trackers.
• Control ($C_t$), the control that is applied to the object at time $t$. This variable is usually disregarded in applications where knowledge about the control is not available, and it will not be discussed further.

The Bayes filter is an abstract model which does not make any assumption about the nature of the state and observation variables. Such assumptions are made by concrete specializations, such as the Kalman filter (continuous states and observations) or Hidden Markov Models (discrete states, discrete/continuous observations).

A Bayes filter defines a joint probability distribution on $O_{1:T}$ and $S_{1:T}$ on the basis of two conditional independence assumptions:

1. Individual observations $O_t$ are independent of all other variables given the current state $S_t$:

$P(O_t \mid O_{1:{t-1}}, S_{1:t}) = P(O_t \mid S_t)$

In general $P(O_t | S_t)$ is called observation probability or sensor model. It models the relationship between states and sensor readings, taking into account factors such as accuracy and sensor noise.

2. The current state depends only on the previous one; knowledge about former states does not provide any further information. This is also known as the order-one Markov hypothesis:

$P(S_t \mid S_{1:{t-1}}) = P(S_t \mid S_{t-1})$

The probability $P(S_t | S_{t-1})$ is called the transition probability or system model. The probability $P(S_0)$ that describes the initial state in the absence of any observations is called the state prior.

These assumptions lead to the following decomposition of the Bayes filter:

$P(S_{1:T}, O_{1:T}) = P(S_1)\,P(O_1 \mid S_1) \prod_{t=2}^{T} P(S_t \mid S_{t-1})\, P(O_t \mid S_t)$
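For a discrete state space, this decomposition can be evaluated directly. The toy two-state model below (all probability tables are made-up illustrative values) computes the joint probability of a state/observation sequence and checks that the decomposition defines a valid distribution:

```python
import itertools

# Toy two-state Bayes filter: states 0/1, binary observations.
# All probability values are illustrative assumptions.
prior = [0.5, 0.5]                      # P(S_1), the state prior
trans = [[0.7, 0.3], [0.3, 0.7]]        # P(S_t | S_{t-1}), system model
obs   = [[0.9, 0.1], [0.2, 0.8]]        # P(O_t | S_t), sensor model

def joint(states, observations):
    """P(S_{1:T}, O_{1:T}) following the chain decomposition above."""
    p = prior[states[0]] * obs[states[0]][observations[0]]
    for t in range(1, len(states)):
        p *= trans[states[t - 1]][states[t]] * obs[states[t]][observations[t]]
    return p

# Sanity check: summing over every possible state and observation
# sequence of length 3 must give exactly one.
total = sum(joint(s, o)
            for s in itertools.product([0, 1], repeat=3)
            for o in itertools.product([0, 1], repeat=3))
print(total)
```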

One of the main uses of Bayes filters is to answer the probabilistic question $P(S_{t+H} | O_{1:t})$; what is the state probability distribution for time $t+H$, knowing all the observations up to time $t$?

The most common case is filtering $(H = 0)$, which estimates the current state. However, it is also frequent to perform prediction $(H > 0)$ or smoothing $(H < 0)$.
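Prediction with horizon $H > 0$ can be carried out by propagating the current belief through the system model $H$ times without incorporating any observation. A minimal discrete sketch, where the two-state transition matrix is an illustrative assumption:

```python
# Propagate a discrete belief H steps ahead with the system model only.
# The transition matrix values are illustrative assumptions.
trans = [[0.7, 0.3], [0.3, 0.7]]        # P(S_t | S_{t-1})

def predict_ahead(belief, horizon):
    """Return P(S_{t+H} | O_{1:t}) given belief = P(S_t | O_{1:t})."""
    for _ in range(horizon):
        belief = [sum(belief[i] * trans[i][j] for i in range(len(belief)))
                  for j in range(len(belief))]
    return belief

print(predict_ahead([1.0, 0.0], 1))     # one step ahead
```

As $H$ grows the prediction drifts toward the stationary distribution of the transition model, which reflects how uncertainty accumulates when no observations are available.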

The Bayes filter has a very useful property that largely contributes to its interest: filtering may be efficiently computed by incorporating the last observation $O_t$ into the last state estimate using the following formula:

$P(S_t | O_{1:t}) = \frac{1}{Z} P(O_t|S_t) \sum_{S_{t-1}}[P(S_t | S_{t-1}) P(S_{t-1} | O_{1:{t-1}})]$

where, by convention, $Z$ is a normalization constant which ensures that probabilities sum to one over all possible values of $S_t$.
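The recursion can be implemented directly for discrete states. In the sketch below (the transition and observation tables are illustrative assumptions), each call folds one new observation into the belief, mirroring the formula term by term:

```python
# One step of the discrete Bayes filter recursion.
# The model tables below are illustrative assumptions.
trans = [[0.7, 0.3], [0.3, 0.7]]        # P(S_t | S_{t-1}), system model
obs   = [[0.9, 0.1], [0.2, 0.8]]        # P(O_t | S_t), sensor model

def filter_step(belief, observation):
    """Incorporate one observation: sum over S_{t-1} with the system
    model, weight by the sensor model, then normalize (the 1/Z term)."""
    predicted = [sum(belief[i] * trans[i][j] for i in range(len(belief)))
                 for j in range(len(belief))]
    unnormalized = [obs[j][observation] * predicted[j]
                    for j in range(len(predicted))]
    z = sum(unnormalized)
    return [p / z for p in unnormalized]

belief = [0.5, 0.5]                      # state prior P(S_0)
for o in [0, 0, 1]:                      # simulated observation sequence
    belief = filter_step(belief, o)
print(belief)
```

Because each step only needs the previous belief and the newest observation, the cost per time step is constant, which is what makes the Bayes filter practical for online estimation.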

By defining recursively $P(S_{t-1}) = P(S_{t-1} | O_{1:{t-1}})$, it is possible to describe a Bayes filter in terms of only three variables: $S_{t-1}$, $S_t$ and $O_t$, leading to the following decomposition:

$P(S_{t-1}, S_t, O_t) = P(S_{t-1}) P(S_t|S_{t-1}) P(O_t|S_t)$

where the state posterior of the previous time step is used as the prior for the current time and is often called belief state.

Under this formulation, the Bayes filter is described in terms of a local model, which describes the state's evolution during a single time step. For notational convenience, in the rest of this chapter only those local models will be described, noting that they always describe a single time step of the global model.

## Model Learning

Of all Bayes filter specializations, probably the most widely used are the Kalman filter and the extended Kalman filter, which are at the heart of many localization, mapping, and visual tracking algorithms.
