**2017 AAAI Classic Paper Award**

**Monte Carlo Localization: Efficient Position Estimation for Mobile Robots**, *by Dieter Fox, Wolfram Burgard, Frank Dellaert, Sebastian Thrun*

http://robots.stanford.edu/papers/fox.aaai99.pdf

## Description

Monte Carlo Localization uses randomized samples to represent a robot’s belief about its location in an environment, and is notable for its accuracy, efficiency, and ease of use compared to previous approaches. MCL was first presented in 1999 at the International Conference on Robotics and Automation and was the first application of sample-based estimation in robotics, where such methods are now used across a wide range of applications.

The sample-based Monte Carlo Localization (MCL) is computationally efficient while retaining the ability to represent arbitrary distributions. MCL applies sample-based methods to approximate probability distributions, and the number of samples is adapted on-line, keeping the sample sets only as large as necessary. This is something the grid-based approaches, which represent the belief over computationally cumbersome 3D grids, failed to achieve.

## Introduction

In mobile robot applications, sensor-based localization has long been recognized as a key problem. Localization is a version of on-line state estimation, where a mobile robot seeks to estimate its position in a global coordinate frame. The localization problem comes in two flavors: __global localization__ and __position tracking__. The paper notes that the global localization problem is the harder of the two, because the robot is not told its initial position; position tracking, the most-studied variant, assumes the robot knows its initial position and only has to accommodate small errors as it moves.

The existing approaches to robot localization include Markov localization, Kalman Filtering, Particle Filtering, SLAM (Simultaneous localization and mapping), and Multi-robot localization.

- Markov Localization:
  - Central idea: represent the robot’s belief by a probability distribution over possible positions, and use Bayes’ rule and convolution to update the belief whenever the robot senses or moves.
  - Markov assumption: past and future data are independent given the current state.
- Kalman Filtering:
  - Central idea: pose the localization problem as a sensor fusion problem.
  - Assumption: Gaussian distributions.
- Particle Filtering (Monte Carlo Localization):
  - Central idea: represent the posterior belief by a set of *N* weighted, random samples or particles.
- SLAM (Simultaneous Localization and Mapping)
- Multi-robot localization

In this paper, the authors examine the limitations of Kalman filter-based techniques and of Markov localization, including topological and grid-based Markov localization, and then propose Monte Carlo Localization, a sample-based density approximation. The existing approaches have the following limitations:

- Kalman filter-based techniques rest on the assumption that uncertainty in the robot’s position can be represented by a unimodal Gaussian distribution. They have proven robust and accurate for keeping track of the robot’s position, but they cannot deal with the multi-modal densities typical of global localization. A further limitation is that the initial pose must be known, with at most Gaussian uncertainty.
- Topological Markov localization methods use increasingly richer schemes to represent uncertainty, moving beyond the Gaussian density assumption inherent in the vanilla Kalman filter, and are roughly distinguished by the type of discretization used to represent the state space. However, the coarse resolution of the state representation limits the accuracy of the position estimates.
- Grid-based Markov localization methods are powerful enough to deal with multi-modal and non-Gaussian densities. However, this approach suffers from excessive computational overhead and an *a priori* commitment to the size and resolution of the state space. Moreover, the computational requirements affect accuracy as well: not all measurements can be processed in real-time, and valuable information about the state is thereby discarded.

By using a sampling-based representation, MCL has several key advantages over earlier work in the field:

- In contrast to existing Kalman filtering-based techniques, it is able to represent multi-modal distributions and thus can *globally* localize a robot.
- It drastically reduces the amount of memory required compared to grid-based Markov localization and can integrate measurements at a considerably higher frequency. The online algorithm lends itself nicely to an any-time implementation.
- It is more *accurate* than Markov localization with a fixed cell size, as the state represented in the samples is not discretized.
- It is much easier to implement.

## Monte Carlo Localization

Monte Carlo Localization is generically known as the particle filter, a version of sampling/importance re-sampling (SIR), also called the bootstrap filter, the Monte Carlo filter, the Condensation algorithm, or the survival-of-the-fittest algorithm. The key idea is to represent the posterior belief *Bel(l)* by a set *S* of *N* weighted, random samples or particles.

The samples in MCL are of the type *<<x, y, theta>, p>*, which represents the robot’s position (coordinates *x, y* and orientation *theta*) together with a numerical weighting factor *p*, where *Sum_{n=1}^N p_n = 1*.
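As an illustration, here is a minimal Python sketch of this sample representation; the function name, ranges, and uniform initialization for global localization are our own choices, not the paper’s implementation:

```python
import math
import random

def make_particle_set(n, x_range, y_range):
    """Initialize n samples <(x, y, theta), p> uniformly at random,
    as in global localization; each weight starts at 1/n so that the
    weights sum to one."""
    return [((random.uniform(*x_range),
              random.uniform(*y_range),
              random.uniform(-math.pi, math.pi)),
             1.0 / n)
            for _ in range(n)]

particles = make_particle_set(1000, (0.0, 10.0), (0.0, 5.0))
```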

### Robot motion:

When the robot moves, MCL generates *N* new samples that approximate the robot’s position after the motion command. Each new sample is generated by randomly drawing a parent from the previously computed sample set, with the likelihood of selection determined by the parent’s *p*-value.

The new sample’s pose *l* is then obtained by drawing a single random sample from *P(l | l', a)*, where *a* is the observed action and *l'* is the parent sample’s pose. The *p*-value of the new sample is *N^{-1}*.

Note the effect of this sampling technique: starting at an initially known position and executing actions, the sample sets approximate distributions of increasing uncertainty, representing the gradual loss of position information due to slippage and drift. The motion model must account for this noise, so the particles inevitably spread out during the motion update. This is expected: a robot becomes less sure of its position if it moves blindly, without sensing the environment.
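The motion step above can be sketched as follows; the isotropic Gaussian noise is an illustrative stand-in for the paper’s odometry-based motion model:

```python
import random

def motion_update(particles, action, noise=0.1):
    """One MCL motion step: draw each new particle's parent from the
    old set with probability given by its weight, then sample the new
    pose from the motion model P(l | l', a)."""
    n = len(particles)
    poses = [l for l, _ in particles]
    weights = [p for _, p in particles]
    parents = random.choices(poses, weights=weights, k=n)
    dx, dy, dtheta = action
    return [((x + dx + random.gauss(0.0, noise),
              y + dy + random.gauss(0.0, noise),
              th + dtheta + random.gauss(0.0, noise)),
             1.0 / n)                     # every new sample gets weight N^-1
            for (x, y, th) in parents]

# Moving blindly (no sensing) only increases the spread of the particles.
random.seed(0)
particles = [((0.0, 0.0, 0.0), 0.01) for _ in range(100)]
for _ in range(5):
    particles = motion_update(particles, (1.0, 0.0, 0.0))
```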

### Sensor readings:

Sensor readings are incorporated by re-weighting the sample set, in a way that implements Bayes’ rule in Markov localization. Because MCL works with finite sample sets, it can happen that no sample is generated close to the correct robot position. To guard against this, a small number of random samples is added, which lets MCL effectively re-localize the robot in the rare event that it loses track of its position.

Let *<l, p>* be a sample, and set *p ← alpha P(s | l)*, where *s* is the sensor measurement and *alpha* is a normalization constant that enforces *Sum_{n=1}^N p_n = 1*. An algorithm to perform the subsequent re-sampling process efficiently in O(N) time is given in (Carpenter, Clifford, & Fearnhead 1997).
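A sketch of the re-weighting step, together with one O(N) re-sampling scheme (systematic re-sampling, used here as a stand-in for the Carpenter et al. algorithm); the `likelihood` sensor model is a user-supplied assumption of this sketch:

```python
import random

def sensor_update(particles, s, likelihood):
    """Bayes step: re-weight each sample by P(s | l) and normalize so
    the weights again sum to one."""
    reweighted = [(l, p * likelihood(s, l)) for l, p in particles]
    alpha = sum(p for _, p in reweighted)
    return [(l, p / alpha) for l, p in reweighted]

def systematic_resample(particles):
    """Draw N equally weighted samples in O(N) time using a single
    random offset and evenly spaced pointers into the cumulative
    weights (low-variance resampling)."""
    n = len(particles)
    step = 1.0 / n
    u = random.uniform(0.0, step)
    out, cum, i = [], particles[0][1], 0
    for _ in range(n):
        while u > cum and i + 1 < n:
            i += 1
            cum += particles[i][1]
        out.append((particles[i][0], step))
        u += step
    return out
```

In a full MCL iteration, the motion step is followed by `sensor_update` and then a re-sampling pass.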

## Supplements

*To make the paper more accessible, Synced tech analysts provide supplementary materials below, including pseudocode and an example for a one-dimensional robot.*

### Pseudocode

Given a map of the environment, the goal of the algorithm is for the robot to determine its position within that environment. At every time step, the algorithm takes as input the previous belief *X_{t-1}*, an actuation command *u_t*, and data received from sensors *z_t*, and it outputs the new belief *X_t*.
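The pseudocode itself did not survive formatting; the following is a reconstruction of one MCL iteration following the standard formulation (cf. *Probabilistic Robotics*, Ch. 8.3), not a verbatim copy of the original figure:

```
MCL(X_{t-1}, u_t, z_t, m):
    Xbar_t = X_t = empty
    for n = 1 to N:
        x_t[n] = sample_motion_model(u_t, x_{t-1}[n])      # predict
        w_t[n] = measurement_model(z_t, x_t[n], m)         # weight
        add <x_t[n], w_t[n]> to Xbar_t
    for n = 1 to N:
        draw index i with probability proportional to w_t[i]   # resample
        add x_t[i] to X_t
    return X_t
```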

### Example

An example for the one-dimensional robot. Consider a robot in a one-dimensional circular corridor with three identical doors, using a sensor that returns either true or false depending on whether there is a door. At the end of the three iterations, most of the particles have converged on the actual position of the robot as desired.
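This example can be made concrete with a short runnable sketch; the corridor length, door positions, step size, noise level, and the 0.9/0.1 binary sensor model are our own illustrative choices, not values from the paper:

```python
import random

random.seed(0)
LENGTH = 100.0                # circular corridor, so positions wrap around
DOORS = [20.0, 50.0, 80.0]    # three identical doors

def sees_door(x):
    """Binary door sensor: true within 2 units of any door."""
    for d in DOORS:
        dist = abs(x - d) % LENGTH
        if min(dist, LENGTH - dist) < 2.0:
            return True
    return False

N = 1000
particles = [random.uniform(0.0, LENGTH) for _ in range(N)]  # global localization
true_x = 20.0

for _ in range(3):                        # three move-sense iterations
    true_x = (true_x + 30.0) % LENGTH     # robot moves 30 units
    particles = [(x + 30.0 + random.gauss(0.0, 0.5)) % LENGTH
                 for x in particles]
    z = sees_door(true_x)                 # sense
    weights = [0.9 if sees_door(x) == z else 0.1 for x in particles]
    particles = random.choices(particles, weights=weights, k=N)  # resample

# fraction of particles within 5 units (circularly) of the true position
near = sum(1 for x in particles
           if min(abs(x - true_x) % LENGTH,
                  (LENGTH - abs(x - true_x)) % LENGTH) < 5.0) / N
```

After three iterations the door/no-door reading sequence is matched by only one trajectory, so the bulk of the particles collapses onto the robot’s true position.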

## Experimental Results

In this paper, the researchers evaluated MCL in comparison to the grid-based approach, tested it in the extreme situation of using only vision information, and evaluated the utility of MCL’s adaptive approach to sampling.

In this comparison, the grid-based approach with a resolution of 20 cm requires almost exactly ten times as much memory as MCL with 5,000 samples. During global localization, integrating a single sensor scan requires up to 120 seconds with the grid-based approach, whereas MCL consistently requires less than 3 seconds under otherwise equal conditions. In addition, the results for grid-based localization were not generated in real-time.

*Figure 1. Accuracy of (a) grid-based Markov localization using different spatial resolutions and (b) MCL for different numbers of samples (log scale).*

In vision-based localization, grid-based localization fails to track the robot accurately because the computational overhead makes it impossible to incorporate a sufficient number of images. MCL, however, succeeded in globally localizing and tracking the robot’s position; by keeping track of multiple hypotheses, the system is able to recover from localization errors.

In the evaluation of adaptive sampling, MCL with adaptive sample sets proved superior to MCL with fixed sample sets. In the corresponding plot, the top curve depicts the frequency with which the error was large for different fixed sample set sizes, while the bottom line gives the same result for the adaptive sampling approach. Adaptive sampling yields smaller error than the best MCL run with a fixed sample set size.

To evaluate the utility of sampling in localization, the authors thoroughly tested MCL in a range of real-world environments, applying it to three different types of sensors (cameras, sonar, and laser proximity data). The two primary results are:

- MCL yields significantly more accurate localization results than the previous most accurate Markov localization algorithm, while consuming an order of magnitude less memory and computational resources. In some cases, MCL reliably localizes the robot whereas previous methods fail.
- By and large, adaptive sampling performs equally well as MCL with fixed sample sets. However, in scenarios involving a large range of different uncertainties (global vs. local), adaptive sampling is superior to fixed sample sizes.
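The adaptive scheme can be sketched as follows: during re-sampling, keep drawing samples until the accumulated, un-normalized importance weight exceeds a threshold, so that surprising sensor data (small weights) automatically yields more samples. The threshold, the cap, and the sensor model here are illustrative assumptions, not the paper’s values:

```python
import random

def adaptive_resample(particles, s, likelihood, w_threshold=50.0, n_max=10000):
    """Draw parent samples by weight and score each against the sensor
    model; stop once the total importance weight exceeds w_threshold
    (or a hard cap).  Poor agreement with the measurement => small
    weights => more samples are drawn."""
    poses = [l for l, _ in particles]
    weights = [p for _, p in particles]
    out, total = [], 0.0
    while total < w_threshold and len(out) < n_max:
        l = random.choices(poses, weights=weights, k=1)[0]
        total += likelihood(s, l)
        out.append(l)
    n = len(out)
    return [(l, 1.0 / n) for l in out]

random.seed(0)
prior = [((float(i), 0.0, 0.0), 0.1) for i in range(10)]
confident = adaptive_resample(prior, None, lambda s, l: 1.0)  # unsurprising data
surprised = adaptive_resample(prior, None, lambda s, l: 0.1)  # surprising data
```

With unsurprising data the loop stops early; with surprising data roughly ten times as many samples are drawn before the threshold is reached.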

## Future

In future work, the increased efficiency of sample-based localization will be applied to multi-robot scenarios, where the sample sets of the different robots can be synchronized whenever one robot detects another. The authors also plan to apply Monte Carlo methods to the problem of map acquisition, where recent work has led to new statistical frameworks that have been successfully applied to large, cyclic environments using grid representations.

## Comments:

At AAAI 2017, Sebastian Thrun received two awards: the AAAI Classic Paper Award and the AAAI/EAAI Outstanding Educator Award. He is the Co-Founder and Chief Executive Officer of Udacity, and founded Google’s self-driving car team. Last year, Udacity launched the first Self-Driving Car Nanodegree program, which attracted over 11,000 applicants in its first cohort application round.

MCL with on-line adaptive sampling is far more efficient, accurate, and valuable in practice than grid-based Markov localization. It is also easier to implement, as the pseudocode in the supplements shows. MCL randomly guesses possible positions in a way that favors likely positions over unlikely ones, and adjusts the number of samples in proportion to the amount of surprise in the sensor data.

In comparison to SLAM, all localization approaches – including MCL – require a prior, predefined map, because the goal of localization is to localize a robot with respect to known features; we have to know the features to determine the robot’s position. If the features themselves are uncertain, Simultaneous Localization and Mapping (SLAM) is more suitable.

**References:**

- Dieter Fox, Wolfram Burgard, Sebastian Thrun. “Markov Localization for Mobile Robots in Dynamic Environments.” *Journal of Artificial Intelligence Research* 11 (1999): 391–427.
- Sebastian Thrun. “Probabilistic Robotics.” Tutorial presented at AAAI 2000. Available through S. Thrun’s web page.
- Ioannis M. Rekleitis. “A Particle Filter Tutorial for Mobile Robot Localization.” *Centre for Intelligent Machines, McGill University*, Tech. Rep. TR-CIM-04-02 (2004).
- Frank Dellaert, Dieter Fox, Wolfram Burgard, Sebastian Thrun. “Monte Carlo Localization for Mobile Robots.” *Proc. of the IEEE International Conference on Robotics and Automation*, Vol. 2. IEEE, 1999.
- Sebastian Thrun, Wolfram Burgard, Dieter Fox. “A Probabilistic Approach to Concurrent Mapping and Localization for Mobile Robots.” *Autonomous Robots*, 1998. Springer.
- Sebastian Thrun, Wolfram Burgard, Dieter Fox. *Probabilistic Robotics.* MIT Press, 2005. Ch. 8.3. ISBN 9780262201629.
- Sebastian Thrun, Dieter Fox, Wolfram Burgard, Frank Dellaert. “Robust Monte Carlo Localization for Mobile Robots.” *Artificial Intelligence* 128.1 (2001): 99–141.
- Dieter Fox. “KLD-Sampling: Adaptive Particle Filters.” *Department of Computer Science and Engineering, University of Washington.* NIPS, 2001.

**Analyst**: Elva Wang | Localized by Synced Global Team : Xiang Chen
