Localization

Bayes Filter

Bayes Rule, shown in the equation below, is the foundation of many of the most effective localization algorithms (and of a great many things outside the field of robotics).
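
In the notation used here, with the robot's state x and a sensor measurement z, it reads:

$$P(x \mid z) = \frac{P(z \mid x)\, P(x)}{P(z)}$$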

It describes how you can update a probability distribution using an observation that is related to the quantity of interest that the distribution describes. This fits our needs for localization perfectly, as it allows us to update our belief of the robot’s state P(x) using sensor measurements z.
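
To make this concrete, here is a minimal sketch of a single Bayes Rule update over a discrete 1-D grid of candidate robot positions; the grid size, prior, and likelihood values are made up purely for illustration.

import numpy as np

# Hypothetical 1-D grid of 5 possible robot positions
prior = np.array([0.2, 0.2, 0.2, 0.2, 0.2])       # P(x): uniform belief

# Assumed likelihood of the sensor reading z at each position, P(z | x)
likelihood = np.array([0.1, 0.1, 0.6, 0.1, 0.1])

# Bayes Rule: posterior is proportional to likelihood times prior
posterior = likelihood * prior
posterior /= posterior.sum()                      # normalize (divide by P(z))

print(posterior)  # belief is now concentrated on the third cell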

The algorithm for the Bayes Filter is composed of two steps. First, we use the state transition model (we will talk about this soon) to compute our belief of the current state from the belief of the previous state, without yet considering the sensor measurements.

This intermediate belief is denoted with a bar over the x. Then, we use Bayes Rule and the sensor measurement to update the belief and, ideally, reduce the uncertainty by concentrating the probability mass around the actual state of the robot.
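
In common textbook notation, writing bel(x_t) for the belief at time t, u_t for the control input (which drives the state transition model), and z_t for the measurement, the two steps are:

$$\overline{bel}(x_t) = \int p(x_t \mid u_t, x_{t-1})\, bel(x_{t-1})\, dx_{t-1}$$

$$bel(x_t) = \eta\, p(z_t \mid x_t)\, \overline{bel}(x_t)$$

where $\eta$ is a normalizing constant that makes the updated belief integrate to one.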

Particle Filter

A particle filter is a Bayesian filtering algorithm used for state estimation in a dynamic system. It's particularly useful when dealing with non-linear and non-Gaussian systems, where traditional methods like the Kalman filter may not perform well. Particle filters maintain a set of particles, each representing a hypothesis of the system state. These particles are propagated through time and weighted based on how well they explain the observed data.

  1. Initialization: Start by initializing a set of particles representing possible states of the system. These particles are drawn from the initial state distribution.

  2. Prediction: Predict the next state of the system by applying the system dynamics model to each particle.

  3. Update (Correction): Update the weights of the particles based on the likelihood of observing the measurement given the predicted state. Weights are calculated using the measurement likelihood function.

  4. Resampling: Resample particles with replacement according to their weights. Particles with higher weights are more likely to be sampled multiple times.

  5. Estimation: Estimate the state of the system based on the weighted average of the particles (a short sketch of this step follows the list).
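
Step 5 is a one-liner with NumPy. The sketch below assumes particles and weights are arrays of equal length (the values here are made up); note that right after resampling the weights are uniform, so a plain mean gives the same result, which is what the 1D example below does.

import numpy as np

particles = np.array([1.8, 2.1, 2.4])  # hypothetical particle positions
weights = np.array([0.2, 0.5, 0.3])    # normalized importance weights

# Weighted-average state estimate (step 5)
estimate = np.average(particles, weights=weights)
print(estimate)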

Example - 1D

In this example, the robot's motion follows a constant velocity model, and we observe noisy measurements of its position.

import numpy as np

# Define motion model
def motion_model(x, dt):
    # Constant velocity model: x = x0 + v * dt
    velocity = 1.0  # Constant velocity
    return x + velocity * dt

# Define measurement model
def measurement_model(x, noise_std):
    # Add Gaussian noise to the true position
    return x + np.random.normal(0, noise_std)

# Initialize particles: uniform prior belief over positions in [0, 10]
num_particles = 100
particles = np.random.uniform(0, 10, num_particles)

# Simulate motion and measurement updates
for t in range(1, 11):  # Simulate 10 time steps
    # Predict next state (motion update)
    dt = 1.0  # Time step
    particles = motion_model(particles, dt)
    
    # Simulate a noisy measurement of the robot's true position
    true_position = t * 1.0  # True position of the robot
    measurement_noise_std = 0.5
    measurement = measurement_model(true_position, measurement_noise_std)
    
    # Update weights using a Gaussian measurement likelihood
    # (unnormalized; the constant factor cancels when the weights are normalized below)
    weights = np.exp(-0.5 * ((particles - measurement) / measurement_noise_std)**2)
    
    # Normalize weights
    weights /= np.sum(weights)
    
    # Resampling
    indices = np.random.choice(np.arange(num_particles), size=num_particles, replace=True, p=weights)
    particles = particles[indices]
    
    # Estimate state
    estimated_position = np.mean(particles)
    print(f"Time step {t}: True position = {true_position:.2f}, Estimated position = {estimated_position:.2f}")
