
Data Association for SLAM

Overview

  1. To perform SLAM effectively, measurements must be associated with the proper features in the map.
  2. The correspondence is \(c_i = j\), which indicates that measurement \(i\) belongs to feature \(j\).
  3. There is technically a joint distribution of the robot pose, the map, and the correspondences, given the measurements: \(p(x, m, c | z)\).
    • However, this joint distribution is difficult to compute (and it is not Gaussian).
    • Instead, we find the most likely data association rather than computing the full distribution.
  4. Standard EKF SLAM is rather brittle in the face of incorrect data association
    • Wrong associations throw the algorithm off
    • There are methods to mitigate the negative effects

The Simplest Data Association

  1. Since \(z = h(x)\), if \(h\) is invertible then \(x = h^{-1}(z)\).
    • The inversion of the measurement model provides a feature location
    • Check if the estimated feature location is near an existing feature
      • If not add it to the map
      • If so, the measurement is associated with the nearest feature
  2. Advantage: Simplicity
  3. Disadvantage: Does not account for covariance, so a measurement may be matched to the wrong landmark when uncertainty is large
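As a sketch, this simplest scheme might look like the following, assuming a planar range-bearing sensor so that \(h\) is invertible. The function names and the Euclidean threshold are illustrative, not part of the notes:

```python
import math

def invert_measurement(x, y, theta, r, phi):
    """Invert a range-bearing measurement z = (r, phi), taken from robot
    pose (x, y, theta), to get the landmark's world-frame location."""
    return (x + r * math.cos(theta + phi),
            y + r * math.sin(theta + phi))

def associate_nearest(landmarks, estimate, threshold):
    """Return the index of the landmark nearest (Euclidean) to the
    estimated location; if none is within the threshold, add the
    estimate as a new landmark and return its index."""
    best_idx, best_dist = None, float("inf")
    for j, (lx, ly) in enumerate(landmarks):
        d = math.hypot(estimate[0] - lx, estimate[1] - ly)
        if d < best_dist:
            best_idx, best_dist = j, d
    if best_dist > threshold:
        landmarks.append(estimate)
        return len(landmarks) - 1
    return best_idx
```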

Mahalanobis Distance

  • A distance metric that accounts for covariance
  • "How many standard deviations away", but for multiple dimensions
  • \((x_a - x_b)^T\Sigma^{-1}(x_a - x_b)\).
    • Strictly, this quadratic form is the squared Mahalanobis distance, which is what is used directly in practice
    • \(\Sigma\) is the covariance
  • What do we take the Mahalanobis distance of?
    • Distance between measurement resulting from current state and the new measurement
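A minimal sketch of the (squared) Mahalanobis distance, with the covariance \(\Sigma\) passed in explicitly:

```python
import numpy as np

def mahalanobis_sq(za, zb, sigma):
    """Squared Mahalanobis distance between za and zb under covariance
    sigma: (za - zb)^T sigma^{-1} (za - zb)."""
    diff = np.asarray(za, dtype=float) - np.asarray(zb, dtype=float)
    return float(diff @ np.linalg.inv(sigma) @ diff)
```

With an identity covariance this reduces to the squared Euclidean distance; stretching the covariance along a direction shrinks distances along that direction.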

Incorporating into SLAM

The Basic Idea

  1. Find the existing landmark that generates a measurement closest to the current measurement
  2. If the closest existing landmark is farther than some threshold, the landmark is a new landmark

Algorithm Overview

  1. Let \(N\) be the number of landmarks already seen
  2. The \(j\)-th landmark is at \((\mu_{x,j}, \mu_{y,j})\)
  3. Let \((\mu_{x, N+1}, \mu_{y, N+1}) = h^{-1}(z_i)\).
    • We are temporarily adding a landmark corresponding to the latest observation. This is landmark \(N+1\)
  4. For each measurement \(i\)
    1. For each landmark \(k\) (including the temporarily added landmark)
      1. Compute \(H_k\), the linearized measurement model (see Kalman Filter)
      2. Compute the covariance \(\Psi_k = H_k \Sigma H_k^T + R\)
      3. Compute the expected measurement \(\hat{z}_k = h(\mu)\)
      4. Compute the Mahalanobis distance \(d_k = (z_i - \hat{z}_k)^T\Psi_k^{-1}(z_i-\hat{z}_k)\)
    2. Set the Mahalanobis distance for landmark \(N+1\) (the temporary landmark) to be the distance threshold.
      • If all the landmarks are farther away than this threshold, then the landmark is new.
    3. Let \(d_\ast\) be the minimum Mahalanobis distance and \(\ell\) be the landmark index corresponding to the minimum distance.
    4. \(\ell\) is the landmark corresponding to \(z_i\). If \(\ell = N+1\) then we have a new landmark so increment \(N\).
      • The next temporary landmark, index \(\ell+1\), should then be initialized with \(h^{-1}(z_{i+1})\), if measurement \(i+1\) exists
    5. Incorporate the new measurement into your EKF estimate
      • If getting many false associations you can choose to only incorporate the measurement if it is new or if \(d_\ast\) is less than some threshold value
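The association step above can be sketched as follows for a single measurement. The callbacks `expected_measurement` and `jacobian` stand in for the measurement model \(h\) and its linearization \(H_k\) (see Kalman Filter); they, along with the 0-based indexing of the temporary landmark, are assumptions of this sketch:

```python
import numpy as np

def associate(z_i, num_landmarks, expected_measurement, jacobian,
              sigma, R, threshold):
    """Associate measurement z_i with a landmark index.

    expected_measurement(k) -> z_hat for landmark k
    jacobian(k)             -> H_k, the linearized measurement model
    The temporary landmark (0-based index num_landmarks) is assumed to
    have already been initialized from h^{-1}(z_i).
    Returns (ell, is_new): the associated index and whether it is new.
    """
    distances = []
    for k in range(num_landmarks + 1):   # include the temporary landmark
        H_k = jacobian(k)
        psi_k = H_k @ sigma @ H_k.T + R  # covariance of the innovation
        dz = np.asarray(z_i) - np.asarray(expected_measurement(k))
        distances.append(float(dz @ np.linalg.inv(psi_k) @ dz))
    distances[num_landmarks] = threshold  # temporary landmark's distance
    ell = int(np.argmin(distances))
    return ell, ell == num_landmarks
```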

Practical Considerations

There are several practical considerations that improve the robustness of the above algorithm; however, as outlined above, the algorithm should work well enough in our examples.

  1. If the likelihoods of two features are similar, an incorrect association can occur
    • Choose an environment/sensor combination such that likelihoods are vastly different: for example, maintain a minimum distance between landmarks to help set a good threshold
    • The farther apart the landmarks are, the lower the chance of an incorrect association; however, fewer landmarks leads to a worse estimate, which in turn leads to incorrect associations.
  2. The algorithm is susceptible to spurious measurements/outliers. This can be improved by maintaining a provisional landmark list
    • Provisional landmarks: don't add a landmark right away, wait for a few measurements associated with it to come in
    • Provisional landmarks are not used for localization
    • An even more advanced idea is to maintain probability that landmarks exist
  3. EKF Slam only updates based on landmarks it sees not those that it does not see. We can use this to our advantage when doing data association.
    • Only add a new landmark if it is seen multiple times in the same location
      • Avoids false positives, makes the method more robust
  4. Multi-hypothesis Kalman filter
    • Run several filters with different data associations
  5. While it is possible to explicitly compute \(c_i\), you need not do this. For each measurement:
    1. Temporarily assume that the measurement is for a previously unseen landmark
    2. Compute Mahalanobis distances to each landmark
    3. Find the minimum Mahalanobis distance
    4. If the minimum is above a threshold, add the landmark permanently. Otherwise, incorporate the measurement for the closest landmark.
  6. Your landmark detection scheme should work perfectly on the perfect data from a simulation
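A provisional landmark list (consideration 2 above) might be sketched like this; the promotion count and matching radius are illustrative assumptions, not values from the notes:

```python
import math

class ProvisionalList:
    """Hold candidate landmarks until they have been re-observed enough
    times, then promote them to the permanent map. Provisional landmarks
    are never used for localization."""

    def __init__(self, promote_after=3):
        self.promote_after = promote_after
        self.candidates = []  # list of [location, hit_count]

    def observe(self, location, radius=0.5):
        """Record an observation. Return the candidate's location once it
        has been seen promote_after times (i.e., it should become a
        permanent landmark); otherwise return None."""
        for cand in self.candidates:
            (cx, cy), hits = cand
            if math.hypot(location[0] - cx, location[1] - cy) < radius:
                cand[1] += 1
                if cand[1] >= self.promote_after:
                    self.candidates.remove(cand)
                    return cand[0]
                return None
        self.candidates.append([tuple(location), 1])
        return None
```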


Author: Matthew Elwin