\(\def\dt{\Delta t}\)
\(\newcommand{\transpose}[0]{^\mathrm{T}}\)
\(\newcommand{\half}[0]{\tfrac{1}{2}}\)
\(\newcommand{\Half}[0]{\frac{1}{2}}\)
\(\newcommand{\norm}[1]{\left\lVert#1\right\rVert}\)
\(\newcommand\given[1][]{\:#1\vert\:}\)

Introduction to Path Planning


1 Introduction

One of the key algorithmic tools used in the ROS navigation stack is a motion planning algorithm. In the navstack, there are both "global" and "local" planners. In this page, we are going to talk about motion planning in general, and specifically, we will discuss the primary motion planning algorithms that are built into the global_planner package.

Planning in general can mean many different things, and the algorithms that we will discuss below are only a few basic examples that are commonly used in mobile robot navigation. Common terms encountered in the planning field include motion planning, trajectory planning, path planning, navigation problem, or the piano mover's problem, and depending on who you are talking to, each of these terms can mean very different things or possibly the exact same thing. When discussing planning, people may be talking about, for example, any of the following problems:

  • Determining control inputs for a nonlinear dynamic system to navigate safely from a starting state to an ending state
  • Determining a collision-free path for a robotic arm to grasp a desired object
  • Finding a driving path for a mobile robot to navigate safely through a set of obstacles

A planning problem may involve dynamics, or it may not. There may be constraints on what states are allowable, one may be concerned with time, there may be metrics defining the quality of a given plan, there may be actuation constraints, or transition constraints, and planning may be occurring in a continuous space or a discrete space. The point is that this is a huge field with people interested in solving a wide variety of problems, and tackling these problems using a wide array of strategies.

Here, we are specifically interested in the particular problem that the global planners in ROS solve, and how those planners work. Note that these planning algorithms can be applied to many other problems. Also note that the planners discussed today could also be useful for the local planners in ROS, but the most common local planners (dwa_local_planner and eband_local_planner) use different strategies.

1.1 Problem setup and description

In the ROS navstack, we are interested in a mobile robot navigating in a two-dimensional world. In the global planning problem, we must have the following items:

  1. A finite state space \(X\). In the ROS global_planner, this can be built from the cells in our costmap that have a less-than-lethal cost, i.e., the set of cells that our robot is allowed to visit. In the global_planner, the robot is assumed to be a point, i.e., the real-world size of the robot is ignored during planning. To ensure that the real robot doesn't collide with obstacles (even though we treat it as having infinitesimal size during planning), we use the inflation layer of the costmap to provide sufficient clearance around real-world obstacles.
  2. For every state in the allowable space, i.e. \(\forall x \in X\), we must have a finite set of actions \(U(x)\). For a real robot, we need some way to convert an infinite set of velocities into this finite set. In the global_planner, this is done by ignoring all kinematics and velocity constraints of the real robot, so the control space \(U(x)\) reduces to the controls that move the robot into neighboring, obstacle-free cells. In other words, the controls are just move up, move down, move left, and move right (assuming all four directions are obstacle-free).
  3. We must have a state transition function that maps an action \(u\in U(x)\) at a particular valid state \(x \in X\) forward to a new state, i.e. \(x^\prime = f(x,u)\) where \(x^\prime \in X\). In the global_planner, this map simply returns the appropriate neighboring cell.
  4. We need an initial state \(x_I \in X\).
  5. We also have a goal state \(x_G \in X\).

We are interested in finding a sequence of actions that takes us from \(x_I\) to \(x_G\).
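To make these five ingredients concrete, here is a minimal Python sketch of them on a toy 4×4 costmap. The grid contents, cost values, and helper names are illustrative only and are not taken from the global_planner source.

```python
# A sketch of the five planning ingredients on a small occupancy grid.

FREE, LETHAL = 0, 254

# 1. State space X: the allowable (less-than-lethal) cells of a 4x4
#    costmap, indexed as (row, col).
costmap = [
    [FREE, FREE,   FREE,   FREE],
    [FREE, LETHAL, LETHAL, FREE],
    [FREE, FREE,   FREE,   FREE],
    [FREE, FREE,   FREE,   FREE],
]
X = {(r, c) for r in range(4) for c in range(4) if costmap[r][c] < LETHAL}

# 2./3. Actions U(x) and transition function f(x, u): 4-connected moves.
MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def U(x):
    """Controls available at x: moves that land in an allowable cell."""
    return [u for u, (dr, dc) in MOVES.items()
            if (x[0] + dr, x[1] + dc) in X]

def f(x, u):
    """Apply control u at state x, returning the neighboring cell."""
    dr, dc = MOVES[u]
    return (x[0] + dr, x[1] + dc)

# 4./5. Initial and goal states.
x_I, x_G = (0, 0), (3, 3)
```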

2 Generic Search Algorithm

All of the algorithms that are used by the ROS global_planner can be written as a specific form of a generic forward search algorithm. The word forward is used to indicate that we are starting at the initial state and searching towards the goal. Other planning algorithms may start at the goal and work backwards, or they may simultaneously search from the initial state and goal at the same time.

Forward Search Algorithm
Input: initial state \(x_I\), goal state \(x_G\)
Output: SUCCESS or FAILURE

Q.insert(\(x_I\))
Mark \(x_I\) as visited
while Q not empty do
    \(x\) ← Q.get_first()
    if \(x \in X_G\)
        return SUCCESS
    forall \(u \in U(x)\)
        \(x^\prime\) ← \(f(x,u)\)
        if \(x^\prime\) not visited
            Mark \(x^\prime\) as visited
            Q.insert(\(x^\prime\))
        else
            Resolve duplicate \(x^\prime\)
return FAILURE

This general search algorithm forms the basis for both of the algorithms that will be discussed below. The primary difference between the algorithms is how the Q.get_first() function is implemented. Also note that as written, the algorithm only returns whether or not there is a path from the initial state to the goal state; it doesn't return the actual path. The only modification required to extract the actual path is to keep track of the parent state for each visited state, i.e. we need to record how we reached each state. Then, once we visit an \(x \in X_G\), all we have to do is follow the tree of parent states back to the initial state. Note that in the Resolve duplicate \(x^\prime\) line, we may need to update the parent for a particular state. In other words, for a particular state \(x_c\) we may have already found a parent \(x_p\) with a control \(u \in U(x_p)\) that maps the system to \(x_c\). But later in the search, it is possible that we will find a new parent that also has a control mapping to \(x_c\). In this case, we need some way of deciding which parent is the "best" parent.
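The pseudocode above, extended with the parent bookkeeping just described, can be sketched in Python roughly as follows. A FIFO queue is used for Q.get_first(), which makes this particular instance a breadth-first search; the function and variable names are my own, not the global_planner API.

```python
from collections import deque

def forward_search(U, f, x_I, x_G):
    """Generic forward search over a finite state space.

    U(x) returns the controls available at x, and f(x, u) is the state
    transition function. Returns the path from x_I to x_G as a list of
    states, or None on FAILURE. Swapping the queue discipline
    (Q.get_first) yields Dijkstra's algorithm or A*.
    """
    Q = deque([x_I])
    parent = {x_I: None}          # visited marks double as parent links
    while Q:
        x = Q.popleft()           # Q.get_first()
        if x == x_G:
            # SUCCESS: follow parent links back to x_I, then reverse.
            path = []
            while x is not None:
                path.append(x)
                x = parent[x]
            return path[::-1]
        for u in U(x):
            xp = f(x, u)
            if xp not in parent:  # x' not visited
                parent[xp] = x
                Q.append(xp)
    return None                   # FAILURE
```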

3 Dijkstra's Algorithm

Dijkstra's algorithm is nothing more than a clever way of sorting the priority queue Q that not only determines a path, but also results in an optimal path (at least according to some metric). Imagine we represent the planning problem as a directed graph. Each node in the graph is a state \(x \in X\), and each node \(x\) is connected to all nodes that can be reached from it by choosing any \(u \in U(x)\). To every edge in the graph we assign a nonnegative cost \(l(x,u)\), the "cost" of traveling from the parent to the child, i.e. the cost of applying the control \(u\) at \(x\). For a particular route through the graph connecting the initial node and the goal node, the total cost is simply the sum of all edge costs along the path.

Now, we will sort the queue Q smallest to largest according to the cost-to-come. The cost-to-come is a function \(C\,:\,X \rightarrow [0,\infty]\) that represents the total cost to get to a state \(x\) from the initial state \(x_I\). For a particular state \(x\), the optimal cost-to-come \(C^*(x)\) is the cost of following the minimum-cost path from the start to the state \(x\). In other words, if we were to find all possible paths from the initial state to the particular state, \(C^*(x)\) would be the cost of the path with the least total cost.

In the Resolve duplicate \(x^\prime\) line, essentially, we only replace the parent for a particular node if we have found a new path with lower cost than the path through the old parent. It turns out that because we always keep track of which path produces the lowest cost for a given node, and we always process the node with the lowest cost first, we are guaranteed that when the algorithm terminates, we will have found a path from the start to the goal with the minimum possible cost. The Proof of correctness section on Wikipedia does a good job of explaining why we are guaranteed to find the minimum cost path.

Note that if we never provided a goal state, and we let the algorithm terminate only after it had explored every possible path between every possible node, then the result of the algorithm would be the optimal path from any node to any other node.
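Dijkstra's algorithm can be sketched in this framework as follows, with the queue kept sorted by cost-to-come using a binary heap. The edge cost \(l(x,u)\) is a user-supplied nonnegative function, states are assumed to be comparable values such as grid-cell tuples, and all names here are illustrative rather than the global_planner implementation.

```python
import heapq

def dijkstra(U, f, l, x_I, x_G):
    """Forward search with Q sorted by the cost-to-come C(x).

    l(x, u) is the nonnegative cost of applying control u at state x.
    Returns (optimal cost, path) on success, or None if the goal is
    unreachable.
    """
    C = {x_I: 0.0}                 # best known cost-to-come per state
    parent = {x_I: None}
    Q = [(0.0, x_I)]               # min-heap ordered by cost-to-come
    done = set()
    while Q:
        c, x = heapq.heappop(Q)
        if x in done:
            continue               # stale entry from an earlier push
        done.add(x)
        if x == x_G:
            path = []
            while x is not None:
                path.append(x)
                x = parent[x]
            return c, path[::-1]
        for u in U(x):
            xp = f(x, u)
            cp = c + l(x, u)
            # "Resolve duplicate": keep whichever parent gives the
            # lower cost-to-come.
            if cp < C.get(xp, float("inf")):
                C[xp] = cp
                parent[xp] = x
                heapq.heappush(Q, (cp, xp))
    return None
```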

4 A*

The A* algorithm is very similar to Dijkstra's algorithm. The only difference is the value used to sort the queue: we augment the cost-to-come with a heuristic estimate of the cost to get from the node to the goal. This cost is referred to as the cost-to-go, and we will denote this term \(G(x)\). In Dijkstra's algorithm, if we are currently exploring all reachable states from a particular state \(x\), we already know the optimal cost-to-come \(C^*(x)\). However, it is impossible to know the optimal cost-to-go \(G^*(x)\) because we don't yet have a plan from \(x\) to the goal. This is why we have to choose a heuristic estimate of this cost. We will denote our estimate of the optimal cost-to-go with \(\hat{G}^*(x)\).

In A* we now sort our queue according to \(C^*(x) + \hat{G}^*(x)\). Just like Dijkstra's algorithm, A* is capable of finding optimal paths. However, this is only true if the heuristic never overestimates the optimal cost-to-go. This type of heuristic is referred to as an admissible heuristic. Typical heuristics involve estimates of the straight-line distance from the node \(x\) to a goal \(x_G\). This type of heuristic is typically admissible because we know that the straight line is the shortest possible distance to the goal; thus, the optimal cost-to-go cannot be lower than this estimate.
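A* can be sketched in Python by sorting the priority queue on \(C(x) + \hat{G}(x)\) instead of \(C(x)\) alone; here h is the heuristic \(\hat{G}^*\). The helper names and signature are illustrative, not the global_planner API.

```python
import heapq

def a_star(U, f, l, h, x_I, x_G):
    """Forward search with Q sorted by C(x) + G_hat(x).

    h(x) is the heuristic estimate of the optimal cost-to-go; if h never
    overestimates (is admissible), the returned path is optimal.
    Setting h = lambda x: 0.0 recovers Dijkstra's algorithm exactly.
    """
    C = {x_I: 0.0}
    parent = {x_I: None}
    Q = [(h(x_I), 0.0, x_I)]       # (C + G_hat, C, state)
    done = set()
    while Q:
        _, c, x = heapq.heappop(Q)
        if x in done:
            continue
        done.add(x)
        if x == x_G:
            path = []
            while x is not None:
                path.append(x)
                x = parent[x]
            return c, path[::-1]
        for u in U(x):
            xp = f(x, u)
            cp = c + l(x, u)
            if cp < C.get(xp, float("inf")):
                C[xp] = cp
                parent[xp] = x
                heapq.heappush(Q, (cp + h(xp), cp, xp))
    return None
```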

4.1 Comparison of A* and Dijkstra

Generally, if an appropriate heuristic can be chosen, A* will find an optimal path from an initial state to a goal state using fewer iterations than Dijkstra's algorithm. This is because A* tends to expand the graph in a direction pointed towards the goal, while Dijkstra tends to expand the graph in all directions evenly. Thus, A* is sometimes referred to as a best-first search algorithm, and Dijkstra's as a breadth-first search algorithm. However, A* is not always better. Finding an "appropriate" heuristic is sometimes challenging, as the performance of a given heuristic may be influenced by the particulars of the problem. Further, if the heuristic is computationally expensive, it may be more efficient to perform more iterations of Dijkstra's algorithm and avoid ever evaluating the heuristic. If the heuristic is chosen to be zero (which is an admissible heuristic), then A* reduces to Dijkstra's algorithm.

5 Specifics of global_planner

In this section I will describe a few details of how the planning algorithms are implemented in the global_planner package.

Algorithms
The package can be configured to use either A* or Dijkstra's algorithm. This is done with the ~<name>/use_dijkstra parameter (defaults to true).
Robot Point
The robot is assumed to be a point of infinitesimal size. Obstacle inflation is used to ensure that the real, finite-sized robot doesn't collide with obstacles.
Robot Kinematics
No kinematics of the robot are considered, i.e. the plan that is returned may not be perfectly follow-able for a given robot. It is assumed that the local planner will produce feasible velocity commands that follow the global plan in a collision-free way (hopefully closely). This decision is primarily made for computational efficiency reasons.
Cost Function
The cost that is used for calculating the cost-to-go is a bit unique in the global_planner. It is based on the paper linked below titled A Gradient Method for Realtime Robot Control. Basically, instead of using a simple distance to evaluate \(l(x,u)\), they additionally consider the non-lethal unsigned 8-bit integer costs of each cell in the costmap; this is referred to as the intrinsic cost. If a particular cell contains an obstacle, that cell will have a LETHAL_OBSTACLE intrinsic cost (integer value 254). All cells within the inscribed radius of that obstacle will have an INSCRIBED_INFLATED_OBSTACLE intrinsic cost (integer value 253). Cells beyond the outer boundaries of these "inflated" cells are assigned a cost that decays exponentially with distance. This is described on this costmap page. The values assigned to the cells are defined in this header file. The global_planner will never allow the path to pass through a cell with either LETHAL_OBSTACLE or INSCRIBED_INFLATED_OBSTACLE as its intrinsic cost, as these are "for-sure" collisions. However, it will allow the robot to pass through any other cell with an intrinsic cost between FREE_SPACE and INSCRIBED_INFLATED_OBSTACLE.
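The exponential decay of intrinsic cost with distance can be sketched as follows. This mirrors the inflation-cost formula documented for costmap_2d, but the function name and the default cost_scaling_factor value here are assumptions for illustration, not values read from the global_planner source.

```python
import math

LETHAL_OBSTACLE = 254
INSCRIBED_INFLATED_OBSTACLE = 253

def intrinsic_cost(distance, inscribed_radius, cost_scaling_factor=10.0):
    """Sketch of a cell's intrinsic cost as a function of the distance
    (in meters) to the nearest lethal obstacle.

    Inside the obstacle the cost is lethal; within the inscribed radius
    it is a for-sure collision for any robot orientation; beyond that it
    decays exponentially, controlled by cost_scaling_factor.
    """
    if distance == 0.0:
        return LETHAL_OBSTACLE
    if distance <= inscribed_radius:
        return INSCRIBED_INFLATED_OBSTACLE
    decay = math.exp(-cost_scaling_factor * (distance - inscribed_radius))
    return int((INSCRIBED_INFLATED_OBSTACLE - 1) * decay)
```

A larger cost_scaling_factor makes the cost fall off more sharply, so the planner hugs obstacles more closely; a smaller value pushes paths further from obstacles.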
Grid Plans
The ~<name>/use_grid_path parameter toggles whether the global plan should perfectly follow the center of each cell in the costmap. If this is false, the produced path can pass through any portion of any cell. This non-grid path is constructed by estimating the gradient of the total cost assigned to each cell (the cost-to-come, plus the cost-to-go estimate if using A*, plus the intrinsic cost), following that gradient for a short distance, and then re-interpolating the gradient.
Orientation Filter
Paths that are produced by the global_planner are of type nav_msgs/Path, which is basically an array of geometry_msgs/PoseStamped messages. This matters because this path includes a description of the orientation of the robot. However, by assuming that we have a point robot, and by specifying our initial state and goal state as just a 2D point in the costmap, the result of the global_planner has no intrinsic knowledge of the appropriate orientation that the robot should have as it traverses the global plan. After planning is complete, the global_planner package then applies an "orientation filter" to the planned path to fill out the orientations using a variety of different assumptions. To set the behavior of the orientation filter, use the ~<name>/orientation_mode dynamic_reconfigure parameter. The values that this parameter is allowed to take, and their meanings are described in the GlobalPlanner.cfg file.

6 Other Algorithms

6.1 Rapidly exploring random tree

One of the shortcomings of both of the algorithms discussed above is that they do not scale well to systems with high-dimensional state or control spaces, and it is difficult to successfully implement them for robotic systems with complex dynamics. For many systems that roboticists are interested in, the above planners become computationally intractable (although they work great for planar mobile robot navigation). A rapidly exploring random tree (RRT) is a common planning algorithm often employed where the above grid-based algorithms are not ideal. There are many variations of RRTs, and many of them even try to combine ideas from the algorithms discussed above.

You can read a bit more about RRTs on a page in the resource section of this course's website.

6.2 Probabilistic roadmap

A probabilistic roadmap (PRM) is often combined with one of the algorithms above for solving planning problems. In the algorithms above, we start by assuming that we have a discrete, finite representation of a robot's state space e.g. in the navstack, we have "gridded up" the area the robot is allowed to navigate in with an arbitrary grid size. We also know which cells a robot is allowed to be in. The algorithms above then essentially reduce to searching through a graph of possible states to find an optimal path in the graph. It's not always clear how a state space should be discretized, and by discretizing the entire space, we are likely wasting computational resources discretizing and searching in areas of the state space that are unimportant.

A PRM is a technique for building the graph that you will later search for an optimal path with. The idea is to randomly sample the robot's state space, and then connect these samples with neighboring samples using some sort of collision-free local planning technique. Once the graph is "sufficiently dense", you can then search through the graph for an optimal plan.

7 Notes and Resources

7.1 Notes on the global planner used in the navstack

Creative Commons License
ME 495: Embedded Systems in Robotics by Jarvis Schultz is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.