
3-D Sensing

Overview

There are many methods of providing 3-D sensor data to a robot, including ultrasound/sonar, radar, lidar, and stereo vision. Here we focus specifically on light-based (lidar or vision) sensing modalities.

Depth Image

A depth image or depth map is a 2-dimensional image where each pixel value corresponds to the distance of an object from the camera. There are several methods for generating a depth image.
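The idea can be illustrated directly: a depth image is just a 2D array indexed by pixel. The values below are a hypothetical 3x3 depth map in meters, not data from any particular sensor.

```python
# A depth image is a 2D array: depth_m[row][col] is the distance (here in
# meters) from the camera to whatever object is seen at that pixel.
depth_m = [
    [1.2, 1.2, 3.5],
    [1.1, 0.9, 3.5],
    [1.1, 0.9, 3.6],
]

row, col = 1, 1
print(f"Object at pixel ({row}, {col}) is {depth_m[row][col]} m away")
```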

Structured Light

  1. An active light emitter outputs a pattern
  2. Objects deform the pattern
  3. Image sensor senses the pattern and, using the distortion, computes the depth map
  4. Good for indoor, short-range use, because the sensor must be able to see the projected pattern
  5. Can suffer interference when, for example, multiple devices output patterns simultaneously

Stereo Depth

  1. Two images are taken from cameras a known, fixed distance apart
  2. By comparing where features appear in the two images, the depth can be computed
  3. The farther an object is from the sensors, the smaller the disparity (i.e., the more similar the two images are)
  4. No active light emission is required, so multiple stereo sensors do not interfere with each other
  5. Range is limited by the distance (baseline) between the two sensors
  6. Active IR can be used to create known features and enhance the accuracy
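The relationship in the steps above is the standard pinhole stereo formula, z = f * B / d: depth is focal length times baseline divided by disparity. A minimal sketch, with hypothetical calibration values (600 px focal length, 7.5 cm baseline):

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth from a calibrated stereo pair: z = f * B / d.

    focal_px:     focal length in pixels (from camera calibration)
    baseline_m:   distance between the two cameras, in meters
    disparity_px: horizontal shift of a feature between the two images, in pixels
    """
    if disparity_px <= 0:
        raise ValueError("zero disparity corresponds to a point at infinity")
    return focal_px * baseline_m / disparity_px

# Farther objects produce smaller disparities (the two images look more
# similar), which is why accuracy degrades with range.
print(stereo_depth(600.0, 0.075, 45.0))  # 1.0 m
print(stereo_depth(600.0, 0.075, 9.0))   # 5.0 m
```

Note that depth scales with the baseline, which is why the distance between the two sensors limits the usable range.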

Time of Flight

  • Lidar
  • Measures depth by emitting light and timing how long it takes to return to the sensor
  • Very long range
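The timing computation is z = c * t / 2, since the measured interval covers the round trip out and back. A minimal sketch:

```python
C = 299_792_458.0  # speed of light, m/s

def tof_depth(round_trip_s: float) -> float:
    """Time-of-flight depth: the pulse travels out and back, so z = c * t / 2."""
    return C * round_trip_s / 2.0

# A target ~10 m away returns the pulse after only ~66.7 nanoseconds, which
# shows why time-of-flight sensors need very fast timing electronics.
print(tof_depth(66.7e-9))
```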

Point Clouds

  • A point cloud is a group of 3-dimensional coordinates (points) that constitutes a discrete description of a 3-dimensional object.
  • The Point Cloud Library (PCL) contains routines for manipulating point clouds
  • perception_pcl is a ROS meta-package for point clouds: install ros-iron-perception-pcl
  • Not yet ported from ROS 1:
    • Point Cloud Tutorials
    • pcl_ros, a package that enables a user to run a series of nodes performing specific point-cloud operations
      • It appears to be installable on iron, and parts of it work, but parts of it definitely do not.

How To Point Cloud

  1. A depth map provides the distance from the camera in pixel coordinates
  2. Using a camera calibration, this depth map is converted into real-world x,y,z coordinates
  3. The pixels can be superimposed with data taken from an RGB camera to provide a 3D image
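Step 2 above uses the pinhole camera model: given intrinsics fx, fy (focal lengths in pixels) and cx, cy (principal point), a pixel (u, v) with depth z maps to x = (u - cx) * z / fx and y = (v - cy) * z / fy. A minimal sketch with hypothetical intrinsics for a 640x480 camera:

```python
def deproject(u, v, depth_m, fx, fy, cx, cy):
    """Convert a pixel (u, v) with measured depth into camera-frame (x, y, z)
    using the pinhole model. fx, fy, cx, cy come from the camera calibration
    (the intrinsic matrix)."""
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

# Hypothetical intrinsics: 500 px focal lengths, principal point at the
# image center of a 640x480 sensor.
fx = fy = 500.0
cx, cy = 320.0, 240.0
print(deproject(420, 240, 2.0, fx, fy, cx, cy))  # (0.4, 0.0, 2.0)
```

Applying this to every pixel of a depth image yields the point cloud; copying the color of the corresponding RGB pixel onto each point gives the colored 3D image described in step 3.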

Devices that Generate Point Clouds

Manipulating Point Clouds

There are several ways of manipulating point clouds in ROS.

ROS

  • In ROS, point clouds are sent using sensor_msgs/msg/PointCloud2 messages
    • You can access the message fields directly, but the format is not the most intuitive
  • The pcl_ros package (not fully ported to ROS 2) is the primary ROS package for manipulating point clouds
    • It includes several stand-alone filters that can be loaded as nodes (nodelets in ROS 1) to perform common operations

C++

  • The pcl_conversions package lets you convert point-cloud messages into C++ data types for use with the Point Cloud Library
    • While this is the most common way of handling point clouds in ROS, it is C++-only

Python
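  • In ROS 2, the sensor_msgs_py package provides point_cloud2 helpers (e.g., read_points, create_cloud) for working with PointCloud2 messages in Python
  • Underneath, a PointCloud2 message is just packed binary data described by its fields and point_step

A minimal sketch of that underlying layout, using only the standard library. It assumes a common (but not universal) arrangement: little-endian float32 x, y, z at offsets 0, 4, 8 with a 16-byte point step; real code should read the layout from the message's fields rather than hard-coding it.

```python
import struct

POINT_STEP = 16  # bytes per point (assumed; real code reads msg.point_step)

def pack_points(points):
    """Pack (x, y, z) tuples into PointCloud2-style bytes: three little-endian
    float32 values followed by padding out to the point step."""
    data = b""
    for x, y, z in points:
        data += struct.pack("<fff", x, y, z) + b"\x00" * (POINT_STEP - 12)
    return data

def unpack_points(data):
    """Yield (x, y, z) tuples back out of the packed byte buffer."""
    for offset in range(0, len(data), POINT_STEP):
        yield struct.unpack_from("<fff", data, offset)

cloud = pack_points([(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)])
print(list(unpack_points(cloud)))  # [(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)]
```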

Resources

Author: Matthew Elwin