3-D Sensing
Overview
There are many methods of providing 3-D sensor data to a robot, including ultrasound/sonar, radar, lidar, and stereo vision. Here we focus specifically on light-based (lidar or camera) sensing modalities.
Depth Image
A depth image (or depth map) is a 2-dimensional image where each pixel value corresponds to the distance of the imaged surface from the camera. There are several methods for generating a depth image.
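For intuition, a depth image is just a 2-D array of ranges. A toy 3x3 example in Python (the values are made up):

import numpy as np

# Each entry is the distance (meters) from the camera to the surface
# seen at that pixel; 0 (or NaN) conventionally marks "no reading".
depth_m = np.array([[1.2, 1.2, 3.5],
                    [1.1, 0.0, 3.6],
                    [1.1, 2.8, 3.6]])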
Structured Light
- An active light emitter outputs a pattern
- Objects deform the pattern
- Image sensor senses the pattern and, using the distortion, computes the depth map
- Good for indoor, short-range use, because the sensor must be able to see the projected pattern
- Subject to interference when, for example, multiple devices project patterns into the same scene
Stereo Depth
- Two images taken some known fixed distance apart
- By comparing where features appear in the two images, the depth can be computed
- The farther an object is from the sensors, the smaller the disparity (shift) between its positions in the two images, i.e., the more similar the images look (see the sketch after this list)
- No active light emission is required, so multiple stereo cameras do not interfere with each other
- Range is limited by the distance (baseline) between the two sensors
- Active IR projection can be used to add known features (texture) and enhance accuracy
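The depth/disparity relationship behind these points can be sketched in a few lines of Python. This is a minimal illustration assuming a rectified stereo pair; the focal length and baseline values are made up, not taken from a particular camera.

import numpy as np

def disparity_to_depth(disparity_px, fx_px=600.0, baseline_m=0.075):
    """Convert disparity (pixels) to depth (meters): depth = fx * baseline / disparity."""
    disparity_px = np.asarray(disparity_px, dtype=np.float64)
    depth_m = np.full_like(disparity_px, np.inf)
    valid = disparity_px > 0
    depth_m[valid] = fx_px * baseline_m / disparity_px[valid]
    return depth_m

# Far objects produce small disparities, so a matching error of about
# one pixel causes a much larger depth error at long range.
print(disparity_to_depth([60.0, 2.0, 1.0]))  # ~[0.75, 22.5, 45.0] meters

A larger baseline increases the disparity for a given depth, which is why the usable range grows with the separation between the two sensors.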
Time of Flight
- Lidar
- Measures depth by emitting light and timing how long it takes to return to the sensor (see the arithmetic sketch after this list)
- Very long range
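The arithmetic behind time-of-flight ranging is simply round-trip timing at the speed of light; a toy sketch (the variable names are ours, not any driver's API):

SPEED_OF_LIGHT_M_S = 299_792_458.0

def tof_distance_m(round_trip_time_s):
    # The light travels out to the target and back, so halve the round trip.
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2.0

# A target at ~100 m returns in roughly 667 nanoseconds, which is why
# lidar timing electronics must resolve sub-nanosecond intervals.
print(tof_distance_m(667e-9))  # ~100 m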
Point Clouds
- A point cloud is a set of 3-dimensional coordinates (points) that constitutes a discrete description of 3-dimensional objects (see the toy example after this list)
- The Point Cloud Library (PCL) provides C++ routines for manipulating point clouds
- perception_pcl is a ROS metapackage for point clouds
- pcl_ros is used to convert between Point Cloud Library and ROS 2 data types
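In code, a point cloud is commonly stored as an N x 3 array of x, y, z coordinates; a toy example (not tied to any particular library):

import numpy as np

# Four points, in meters, sampled from some surface in front of the sensor.
cloud_xyz = np.array([[0.0, 0.0, 1.0],
                      [0.1, 0.0, 1.0],
                      [0.0, 0.1, 1.1],
                      [0.1, 0.1, 1.1]])
print(cloud_xyz.shape)  # (4, 3)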
How To Point Cloud
- A depth map provides the distance from the camera at each pixel coordinate
- Using the camera calibration (intrinsics), the depth map is converted into real-world x, y, z coordinates (see the sketch after this list)
- The points can be superimposed with data from an RGB camera to provide a colored 3D image
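Putting these steps together, here is a minimal back-projection sketch assuming a pinhole camera model; the intrinsics (fx, fy, cx, cy) are illustrative and would normally come from the camera's calibration:

import numpy as np

def depth_to_points(depth_m, fx, fy, cx, cy):
    """Back-project a depth map (meters) into camera-frame x, y, z points."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx   # inverse of the pinhole projection u = fx * x / z + cx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    # Keep only finite, positive-depth points.
    return points[np.isfinite(points).all(axis=1) & (points[:, 2] > 0)]

# A flat wall 2 m away, seen by a 640x480 camera with made-up intrinsics.
pts = depth_to_points(np.full((480, 640), 2.0), fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(pts.shape)  # (307200, 3)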
Devices that Generate Point Clouds
- Lidars/laser scanners (e.g., Velodyne lidars, SCIP 2.2 laser range-finders)
- Stereo Cameras
- Depth Cameras
Manipulating Point Clouds
There are several ways of manipulating point clouds in ROS.
ROS
- In ROS, point clouds are sent using sensor_msgs/PointCloud2 messages
- You can access the message fields directly, but the packed binary format is not the most intuitive (see the unpacking sketch after this list)
- The pcl_ros package (not yet fully ported to ROS 2) is the primary ROS package for manipulating point clouds
- It includes several stand-alone filters that can be loaded as nodelets to perform common operations
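To see why raw field access is awkward, here is a hedged sketch of unpacking a single point by hand. It assumes the common layout of float32 x, y, z at byte offsets 0, 4, and 8 in a little-endian message; a real cloud's layout must be read from msg.fields, msg.point_step, and msg.is_bigendian.

import struct

def first_point_xyz(msg):
    """Manually unpack x, y, z of the first point in a PointCloud2 message.

    Assumes float32 x, y, z at offsets 0, 4, 8 and little-endian data;
    check msg.fields and msg.is_bigendian before relying on this layout.
    """
    return struct.unpack_from("<fff", bytes(msg.data), offset=0)

The helpers in pcl_ros (and, for Python, sensor_msgs_py below) hide this bookkeeping.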
C++
- The pcl_conversions package lets you convert point cloud messages into C++ data types for use with the Point Cloud Library
- While this is the most common way of handling point clouds in ROS, it is C++ only
- Unfortunately, the Python bindings to the Point Cloud Library are no longer maintained
Python
- The sensor_msgs_py package contains utilities for converting from PointCloud2 datatypes to numpy arrays
For details:
from sensor_msgs_py import point_cloud2
help(point_cloud2)
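A short usage sketch, assuming a recent ROS 2 distribution in which point_cloud2.read_points returns a numpy structured array (older distributions return a generator, so check your version):

import numpy as np
from sensor_msgs_py import point_cloud2

def cloud_to_xyz(cloud_msg):
    """Return an N x 3 float array of x, y, z from a PointCloud2 message."""
    pts = point_cloud2.read_points(cloud_msg, field_names=("x", "y", "z"),
                                   skip_nans=True)
    # pts is a structured array with one named column per requested field.
    return np.stack([pts["x"], pts["y"], pts["z"]], axis=-1)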
- More details at https://pointclouds.org/