3-D Sensing
Overview
There are many methods of providing 3-D sensor data to a robot, including ultrasound, sonar, radar, lidar, and stereo vision. Here we focus specifically on light-based (lidar or vision) sensing modalities.
Depth Image
A depth image or depth map is a 2-dimensional image where each pixel value corresponds to the distance of an object from the camera. There are several methods for generating a depth image.
Structured Light
- An active light emitter outputs a pattern
- Objects deform the pattern
- Image sensor senses the pattern and, using the distortion, computes the depth map
- Good for indoors and short range, because the sensor must be able to see the projected pattern
- Can be interfered with when, for example, multiple devices output patterns simultaneously
Stereo Depth
- Two images are taken by cameras a known, fixed distance apart
- By comparing where the same features appear in the images taken from different angles, the depth can be computed (see the sketch below)
- The farther an object is from the sensors, the more similar the two images are
- No active light emission is required, so stereo cameras do not interfere with each other
- Range is limited by the baseline: the distance between the two sensors
- Active IR can be used to create known features and enhance the accuracy
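A minimal sketch of the underlying geometry, assuming a rectified pinhole stereo pair (the numbers are illustrative, not from any particular camera):

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth from a rectified stereo pair: Z = f * B / d.

    focal_px:     focal length in pixels
    baseline_m:   distance between the two cameras, in meters
    disparity_px: horizontal shift of the same feature between the two images
    """
    return focal_px * baseline_m / disparity_px

# Small disparities (similar images) mean far objects, and the usable
# range grows with the baseline:
print(stereo_depth(600.0, 0.1, 60.0))  # 1.0 m
print(stereo_depth(600.0, 0.1, 2.0))   # 30.0 m
```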
Time of Flight
- Lidar
- Measures depth by emitting light and timing how long it takes to return to the sensor (see the sketch below)
- Very long range
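The timing computation itself is simple; here is a sketch (the round-trip time is what the hardware actually measures):

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_depth(round_trip_s: float) -> float:
    """Depth from time of flight: the pulse travels out and back,
    so the one-way distance is c * t / 2."""
    return SPEED_OF_LIGHT * round_trip_s / 2.0

# A 100 ns round trip corresponds to roughly 15 m:
print(tof_depth(100e-9))  # ~14.99 m
```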
Point Clouds
- A point cloud is a set of 3-dimensional coordinates (points) that constitutes a discrete description of 3-dimensional objects
- The Point Cloud Library (PCL) contains routines for manipulating point clouds
- perception_pcl is a ROS meta-package for point clouds: install ros-iron-perception-pcl
Not Ported from ROS 1 (yet)
- Point Cloud Tutorials
- pcl_ros is a package that enables a user to run a series of nodes performing specific point-cloud operations
- It appears to be installable on iron, and parts of it work, but parts of it definitely do not
How To Point Cloud
- A depth map provides the distance from the camera at each pixel coordinate
- Using a camera calibration, this depth map is converted into real-world x, y, z coordinates (see the sketch below)
- The points can be combined with data from an RGB camera to provide a colored 3D image
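A minimal sketch of that conversion, assuming a pinhole camera model; the intrinsics here are made up, and in practice they come from the camera calibration (for example, a sensor_msgs/msg/CameraInfo message):

```python
import numpy as np

# Illustrative pinhole intrinsics: focal lengths (fx, fy) in pixels
# and principal point (cx, cy). Real values come from calibration.
fx, fy, cx, cy = 600.0, 600.0, 320.0, 240.0

def depth_pixel_to_point(u: float, v: float, depth_m: float) -> np.ndarray:
    """Back-project a single depth-image pixel (u, v) into an
    (x, y, z) point in the camera frame using the pinhole model."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

print(depth_pixel_to_point(400.0, 300.0, 2.0))  # approx [0.267, 0.2, 2.0]
```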
Devices that Generate Point Clouds
- Lidars/Laser Scanners
  - Velodyne
  - SCIP 2.2 laser range-finders
- Stereo Cameras
- Depth Cameras
  - Intel RealSense
  - OpenNI devices (like the Asus Xtion 2)
  - Kinect for Xbox 360, using freenect_launch
  - Kinect for Xbox One, using iai_kinect2
Manipulating Point Clouds
There are several ways of manipulating point clouds in ROS.
ROS
- In ROS, point clouds are sent using sensor_msgs/msg/PointCloud2 messages
- You can access the message fields directly, but the format is not the most intuitive
- The pcl_ros package (not yet fully ported to ROS 2) is the primary ROS package for manipulating point clouds
- It includes several stand-alone filters that can be loaded as nodelets to perform common operations
C++
- The pcl_conversions package lets you convert point cloud messages into C++ data types for use with the Point Cloud Library
- While this is the most common way of handling point clouds in ROS, it is C++-only
Python
- The sensor_msgs_py package contains utilities for converting from PointCloud2 datatypes to numpy arrays
For details:

```python
from sensor_msgs_py import point_cloud2
help(point_cloud2)
```
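For example, here is a sketch of pulling x, y, z coordinates out of a message with the read_points helper (in recent ROS 2 distributions it returns a numpy structured array; in older ones, a generator, and this loop handles both):

```python
import numpy as np
from sensor_msgs_py import point_cloud2

def cloud_to_xyz(cloud):
    """Convert a sensor_msgs/msg/PointCloud2 into an N x 3 numpy array,
    dropping any NaN points."""
    pts = point_cloud2.read_points(cloud, field_names=("x", "y", "z"), skip_nans=True)
    return np.array([(p[0], p[1], p[2]) for p in pts], dtype=np.float32)
```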
- The python-pcl library lets you use the Point Cloud Library from Python (install with apt install python-pcl); see the sketch at the end of this section
- More details at https://pointclouds.org/
- An example project https://github.com/m-elwin/pointcloud_demo
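As a rough sketch of using python-pcl, here is a voxel-grid downsample; the method names follow the python-pcl bindings, and the leaf size is arbitrary:

```python
import numpy as np
import pcl  # the python-pcl bindings

# Build a cloud from an N x 3 float32 array (here, random points).
cloud = pcl.PointCloud()
cloud.from_array(np.random.rand(1000, 3).astype(np.float32))

# Downsample with a 5 cm voxel grid.
vox = cloud.make_voxel_grid_filter()
vox.set_leaf_size(0.05, 0.05, 0.05)
downsampled = vox.filter()
print(cloud.size, "->", downsampled.size)
```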