UP | HOME

Homework 3

Remember to follow the Homework Guidelines

Task F

Lasers!

Task F.6 (Laser Scanner)

  • In nuturtle_control's start_robot.launch, start the hlds_laser_publisher node from the hls_lfcd_lds_driver package whenever robot is set to localhost
  • The hls_lfcd_lds_driver package has a launchfile that can be used. The port is /dev/ttyUSB0

Task V (Visualization)

  • This task gathers all the visualization requirements for each important mode of running the code in this project.
  • Implementing this task depends on having multiple rviz configuration files, one for each situation.
  • The general rule of thumb is that if a launchfile starts rviz, starting rviz should be optional. This way the launchfile can be included in a higher-level launchfile that loads its own rviz setup.
  • Some elements have already been implemented in previous assignments, and others will be added in later tasks.
  • You may find that publishing additional messages/tfs and modifying existing nodes will help accomplish these tasks.

Task V.1 (Simulation Standalone)

When nusim is run in a standalone manner (e.g., ros2 launch nusim nusim), the following should be displayed:

From Previous Homeworks

  1. The walls (in red)
  2. The obstacles (in red)
  3. The robot model, at the robot's location (in red)
  4. The color red indicates that the item is the "ground truth", which can only be known from the simulation

To Add

  1. A nav_msgs/Path (in red) tracing the path of the robot (a publishing sketch follows this list)
  2. A MarkerArray (in yellow) displaying the relative \((x,y)\) measurements of the markers
    • These markers are added in task C.10, and will not necessarily align with the red markers due to noise.
    • If the measured marker cannot be seen by the robot, it should not be displayed
  3. A sensor_msgs/LaserScan displaying the simulated laser scan data (in red)
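
The path item above can be satisfied by accumulating a pose into a nav_msgs/msg/Path each simulation step. Below is a minimal sketch of one way to do this from inside nusim; the frame name nusim/world and the member names (x_, y_, theta_, path_, path_pub_) are illustrative assumptions, not requirements.

  #include "geometry_msgs/msg/pose_stamped.hpp"
  #include "nav_msgs/msg/path.hpp"
  #include "tf2/LinearMath/Quaternion.h"
  #include "tf2_geometry_msgs/tf2_geometry_msgs.hpp"

  // Called from the simulation timer. x_, y_, theta_ hold the ground-truth pose,
  // path_ is a nav_msgs::msg::Path member, and path_pub_ publishes it (e.g., on ~/path).
  void Nusim::publish_path()
  {
    geometry_msgs::msg::PoseStamped pose;
    pose.header.stamp = get_clock()->now();
    pose.header.frame_id = "nusim/world";
    pose.pose.position.x = x_;
    pose.pose.position.y = y_;
    tf2::Quaternion q;
    q.setRPY(0.0, 0.0, theta_);
    pose.pose.orientation = tf2::toMsg(q);

    path_.header = pose.header;
    path_.poses.push_back(pose);
    path_pub_->publish(path_);
  }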

Task V.2 (Simulation with Odometry)

When nusim is run with the odometry node (e.g., ros2 launch nuturtle_control start_robot.launch robot:=nusim), the following should be displayed:

  1. Everything from task V.1
  2. The robot model, at the location of the odometry estimate (in blue)
  3. A nav_msgs/Path corresponding to the path the robot takes according to odometry measurements (in blue)
  4. Blue indicates information learned via odometry

Task V.3 (Real robot with Odometry)

When ros2 launch nuturtle_control start_robot.launch robot:=none is run (on your computer) and ros2 launch nuturtle_control start_robot.launch robot:=localhost is run (on the turtlebot), the following should be displayed:

  1. Draw the actual obstacle locations and walls (in red)
    • Add a parameter called draw_only to nusim that causes it to just draw the obstacles from the configuration file rather than simulating anything
    • When the node is launched under these circumstances, set its name to nuwall to distinguish it from a running simulation
  2. Draw the robot according to the odometry (in blue)
  3. A nav_msgs/Path corresponding to the path the robot takes according to odometry measurements (in blue)
  4. A sensor_msgs/LaserScan displaying the real laser scan data (in yellow)

Task V.4 (Simulated Robot With SLAM)

When ros2 launch nuslam slam.launch.xml robot:=nusim is used, the following should be displayed:

  1. Everything from Task V.2
    • The odometry node in this task will be broadcasting from odom to green/base_footprint, and there will also be a transform from map->odom
    • The blue robot should be located where the robot thinks it is according to its uncorrected odometry in the world frame
    • Hint: You will need to have an additional tf broadcast from world to blue/base_footprint. You can either modify the odometry node or run two of them to see where the robot is according to its odometry, without correction from SLAM. This extra tf is needed because the odometry used by SLAM is continuously corrected using the sensor measurements (a broadcasting sketch follows this list)
  2. The map, consisting of the estimated obstacle locations (in green)
  3. The robot (in green) at the estimated robot location (from SLAM)
  4. A nav_msgs/Path corresponding to the path the robot takes according to slam (in green).
  5. Green represents the best estimate of the robot location, according to SLAM
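
One way to satisfy the hint about the extra world to blue/base_footprint broadcast is sketched below, assuming you modify the odometry node rather than run a second copy. The frame names come from the task; x_, y_, theta_ (the raw odometry estimate) and tf_broadcaster_ are illustrative member names.

  #include "geometry_msgs/msg/transform_stamped.hpp"
  #include "tf2/LinearMath/Quaternion.h"
  #include "tf2_geometry_msgs/tf2_geometry_msgs.hpp"
  #include "tf2_ros/transform_broadcaster.h"

  // Broadcast the uncorrected odometry estimate as world -> blue/base_footprint.
  void Odometry::broadcast_blue_tf()
  {
    geometry_msgs::msg::TransformStamped t;
    t.header.stamp = get_clock()->now();
    t.header.frame_id = "world";
    t.child_frame_id = "blue/base_footprint";
    t.transform.translation.x = x_;
    t.transform.translation.y = y_;
    tf2::Quaternion q;
    q.setRPY(0.0, 0.0, theta_);
    t.transform.rotation = tf2::toMsg(q);
    tf_broadcaster_->sendTransform(t);
  }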

Task C (Simulation)

Task C.9 (Robot Position)

We will now add error/noise into the nusim robot simulation.

  1. Create parameters for the nusim node called input_noise and slip_fraction
  2. The following steps should occur whenever the simulator receives a wheel_cmd (nuturtlebot_msgs/msg/WheelCommands) message (a sketch of this noise model follows the list)
    1. Let \(u_i\) be the commanded wheel velocity, in rad/sec
    2. Let \(v_i = u_i + w_i\), where \(w_i\) is zero-mean Gaussian noise with variance input_noise, when \(u_i \neq 0\), and let \(v_i = u_i\) otherwise.
      • When \(u_i = 0\) we are commanding the wheels to stop, and are confident in the ability of the motors to achieve that
      • However, when commanding other wheel speeds, we may not move exactly as commanded.
    3. After computing \(v_i\), move the robot accordingly
    4. Let \(\eta_r\) and \(\eta_l\) be uniform random between -slip_fraction and slip_fraction
      • Update the right wheel position as if it had followed \(v_i*(1+\eta_r)\) and the left wheel's position as if it had followed \(v_i*(1+\eta_l)\)
      • In this way, the actual wheel positions don't match where they would be if they had not slipped
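
A sketch of the noise model above, as it might appear in the wheel_cmd callback. The parameters input_noise and slip_fraction are from this task; the WheelCommands field names, the cmd_to_rad_sec() helper, the random generator gen_, and the other member names are assumptions for illustration.

  #include <cmath>
  #include <random>
  #include "nuturtlebot_msgs/msg/wheel_commands.hpp"

  void Nusim::wheel_cmd_callback(const nuturtlebot_msgs::msg::WheelCommands & msg)
  {
    // Commanded wheel velocities u_i, converted to rad/s (conversion helper is hypothetical)
    const double ul = cmd_to_rad_sec(msg.left_velocity);
    const double ur = cmd_to_rad_sec(msg.right_velocity);

    // v_i = u_i + w_i with w_i ~ N(0, input_noise), only when u_i != 0
    std::normal_distribution<double> w{0.0, std::sqrt(input_noise_)};
    const double vl = (ul != 0.0) ? ul + w(gen_) : 0.0;
    const double vr = (ur != 0.0) ? ur + w(gen_) : 0.0;

    // Move the robot body according to vl and vr ...
    body_left_vel_ = vl;
    body_right_vel_ = vr;

    // ... but advance the recorded wheel positions as if the wheels slipped
    std::uniform_real_distribution<double> eta{-slip_fraction_, slip_fraction_};
    wheel_left_vel_ = vl * (1.0 + eta(gen_));
    wheel_right_vel_ = vr * (1.0 + eta(gen_));
  }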

Task C.10 (Basic Sensor)

We will add a sensor that determines the relative \((x,y)\) locations of the obstacles, and assigns each a unique identifier.

  1. At a frequency of 5 Hz, publish a MarkerArray on the /fake_sensor topic. It contains the measured positions of the cylindrical obstacles relative to the robot (a publishing sketch follows this list).
    • The positions of these markers should have zero mean Gaussian noise added to them, with the variance specified by the parameter basic_sensor_variance.
    • If a marker is not within a radius max_range of the robot, the action for it should be set to DELETE to make it non-visible.
      • max_range should be a settable parameter for nusim
    • A node can subscribe to these markers and treat them like sensor data, being sure to ignore measurements with an action of DELETE
  2. Optionally, you can scale the marker so that it represents a level-set of the Gaussian distribution caused by the sensor noise, which may help with debugging
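
A possible shape for the fake sensor publisher (called from a 5 Hz timer) is sketched below. basic_sensor_variance and max_range are the parameters named above; the obstacle list, robot pose members, frame name red/base_footprint, and gen_ are assumptions for illustration. Marker scale and color are omitted for brevity.

  #include <cmath>
  #include <random>
  #include "visualization_msgs/msg/marker_array.hpp"

  void Nusim::publish_fake_sensor()
  {
    std::normal_distribution<double> noise{0.0, std::sqrt(basic_sensor_variance_)};
    visualization_msgs::msg::MarkerArray arr;
    for (size_t i = 0; i < obstacles_.size(); ++i) {
      // True obstacle location expressed in the (true) robot body frame
      const double dx = obstacles_[i].x - x_;
      const double dy = obstacles_[i].y - y_;
      const double rel_x = std::cos(theta_) * dx + std::sin(theta_) * dy + noise(gen_);
      const double rel_y = -std::sin(theta_) * dx + std::cos(theta_) * dy + noise(gen_);

      visualization_msgs::msg::Marker m;
      m.header.stamp = get_clock()->now();
      m.header.frame_id = "red/base_footprint";
      m.id = static_cast<int>(i);            // unique identifier for each obstacle
      m.type = visualization_msgs::msg::Marker::CYLINDER;
      // Out-of-range markers are sent with action DELETE so subscribers can skip them
      m.action = (std::hypot(dx, dy) > max_range_) ?
        visualization_msgs::msg::Marker::DELETE : visualization_msgs::msg::Marker::ADD;
      m.pose.position.x = rel_x;
      m.pose.position.y = rel_y;
      arr.markers.push_back(m);
    }
    fake_sensor_pub_->publish(arr);
  }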

Task C.11 (Collision Detection)

  1. Implement collision detection between the simulated robot and the obstacles (a geometric sketch follows this list)
    • Represent the robot as a circle with a fixed collision radius. There should already be a collision_radius parameter in diff_params.yaml (see Homework 2: A.7)
    • If, after updating the robot's position, the robot intersects with a cylindrical obstacle
      • Compute the line between the robot center and the obstacle center
      • Move the robot's center along this line so that the collision circles are tangent
      • Spin the wheels as if the robot had moved normally
      • This simulates what happens when a turtlebot collides with a fixed obstacle: the wheels spin but the robot does not move
  2. Check for collision each time you update the position of the robot
  3. You may assume that obstacles are spaced far enough apart that all collisions involve exactly one cylinder and the robot.
  4. With this collision implemented, you will be able to crash into obstacles and generate a big disparity between the robot location and the odometry, which will be useful during testing.
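
The collision response described above amounts to pushing the robot back along the center-to-center line until the two circles are tangent. A small sketch, with illustrative member names (the obstacle radius and collision_radius come from the parameters):

  #include <cmath>

  // Called after each pose update. Assumes at most one colliding obstacle.
  void Nusim::resolve_collision()
  {
    for (const auto & obs : obstacles_) {
      const double dx = x_ - obs.x;
      const double dy = y_ - obs.y;
      const double dist = std::hypot(dx, dy);
      const double min_dist = collision_radius_ + obs.radius;
      if (dist < min_dist && dist > 0.0) {
        // Slide the robot center outward along the line joining the centers
        // so the collision circles are exactly tangent.
        x_ = obs.x + (dx / dist) * min_dist;
        y_ = obs.y + (dy / dist) * min_dist;
        // The wheel positions were already updated as if the robot moved normally,
        // so odometry and the true pose now disagree (as intended).
      }
    }
  }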

Task C.12 (Lidar)

The goal of this task is to simulate the turtlebot's lidar.

  1. Your nusim simulation node should publish a sensor_msgs/msg/LaserScan message with simulated lidar data at 5Hz
    • You may omit the intensities from the message.
    • Use values from the real turtlebot's LaserScan message to fill this in by looking at a live message from the turtlebot
  2. The laser scanner will have Gaussian noise added to the simulated values.
  3. Values for the following constants should be settable with ROS parameters
    1. Minimum/maximum range (by default match the real turtlebot)
    2. Angle increment and number of samples (by default match the real turtlebot)
    3. Resolution (by default match the real turtlebot)
    4. Noise level
  4. The simulated lidar should also return distances to the border wall in the simulator, if a ray hits that wall

Hint

  1. There are many ways of implementing this sensor, but this article on Circle-Line intersection may help https://mathworld.wolfram.com/Circle-LineIntersection.html
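
Following the circle-line intersection hint, one beam of the simulated scan can be tested against one obstacle roughly as below. This is only a sketch: the full scan still has to loop over beams and obstacles (plus the border walls), keep the closest hit within the minimum/maximum range, and add the Gaussian noise before filling the LaserScan message.

  #include <cmath>
  #include <optional>

  // Distance along a lidar beam from (rx, ry) at heading 'angle' (world frame) to a
  // circular obstacle of radius r centered at (cx, cy), or std::nullopt if no hit.
  std::optional<double> ray_circle_range(
    double rx, double ry, double angle, double cx, double cy, double r)
  {
    // Ray: p(t) = o + t*d, with o relative to the circle center and d a unit vector.
    const double ox = rx - cx;
    const double oy = ry - cy;
    const double dx = std::cos(angle);
    const double dy = std::sin(angle);

    // Substitute into |p(t)|^2 = r^2: t^2 + b*t + c = 0 (a = 1 since |d| = 1).
    const double b = 2.0 * (ox * dx + oy * dy);
    const double c = ox * ox + oy * oy - r * r;
    const double disc = b * b - 4.0 * c;
    if (disc < 0.0) {
      return std::nullopt;                       // the beam's line misses the circle
    }
    const double t1 = (-b - std::sqrt(disc)) / 2.0;
    const double t2 = (-b + std::sqrt(disc)) / 2.0;
    const double t = (t1 > 0.0) ? t1 : t2;       // nearest intersection in front of the lidar
    if (t <= 0.0) {
      return std::nullopt;                       // the circle is behind the lidar
    }
    return t;
  }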

Task L (SLAM)

  • This task is about implementing Feature-Based Kalman Filter SLAM.
  • You will do this task in a new package called nuslam
  • Task L.1 - outlines the main results for the task.
  • Task L.2 - This is a subset of L.1 (implementing SLAM in a controlled environment). Once you implement this, you have completed the core algorithm.
  • I am not mandating specific nodes or parameters or a structure for you to use. The design is up to you.
  • However, following the pattern we've been using in this course of keeping calculations separate from the mechanics of the nodes may prove helpful
  • Be sure to include in your README all instructions to run SLAM

Task L.1 (Main SLAM Results)

  • This task is about the main interface for running and visualizing your algorithms.
  • You can write whatever nodes you want, as long as you can run a slam.launch file as specified below and, where appropriate, view the necessary visualizations.
  • You may also need to modify some nodes to let them publish appropriate visualization topics
  • Write a launchfile, slam.launch robot:=X that runs your SLAM algorithm and all other necessary nodes
    • If robot == nusim the robot is simulated by nusim.
    • In Homework 4, robot == turtlebotname will run the code on the turtlebot.
    • This launchfile should also start a method to control the robot, I recommend using the turtlebot3_teleop keyboard.
  • Visualization specifications are determined in Task V.

Task L.2 (SLAM in controlled environment)

  • In this task, you will test the extended Kalman Filter SLAM algorithm in the simulation environment.
  • This environment enables you to verify the correctness of the algorithm without worrying about data-association or noise.
  • You should be able to run the code for this task by specifying robot:=nusim with the slam.launch launchfile
  • In the next assignment, you will be using machine learning to find landmarks from 2D-Lidar data.
    • The data from the simulator, however, has identifiers directly specifying the landmarks.
  • The algorithm should compute the pose of the robot and the poses of the landmarks relative to a map frame.
  • Your SLAM node should publish a transform from map to odom such that map to base_footprint reflects the pose of the robot in the map frame (a sketch of this computation follows this list).
  • Implement extended Kalman Filter SLAM, assuming known data association (that is, you know which measurement goes with which landmark).
  • Drive the robot along a closed path in nusim with several landmarks and take a screenshot of rviz at the end of your run.
    • The screenshot should clearly show the actual robot position, the estimated position, the map frame, the odom frame, and the paths of the real, odometry, and SLAM robots.
    • You should have noise enabled and a limited detection radius for this test (the radius should be small enough such that you do not see all the landmarks all the time)
    • The robot should follow a path such that it sees all the landmarks on its journey
  • If you set the simulator to generate no sensor noise and use a large detection radius, the SLAM path and actual path should line up almost exactly and the landmark positions should be correct, even if the odometry noise is still present.
    • A good way to test your algorithm (in simulation!) is to drive your robot into a landmark (to make the odometry be very wrong) and watch the map estimate remain good.
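
The map to odom broadcast mentioned above can be computed by composing the SLAM pose estimate with the inverse of the odometry pose, since map->odom * odom->base must equal the SLAM estimate of map->base. A sketch using tf2 types (member names are illustrative):

  #include "geometry_msgs/msg/transform_stamped.hpp"
  #include "tf2/LinearMath/Quaternion.h"
  #include "tf2/LinearMath/Transform.h"
  #include "tf2_geometry_msgs/tf2_geometry_msgs.hpp"
  #include "tf2_ros/transform_broadcaster.h"

  void Slam::broadcast_map_to_odom()
  {
    tf2::Quaternion q_slam, q_odom;
    q_slam.setRPY(0.0, 0.0, slam_theta_);   // robot pose from the EKF state
    q_odom.setRPY(0.0, 0.0, odom_theta_);   // robot pose from odometry alone
    const tf2::Transform T_map_base{q_slam, tf2::Vector3{slam_x_, slam_y_, 0.0}};
    const tf2::Transform T_odom_base{q_odom, tf2::Vector3{odom_x_, odom_y_, 0.0}};
    const tf2::Transform T_map_odom = T_map_base * T_odom_base.inverse();

    geometry_msgs::msg::TransformStamped t;
    t.header.stamp = get_clock()->now();
    t.header.frame_id = "map";
    t.child_frame_id = "odom";
    t.transform = tf2::toMsg(T_map_odom);
    tf_broadcaster_->sendTransform(t);
  }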

Suggested Structure

The below ideas are my suggestions for how to accomplish this task; however, you may choose to ignore them.

Matrix Library

  1. I recommend using the armadillo library http://arma.sourceforge.net/ for C++ (a short usage example follows this list)
  2. Install libarmadillo-dev
  3. In CMake, call find_package(Armadillo), then add ${ARMADILLO_INCLUDE_DIRS} to include_directories() and add ${ARMADILLO_LIBRARIES} to target_link_libraries() for targets that use armadillo
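
For orientation, a tiny standalone armadillo example showing the kind of state/covariance bookkeeping an EKF needs; the dimensions and state ordering here are arbitrary illustrations.

  #include <armadillo>

  int main()
  {
    const arma::uword n = 3;                          // number of landmarks (example)
    arma::vec state(3 + 2 * n, arma::fill::zeros);    // e.g., [theta, x, y, m1x, m1y, ...]
    arma::mat sigma(3 + 2 * n, 3 + 2 * n, arma::fill::eye);

    // Propagate the covariance through a (placeholder, identity) linearized motion model
    arma::mat A(3 + 2 * n, 3 + 2 * n, arma::fill::eye);
    sigma = A * sigma * A.t() + 0.01 * arma::eye(3 + 2 * n, 3 + 2 * n);

    sigma.print("sigma:");
    return 0;
  }

Outside of a ROS package, a standalone snippet like this can be built with something like g++ example.cpp -larmadillo; within the package, the CMake setup above handles linking.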

SLAM

  1. You may choose to use the odometry node as-is, or to copy its source code into the nuslam package and modify it to be a slam and odometry node.
    • Keeping it separate is more versatile/realistic, since then somebody could theoretically use your slam node with different odometry
      • It also makes re-using launchfiles easier.
    • Making a combined node arguably could be easier to synchronize odometry data with sensor updates.
  2. Start with the odometry node from a previous homework. Copy the source code into the nuslam package and call it slam.cpp
    • In this guide, we will implement all SLAM functionality in a single node.
    • This node will also implement the odometry functionality
    • This decision makes it easier to keep everything in sync.
      • You need to be careful about synchronization. Since tf must be a tree, we can't broadcast the map to green/base_footprint transform directly: instead, we must publish a map to odom transform such that the map to base_footprint transform is the desired value. Having all these quantities in the same node makes this process easier
      • However, you lose flexibility, since odometry and SLAM are now coupled; for this project the trade-off between flexibility and complication seems worth it.
    • It is also important that the SLAM filter be updated once with odometry and once with the sensor measurements, even though odometry comes in at a faster rate
      • It is easiest to accumulate the odometry within the 5Hz period, and use it to make a single prediction
  3. Prior to implementing the algorithm, make sure the visualizations are working
    • The inverted measurement model can provide a crude map estimate at each timestep, just to help get the visualization working
  4. It may be helpful to create several simulated worlds with different geometry and landmarks.
    • For example, test on just one landmark with a 1-D linear Kalman filter (a sketch follows this list)
    • The goal here would be to estimate just the x position of the robot
    • After completing this intermediate step you can then expand to the full problem
  5. It may be helpful to implement a basic Kalman filter first, then an extended Kalman filter
    • The SLAM method we use is a direct application of an extended Kalman Filter
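
As an example of the suggested 1-D intermediate step, the sketch below estimates only the robot's x position from noisy range measurements to a single landmark at a known location; all numbers are made up for illustration.

  #include <iostream>
  #include <random>

  int main()
  {
    std::mt19937 gen{42};
    std::normal_distribution<double> meas_noise{0.0, 0.1};

    const double landmark_x = 5.0;   // known landmark position
    double true_x = 0.0;             // true robot position (used only to generate data)

    double x_est = 0.0;              // state estimate (robot x)
    double P = 1.0;                  // estimate variance
    const double Q = 0.001;          // process noise variance
    const double R = 0.01;           // measurement noise variance

    for (int k = 0; k < 50; ++k) {
      const double u = 0.1;          // commanded forward motion per step
      true_x += u;

      // Predict: x' = x + u, P' = P + Q (the motion model is linear with A = 1)
      x_est += u;
      P += Q;

      // Measurement: distance to the landmark, z = landmark_x - x + noise, so H = -1
      const double z = landmark_x - true_x + meas_noise(gen);
      const double H = -1.0;
      const double K = P * H / (H * P * H + R);   // Kalman gain
      x_est += K * (z - (landmark_x - x_est));    // innovation update
      P = (1.0 - K * H) * P;
    }
    std::cout << "estimate: " << x_est << "  true: " << true_x << "\n";
    return 0;
  }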

Author: Matthew Elwin