Homework 3
Remember to follow the Homework Guidelines
Task F
Lasers!
Task F.6 (Laser Scanner)
- In `nuturtle_control`'s `start_robot.launch`, start the `hlds_laser_publisher` node from the `hls_lfcd_lds_driver` package whenever `robot` is set to `localhost`
  - The `hls_lfcd_lds_driver` package has a launchfile that can be used. The port is `/dev/ttyUSB0`.
Task V (Visualization)
- This task gathers all the visualization requirements for each important mode of running the code in this project.
 - Implementing this task depends on having multiple `rviz` configuration files, one for each situation.
 - The general rule of thumb is that if a launchfile starts `rviz`, starting it should be optional. This way the launchfile can be included in a higher-level launchfile that loads its own `rviz` setup.
 - Some elements have already been implemented in previous assignments, and others will be added in later tasks.
 - You may find that publishing additional messages/tfs and modifying existing nodes will help accomplish these tasks.
 
Task V.1 (Simulation Standalone)
When `nusim` is run in a standalone manner (e.g., `ros2 launch nusim nusim`), the following should be displayed:
From Previous Homeworks
- The walls (in red)
 - The obstacles (in red)
 - The robot model, at the robot's location (in red)
 - The color red indicates that the item is the "ground truth", which can only be known from the simulation
 
To Add
- A `nav_msgs/Path` (in red) tracing the path of the robot (a minimal publishing sketch follows this list)
- A MarkerArray (in yellow) displaying the relative \((x,y)\) measurements of the markers
  - These markers are added in Task C.10, and will not necessarily align with the red markers due to noise.
  - If a measured marker cannot be seen by the robot, it should not be displayed
- A `sensor_msgs/LaserScan` displaying the simulated laser scan data (in red)
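The red path above is just a `nav_msgs/Path` that grows with the simulated pose. A minimal sketch of maintaining and publishing it from `nusim` follows; the member names (`path_`, `path_pub_`, `x_`, `y_`, `theta_`) and the `nusim/world` frame are illustrative assumptions, not requirements.

```cpp
// Sketch only: append the current ground-truth pose to a nav_msgs/Path and publish it.
#include <cmath>
#include <geometry_msgs/msg/pose_stamped.hpp>
#include <nav_msgs/msg/path.hpp>

void NuSim::update_path()
{
  geometry_msgs::msg::PoseStamped pose;
  pose.header.stamp = get_clock()->now();
  pose.header.frame_id = "nusim/world";
  pose.pose.position.x = x_;
  pose.pose.position.y = y_;
  pose.pose.orientation.z = std::sin(theta_ / 2.0);  // planar yaw encoded as a quaternion
  pose.pose.orientation.w = std::cos(theta_ / 2.0);

  path_.header = pose.header;    // nav_msgs::msg::Path member
  path_.poses.push_back(pose);   // the path grows by one pose per update
  path_pub_->publish(path_);     // rviz displays this as the red trace
}
```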
Task V.2 (Simulation with Odometry)
When `nusim` is run with the odometry node (e.g., `ros2 launch nuturtle_control start_robot.launch robot:=nusim`):
- Everything from task V.1
 - The robot model, at the location of the odometry estimate (in blue)
- A `nav_msgs/Path` corresponding to the path the robot takes according to odometry measurements (in blue)
- Blue indicates information learned via odometry
 
Task V.3 (Real robot with Odometry)
When `ros2 launch nuturtle_control start_robot.launch robot:=none` is run (on your computer) and `ros2 launch nuturtle_control start_robot.launch robot:=localhost` is run (on the turtlebot):
- Draw the actual obstacle locations and walls (in red)
- Add a parameter called `draw_only` to `nusim` that causes it to just draw the obstacles from the configuration file rather than simulating anything
  - When the node is launched under these circumstances, set its name to `nuwall` to distinguish it from a running simulation
 - Draw the robot according to the odometry (in blue)
- A `nav_msgs/Path` corresponding to the path the robot takes according to odometry measurements (in blue)
- A `sensor_msgs/LaserScan` displaying the real laser scan data (in yellow)
 
Task V.4 (Simulated Robot With SLAM)
When `ros2 launch nuslam slam.launch.xml robot:=nusim` is used:
- Everything from Task V.2
- The `odometry` node in this task will be broadcasting from `odom` to `green/base_footprint`, and there will also be a transform from `map` to `odom`
- The blue robot should be located where the robot thinks it is according to its uncorrected odometry in the world frame
  - Hint: You will need an additional `tf` broadcast from `world` to `blue/base_footprint` (see the sketch after this list). You can either modify the `odometry` node or run two of them to see where the robot is according to its odometry, without correction from SLAM. This extra `tf` is needed because the odometry used by SLAM is continuously corrected using the sensor measurements
 - The map, consisting of the estimated obstacle locations (in green)
 - The robot (in green) at the estimated robot location (from SLAM)
- A `nav_msgs/Path` corresponding to the path the robot takes according to SLAM (in green)
- Green represents the best estimate of the robot location, according to SLAM
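One way to satisfy the hint above is a second broadcaster that publishes the uncorrected odometry pose relative to `world`. This is only a sketch; the member names (`tf_broadcaster_`, `odom_x_`, `odom_y_`, `odom_theta_`) are assumptions.

```cpp
// Sketch only: broadcast world -> blue/base_footprint from the uncorrected odometry pose.
#include <cmath>
#include <geometry_msgs/msg/transform_stamped.hpp>
#include <tf2_ros/transform_broadcaster.h>

void Odometry::broadcast_blue_tf()
{
  geometry_msgs::msg::TransformStamped t;
  t.header.stamp = get_clock()->now();
  t.header.frame_id = "world";
  t.child_frame_id = "blue/base_footprint";
  t.transform.translation.x = odom_x_;
  t.transform.translation.y = odom_y_;
  t.transform.rotation.z = std::sin(odom_theta_ / 2.0);  // yaw-only quaternion
  t.transform.rotation.w = std::cos(odom_theta_ / 2.0);
  tf_broadcaster_->sendTransform(t);  // tf2_ros::TransformBroadcaster member
}
```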
 
Task C (Simulation)
Task C.9 (Robot Position)
We will now add error/noise into the nusim robot simulation.
- Create parameters for the `nusim` node called `input_noise` and `slip_fraction`
- The following steps should occur whenever the simulator receives a `wheel_cmd` (`nuturtlebot_msgs/msg/WheelCommands`) message (a minimal sketch follows this list):
  - Let \(u_i\) be the commanded wheel velocity, in rad/sec
  - Let \(v_i = u_i + w_i\) when \(u_i \neq 0\), and \(v_i = u_i\) otherwise, where \(w_i\) is zero-mean Gaussian noise with variance `input_noise`
    - When \(u_i = 0\) we are commanding the wheels to stop, and are confident in the ability of the motors to achieve that
    - However, when commanding other wheel speeds, we may not move exactly as commanded.
  - After computing \(v_i\), move the robot accordingly
  - Let \(\eta_r\) and \(\eta_l\) be uniform random values between `-slip_fraction` and `slip_fraction`
    - Update the right wheel position as if it had followed \(v_i*(1+\eta_r)\) and the left wheel position as if it had followed \(v_i*(1+\eta_l)\)
    - In this way, the actual wheel positions don't match where they would be if the wheels had not slipped
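A minimal sketch of the noise and slip steps is below, assuming illustrative member names (`input_noise_`, `slip_fraction_`, `left_wheel_pos_`, `dt_`) and hypothetical helpers (`ticks_to_rad_per_sec`, `wheels_to_twist`); the message field names are also assumptions.

```cpp
// Sketch only: noise and slip inside nusim's wheel_cmd callback.
#include <cmath>
#include <random>
#include <nuturtlebot_msgs/msg/wheel_commands.hpp>

std::mt19937 & get_random()
{
  // one seeded generator shared by the simulator
  static std::random_device rd;
  static std::mt19937 gen(rd());
  return gen;
}

void NuSim::wheel_cmd_callback(const nuturtlebot_msgs::msg::WheelCommands & msg)
{
  // commanded wheel velocities u_i in rad/sec (conversion helper is hypothetical)
  const double u_left = ticks_to_rad_per_sec(msg.left_velocity);
  const double u_right = ticks_to_rad_per_sec(msg.right_velocity);

  // v_i = u_i + w_i when u_i != 0, and v_i = u_i otherwise, with w_i ~ N(0, input_noise)
  std::normal_distribution<double> gauss{0.0, std::sqrt(input_noise_)};
  const double v_left = (u_left != 0.0) ? u_left + gauss(get_random()) : 0.0;
  const double v_right = (u_right != 0.0) ? u_right + gauss(get_random()) : 0.0;

  // the robot body is later moved with v_left/v_right via forward kinematics,
  // while the wheel positions advance with slip factors eta in [-slip_fraction, slip_fraction]
  std::uniform_real_distribution<double> slip{-slip_fraction_, slip_fraction_};
  left_wheel_pos_ += v_left * (1.0 + slip(get_random())) * dt_;
  right_wheel_pos_ += v_right * (1.0 + slip(get_random())) * dt_;

  body_twist_ = wheels_to_twist(v_left, v_right);  // hypothetical helper used by the timer
}
```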
 
Task C.10 (Basic Sensor)
We will add a sensor that determines the relative \((x,y)\) locations of the obstacles, and assigns each a unique identifier.
- At a frequency of 5 Hz, publish a MarkerArray on the `/fake_sensor` topic. It contains the measured positions of the cylindrical obstacles relative to the robot (a sketch follows this list).
  - The positions of these markers should have zero-mean Gaussian noise added to them, with the amount of noise specified by the parameter `basic_sensor_variance`
  - If a marker is not within a radius `max_range` of the robot, its action should be set to `DELETE` to make it non-visible. `max_range` should be a settable parameter for `nusim`
  - A node can subscribe to these markers and treat them like sensor data, being sure to ignore measurements with an action of `DELETE`
  - Optionally, you can scale the marker so that it represents a level-set of the Gaussian distribution caused by the sensor noise, which may help with debugging
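A sketch of the 5 Hz fake sensor callback, reusing the `get_random()` helper from the C.9 sketch. The member names (`obstacles_x_`, `obstacles_y_`, `obstacle_radius_`, `x_`, `y_`, `theta_`, `basic_sensor_variance_`, `max_range_`, `fake_sensor_pub_`) and the `red/base_footprint` frame are assumptions.

```cpp
// Sketch only: publish noisy obstacle positions relative to the robot at 5 Hz.
#include <cmath>
#include <random>
#include <visualization_msgs/msg/marker_array.hpp>

void NuSim::fake_sensor_callback()
{
  std::normal_distribution<double> gauss{0.0, std::sqrt(basic_sensor_variance_)};
  visualization_msgs::msg::MarkerArray arr;

  for (size_t i = 0; i < obstacles_x_.size(); ++i) {
    // obstacle position in the robot's frame: rotate the world-frame offset by -theta
    const double dx = obstacles_x_.at(i) - x_;
    const double dy = obstacles_y_.at(i) - y_;
    const double rel_x = std::cos(theta_) * dx + std::sin(theta_) * dy + gauss(get_random());
    const double rel_y = -std::sin(theta_) * dx + std::cos(theta_) * dy + gauss(get_random());

    visualization_msgs::msg::Marker m;
    m.header.frame_id = "red/base_footprint";
    m.header.stamp = get_clock()->now();
    m.id = static_cast<int>(i);  // unique identifier per obstacle
    m.type = visualization_msgs::msg::Marker::CYLINDER;

    // hide out-of-range markers instead of publishing a measurement for them
    const bool in_range = std::sqrt(dx * dx + dy * dy) <= max_range_;
    m.action = in_range ? visualization_msgs::msg::Marker::ADD :
      visualization_msgs::msg::Marker::DELETE;

    m.pose.position.x = rel_x;
    m.pose.position.y = rel_y;
    m.scale.x = 2.0 * obstacle_radius_;
    m.scale.y = 2.0 * obstacle_radius_;
    m.scale.z = 0.25;
    m.color.r = 1.0;
    m.color.g = 1.0;  // yellow, per Task V.1
    m.color.a = 1.0;
    arr.markers.push_back(m);
  }
  fake_sensor_pub_->publish(arr);
}
```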
 
Task C.11 (Collision Detection)
- Implement collision detection between the simulated robot and the obstacles (a sketch follows this list)
- Represent the robot as a circle with a fixed collision radius. There should already be a `collision_radius` parameter in `diff_params.yaml` (see Homework 2: A.7)
- If, after updating the robot's position, the robot intersects with a cylinder obstacle:
  - Compute the line between the robot center and the obstacle center
  - Move the robot's center along this line so that the collision circles are tangent
  - Spin the wheels as if the robot had moved normally
  - This simulates what happens when a turtlebot collides with a fixed obstacle: the wheels spin but the robot does not move
- Check for collision each time you update the position of the robot
- You may assume that obstacles are spaced far enough apart that all collisions involve exactly one cylinder and the robot.
- With this collision detection implemented, you will be able to crash into obstacles and generate a big disparity between the robot location and the odometry, which will be useful during testing.
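A sketch of the collision check described above, with illustrative member names (`collision_radius_`, `obstacle_radius_`, `obstacles_x_`, `obstacles_y_`, `x_`, `y_`); only the robot's pose is corrected, so the wheel positions keep advancing as if the robot had moved.

```cpp
// Sketch only: collision check/resolution run after each position update in nusim.
#include <cmath>

void NuSim::resolve_collision()
{
  for (size_t i = 0; i < obstacles_x_.size(); ++i) {
    const double dx = x_ - obstacles_x_.at(i);
    const double dy = y_ - obstacles_y_.at(i);
    const double dist = std::sqrt(dx * dx + dy * dy);
    const double min_dist = collision_radius_ + obstacle_radius_;

    if (dist < min_dist && dist > 0.0) {
      // push the robot back along the center-to-center line until the circles are tangent;
      // the wheel positions are untouched, so the wheels "spin" while the robot stays put
      x_ = obstacles_x_.at(i) + dx / dist * min_dist;
      y_ = obstacles_y_.at(i) + dy / dist * min_dist;
      break;  // obstacles are far enough apart that at most one collision occurs
    }
  }
}
```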
 
Task C.12 (Lidar)
The goal of this task is to simulate the turtlebot's lidar.
- Your `nusim` simulation node should publish a `sensor_msgs/msg/LaserScan` message with simulated lidar data at 5 Hz
  - You may omit the intensities from the message.
  - Use values from the real turtlebot's LaserScan message to fill this in, by looking at a live message from the turtlebot
 
 - The laser scanner will have Gaussian noise added to the simulated values.
 - Values for the following constants should be settable with ROS parameters
- Minimum/maximum range (by default match the real turtlebot)
 - Angle increment and number of samples (by default match the real turtlebot)
 - Resolution (by default match the real turtlebot)
 - Noise level
 
 - The simulated lidar should also return distances to the border wall in the simulator, if a ray hits that wall
 
Hint
- There are many ways of implementing this sensor, but this article on Circle-Line intersection may help https://mathworld.wolfram.com/Circle-LineIntersection.html
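For example, each lidar ray can be intersected with each cylindrical obstacle using the quadratic from that article. One possible helper (a free function; names and structure are illustrative, not required) is sketched below.

```cpp
// Sketch only: distance along a lidar ray to a circular obstacle, if it is hit.
#include <cmath>
#include <optional>

// The ray starts at (ox, oy) in world coordinates and points along angle `heading` (radians).
// The obstacle is a circle of radius r centered at (cx, cy).
// Returns the distance to the nearest intersection in front of the sensor, if any.
std::optional<double> ray_circle_distance(
  double ox, double oy, double heading, double cx, double cy, double r)
{
  // work in the obstacle's frame so the circle is centered at the origin
  const double dx = std::cos(heading);
  const double dy = std::sin(heading);
  const double fx = ox - cx;
  const double fy = oy - cy;

  // solve |f + t*d|^2 = r^2 for t (quadratic in t, with |d| = 1)
  const double b = 2.0 * (fx * dx + fy * dy);
  const double c = fx * fx + fy * fy - r * r;
  const double disc = b * b - 4.0 * c;
  if (disc < 0.0) {
    return std::nullopt;         // the ray's line misses the circle
  }
  const double t1 = (-b - std::sqrt(disc)) / 2.0;
  const double t2 = (-b + std::sqrt(disc)) / 2.0;
  if (t1 > 0.0) {return t1;}     // nearest intersection in front of the sensor
  if (t2 > 0.0) {return t2;}     // sensor origin lies inside the circle
  return std::nullopt;           // intersections are behind the sensor
}
```

The simulated scan can then take, for each beam, the minimum over all obstacles and the border walls, clamped to the configured minimum/maximum range, with Gaussian noise added.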
 
Task L (SLAM)
- This task is about implementing Feature-Based Kalman Filter SLAM.
 - You will do this task in a new package called `nuslam`
 - Task L.1 outlines the main results for the task.
 - Task L.2 - This is a subset of L.1 (implementing SLAM in a controlled environment). Once you implement this, you have completed the core algorithm.
 - I am not mandating specific nodes or parameters or a structure for you to use. The design is up to you.
 - However, following the pattern we've been using in this course of keeping calculations separate from the mechanics of the nodes may prove helpful
 - Be sure to include in your README all instructions to run SLAM
 
Task L.1 (Main SLAM Results)
- This task is about the main interface for running and visualizing your algorithms.
 - You can write whatever nodes you want, as long as you can run a `slam.launch` file as specified below and, where appropriate, view the necessary visualizations.
 - You may also need to modify some nodes to let them publish appropriate visualization topics
 - Write a launchfile, `slam.launch robot:=X`, that runs your SLAM algorithm and all other necessary nodes
   - If `robot == nusim`, the robot is simulated by `nusim`.
   - In Homework 4, `robot == turtlebotname` will run the code on the turtlebot.
   - This launchfile should also start a method to control the robot; I recommend using the `turtlebot3_teleop` keyboard node.
 - Visualization specifications are determined in Task V.
 
Task L.2 (SLAM in controlled environment)
- In this task, you will test the extended Kalman Filter SLAM algorithm in the simulation environment.
 - This environment enables you to verify the correctness of the algorithm without worrying about data-association or noise.
 - You should be able to run the code for this task by specifying `robot:=nusim` with the `slam.launch` launchfile
 - In the next assignment, you will be using machine learning to find landmarks from 2D lidar data.
   - The data from the simulator, however, has identifiers directly specifying the landmarks.
 
 - The algorithm should compute the pose of the robot and the poses of the landmarks relative to a map frame.
 - Your SLAM node should publish a transform from `map` to `odom` such that `map` to `base_footprint` reflects the pose of the robot in the `map` frame.
   - See ROS REP 105
 
 - Implement extended Kalman Filter SLAM, assuming a known data association (that is, you know which measurement goes with which landmark). A sketch of the measurement update appears after this list.
 - Drive the robot along a closed path in `nusim` with several landmarks and take a screenshot of `rviz` at the end of your run.
   - The screenshot should clearly show the actual robot position, the estimated position, the map frame, the odom frame, and the paths of the real, odometry, and SLAM robots.
   - You should have noise enabled and a limited detection radius for this test (the radius should be small enough that you do not see all the landmarks all the time)
   - The robot should follow a path such that it sees all the landmarks on its journey
 
 - If you set the simulator to generate no sensor noise and use a large detection radius, the SLAM path and actual path should line up almost exactly, and the landmark positions should be correct, even if the odometry noise is still present.
- A good way to test your algorithm (in simulation!) is to drive your robot into a landmark (to make the odometry be very wrong) and watch the map estimate remain good.
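As referenced above, here is a sketch of a single measurement update for the standard range-bearing EKF-SLAM formulation with known correspondence, written with armadillo. The state layout \([\theta, x, y, m_{1,x}, m_{1,y}, \ldots]\), the function boundaries, and the `normalize_angle` helper are assumptions for illustration, not a required design.

```cpp
// Sketch only: one EKF-SLAM measurement update with known data association.
#include <armadillo>
#include <cmath>

double normalize_angle(double a)
{
  return std::atan2(std::sin(a), std::cos(a));
}

// state: (3 + 2n) x 1 vector [theta, x, y, m1x, m1y, ...]
// sigma: (3 + 2n) x (3 + 2n) covariance
// r, phi: measured range and bearing to landmark j (0-indexed)
// R: 2x2 measurement noise covariance
void ekf_update(arma::vec & state, arma::mat & sigma, int j, double r, double phi,
  const arma::mat & R)
{
  const arma::uword n = state.n_elem;
  const double dx = state(3 + 2 * j) - state(1);
  const double dy = state(4 + 2 * j) - state(2);
  const double d = dx * dx + dy * dy;
  const double rt = std::sqrt(d);

  // predicted measurement from the current state estimate
  const arma::vec z_hat = {rt, normalize_angle(std::atan2(dy, dx) - state(0))};

  // Jacobian H of the measurement with respect to the full state
  arma::mat H(2, n, arma::fill::zeros);
  H(0, 1) = -dx / rt;   H(0, 2) = -dy / rt;
  H(1, 0) = -1.0;       H(1, 1) = dy / d;     H(1, 2) = -dx / d;
  H(0, 3 + 2 * j) = dx / rt;   H(0, 4 + 2 * j) = dy / rt;
  H(1, 3 + 2 * j) = -dy / d;   H(1, 4 + 2 * j) = dx / d;

  // Kalman gain, state update (with angle wrapping), and covariance update
  const arma::mat K = sigma * H.t() * arma::inv(H * sigma * H.t() + R);
  const arma::vec dz = {r - z_hat(0), normalize_angle(phi - z_hat(1))};
  state += K * dz;
  state(0) = normalize_angle(state(0));
  sigma = (arma::eye(n, n) - K * H) * sigma;
}
```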
 
 
Suggested Structure
The below ideas are my suggestions for how to accomplish this task; however, you may choose to ignore them.
Matrix Library
- I recommend using the armadillo library http://arma.sourceforge.net/ for C++
 - Install `libarmadillo-dev`
 - In CMake, `find_package(Armadillo)`, then add `${ARMADILLO_INCLUDE_DIRS}` to `include_directories()` and `target_link_libraries(${ARMADILLO_LIBRARIES})` to targets that use armadillo
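A small stand-alone program (illustrative only) can confirm that the include paths and libraries are wired up correctly:

```cpp
// Illustrative only: build this against a target that links ${ARMADILLO_LIBRARIES}.
#include <armadillo>

int main()
{
  const arma::mat A = 2.0 * arma::eye(3, 3);   // 3x3 matrix with 2 on the diagonal
  const arma::vec b = {1.0, 2.0, 3.0};
  const arma::vec x = arma::solve(A, b);       // solve A x = b
  x.print("x =");
  return 0;
}
```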
SLAM
- You may choose to use the odometry node as-is, or to copy its source code into the `nuslam` package and modify it into a combined `slam` and `odometry` node.
  - Keeping it separate is more versatile/realistic, since then somebody could theoretically use your `slam` node with a different `odometry` node
    - It also makes re-using launchfiles easier.
  - Making a combined node arguably makes it easier to synchronize `odometry` data with `sensor` updates.
- Start with the `odometry` node from a previous homework. Copy the source code into the `nuslam` package and call it `slam.cpp`
  - In this guide, we will implement all SLAM functionality in a single node.
  - This node will also implement the odometry functionality
  - This decision makes it easier to keep everything in sync.
    - You need to be careful about synchronization. Since `tf` must be a tree, we can't broadcast the `map` to `green/base_footprint` transform directly: instead, we must publish a `map` to `odom` transform such that the `map` to `base_footprint` transform is the desired value. Having all these quantities in the same node makes this process easier (a sketch of this computation appears at the end of this section)
    - However, you lose flexibility: odometry and SLAM are now coupled; for this project the trade-off between flexibility and complication seems worth it.
  - It is also important that the SLAM filter be updated once with odometry and once with the sensor measurements, even though odometry comes in at a faster rate
    - It is easiest to accumulate the odometry within the 5 Hz period and use it to make a single prediction
 
- Prior to implementing the algorithm, make sure the visualizations are working
  - The inverted measurement model can provide a crude map estimate at each timestep, just to help get the visualization working
 
- It may be helpful to create several simulated worlds with different geometry and landmarks.
  - For example, test on just one landmark with a 1-D linear Kalman filter
    - The goal here would be to estimate just the \(x\) position of the robot
  - After completing this intermediate step you can then expand to the full problem
 
- It may be helpful to implement a basic Kalman filter first, then an extended Kalman filter
  - The SLAM method we use is a direct application of an extended Kalman Filter
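Referring back to the synchronization discussion above, the `map` to `odom` transform can be computed by composing the SLAM estimate of `map` to `base_footprint` with the inverse of the odometry's `odom` to `base_footprint` transform. A sketch using plain 2D math (the member names `slam_x_`, `slam_y_`, `slam_theta_`, `odom_x_`, `odom_y_`, `odom_theta_`, and `tf_broadcaster_` are illustrative):

```cpp
// Sketch only: T_map_odom = T_map_base * inverse(T_odom_base), broadcast as map -> odom.
#include <cmath>
#include <geometry_msgs/msg/transform_stamped.hpp>
#include <tf2_ros/transform_broadcaster.h>

void Slam::broadcast_map_to_odom()
{
  // rotation of map -> odom
  const double theta = slam_theta_ - odom_theta_;
  // translation of map -> odom: p_slam - R(theta) * p_odom
  const double tx = slam_x_ - (std::cos(theta) * odom_x_ - std::sin(theta) * odom_y_);
  const double ty = slam_y_ - (std::sin(theta) * odom_x_ + std::cos(theta) * odom_y_);

  geometry_msgs::msg::TransformStamped t;
  t.header.stamp = get_clock()->now();
  t.header.frame_id = "map";
  t.child_frame_id = "odom";
  t.transform.translation.x = tx;
  t.transform.translation.y = ty;
  t.transform.rotation.z = std::sin(theta / 2.0);  // yaw-only quaternion
  t.transform.rotation.w = std::cos(theta / 2.0);
  tf_broadcaster_->sendTransform(t);
}
```

With this broadcast in place, `map` to `base_footprint` (through `odom`) equals the SLAM pose, while the `odom` to `base_footprint` transform remains the raw odometry.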