Homework 1

Task 0 (Preliminaries)

Task 0.1 (Reading the Guidelines)

  1. Carefully read the homework submission guidelines.
  2. Be sure to follow these guidelines for every assignment.

Task 0.2 (Making a Git Repository)

  1. Create a git repository using the github classroom link (sent separately).
  2. All packages made for this class should be a subdirectory in this git repository.
  3. You will be expected to maintain this git repository throughout the whole quarter.

Task 0.3 (Making a README)

  1. Add a file called README.md to the base directory of the repository, based on the following template:

    # ME495 Sensing, Navigation and Machine Learning For Robotics
    * <First Name> <Last Name>
    * Winter 2025
    # Package List
    This repository consists of several ROS packages
    - <PACKAGE1> - <one sentence description>
    
  2. Replace the <ITEM> with the appropriate item in the template above.
  3. Add this README.md to your repository and commit.
  4. Whenever you add a new package, list it in this README.md
  5. Linux is case-sensitive: Readme.md and readme.md are not the same file as README.md.

Task 0.4 (Making a Tasks.md)

  1. Add a file called Tasks.md to the repository
  2. Whenever a task is complete, list it in this file on its own line.
  3. After completing all the tasks (including this one) your Tasks.md will look like

    Task 0.1
    Task 0.2
    Task 0.3
    Task 0.4
    
  4. If, when you submit an assignment, a task is only partially complete, you can add a description about what is working and what is not in the Tasks.md

Task A (Robot Description)

The goal of this task is to copy the turtlebot3 model from the turtlebot3_description package and modify it for our needs. Throughout this project, we may have reason to visualize multiple turtlebots in rviz simultaneously: for example, we may want to see both the estimated and the actual turtlebot location. We will therefore use a custom version of the provided package.

Upon completion, you will be able to display multiple turtlebot3 models in rviz, each appearing with a different color. You will also be able to change the physical properties of the robot by editing a yaml file.

Task A.1 (nuturtle_description package)

  1. Create an ament_cmake ROS package called nuturtle_description.
    • This should be a directory within your repository (i.e., <repo>/nuturtle_description)
    • The package will contain urdf files and basic debugging, testing, and visualization code for the robots you will be using in this class.
  2. Update the package.xml as follows:
    • Give it a version number of 0.2.6
    • Provide a descriptive description.
    • Fill in your name and email address as the maintainer and as an author.
    • Set the License to APLv2 (the Apache License 2.0). You could use a different one, but this is what the turtlebot3 code is released under.
    • The package needs an exec_depend on each of the packages used in its launchfiles, so update these appropriately.
    • It also needs an exec_depend on ros2launch.
  3. The package (like all packages you write) must pass colcon test with no warnings or problems.
  4. HINT: It is a good idea to commit after creating the initial package but before it actually does anything.

Task A.2 (visualization)

  1. Write a launchfile called load_one.launch.xml (in <repo>/nuturtle_description/launch) that loads the turtlebot3_burger URDF into a robot_state_publisher and optionally allows viewing it in rviz.
    • The argument use_rviz (boolean, default true) controls whether rviz is launched.
    • The argument use_jsp (boolean, default true) controls whether the joint_state_publisher is used to publish default joint states.
    • The appropriate rviz configuration should be stored in nuturtle_description/config/load_one.rviz.
    • If rviz is launched, the launchfile should terminate when rviz is closed.
  2. Copy, from the appropriate branch of the turtlebot3_description package, the minimal set of mesh (e.g., .stl) and urdf/xacro files required to display the turtlebot.
    • Do not include any files that are not necessary.
    • Include all the necessary .stl files in the base meshes/ directory (e.g., do not keep the meshes/bases, meshes/sensors, and meshes/wheels sub-directories).
    • Make sure that when copied to your repository the .stl files have the proper permissions (e.g. are not executable).
    • If you use any .urdf files (as opposed to .urdf.xacro files) rename them to .urdf.xacro and set them up to use xacro.
      • We will be modifying them later and need xacro.
  3. Modify the urdf/xacro files so that the meshes are able to be loaded from the nuturtle_description package.
  4. No turtlebot3 packages should need to be installed on your system for ros2 launch nuturtle_description load_one.launch.xml to work.
    • In other words, your package does not have any dependencies on any turtlebot3 packages.
    • Be careful: the urdf files, by default, load assets from the turtlebot3_description package, so you need to modify this behavior.
    • If you temporarily remove the meshes/ directory in your package and everything still works, the assets are likely being loaded from the turtlebot3_description package.
    • If you remove all turtlebot3_* packages from your system, your package should still work.
  5. Every file from turtlebot3_description that you modify should have a comment at the top stating that it was modified and why.

Task A.3 (yaml File)

  1. Create the file <repo>/nuturtle_description/config/diff_params.yaml to provide a complete parametric description of a differential drive robot.
    • Not all these parameters will be used in this assignment, but they will be useful later.
  2. The diff_params.yaml has the following parameters:
    • wheel_radius: The radius of the wheels (see: Turtlebot3 Specifications)
    • track_width: The distance between the wheels (see: Turtlebot3 Specifications)
    • motor_cmd_max: 265. The motors are provided commands in the interval [-motor_cmd_max, motor_cmd_max]
    • motor_cmd_per_rad_sec: Each motor command unit (mcu) is 0.024 rad/sec (i.e., 1 mcu = 0.024 rad/sec)
    • encoder_ticks_per_rad: The number of encoder ticks per radian. One revolution of the wheel is \(2^{12}\) ticks because it is a 12-bit encoder. (i.e. \(2^{12} \mathrm{ticks} = 2 \pi \mathrm{rad}\))
    • collision_radius: Set this value to 0.11: we will use a cylinder as the simplified geometry for collision detection.
  3. Modify the turtlebot3 URDF so that it uses the parameters from diff_params.yaml such that:
    • Changing wheel_radius changes the collision geometry of the wheels.
    • Changing track_width changes the distance between the wheels.
    • The base_link of the robot uses the collision_radius as follows:
      • Its collision geometry should be a cylinder, with the same height as the box it replaces, that fully encloses the turtlebot3.
  4. Hint: See Xacro Notes
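
For reference, the encoder relation above fixes the value of encoder_ticks_per_rad: \(\frac{2^{12}\ \mathrm{ticks}}{2\pi\ \mathrm{rad}} \approx 651.9\) ticks per radian.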

Task A.4 (prefix)

Throughout this project we will need to track and display multiple robots. We will add an argument called color to the turtlebot3 xacro file, which will change the color of the robot and launch nodes corresponding to each robot in their own respective namespaces.

  1. Modify the xacro URDF files to take an argument called color: possible values are red, green, blue, purple.
    • The RGB values for Purple are Northwestern Purple: Red: 0.3, Green: 0.16, Blue: 0.52
  2. Based on the value of color, the resulting URDF file should set the color of the base_link appropriately
  3. Modify load_one.launch.xml to accept an argument called color (defaulting to purple) that
    • Determines the color that is passed to the xacro file as an argument.
    • Determines the namespace in which all nodes (including rviz) are launched.
      • All nodes will be launched in a namespace given by $(var color)
    • Sets the frame_prefix parameter of the robot_state_publisher such that all tf frames published are prefixed with $(var color)/
    • Sets the appropriate TF prefix in the RobotModel display in rviz.
      • A different rviz configuration file will be needed for each tf prefix.
    • Sets the appropriate fixed_frame in rviz.
    • Uses the choices attribute to automatically document and restrict the value of the argument to valid colors.

Task A.5 (multi robot)

In this task we will test the ability to load multiple independent robots in rviz.

  1. Create an XML launchfile called load_all.launch.xml that loads the red, green, blue, and purple robots and displays them in rviz
    • Create and save the configuration in config/basic_all.rviz
  2. This launchfile should include load_one.launch.xml several times to accomplish its task.
  3. This launchfile should start rviz in the global namespace and terminate when rviz closes
  4. Each robot should have its own joint_state_publisher and robot_state_publisher in the appropriate <color> namespace (i.e., red/, green/, blue/, or purple/)
  5. The locations of the robots should be as follows:
    • red is at (0.25, 0, 0) in the nusim/world frame
    • green is at (0, 0.5, 0) in the nusim/world frame
    • blue is at (-0.5, 0, 0) in the nusim/world frame
    • purple is at (0, -1.0, 0) in the nusim/world frame
  6. A tf view should be added to rviz and the nusim/world frame should be the only frame visible.
  7. In the rviz model tree, rename each model to its appropriate color
    • Instead of four items named "RobotModel", have a "BlueRobot", a "RedRobot", etc.
  8. Hint: use ros2 run tf2_ros static_transform_publisher --help for information on a node that the launchfile can run to publish the static transforms that place each robot in the nusim/world frame.

Task A.6 (README)

  1. Write a README.md for your package based on the following template (fill in each <X Here> with the appropriate command; remember to remove the <>):

    # Nuturtle  Description
    URDF files for Nuturtle <Name Your Robot>
    * `<Command Here>` to see the robot in rviz.
    * `<Command Here>` to see four copies of the robot in rviz.
    ![](images/rviz.png)
    * The rqt_graph when all four robots are visualized (Nodes Only, Hide Debug) is:
    ![](images/rqt_graph.svg)
    
    # Launch File Details
    * `<Command To Show Arguments of load_one.launch.xml>`
      `<Output of the Above Command>`
    * `<Command To Show Arguments of load_all.launch.xml>`
      `<Output of the Above Command>`
    
  2. The rqt_graph should be saved as an .svg from the rqt_graph program and stored as images/rqt_graph.svg.
  3. A screenshot from rviz showing all four robots should be saved as images/rviz.png
  4. The images must display properly when viewing the README.md on GitHub

Task B (C++ and 2D Transforms)

Here you will begin to write a library called turtlelib for performing 2D rigid body transformations and other functionality that will be needed for the project.

You are not permitted to use any libraries other than the C++ standard library to complete this task.

The first steps in this assignment take you through the process of building a non-ros C++ project.

The tasks here are grouped according to what needs to be done, but it is likely not a good idea to simply implement the whole library and then test it. Instead you should read through the tasks, establish the problem and framework, and then implement each feature and test one by one.

Task B.1 (The Package)

  1. Create a new directory called turtlelib in your base repository.
    • Create a CMakeLists.txt that includes the ability to generate doxygen documentation.
    • You should create and install a library called turtlelib that can be referenced as turtlelib::turtlelib from other packages
    • The CMake Basics page should help get you started.
  2. This package will be independent from ROS and not use any ROS libraries or functionality
    • However, colcon will still be able to build the package

Task B.2 (angles)

  1. Download angle.hpp
    • This file will be a header that is part of a library called turtlelib
    • You are responsible for filling in the function definitions, but are not permitted to change any function prototypes or otherwise modify the public API of angle.hpp.
  2. Write a program called converter that:
    1. Prompts the user for input with the following string "Enter an angle: <angle> <deg|rad>, (CTRL-D to exit)\n"
    2. Reads in the angle (as a double) and the unit (a string either "deg" or "rad").
      • The angle and the unit are separated by whitespace.
    3. Outputs "{angle} {unit} is {converted_normal_angle} {other_unit}.\n", where
      • {angle} is the angle entered by the user, but normalized to (-180, 180] or (-pi, pi] (depending on the unit)
      • {unit} is the unit entered by the user
      • {converted_normal_angle} is {angle} converted to degrees (if {unit} is rad) or radians (if unit is deg).
        • The converted_normal_angle angle should be normalized.
      • {other_unit} is "deg" if {unit} is "rad" or "rad" if {unit} is "deg"
    4. The program should loop and prompt again, until the user enters the EOF character (CTRL-D)
    5. If the input is malformed, the user should be prompted with "Invalid input: please enter <angle> <deg|rad>, (CTRL-D to exit)\n"
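
A minimal sketch of the converter's input/output loop follows. It assumes angle.hpp declares turtlelib::deg2rad, turtlelib::rad2deg, and a normalize_angle that maps into \((-\pi, \pi]\); adjust the names to whatever the downloaded header actually provides, and treat the structure as one possibility rather than a required implementation.

    // Sketch only: the turtlelib function names below are assumptions and should
    // be replaced with whatever angle.hpp actually declares.
    #include <iostream>
    #include <limits>
    #include <string>
    #include "turtlelib/angle.hpp"

    int main()
    {
      double angle = 0.0;
      std::string unit;
      for (;;) {
        std::cout << "Enter an angle: <angle> <deg|rad>, (CTRL-D to exit)\n";
        if (!(std::cin >> angle >> unit) || (unit != "deg" && unit != "rad")) {
          if (std::cin.eof()) {
            break;  // CTRL-D: exit the loop and the program
          }
          // recover from malformed input: clear the error, discard the line, re-prompt
          std::cin.clear();
          std::cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n');
          std::cout << "Invalid input: please enter <angle> <deg|rad>, (CTRL-D to exit)\n";
          continue;
        }
        if (unit == "deg") {
          const double normalized =
            turtlelib::rad2deg(turtlelib::normalize_angle(turtlelib::deg2rad(angle)));
          std::cout << normalized << " deg is " << turtlelib::deg2rad(normalized) << " rad.\n";
        } else {
          const double normalized = turtlelib::normalize_angle(angle);
          std::cout << normalized << " rad is " << turtlelib::rad2deg(normalized) << " deg.\n";
        }
      }
      return 0;
    }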

Task B.3 (geometry primitives)

  1. Download geometry2d.hpp
    • You are responsible for implementing this file in geometry2d.cpp and filling in any blank implementations (indicated by {} in geometry2d.hpp).
    • You may add private members to any class.
    • Do not add or modify any public members, function prototypes or add any names or variables to the global or turtlelib namespaces (or add any new namespaces).
    • The doxygen style comments in the header files should not be repeated in the implementation files
      • It is a matter of style whether to put these detailed comments in the .cpp or .hpp file
      • Use doxygen style comments throughout your code for the rest of this course.

Hints

  1. For more information about the operator overloading going on in this example, see this good Stack Overflow post
  2. For more information about transforms, see Rigid Body Transformations in 2D
  3. Implementing the stream insertion (<<) and extraction (>>) operators can be done successfully in only a few lines of code.
    • Simplifications can be made by looking at the specification for operator>> and the hint in geometry2d.hpp
  4. It makes sense to iterate with B.4, implementing some functionality and then testing.

Task B.4 (unit testing geometry)

  1. In test_geometry2d.cpp, test all non-constexpr functions in geometry2d.hpp using Catch2
  2. Every function should have at least one test.
  3. Make sure the tests all pass! You will be using this library in future assignments so you need your implementation to be correct.

Hints

  1. It makes sense to iterate with B.3 so that you can use the tests to help you implement functionality
  2. The std::stringstream class will be helpful for testing operator<< and operator>>.
    • It lets you use an std::string as a stream instead of a file.
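
To illustrate the std::stringstream hint, here is a hedged sketch of one such test. It assumes geometry2d.hpp defines a turtlelib::Vector2D with public x and y members and stream operators that use a "[x y]" text format, and it uses the Catch2 v3 header; adapt all of these to your actual header and Catch2 setup.

    // Sketch: the Vector2D type, its "[x y]" text format, and the Catch2 v3
    // header are assumptions; match them to geometry2d.hpp and your Catch2 setup.
    #include <sstream>
    #include <catch2/catch_test_macros.hpp>
    #include "turtlelib/geometry2d.hpp"

    TEST_CASE("Vector2D stream insertion and extraction", "[geometry2d]")
    {
      std::stringstream ss;                  // acts like a file, but in memory
      ss << turtlelib::Vector2D{1.5, -2.0};  // exercise operator<<
      REQUIRE(ss.str() == "[1.5 -2]");

      turtlelib::Vector2D v{};
      ss >> v;                               // exercise operator>> on the same text
      REQUIRE(v.x == 1.5);                   // exactly representable, so == is safe here
      REQUIRE(v.y == -2.0);
    }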

Task B.5 (SE(2) geometry)

  1. Download se2d.hpp, a turtlelib header file that lets you work with SE(2) geometry.
    • This file will be a header that is part of the turtlelib library
    • Implement the specified functionality in a file called se2d.cpp
    • You may add private members to any class, but do not add or modify any public members or function prototypes, or otherwise add to or modify the turtlelib or global namespaces.
    • Do not use any external libraries

Hints

  1. It will be helpful to iterate with writing unit tests so you can test as you go
  2. This task is significantly more difficult to implement, debug, and optimize if you take the approach of doing everything in SE(3) and specializing to SE(2), rather than immediately assuming that the world is 2 dimensional
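
For reference, working directly in 2D means a transform is just an angle and a translation, and composition reduces to

\[ T_{ac} = T_{ab} T_{bc}: \quad \theta_{ac} = \theta_{ab} + \theta_{bc}, \quad \begin{pmatrix} x_{ac} \\ y_{ac} \end{pmatrix} = \begin{pmatrix} x_{ab} \\ y_{ab} \end{pmatrix} + \begin{pmatrix} \cos\theta_{ab} & -\sin\theta_{ab} \\ \sin\theta_{ab} & \cos\theta_{ab} \end{pmatrix} \begin{pmatrix} x_{bc} \\ y_{bc} \end{pmatrix} \]

so no \(3 \times 3\) homogeneous-matrix machinery is required.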

Task B.6 (unit testing SE(2))

  1. In test_se2d.cpp test every function and method in se2d.hpp
  2. You may directly combine your tests with the work of your classmates to achieve this coverage, as long as the following conditions are met:
    • You personally must develop and write at least six test cases.
    • You may only share or receive test cases directly from the person who wrote them.
    • You do not work with more than three other students
    • Each test is annotated with the author's name as follows:

      TEST_CASE("inverse", "[transform]") // First Name, Last Name
      
    • The author name and test name are listed on their own line in citations.txt
    • Each TEST_CASE can have multiple assertions (e.g. CHECK, REQUIRE) but only one author

Task B.7 (visualization)

It can be difficult to know if your geometry-related code is working without being able to visualize it. Unfortunately there is no generally agreed upon good plotting library for C++. Therefore, you will make your own visualizations by outputting the content to an SVG file (a vector-graphics file format). Although the SVG specification is complicated, the subset that we need is small and can be output by a simple C++ program.

Examine this example svg file in a text editor to understand how SVG (for our purposes) works and what content you will need to produce. I have commented the file with XML comments (<!-- -->): these do not need to be included in your version but rather are meant to explain the format.

  1. Create a new header file svg.hpp and implementation file svg.cpp in turtlelib.
  2. Design and implement a class called Svg in the turtlelib namespace that lets you "Draw" points, vectors, and coordinate frames
    • The specifics of each of these primitives are given in the example svg.
  3. You will need some provision to write the SVG to a file, or get the file contents as a string that can then be written to a file.
  4. Your Svg class must be safe to use, even in the face of exceptions.
    • An invalid SVG file should never be written to disk: either the file is a valid SVG or it is not written
  5. Write at least one unit test for the SVG library in a file called test_svg.cpp. Here's how:
    1. Get the Svg class working and verify it manually, making sure that all object types are present
    2. Use the output from your manually verified example in a test case
    3. This way, if you ever make a change that breaks the test example the test will fail.
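
One workable shape for the class, sketched below, is to buffer elements in memory and only ever emit a complete document, which also covers the exception-safety requirement (a throw while drawing leaves nothing on disk). Everything here other than the class name Svg, including the member functions, the fixed viewBox, and the point radius, is an assumption to adapt to the primitives defined in the example svg, not a required interface.

    #include <fstream>
    #include <sstream>
    #include <string>

    namespace turtlelib
    {
      /// \brief accumulate SVG elements in memory; only complete documents are written
      class Svg
      {
      public:
        /// \brief draw a filled circle (a "point") at pixel coordinates (cx, cy)
        void draw_point(double cx, double cy, const std::string & color)
        {
          body_ << "<circle cx=\"" << cx << "\" cy=\"" << cy
                << "\" r=\"3\" stroke=\"" << color << "\" fill=\"" << color
                << "\" stroke-width=\"1\"/>\n";
        }

        /// \brief the complete SVG document, usable directly in a unit test
        std::string to_string() const
        {
          return std::string{"<svg width=\"8.5in\" height=\"11in\" "
            "viewBox=\"0 0 816 1056\" xmlns=\"http://www.w3.org/2000/svg\">\n"}
            + body_.str() + "</svg>\n";
        }

        /// \brief write the finished document to disk in one shot
        void write(const std::string & filename) const
        {
          std::ofstream out(filename);
          out << to_string();
        }

      private:
        std::stringstream body_;
      };
    }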

Hint

  1. Modify some values of the sample svg by hand and view the image to get a feel for what they do.
  2. You may wish to maintain a separate "testing" program that you manually use to call your class to help.
  3. You will need a way to convert between turtlelib coordinate frames and the SVG ViewBox coordinate frame
  4. The location of a point can be determined by a Transform2D relative to the midpoint of the page
  5. Drawing a vector depends on not just the vector's length and direction, but also the position of the vector's tail.

Task B.8 (executable implementation)

  1. Create a file called frame_main.cpp, which will compile into an executable called frame_main. Here is what the program should do:
    1. Prompt the user to enter two transforms: \(T_{ab}\) and \(T_{bc}\).
    2. Compute and output \(T_{ab}\), \(T_{ba}\), \(T_{bc}\), \(T_{cb}\), \(T_{ac}\), and \(T_{ca}\) and draw each frame in the svg file (with frame \({a}\) located at \((0, 0)\)).
    3. Prompt the user to enter a point \(p_a\) in frame \({a}\)
    4. Compute \(p_a\)'s location in frames \({b}\) and \({c}\) and output the locations of all 3 points
      • Use purple to draw \(p_a\), brown to draw \(p_b\), and orange to draw \(p_c\).
    5. Prompt the user to enter a vector \(v_b\) in frame \({b}\)
      • Normalize the vector to form \(\hat{v}_b\).
      • Draw \(\hat{v}_b\) with the tail located at \((0, 0)\) in frame \({b}\), in brown.
      • Draw \(v_b\) with tail located at \((0,0)\) in frame \({b}\), in black.
    6. Output \(v_b\) expressed in frame \({a}\) and frame \({c}\) coordinates
      • Draw \(v_a\) with the tail at \((0, 0)\) in frame \({a}\), in purple.
      • Draw \(v_c\) with the tail at \((0, 0)\) in frame \({c}\), in orange.
    7. Output the drawing to /tmp/frames.svg.
    8. All outputs that are there to prompt the user should be written to stderr
    9. All outputs that are in response to a calculation should be written to stdout
  2. Run your program using numbers that differ from the ones in my example.
    • Create a transcript of the input and save it as turtlelib/exercises/B6_frame_input.txt
      • Running frame_main < B6_frame_input.txt should result in your program running as if you entered each line in B6_frame_input.txt via the keyboard
    • Save and commit the output to turtlelib/exercises/B6_frame_output.txt
      • After creating B6_frame_input.txt you can create this file with frame_main < B6_frame_input.txt > B6_frame_output.txt
      • The stderr and stdout are separated so the output you capture is only the results of computations, not your prompts.
    • Save and commit /tmp/frames.svg to turtlelib/exercises/B6_frames.svg
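
All of the requested transforms and coordinates follow from two identities,

\[ T_{ba} = T_{ab}^{-1}, \qquad T_{ac} = T_{ab} T_{bc}, \]

together with applying the appropriate transform to change coordinates, e.g. \(p_b = T_{ba} p_a\).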

Hint

  1. Think about how the transforms should work and what should be displayed, then open the image in inkscape.
  2. Sometimes, objects will be drawn on top of each other (e.g., points show up in the same location regardless of what frame they are expressed in)
    • In inkscape you can select objects and move them around to see what is stacked.
    • In some cases above, the correct answer may involve objects being drawn on top of one another.

Task B.9 (README.md)

  1. Provide a README.md for the turtlelib library
  2. It should offer a brief description of each of the header files you implemented.

Task C (The Simulator)

We will now create a package called nusim that can be used as a simulator and visualizer.

The nusimulator node will provide a simulated robot environment that we will build upon throughout the course. The nusimulator node uses rviz2 for visualization. Initially, it will be capable of creating some stationary walls and tracking the position of a robot.

The overall structure of the simulation is as follows:

  1. Initialization
  2. Loop at a fixed frequency until terminated.
  3. On each loop iteration:
    • Update the state of the world (integrate time forward by a timestep)
    • Publish messages that provide state information as if it were coming from a real robot, and update the rviz visualization
    • Process service/subscriber callbacks to get commands for the next time step

We will separate information that can only be known/done by the simulation (such as teleporting the robot or exact distance measurements) from information that can be known/done by the robot (such as driving forward or noisy distance measurements) using ROS namespaces.

Any topics/services/parameters relating directly to the domain of the simulation will live in the nusim namespace and should not be accessed by nodes outside that namespace.

It may be useful to start writing the launchfile (Task C.6) before finishing some of the other tasks so that you can easily run your code as you work on it.

Task C.1 (nusim package)

  1. Create an ament_cmake ros package called nusim.
    • This should be a directory within your repository (i.e., <repo>/nusim)
    • Update the package.xml as follows
      • Give it a version number of 0.2.6
      • Provide a description other than the default.
      • Fill in your name and email address as the maintainer and as an author
      • Choose a license other than TODO
      • The package has a depend on rclcpp
      • Fill out the other dependencies properly.
  2. The package must pass colcon test with no warnings or problems, including the default tests that are created when running ros2 pkg create

Task C.2 (simulation node)

  1. Create the node nusimulator (the main simulation node) and implement it in src/nusim.cpp
  2. It should run a main loop at a frequency specified by the parameter rate.
    • If rate is not specified, default to 100 Hz
  3. In the main timer callback, publish a std_msgs/msg/UInt64 message on the ~/timestep topic, which tracks the current timestep of the simulation
    • Each time the main timer executes, another timestep of the simulation occurs.
  4. Implement a ~/reset service of type std_srvs/srv/Empty that restores the initial state of the simulation.
    • For now, the only state is the ~/timestep value, which should be reset to zero.
    • As more functionality is added to the simulation, the ~/reset service will need to do more.
  5. Note the ~ before the topic names. This symbol makes the topic "private" to the node, meaning that if the node is called, for example, nusimulator then ~/timestep will resolve to nusimulator/timestep
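
The skeleton for this node is small; a hedged sketch using rclcpp is below. The class, member, and callback names are my own choices rather than requirements, and the node will grow as later tasks add state to the simulation.

    // Sketch of the C.2 skeleton: a rate parameter, a UInt64 publisher on
    // ~/timestep, and a ~/reset service. Names other than "nusimulator",
    // "rate", "~/timestep", and "~/reset" are illustrative assumptions.
    #include <chrono>
    #include <cstdint>
    #include <memory>
    #include "rclcpp/rclcpp.hpp"
    #include "std_msgs/msg/u_int64.hpp"
    #include "std_srvs/srv/empty.hpp"

    class NuSimulator : public rclcpp::Node
    {
    public:
      NuSimulator()
      : Node("nusimulator")
      {
        const double rate = declare_parameter("rate", 100.0);  // Hz, defaults to 100
        timestep_pub_ = create_publisher<std_msgs::msg::UInt64>("~/timestep", 10);
        reset_srv_ = create_service<std_srvs::srv::Empty>(
          "~/reset",
          [this](std_srvs::srv::Empty::Request::SharedPtr, std_srvs::srv::Empty::Response::SharedPtr) {
            timestep_ = 0;  // for now, "reset" just restarts the timestep count
          });
        timer_ = create_wall_timer(
          std::chrono::duration<double>(1.0 / rate),
          [this]() {
            std_msgs::msg::UInt64 msg;
            msg.data = timestep_++;  // one simulation timestep per timer tick
            timestep_pub_->publish(msg);
            // later: update the world state and broadcast transforms here
          });
      }

    private:
      uint64_t timestep_ = 0;
      rclcpp::Publisher<std_msgs::msg::UInt64>::SharedPtr timestep_pub_;
      rclcpp::Service<std_srvs::srv::Empty>::SharedPtr reset_srv_;
      rclcpp::TimerBase::SharedPtr timer_;
    };

    int main(int argc, char ** argv)
    {
      rclcpp::init(argc, argv);
      rclcpp::spin(std::make_shared<NuSimulator>());
      rclcpp::shutdown();
      return 0;
    }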

Task C.3 (simulated turtle)

We will next add the turtlebot3 robot to the simulation. The simulation must track, control, and publish information about this robot.

  1. The actual (ground truth) state of the simulated turtlebot will be represented by the red turtlebot from Task A.
    • The ground truth is known only by the simulator
  2. The nusimulator should broadcast a transform between the nusim/world frame and red/base_footprint frame.
    • This transform represents the actual pose of the robot.
    • Your control algorithms are never allowed to lookup frames starting with nusim as that is ground-truth data known only to the simulator.
    • Your control algorithms are also never allowed to lookup frames starting with red, since these are part of the "ground truth" robot.
  3. The initial pose of the robot should be specified by the parameters x0, y0, and theta0 provided to the nusim node
    • When the nusim starts, the robot should be at the position specified by these parameters relative to the nusim/world frame
    • These values default to 0.0 if not specified
  4. When the ~/reset service is called, the robot should be restored to the location specified by the x0, y0, and theta0 parameters
    • The ~/reset service should read the current values of the x0, y0, and theta0 parameters, even if they were set after startup
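
For the transform broadcast, something along the lines of the following helper can be used with a tf2_ros::TransformBroadcaster owned by the node; the free-function form, the names, and calling it from the timer callback are illustrative assumptions, not requirements.

    #include "geometry_msgs/msg/transform_stamped.hpp"
    #include "rclcpp/rclcpp.hpp"
    #include "tf2/LinearMath/Quaternion.h"
    #include "tf2_ros/transform_broadcaster.h"

    // Publish the ground-truth pose of the red robot. In practice the broadcaster
    // is a member of the simulation node and this runs inside the timer callback.
    void broadcast_ground_truth(
      tf2_ros::TransformBroadcaster & broadcaster,
      const rclcpp::Time & stamp,
      double x, double y, double theta)
    {
      geometry_msgs::msg::TransformStamped t;
      t.header.stamp = stamp;
      t.header.frame_id = "nusim/world";        // ground-truth frame, simulator-only
      t.child_frame_id = "red/base_footprint";  // the red (actual) robot
      t.transform.translation.x = x;
      t.transform.translation.y = y;
      tf2::Quaternion q;
      q.setRPY(0.0, 0.0, theta);                // planar robot: rotation about z only
      t.transform.rotation.x = q.x();
      t.transform.rotation.y = q.y();
      t.transform.rotation.z = q.z();
      t.transform.rotation.w = q.w();
      broadcaster.sendTransform(t);
    }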

Task C.4 (walls)

  1. The arena where the robot drives will be rectangular, with walls on the boundary.
  2. Allow the user to specify the size of the arena using parameters:
    • arena_x_length is the length of the arena in the world \(x\) direction
    • arena_y_length is the length of the arena in the world \(y\) direction
    • The arena is centered at \((0,0)\)
    • The walls are 0.25m tall
    • You can use whatever thickness you would like, but the arena is sized in terms of the free space inside the walls.
  3. The walls should be red to signify that their location is known only to the simulator.
  4. Publish the walls as a visualization_msgs/MarkerArray message on the ~/real_walls topic once when the simulator starts.
    • Use the appropriate QoS settings so that the walls will show up even if rviz subscribes after they are published.
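
Regarding the QoS hint: transient_local ("latched") durability keeps the last published message available for late-joining subscribers such as rviz. A minimal sketch follows, with the wrapper function purely for illustration (in practice this is one line in the node's constructor):

    // Sketch: transient_local durability means rviz still receives the walls
    // even if it starts after the single publication at startup.
    #include "rclcpp/rclcpp.hpp"
    #include "visualization_msgs/msg/marker_array.hpp"

    rclcpp::Publisher<visualization_msgs::msg::MarkerArray>::SharedPtr
    make_walls_publisher(rclcpp::Node & node)
    {
      return node.create_publisher<visualization_msgs::msg::MarkerArray>(
        "~/real_walls", rclcpp::QoS(1).transient_local());
    }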

Task C.5 (cylindrical obstacles)

  1. Add the ability to add cylindrical obstacles to the environment
  2. The cylinders should be 0.25m tall, but can have a variable radius (as specified by the user)
  3. The cylinders should be red (the locations of the cylinders are ground truth) and published with the ns field of the Marker message set to red.
  4. All obstacles will be specified as parameters to the nusim node:
    • obstacles.x is a list of the obstacles' x coordinates (float64)
    • obstacles.y is a list of the obstacles' y coordinates (float64)
    • obstacles.x and obstacles.y should always be the same length or the node should log an error message and exit.
    • obstacles.r is the radius of the obstacles (float64). We will assume that all obstacles are the same radius.
    • The node reads these parameters once and never updates them
  5. You should be able to specify an arbitrary number of obstacles using this method.
  6. The nusim node should publish a visualization_msgs/MarkerArray message on the ~/real_obstacles topic once on startup
    • Use the appropriate QoS settings so that the obstacles will show up even if rviz subscribes after they are published.
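
Putting the parameters together, a hedged sketch of building the obstacle markers is below; the frame, field, and error-handling choices are assumptions to adapt rather than requirements.

    #include <cstddef>
    #include <cstdlib>
    #include <vector>
    #include "rclcpp/rclcpp.hpp"
    #include "visualization_msgs/msg/marker.hpp"
    #include "visualization_msgs/msg/marker_array.hpp"

    // Read the obstacle parameters once and build the corresponding cylinder markers.
    visualization_msgs::msg::MarkerArray make_obstacle_markers(rclcpp::Node & node)
    {
      const auto xs = node.declare_parameter("obstacles.x", std::vector<double>{});
      const auto ys = node.declare_parameter("obstacles.y", std::vector<double>{});
      const auto radius = node.declare_parameter("obstacles.r", 0.038);
      if (xs.size() != ys.size()) {
        RCLCPP_ERROR(node.get_logger(), "obstacles.x and obstacles.y differ in length");
        std::exit(EXIT_FAILURE);  // or throw / rclcpp::shutdown(), as you prefer
      }
      visualization_msgs::msg::MarkerArray arr;
      for (std::size_t i = 0; i < xs.size(); ++i) {
        visualization_msgs::msg::Marker m;
        m.header.frame_id = "nusim/world";
        m.header.stamp = node.get_clock()->now();
        m.ns = "red";                 // obstacle locations are ground truth
        m.id = static_cast<int>(i);
        m.type = visualization_msgs::msg::Marker::CYLINDER;
        m.action = visualization_msgs::msg::Marker::ADD;
        m.pose.position.x = xs.at(i);
        m.pose.position.y = ys.at(i);
        m.pose.position.z = 0.125;    // center of a 0.25 m tall cylinder
        m.pose.orientation.w = 1.0;
        m.scale.x = 2.0 * radius;     // scale.x/scale.y are diameters
        m.scale.y = 2.0 * radius;
        m.scale.z = 0.25;
        m.color.r = 1.0f;             // red, fully opaque
        m.color.a = 1.0f;
        arr.markers.push_back(m);
      }
      return arr;
    }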

Task C.6 (nusim launch)

  1. Write an xml launchfile called nusim.launch.xml that starts the simulator
    • The launchfile should start rviz, nusim, and load all parameters required to run the simulation
    • The configuration file for rviz should be stored in config/nusim.rviz
    • When launched, the robot, obstacles, and nusim/world tf frame should be visible
    • The launchfile should include other launchfiles you've written as needed
  2. The default parameters to run the simulation should be stored in config/basic_world.yaml
    • The basic_world should have three cylindrical obstacles of radius 0.038m, placed at (-0.4, -0.7), (0.7, -0.8), and (0.5, 0.8).
    • Start the robot at (-0.6, 0.3, 1.28)
    • We will add to this configuration file as more parameters are required
  3. The nusim.launch.xml launchfile takes an argument called config_file.
    • This argument should let a user specify a .yaml file to configure the simulator.
    • If blank, use the default config/basic_world.yaml configuration file.

Task C.7 (README.md)

Create a README.md for this package. It should provide

  1. A brief description of the package.
  2. Descriptions of the provided launchfiles
  3. A description of the parameters that can be used to change simulator settings.
  4. Include a screenshot from rviz as launched from nusim.launch.xml
    • Store the image in nusim/images/nusim1.png

Author: Matthew Elwin.