
Homework 1

Task 0 (Preliminaries)

Task 0.1 (Reading the Guidelines)

  1. Carefully read the homework submission guidelines.
  2. Be sure to follow these guidelines for every assignment.

Task 0.2 (Making a Git Repository)

  1. Create a git repository using the GitHub Classroom link (sent separately).
  2. All packages created for this class should be subdirectories of this git repository.
  3. You will be expected to maintain this git repository throughout the whole quarter.

Task 0.3 (Making a README)

  1. Add a file called README.md to the base directory of the repository, based on the following template:

    # ME495 Sensing, Navigation and Machine Learning For Robotics
    * <First Name> <Last Name>
    * Winter 2022
    # Package List
    This repository consists of several ROS packages
    - <PACKAGE1> - <one sentence description>
    
  2. Replace each <ITEM> with the appropriate item in the template above.
  3. Add this README.md to your repository and commit.
  4. Whenever you add a new package, list it in this README.md
  5. Linux is case-sensitive: Readme.md and readme.md are not the same file as README.md.

Task 0.4 (Making a Tasks.md)

  1. Add a file called Tasks.md to the repository
  2. Whenever a task is complete, list it in this file on its own line.

Task A (Robot Description)

The goal of this task is to adapt the model in the turtlebot3_description package for our needs. Throughout this project, we may have reason to visualize multiple turtlebots in rviz simultaneously: for example, we may want to see both the estimated turtlebot location and the actual turtlebot location.

Upon completion, you will be able to display multiple turtlebot3 models in rviz, each appearing with a different color. You will also be able to change the physical properties of the robot by editing a yaml file.

Task A.1 (nuturtle_description package)

  1. Create an ament_cmake ROS package called nuturtle_description.
    • This should be a directory within your repository (i.e., <repo>/nuturtle_description)
    • The package will contain urdf files and basic debugging, testing, and visualization code for the robots you will be using in this class
  2. Update the package.xml as follows:
    • Give it a version number of 0.1.1
    • Provide a descriptive description.
    • Fill in your name and email address as the maintainer and as an author
    • Set the License to APLv2 (the Apache License 2.0). You could use a different one, but this is what the turtlebot3 code is released under.
    • The package should have an exec_depend for each package used in its launchfiles, so update these appropriately.
    • It also has an exec_depend on ros2launch
  3. If you modify files from the turtlebot3 repository, include a comment at the top of that file stating that it has been modified from the original version (as per APLv2).
  4. The package (like all packages you write) must pass colcon test with no warnings or problems.
  5. HINT: It is a good idea to commit after creating the initial package but before it actually does anything.

Task A.2 (visualization)

  1. Write a launchfile called load_one.launch.py (in <repo>/nuturtle_description/launch) that loads the turtlebot3_burger URDF into a robot_state_publisher and optionally allows viewing it in rviz.
    • The argument use_rviz (true or false, default true) controls whether rviz is launched.
    • The argument use_jsp (true or false, default true) controls whether the joint_state_publisher is used to publish default joint states.
    • ros2 launch nuturtle_description load_one.launch.py --show-args should print documentation for the launchfile arguments.
      • This is the last reminder that all launchfile arguments must be documented: see the guidelines
    • The appropriate rviz configuration should be stored in nuturtle_description/config/basic_purple.rviz
    • If rviz is launched, the launchfile should terminate when rviz is closed
    • See Launch Notes
  2. Copy the minimal set of mesh (e.g., .stl) and urdf/xacro files from the humble-devel branch of the turtlebot3_description package required to display the turtlebot
    • Do not include any files that are not necessary.
    • Include all the necessary .stl files in the base meshes/ subdirectory and do not keep the bases/, sensors/, and wheels/ sub-directories.
    • Make sure that when copied to your repository the .stl files have the proper permissions (e.g., are not executable).
      • Be careful as they are executable in the turtlebot3 repository.
    • If you use .urdf files (as opposed to .urdf.xacro files) rename them to .urdf.xacro and set them up to use xacro, as we will be modifying them later and need xacro.
  3. Modify the urdf/xacro files so that the meshes are loaded from their new location, as installed into the install space of your workspace
    • Remember, all files that you want to use must be installed via the CMakeLists.txt
  4. No turtlebot3 packages should need to be installed for ros2 launch nuturtle_description load_one.launch.py to work.
    • Be careful here. The urdf files, by default, load assets from the turtlebot3_description package, so you need to modify this behavior.
    • If you temporarily move the meshes in your package and everything still works, the assets are likely being loaded from the turtlebot3_description package
  5. Your launchfile should be written in a declarative style, which means that it
    • Should not use any variables
    • The entire launch description should be returned from a single return statement
    • Hint: See this example for how to load a xacro URDF into the robot_state_publisher

Task A.3 (yaml File)

  1. Create <repo>/nuturtle_description/config/diff_params.yaml file to provide a complete parametric description of a differential drive robot.
    • Not all these parameters will be used in this assignment, but they will be useful later
  2. You should have the following parameters:
    • wheel_radius: The radius of the wheels (see: Turtlebot3 Specifications)
    • track_width: The distance between the wheels (see: Turtlebot3 Specifications)
    • motor_cmd_max: 265. The motors are provided commands in the interval [-motor_cmd_max, motor_cmd_max]
    • motor_cmd_per_rad_sec: Each motor command unit (mcu) is 0.024 rad/sec (i.e., 1 mcu = 0.024 rad/sec)
    • encoder_ticks_per_rad: The number of encoder ticks per radian. One revolution of the wheel is \(2^{12}\) ticks because it is a 12-bit encoder. (i.e. \(2^{12} \mathrm{ticks} = 2 \pi \mathrm{rad}\))
    • collision_radius: Set this to be 0.11. This is some simplified geometry used for collision detection
  3. Modify the turtlebot3 URDF so that it uses the parameters from diff_params.yaml such that
    • Changing wheel_radius changes the collision geometry of the wheels
    • Changing track_width changes the distance between the wheels
    • The base_link of the robot uses the collision_radius as follows:
      • The cylinder should be the same height as the box it is replacing
      • If collision_radius is negative, the original box collision geometry is used
    • Hint: in XML, '<' and '>' are special characters reserved for tags. Use &lt; and &gt; to represent them inside attribute values
  4. See Xacro Notes
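
For reference, the encoder parameter follows directly from item 2 above: one wheel revolution is \(2^{12} = 4096\) ticks and corresponds to \(2\pi\) radians, so \(\mathrm{encoder\_ticks\_per\_rad} = \frac{4096}{2\pi} \approx 651.9\) ticks per radian.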

Task A.4 (prefix)

Throughout this project we will need to track and display multiple robots. Toward this end, we will add an argument called color to the robot's xacro file, which will change the color of the robot and launch nodes corresponding to each robot in their own respective namespaces.

  1. Modify the xacro URDF files to take an argument called color: possible values are red, green, blue, and purple.
    • The RGB values for Purple are Northwestern Purple: Red: 0.3, Green: 0.16, Blue: 0.52
  2. Based on the value of color, the resulting URDF file should set the color of the base_link appropriately
  3. Modify load_one.launch.py to accept an argument called color (defaulting to purple) that
    • Determines the color that is passed to the xacro file as an argument (be sure to document it).
      • Use the choices keyword argument to DeclareLaunchArgument to automatically document and restrict the value to valid colors
    • Determines the namespace in which all nodes (including rviz) are launched (namespace used is the value of color)
    • Sets the frame_prefix parameter of the robot_state_publisher such that all tf frames published are prefixed with color/
    • Sets the appropriate TF Prefix in the RobotModel display in rviz
      • HINT: you may need to create a separate rviz configuration file for each color (basic_<color>.rviz)
      • HINT: The launch action SetLaunchConfiguration lets you create a LaunchConfiguration from substitution rules, which can be concatenated using a list. So you can build the name of the file using SetLaunchConfiguration and access it later.
    • Sets the appropriate fixed_frame in rviz, without changing the .rviz configuration file (hint: see rviz2 --help).
    • Using the choices argument to DeclareLaunchArgument, restrict the value of color to be only the valid values (supported colors and empty "").

Task A.5 (multi robot)

In this task we will test the ability to load multiple independent robots in rviz.

  1. Create an XML launchfile called load_all.launch.xml that loads the red, green, blue, and purple robots and displays them in rviz
    • Create and save the configuration in config/basic_all.rviz
  2. This launchfile should include load_one.launch.py several times to accomplish its task.
  3. This launchfile should start rviz in the global namespace and terminate the launchfile when rviz closes
    • This is accomplished with the "required" attribute to the node tag in ROS 1. It is not implemented yet in ROS2, so just leave it off for now.
    • As of 1/2024 there is a pull request to enable this feature in XML launch files, but it is not released for iron.
  4. Each robot should have its own joint_state_publisher and robot_state_publisher in the appropriate <color> namespace (i.e., red/, green/, blue/, or purple/)
  5. The locations of the robots should be as follows:
    • red is at (0.3,0,0) in the nusim/world frame
    • green is at (0,0.6,0) in the nusim/world frame
    • blue is at (-0.71,0,0) in the nusim/world frame
    • purple is at (0,-0.9,0) in the nusim/world frame
  6. A tf view should be added to rviz and the nusim/world frame should be the only frame visible.
  7. In the rviz model tree, rename each model to its appropriate color
    • Instead of four items named "RobotModel", have a "BlueRobot", a "RedRobot", etc.
  8. Hint: use ros2 run tf2_ros static_transform_publisher --help for information on a node that the launchfile can run to publish static transforms

Task A.6 (README)

  1. Write a README.md for your package based on the following template (fill in the <X Here> with the appropriate command; remember to remove the <>):

    # Nuturtle  Description
    URDF files for Nuturtle <Name Your Robot>
    * `<Command Here>` to see the robot in rviz.
    * `<Command Here>` to see four copies of the robot in rviz.
    ![](images/rviz.png)
    * The rqt_graph when all four robots are visualized (Nodes Only, Hide Debug) is:
    ![](images/rqt_graph.svg)
    # Launch File Details
    * `<Command To Show Arguments of load_one.launch.py>`
      `<Output of the Above Command>`
    * `<Command To Show Arguments of load_all.launch.xml>`
      `<Output of the Above Command>`
    
  2. The rqt_graph should be saved as an .svg from the rqt_graph program and stored as images/rqt_graph.svg.
  3. A screenshot from rviz showing the robots should be saved as images/rviz.png
  4. The images must display properly when viewing the README on GitHub

Task B (C++ and 2D Transforms)

Here you will begin to write a library called turtlelib for performing 2D rigid body transformations and other functionality.

You are not permitted to use any libraries other than the C++ standard library to complete this task.

The first steps in this assignment take you through the process of building a non-ros C++ project, first from scratch, then with CMake.

The tasks here are grouped according to what needs to be done, but it is likely not a good idea to simply implement the whole library and then test it. Instead you should read through the tasks and work on them concurrently so you can examine and test your implementation as you go along.

Task B.1 (geometry primitives)

  1. Create a new directory called turtlelib in your base repository.
    • Set this up as a CMake project. Include the ability to generate documentation with doxygen and run unit tests.
    • You may use the example in CMake Basics to get started, but make sure you read through it and do not have any unnecessary code.
  2. This directory will hold the turtlelib library and associated executables and will be a ROS-Independent CMake Project
    • Follow the directory structure specified in CMake Basics and be sure to put files in the appropriate locations.
    • I will provide file names but not the full path to each file; place each file in the correct location based on its purpose.
  3. Download geometry2d.hpp
    • This file will be a header that is part of a library called turtlelib
    • You are responsible for implementing this file in geometry2d.cpp and filling in any blank implementations (indicated by {} in geometry2d.hpp).
    • You may add private members to any class but do not add or modify any public members or any function prototypes or otherwise add to or modify the turtlelib or global namespaces.
    • The doxygen style comments in the header files should not be repeated in the implementation files
      • It is a matter of style whether to put these detailed comments in the .cpp or .hpp file
      • Use doxygen style comments throughout your code for the rest of this course.
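
As a minimal sketch of what part of geometry2d.cpp might look like, consider the normalize_angle function mentioned in Task B.2. The sketch assumes a prototype of double normalize_angle(double rad) and a normalization convention of \((-\pi, \pi]\); check geometry2d.hpp for the actual prototype and convention and adjust accordingly.

    #include <cmath>
    #include "turtlelib/geometry2d.hpp"  // assumed include path

    namespace turtlelib
    {
        // Map an arbitrary angle into (-pi, pi] (assumed convention)
        double normalize_angle(double rad)
        {
            constexpr double pi = 3.14159265358979323846;
            double ang = std::fmod(rad, 2.0 * pi);  // now in (-2*pi, 2*pi)
            if (ang <= -pi) {
                ang += 2.0 * pi;
            } else if (ang > pi) {
                ang -= 2.0 * pi;
            }
            return ang;
        }
    }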

Hints

  1. For more information about the operator overloading used in these files, see a good Stack Overflow post
  2. For more information about transforms, see Rigid Body Transformations in 2D
  3. The stream extraction operator >> can be implemented successfully in only a few lines of code.
    • Simplifications can be made by looking at the specification for operator>> and the hint in geometry2d.hpp
  4. A good strategy for implementation is:
    • Write minimal stub functionality that does not work but does compile.
    • Move on to task B.2 to write some tests (which initially fail).
    • Iterate between writing tests and implementing the functionality until both B.1 and B.2 are complete.
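
To illustrate hint 3, a stream extraction operator can indeed be written in a few lines. The sketch below assumes a Point2D struct with public x and y members and an input format of either "x y" or "[x y]"; the actual type names, format, and hint are in geometry2d.hpp.

    #include <iostream>
    #include "turtlelib/geometry2d.hpp"  // assumed include path

    namespace turtlelib
    {
        std::istream & operator>>(std::istream & is, Point2D & p)
        {
            is >> std::ws;              // skip leading whitespace
            if (is.peek() == '[') {
                is.get();               // discard '['
                is >> p.x >> p.y;
                is.get();               // discard ']'
            } else {
                is >> p.x >> p.y;
            }
            return is;
        }
    }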

Task B.2 (unit testing geometry)

  1. In test_geometry2d.cpp, test all non-constexpr functions in geometry2d.hpp
    • See The Catch2 Tutorial
    • Be sure to use approximate comparisons: see Floating Point Comparisons
      • Do NOT use your almost_equal implementation for testing purposes. This function is not designed to provide testing diagnostics, it is for use in an actual system that will require approximate comparisons.
  2. Every function should have at least one test, including operator<< and operator>>.
  3. The normalize_angle function is crucial and traditionally a major source of bugs.
    • Test at least the following cases: \(\pi\), \(-\pi\), \(0\), \(-\frac{\pi}{4}\), \(\frac{3\pi}{2}\), \(-\frac{5\pi}{2}\) (a sketch of such a test appears after the hints below).
  4. Make sure the tests all pass! You will be using this library in future assignments so you need your implementation to be correct.
  5. Catch2 v3 is not shipped with Ubuntu 22.04. Here is how to install it from source:

    git clone https://github.com/catchorg/Catch2.git
    cd Catch2
    cmake -Bbuild . -DBUILD_TESTING=OFF
    cmake --build build/
    sudo cmake --build build/ --target install
    

Hints

  1. It makes sense to iterate with B.1 so that you can use the tests to help you implement functionality
  2. The std::stringstream class will be helpful for testing operator<< and operator>>. It lets you use an std::string as a stream instead of a file.
  3. Catch2 Tutorial
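
A sketch of a Catch2 v3 test covering the normalize_angle cases from item 3 above is shown here. The expected values assume a \((-\pi, \pi]\) convention; adjust them to whatever convention your geometry2d.hpp specifies.

    #include <catch2/catch_test_macros.hpp>
    #include <catch2/matchers/catch_matchers_floating_point.hpp>
    #include "turtlelib/geometry2d.hpp"  // assumed include path

    using Catch::Matchers::WithinAbs;

    TEST_CASE("normalize_angle maps angles into (-pi, pi]", "[geometry2d]")
    {
        constexpr double pi = 3.14159265358979323846;
        constexpr double tol = 1.0e-12;
        REQUIRE_THAT(turtlelib::normalize_angle(pi), WithinAbs(pi, tol));
        REQUIRE_THAT(turtlelib::normalize_angle(-pi), WithinAbs(pi, tol));
        REQUIRE_THAT(turtlelib::normalize_angle(0.0), WithinAbs(0.0, tol));
        REQUIRE_THAT(turtlelib::normalize_angle(-pi / 4.0), WithinAbs(-pi / 4.0, tol));
        REQUIRE_THAT(turtlelib::normalize_angle(3.0 * pi / 2.0), WithinAbs(-pi / 2.0, tol));
        REQUIRE_THAT(turtlelib::normalize_angle(-5.0 * pi / 2.0), WithinAbs(-pi / 2.0, tol));
    }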

Task B.3 (SE(2) geometry)

  1. Download se2d.hpp, a turtlelib header file that lets you work with SE(2) geometry.
    • This file will be a header that is part of the turtlelib library
    • Implement the specified functionality in a file called se2d.cpp
    • You may add private members to any class but do not add or modify any public members or any function prototypes or otherwise add to or modify the turtlelib or global namespaces.

Hints

  1. It will be helpful to iterate by working on B.4, B.5, and B.6 as you implement the required functionality so you can test it as you go.

Task B.4 (unit testing SE(2))

  1. In test_se2d.cpp, test every function and method in se2d.hpp
  2. You may directly combine your tests with the work of your classmates to achieve this coverage, as long as the following conditions are met:
    • You personally must develop and write at least six test cases.
    • You may only share or receive test cases directly from the person who wrote them.
    • Each test is annotated with the author's name as follows:

      TEST_CASE("inverse", "[transform]") // First Name, Last Name
      
    • The author name and test name are listed on their own line in citations.txt
    • Each TEST_CASE can have multiple assertions (e.g. CHECK, REQUIRE) but only one author
  3. We will use the Catch2 v3 framework, which is not shipped with Ubuntu 22.04 (see Task B.2 for installation instructions)

Task B.5 (visualization)

It can be difficult to know if your geometry-related code is working without being able to visualize it. Unfortunately, there is no generally agreed-upon plotting library for C++. Therefore, you will make your own visualizations by outputting the content to an SVG file (a vector-graphics file format). Although the SVG specification is complicated, the subset that we need is small and can be output by a simple C++ program.

Examine this example svg file in a text editor to understand how SVG (for our purposes) works and what content you will need to produce. I have commented the file with XML comments (<!-- -->): these do not need to be included in your version.

  1. Create a new header file svg.hpp and implementation file svg.cpp in turtlelib.
  2. Design and implement a class called Svg in the turtlelib namespace that lets you "Draw" points, vectors, and coordinate frames (as specified in the example)
  3. You will need some provision to write the SVG to a file, or get the file contents as a string.
  4. Write at least one unit test for the SVG library in a file called test_svg.cpp. Here's how:
    1. Get the Svg class working and verify it manually
    2. Use the output from your manually verified example in a test case
    3. This way, if you ever make a change that breaks the example, the test will fail.
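
One possible shape for the Svg class is sketched below. None of these names, signatures, or SVG details are prescribed by the assignment; adapt them to match the provided example SVG file and add drawing functions for vectors and coordinate frames.

    #include <fstream>
    #include <sstream>
    #include <string>

    namespace turtlelib
    {
        // Illustrative interface only
        class Svg
        {
        public:
            // Draw a point as a small filled circle at pixel coordinates (cx, cy)
            void draw_point(double cx, double cy, const std::string & color)
            {
                elements_ << "<circle cx=\"" << cx << "\" cy=\"" << cy
                          << "\" r=\"3\" stroke=\"" << color
                          << "\" fill=\"" << color << "\" stroke-width=\"1\"/>\n";
            }

            // Return the accumulated document as a string (useful for unit tests)
            std::string to_string() const
            {
                return "<svg width=\"8.5in\" height=\"11in\" viewBox=\"0 0 816 1056\" "
                       "xmlns=\"http://www.w3.org/2000/svg\">\n" + elements_.str() + "</svg>\n";
            }

            // Write the document to a file
            void write(const std::string & filename) const
            {
                std::ofstream out(filename);
                out << to_string();
            }

        private:
            std::ostringstream elements_;  // accumulated <circle>, <line>, and <g> elements
        };
    }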

Hint

  1. Modify some values of the sample svg by hand and view the image to get a feel for what they do.
  2. You may wish to maintain a separate "testing" program to help.
  3. You will need a way to convert between turtlelib coordinate frames and the SVG ViewBox coordinate frame
  4. The location of a point can be determined by a Transform2D relative to the midpoint of the page
  5. Drawing a vector depends on not just the vector's length and direction, but also the position of the vector's tail.

Task B.6 (executable implementation)

  1. Create a file called frame_main.cpp, which will compile into an executable called frame_main. Here is what the program should do:
    1. Prompt the user to enter two transforms: \(T_{ab}\) and \(T_{bc}\).
    2. Compute and output \(T_{ab}\), \(T_{ba}\), \(T_{bc}\), \(T_{cb}\), \(T_{ac}\), and \(T_{ca}\) and draw each frame in the svg file (with frame A located at (0, 0)).
    3. Prompt the user to enter a point \(p_a\) in Frame {a}
    4. Compute \(p_a\)'s location in frames \({b}\) and \({c}\) and output the locations of all 3 points
      • Use purple to draw \(p_a\), brown to draw \(p_b\), and orange to draw \(p_c\).
    5. Prompt the user to enter a vector \(v_b\) in frame \({b}\)
      • Normalize the vector to form \(\hat{v}_b\).
      • Draw \(\hat{v}_b\) with the tail located at \((0, 0)\) in frame \({b}\), in brown.
      • Draw \(v_b\) with tail located at \((0,0)\) in frame \({b}\), in brown.
    6. Output \(v_b\) expressed in frame \({a}\) and frame \({c}\) coordinates
      • Draw \(v_a\) with the tail at \((0, 0)\) in frame \({a}\), in purple.
      • Draw \(v_c\) with the tail at \((0, 0)\) in frame \({c}\), in orange.
    7. Output the drawing to /tmp/frames.svg.
  2. An example transcript from a running program is presented below.

    • Lines that start with > are entered by the user, but the > symbol is not actually printed to the screen
    • It is important that your output format matches the format below precisely, as this output will be read by a computer program
    Enter transform T_{a,b}:
    >90 0 1
    Enter transform T_{b,c}:
    >90 1 0
    T_{a,b}: deg: 90 x: 0 y: 1
    T_{b,a}: deg: -90 x: -1 y: -6.12323e-17
    T_{b,c}: deg: 90 x: 1 y: 0
    T_{c,b}: deg: -90 x: -6.12323e-17 y: 1
    T_{a,c}: deg: 180 x: 6.12323e-17 y: 2
    T_{c,a}: deg: -180 x: -1.83697e-16 y: 2
    Enter point p_a:
    >1 1
    p_a: [1 1]
    p_b: [0 -1]
    p_c: [-1 1]
    Enter vector v_b:
    >1 1
    v_bhat: [0.707107 0.707107]
    v_a: [-1 1]
    v_b: [1 1]
    v_c: [1 -1]
    Enter twist V_b:
    >1 1 1
    V_a: [1 0 1]
    V_b: [1 1 1]
    V_c: [1 2 -1]
    
  3. Run your program using numbers that differ from the ones in my example.
    • Create a transcript of the input and save it as turtlelib/exercises/B6_frame_input.txt
      • Running frame_main < B6_frame_input.txt should result in your program running as if you entered each line in B6_frame_input.txt via the keyboard
    • Save and commit the output to turtlelib/exercises/B6_frame_output.txt
      • After creating B6_frame_input.txt you can create this file with frame_main < B6_frame_input.txt > B6_frame_output.txt
    • Save and commit /tmp/frames.svg to turtlelib/exercises/B6_frames.svg
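
A sketch of how the transform portion of frame_main might begin is shown below. It assumes that Transform2D (from se2d.hpp) is default-constructible and provides inv(), operator*=, and stream operators matching the transcript format above; the remaining prompts for points, vectors, and twists follow the same pattern.

    #include <iostream>
    #include "turtlelib/se2d.hpp"  // assumed include path

    int main()
    {
        turtlelib::Transform2D Tab, Tbc;
        std::cout << "Enter transform T_{a,b}:" << std::endl;
        std::cin >> Tab;
        std::cout << "Enter transform T_{b,c}:" << std::endl;
        std::cin >> Tbc;

        turtlelib::Transform2D Tac{Tab};
        Tac *= Tbc;  // compose a->b with b->c to get a->c

        std::cout << "T_{a,b}: " << Tab << "\n";
        std::cout << "T_{b,a}: " << Tab.inv() << "\n";
        std::cout << "T_{b,c}: " << Tbc << "\n";
        std::cout << "T_{c,b}: " << Tbc.inv() << "\n";
        std::cout << "T_{a,c}: " << Tac << "\n";
        std::cout << "T_{c,a}: " << Tac.inv() << "\n";

        // Remaining steps: read p_a, v_b, and V_b, transform them into the other
        // frames, print them, and draw everything with the Svg class from Task B.5.
        return 0;
    }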

Hint

  1. Think about how the transforms should work and what should be displayed, then open the image in inkscape.
  2. Sometimes, objects will be drawn on top of each other (e.g., points show up in the same location regardless of what frame they are expressed in)
    • In inkscape you can select objects and move them around to see what is stacked.

Task B.7 (conceptual questions)

README.md Template

Answer the questions below in the template for turtlelib/README.md. Copy and paste into your own README.md and then answer the questions inline.

There are also other sections of the README.md that you should fill in as you complete the assignment

# Turtlelib Library
A library for handling transformations in SE(2) and other turtlebot-related math.

# Components
- geometry2d - Handles 2D geometry primitives
- se2d - Handles 2D rigid body transformations
- frame_main - Perform some rigid body computations based on user input

# Conceptual Questions
1. If you needed to be able to `normalize` Vector2D objects (i.e., find the unit vector in the direction of a given Vector2D):
   - Propose three different designs for implementing the `normalize` functionality

   - Discuss the pros and cons of each proposed method, in light of the C++ Core Guidelines.

   - Which of the methods would you implement and why?

2. What is the difference between a class and a struct in C++?


3. Why is Vector2D a struct and Transform2D a class (refer to at least 2 specific C++ core guidelines in your answer)?


4. Why are some of the constructors in Transform2D explicit (refer to a specific C++ core guideline in your answer)?


5. Why is Transform2D::inv() declared const while Transform2D::operator*=() is not?
   - Refer to [C++ Core Guidelines (Constants and Immutability)](https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines#con-constants-and-immutability) in your answer

Normalize Vector

  • Implement the normalize vector functionality (in the geometry2d module) using the method you decided upon in the conceptual questions.
  • Include at least one unit test for the normalize functionality.

Task C (The Simulator)

We will now create a package that can be used as a simulator and visualizer.

This node, called nusim, will provide a simulated robot environment that we will build upon throughout the course. The simulator node uses rviz2 for visualization. To start, it will be capable of creating some stationary walls and tracking the position of a robot.

The basic overall structure of the simulation is as follows:

  1. Initialization
  2. Loop at a fixed frequency until terminated.
  3. On each loop iteration:
    • Update the state of the world (integrate time forward by some timestep)
    • Publish messages that provide state information as if it were coming from a real robot, and update the rviz visualization
    • Process service/subscriber callbacks to get commands for the next time step

We will separate information that can only be known/done by the simulation (such as teleporting the robot or exact distance measurements) from information that can be known/done by the robot (such as driving forward or noisy distance measurements) using ROS namespaces.

Any topics/services/parameters relating directly to the domain of the simulation will live in the nusim namespace and should not be accessed by nodes that are not in that namespace.

It may be useful to start writing the launchfile (Task C.6) prior to finishing some of the other tasks so that you may easily run your code as you work on it.

Task C.1 (nusim package)

  1. Create an ament_cmake ROS package called nusim.
    • This should be a directory within your repository (i.e., <reponame>/nusim)
    • Update the package.xml as follows
      • Give it a version number of 0.1.2
      • Provide a description other than the default.
      • Fill in your name and email address as the maintainer and as an author
      • Choose a license other than TODO
      • The package has a depend on rclcpp
      • Fill out the other dependencies properly.
  2. The package must pass colcon test with no warnings or problems, including the default tests that are created when running ros2 pkg create

Task C.2 (simulation node)

  1. Create the node nusim, the main simulation node, and implement it in src/nusim.cpp
  2. It should run a main loop at a frequency of rate (rate is a parameter to the node)
    • If rate is not specified, default to 200 Hz
  3. In the main timer callback, publish a std_msgs/msg/UInt64 message on the ~/timestep topic that tracks the current timestep of the simulation
    • Each time the main timer executes, another timestep of the simulation occurs.
  4. Implement a ~/reset service that restores the initial state of the simulation.
    • For now, the only state is the ~/timestep value, which should be reset to zero.
    • As more functionality is added to the simulation, the ~/reset service will need to do more.
  5. Note the ~ before the topic names. This symbol makes the topic "private" to the node, meaning that if the node is named, for example, nusim then ~/timestep will resolve to nusim/timestep
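
A minimal sketch of this structure is shown below. The service type for ~/reset is not specified by the assignment (std_srvs/srv/Empty is assumed here), and the class layout is illustrative.

    #include <chrono>
    #include <cstdint>
    #include <memory>
    #include "rclcpp/rclcpp.hpp"
    #include "std_msgs/msg/u_int64.hpp"
    #include "std_srvs/srv/empty.hpp"

    class Nusim : public rclcpp::Node
    {
    public:
      Nusim()
      : Node("nusim")
      {
        declare_parameter("rate", 200.0);  // simulation frequency, in Hz
        const double rate = get_parameter("rate").as_double();

        timestep_pub_ = create_publisher<std_msgs::msg::UInt64>("~/timestep", 10);

        reset_srv_ = create_service<std_srvs::srv::Empty>(
          "~/reset",
          [this](std_srvs::srv::Empty::Request::SharedPtr,
                 std_srvs::srv::Empty::Response::SharedPtr) {
            timestep_ = 0;  // for now, the only state to restore
          });

        timer_ = create_wall_timer(
          std::chrono::duration<double>(1.0 / rate),
          [this]() {
            std_msgs::msg::UInt64 msg;
            msg.data = timestep_++;  // another timestep each iteration
            timestep_pub_->publish(msg);
          });
      }

    private:
      uint64_t timestep_ = 0;
      rclcpp::Publisher<std_msgs::msg::UInt64>::SharedPtr timestep_pub_;
      rclcpp::Service<std_srvs::srv::Empty>::SharedPtr reset_srv_;
      rclcpp::TimerBase::SharedPtr timer_;
    };

    int main(int argc, char ** argv)
    {
      rclcpp::init(argc, argv);
      rclcpp::spin(std::make_shared<Nusim>());
      rclcpp::shutdown();
      return 0;
    }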

Task C.3 (simulated turtle)

We will next add the turtlebot3 robot to the simulation. The simulation must track, control, and publish information about this robot.

  1. The actual (ground truth) state of the simulated turtlebot will be represented by the red turtlebot from Task A.
    • The ground truth is known only by the simulator
  2. The nusim should broadcast a transform between the nusim/world frame and red/base_footprint frame.
    • This transform represents the actual pose of the robot.
    • Your control algorithms are never allowed to look up frames starting with nusim, as that is ground-truth data known only to the simulator.
  3. The nusim should offer a ~/teleport service that enables moving the robot to a desired \((x, y, \theta)\) pose.
    • Create a custom service type called Teleport.srv and give it 3 double-precision floating point values: x, y, theta
    • Calling this service is the simulated equivalent of picking the robot up and moving it somewhere.
  4. The initial pose of the robot should be specified by the parameters x0, y0, and theta0 provided to the nusim node
    • When the nusim starts, the robot should be at the position specified by these parameters relative to the nusim/world frame
    • These values default to 0 if not specified
  5. When the ~/reset service is called, the robot should be restored to its initial location.
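
As a rough sketch of item 2, the transform can be sent from the main timer using a tf2_ros::TransformBroadcaster. The free-function form below is illustrative; in nusim this logic would typically live in the timer callback, with the pose stored in member variables.

    #include "geometry_msgs/msg/transform_stamped.hpp"
    #include "rclcpp/rclcpp.hpp"
    #include "tf2/LinearMath/Quaternion.h"
    #include "tf2_ros/transform_broadcaster.h"

    // Broadcast the ground-truth pose of the red robot in the nusim/world frame
    void broadcast_ground_truth(
      tf2_ros::TransformBroadcaster & broadcaster,
      const rclcpp::Time & stamp,
      double x, double y, double theta)
    {
      geometry_msgs::msg::TransformStamped t;
      t.header.stamp = stamp;
      t.header.frame_id = "nusim/world";
      t.child_frame_id = "red/base_footprint";
      t.transform.translation.x = x;
      t.transform.translation.y = y;

      tf2::Quaternion q;
      q.setRPY(0.0, 0.0, theta);  // planar robot: rotation about z only
      t.transform.rotation.x = q.x();
      t.transform.rotation.y = q.y();
      t.transform.rotation.z = q.z();
      t.transform.rotation.w = q.w();

      broadcaster.sendTransform(t);
    }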

Task C.4 (walls)

  1. The arena where the robot drives will be rectangular, with walls on the boundary.
  2. Allow the user to specify the size of the arena using parameters:
    • arena_x_length is the length of the arena in the world \(x\) direction
    • arena_y_length is the length of the arena in the world \(y\) direction
    • The arena is centered at \((0,0)\)
    • The walls are 0.25m tall
    • You can use whatever thickness you would like, but remember that the arena is sized in terms of the free space inside.
  3. The walls should be red to signify that their location is known only to the simulator.
  4. Publish the walls as a visualization_msgs/MarkerArray message on the ~/walls topic once when the simulator starts.
    • Use the appropriate QoS settings so that the walls will show up even if rviz subscribes after they are published.
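
A sketch of one wall marker and of the publisher setup is shown below. The field values are illustrative; a transient_local (latched) QoS is one way to satisfy the "appropriate QoS settings" requirement so that rviz receives the single MarkerArray even if it subscribes after startup.

    #include "rclcpp/rclcpp.hpp"
    #include "visualization_msgs/msg/marker.hpp"
    #include "visualization_msgs/msg/marker_array.hpp"

    // Build one wall as a cube marker (compute the pose and scale from
    // arena_x_length, arena_y_length, and your chosen wall thickness)
    visualization_msgs::msg::Marker make_wall(int id, double x, double y, double len_x, double len_y)
    {
      visualization_msgs::msg::Marker m;
      m.header.frame_id = "nusim/world";
      m.id = id;
      m.type = visualization_msgs::msg::Marker::CUBE;
      m.action = visualization_msgs::msg::Marker::ADD;
      m.pose.position.x = x;
      m.pose.position.y = y;
      m.pose.position.z = 0.125;  // half of the 0.25 m wall height
      m.scale.x = len_x;
      m.scale.y = len_y;
      m.scale.z = 0.25;
      m.color.r = 1.0;            // red: location known only to the simulator
      m.color.a = 1.0;
      return m;
    }

    // Inside the node (illustrative):
    //   auto qos = rclcpp::QoS(rclcpp::KeepLast(1)).transient_local();
    //   walls_pub_ = create_publisher<visualization_msgs::msg::MarkerArray>("~/walls", qos);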

Task C.5 (cylindrical obstacles)

  1. Add the ability to add cylindrical obstacles to the environment
  2. The cylinders should be 0.25m tall, but can have a variable radius (as specified by the user)
  3. The cylinders should be red (locations of the cylinders are ground-truth).
  4. All obstacles will be specified as parameters to the nusim node:
    • obstacles/x is a list of the obstacles' x coordinates (float64)
    • obstacles/y is a list of the obstacles' y coordinates (float64)
    • obstacles/x and obstacles/y should always be the same length or the node should log an error message and exit.
    • obstacles/r is the radius of the obstacles (float64). We will assume that all obstacles are the same radius.
  5. You should be able to specify an arbitrary number of obstacles.
  6. The nusim should publish a visualization_msgs/MarkerArray message on the ~/obstacles topic once on startup
    • Use the appropriate QoS settings so that the obstacles will show up even if rviz subscribes after they are published.
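
The length check in item 4 might look like the sketch below. It is written as an illustrative free function; in nusim this would normally happen in the node's constructor, and how you exit on error is a design choice.

    #include <cstdlib>
    #include <vector>
    #include "rclcpp/rclcpp.hpp"

    // Declare and validate the obstacle parameters for the given node
    void check_obstacles(rclcpp::Node & node)
    {
      node.declare_parameter("obstacles/x", std::vector<double>{});
      node.declare_parameter("obstacles/y", std::vector<double>{});
      node.declare_parameter("obstacles/r", 0.038);

      const auto x = node.get_parameter("obstacles/x").as_double_array();
      const auto y = node.get_parameter("obstacles/y").as_double_array();

      if (x.size() != y.size()) {
        RCLCPP_ERROR(node.get_logger(), "obstacles/x and obstacles/y must have the same length");
        std::exit(EXIT_FAILURE);  // or throw / rclcpp::shutdown(), depending on your design
      }
    }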

Task C.6 (nusim launch)

  1. Write an xml launchfile called nusim.launch.xml that starts the simulator
    • The launchfile should start rviz, nusim, and load all parameters required to run the simulation
    • The configuration file for rviz should be stored in config/nusim.rviz
    • When launched, the robot, obstacles, and nusim/world tf frame should be visible
    • The launchfile should include other launchfiles you've written as needed
  2. The default parameters to run the simulation should be stored in config/basic_world.yaml
    • The basic_world should have three cylindrical obstacles of radius 0.038m, placed at (-0.5, -0.7), (0.8, -0.8), and (0.4, 0.8).
    • Start the robot at (-0.5, 0.7, 1.28)
    • We will add to this file as more parameters are required
  3. The launchfile should take an argument called config_file.
    • This argument should let a user specify a .yaml file to configure the simulator. If blank, use the default config/basic_world.yaml configuration file.

Task C.7 (README.md)

Create a README.md for this package. It should provide

  1. A brief description of the package.
  2. Descriptions of the provided launchfiles
  3. A description of the parameters that can be used to change simulator settings.
  4. Include a screenshot from rviz as launched from nusim.launch.xml
    • Store the image in nusim/images/nusim1.png

Resources

Author: Matthew Elwin