Final Project (Fall 2017)
Competition is Thursday, December 7 from 3-5 PM
Final submission due Friday, December 8 at 11:59 PM
1 Introduction
For the final project in this year's course, we are going to split up into 6 teams of 5 students and have a Baxter/Sawyer manipulation competition. The guidelines for what you must accomplish are intentionally only loosely specified. Judges will evaluate creativity, complexity, and reliability.
2 Specifications
2.1 Requirements
Each team will be required to design and build a demonstration using either a Baxter or Sawyer robot. You are given quite a lot of freedom in choosing the specifications of your specific demonstration, but the demonstration must have the following elements:
- Part Manipulation
- Every demonstration should include using the robot's arms and grippers to physically manipulate an object or multiple objects. This may be either the primary purpose of the demonstration, or it may be only a small piece.
- Sensing
- The demonstrations must incorporate at least one sensor, and the output of the sensor readings must influence the behavior of the robot (i.e. the sensor must be used in some sort of feedback loop). This sensor could be internal to the robot (cameras, accelerometers, joint angles, IR range sensors, torque sensors, etc.), or you could introduce an external sensor (Kinect, camera, etc.).
- ROS Package
- Every team will be responsible for at least one ROS package (could also be a metapackage or a loosely coordinated set of packages). This package should conform to ROS standards and should include multiple custom nodes, launch files, topics, and services.
- GitHub Documentation
- All teams will put their project on GitHub, and the project must include thorough documentation, either as a single README or as a collection of documents linked from a README. All documentation should be written either in a markup language that GitHub can render as a README (Markdown, Org mode) or directly in HTML (if, for example, you use GitHub Pages to automatically generate HTML from Markdown using Jekyll).
- Videos
- Each team must produce a video showing their demonstration working, and this video must be posted online (Vimeo, YouTube, etc.) and linked from their documentation. If you'd like me to upload your video to MSR's Vimeo Pro account, I'm happy to help with that.
2.2 Competition logistics and timing
Each team will be given twenty total minutes to present their demonstration. The presentation should include a high-level description of their specific project (you don't need a PowerPoint; it could just be verbal), a live demonstration, and at least 5 minutes for questions. The "on-deck" team will be able to set up their demonstration on a different robot while the previous team is presenting. We'll organize the schedule so that no teams using the same robot are consecutive.
2.3 Environment
Teams may use either of two Baxter robots or Sawyer, and they are free to adjust the relative positioning of Baxter and the surrounding world (tables, parts, sensors, etc.). We have several tables with adjustable heights, and teams should feel free to adjust the height of the tables. If an external sensor is desired, that is fine, and it can be placed anywhere within Baxter's reachable workspace (it does not need to be mounted on Baxter; e.g., you could put a Kinect on a tripod). No permanent modifications to Baxter may be made. If you would like to strap a Kinect to Baxter's head, that is probably okay, but we need to check that it wouldn't interfere with the other teams using the same robot.
We are likely only going to use the grippers provided with the Electric Parallel Gripper kits sold by Rethink. The grippers in the kit have a 44 mm throw, and a variety of fingers and fingertips that can be used. Changing to a different finger width or length, or changing the tips on the grippers is easy, and you should feel free to do this. You could also fabricate custom fingertips or even a replacement to the grippers (removing the gripper is easy), but you should not expect to be able to develop custom actuated grippers in the short amount of time we have. If any teams would like to explore this option, let me know and I can get you the specifications of the grippers. It is perfectly acceptable to completely remove the gripper for your demo (the fingers of the gripper are in the hand camera view – this could be a good reason for removing the grippers).
3 Inspiration
Below are some ideas that you could possibly incorporate into your projects to serve as inspiration.
- GUI
- It would be great to build a nice GUI to interact with your project. This GUI could allow changing of settings, control of the system state, or user interactivity. This could be done with something like rqt or interactive markers in rviz, or you could write a custom GUI using something like Tkinter or PyQt.
- Web Interface
- You could use something like Robot Web Tools and rosbridge_suite to build a web-based interface for communicating with your Baxter. You could even set this up to work over the internet (not just local network).
- Simulation
- You could base some of the technical content of the project largely around building a complex simulation environment in Gazebo or V-REP. You could even have the outputs of the simulation feeding into the decisions that the real Baxter makes.
- IK
- You could solve IK by implementing your own algorithms or by adapting the Modern Robotics library (a minimal sketch appears after this list). An interesting piece of a project would be to compare the IK solutions from pre-configured IK tools like IKFast, Baxter's IK service, trac_ik, bio_ik, and KDL.
- Force/Torque Control/Sensing
- You could do some very interesting things using the robot's torque control modes, or you could use the estimate of end-effector force to try and control the arm to apply specific forces to an object or to estimate the inertia of an object. This could be used to ensure the arm is compliant in one or more directions, to keep pressure on some object constant (e.g. drawing on a surface), or to implement advanced object placing strategies (e.g. when inserting a peg into a hole, you could measure the forces applied to the peg). A minimal force-monitoring sketch appears after this list.
- Responsive Grasping
- You could try and estimate whether grasps have been successful by utilizing forces measured by the grippers.
- Joint Velocity Control
- You could have the arm stabilize to target EE poses or trajectories by writing a joint velocity controller (note that Chapter 11 of Modern Robotics covers this; a minimal sketch appears after this list).
- Visual Servoing
- You could implement a controller that continuously tracks a target object and adjusts the control as the estimate of the object's relative pose evolves. This is different from simply measuring where an object is, planning a path to get there, and then blindly executing that path.
- Automatic Calibration
- You could implement a routine that allows the robot to automatically calibrate to its environment. For example, imagine that you have an external camera, and that you are relying on knowing the pose of the camera relative to a fixed frame on the robot. Your calibration could involve taking pictures of a tag with the external camera and one of the robot's cameras. Then, since both cameras have an estimate of the tag location in their respective camera frames, you could compute the pose from the robot to the camera (the transform arithmetic is sketched after this list). This idea could also be used for calibrating the pose from the robot to a "work cell" (e.g. a table that the robot will be working with).
- Face Communication
- You could use the face display to animate/illustrate/indicate how your demo is functioning. For example, you could display the output of motion planning schemes or image processing algorithms directly on the face, or you could use the display to print out the status/operation mode of your demo.
- Button Interfaces
- You could use the robot's built-in arm buttons for implementing some sort of interactivity.
- Voice Command
- Tools such as pocketsphinx or gspeech could be used to implement voice-based control (would likely want an external, directional microphone).
- Face Recognition
- You could recognize/track users with tools such as face_recognition, face_detector, or OpenCV's face recognition module.
- Dynamic Reconfigure
- You could use dynamic_reconfigure to create parameters that are easy to modify during a running demo.
- Other Robots
- If a team is ambitious, it could likely come up with a demo where Baxter/Sawyer is somehow "teaming" with another robot (e.g. a TurtleBot or Jackal).
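As a concrete starting point for the IK item above, here is a minimal sketch of numerical IK with the Modern Robotics library (`pip install modern_robotics`). The screw axes, home configuration, and target pose are the illustrative 3-joint values from the library's own documentation, not Baxter's kinematics; you would substitute your robot's parameters.

```python
import numpy as np
import modern_robotics as mr

# Screw axes (as columns) in the end-effector frame and the home
# configuration M for an illustrative 3-joint arm.
Blist = np.array([[0, 0, -1, 2, 0, 0],
                  [0, 0, 0, 0, 1, 0],
                  [0, 0, 1, 0, 0, 0.1]]).T
M = np.array([[-1, 0, 0, 0],
              [0, 1, 0, 6],
              [0, 0, -1, 2],
              [0, 0, 0, 1]])
# Desired end-effector pose and an initial guess for the iteration.
T_desired = np.array([[0, 1, 0, -5],
                      [1, 0, 0, 4],
                      [0, 0, -1, 1.6858],
                      [0, 0, 0, 1]])
thetalist0 = np.array([1.5, 2.5, 3.0])

# eomg/ev are angular and linear error tolerances; success is False if
# Newton-Raphson fails to converge (unreachable pose or a bad guess).
thetalist, success = mr.IKinBody(Blist, M, T_desired, thetalist0,
                                 eomg=0.01, ev=0.001)
print(thetalist, success)
```

Running the same target poses through IKFast, trac_ik, etc. and comparing solutions (and failure cases) would make a nice quantitative study.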
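For the force/torque item, here is a minimal monitoring sketch using baxter_interface (endpoint_effort() is the real accessor; the 5 N contact threshold is an arbitrary illustration):

```python
import rospy
import baxter_interface

rospy.init_node('force_monitor')
limb = baxter_interface.Limb('left')

rate = rospy.Rate(10)
while not rospy.is_shutdown():
    # endpoint_effort() returns the estimated end-effector wrench as
    # {'force': ..., 'torque': ...}.
    force = limb.endpoint_effort()['force']
    if abs(force.z) > 5.0:  # arbitrary threshold for detecting contact
        rospy.loginfo('Contact detected: f_z = %.2f N', force.z)
    rate.sleep()
```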
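For the joint velocity item, here is a minimal proportional joint-space controller sketch using baxter_interface's set_joint_velocities() call; the all-zero target configuration and the gain are placeholders (Sawyer users would use intera_interface instead):

```python
import rospy
import baxter_interface

rospy.init_node('joint_velocity_demo')
limb = baxter_interface.Limb('left')

target = {name: 0.0 for name in limb.joint_names()}  # placeholder target
Kp = 1.0  # placeholder proportional gain

rate = rospy.Rate(100)
while not rospy.is_shutdown():
    current = limb.joint_angles()
    # Command each joint's velocity proportional to its position error.
    cmd = {name: Kp * (target[name] - current[name]) for name in target}
    limb.set_joint_velocities(cmd)
    rate.sleep()
```

Stabilizing to an EE pose rather than a joint configuration would additionally require mapping the task-space error through the arm Jacobian, as covered in Chapter 11 of Modern Robotics.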
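For the automatic calibration item, the core arithmetic is just composing homogeneous transforms: if both cameras see the same tag, then T_robot_cam = T_robot_tag * inv(T_cam_tag). A minimal numpy sketch with made-up measurements:

```python
import numpy as np

def invert_transform(T):
    """Invert a 4x4 homogeneous transform using the rotation's transpose."""
    R, p = T[:3, :3], T[:3, 3]
    T_inv = np.eye(4)
    T_inv[:3, :3] = R.T
    T_inv[:3, 3] = -R.T.dot(p)
    return T_inv

# Tag pose in the robot's frame (via the robot's camera and forward
# kinematics) and in the external camera's frame; values are made up.
T_robot_tag = np.array([[1., 0., 0., 0.5],
                        [0., 1., 0., 0.1],
                        [0., 0., 1., 0.0],
                        [0., 0., 0., 1.0]])
T_cam_tag = np.array([[0., -1., 0., 0.2],
                      [1., 0., 0., 0.3],
                      [0., 0., 1., 1.0],
                      [0., 0., 0., 1.0]])

# Pose of the external camera in the robot's frame.
T_robot_cam = T_robot_tag.dot(invert_transform(T_cam_tag))
print(T_robot_cam)
```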
4 Deliverables
For the final project, each student will receive four separate grades.
- Each member of a team will receive a proposal grade. These points will be completion points. Details of the proposal will be posted in Canvas.
- All students will receive a group grade. This grade will be assigned based on the effort put forth in the competition and final submission. All members receive the same grade.
- Each student will receive a group assessment grade. You will be asked to assess your own group. This grade will depend on the quality of your assessment.
- Each student will receive individual points for their effort on the final project. Ideally every student will receive full marks for this individual grade. However, if I hear reports of group members not fully contributing, they may not receive full credit.
4.1 Competition
Your team is expected to show up to the competition and try its best during its allotted time. You should also pay attention and cheer on the other teams when you are not setting up for your turn.
4.2 Write-Up
Each team will be expected to submit a single GitHub repository via Canvas. This repository should include well-written documentation about how to run your demos, what the important nodes and topics are, high-level conceptual overviews, etc. I will be looking at how well you used Git as a team, how well your package (or meta-package) conforms to ROS standards, and the quality of your Python code. For the write-up and the competition portion of the final project, each group member will be receiving the same grade. Be sure your demonstration and write-up conform to all requirements presented above.
4.3 Team Assessment
In addition to the single group write-up, each team member will individually be responsible for submitting a group assessment via Canvas. Each assessment will be kept completely confidential, and I will use the assessments to help determine how to allocate individual points.
5 Resources and Considerations
- Tags
- There are a variety of tag tracking packages in ROS. These packages are usually designed to provide a pose estimate from a camera frame to the real-world location of some sort of visual tag (AR tags, QR codes, etc.). Some of the packages can also read metadata off of the tags. These packages may be a great way to calibrate your system, identify graspable objects, etc. They are also, generally, pretty easy to use. Typically all you need is a calibrated camera, some topic remapping, and a few configuration files (a minimal subscriber sketch appears at the end of this section).
- Mini Project: During this course in 2014, two students did a fantastic mini project where they provided tutorials on how to use several different tag tracking packages, and they compared and contrasted the pros and cons of each.
- Project from 2014: This project used a couple of different tag tracking packages with Baxter. It may be a good resource for seeing how to use ar_track_alvar and visp_auto_tracker with these robots.
- Perception
- For many of your demonstrations, you may be interested in using some form of external world perception. Below are several tools that may be helpful for perception.
- OpenCV: This is by far the most widely-used computer vision library. It integrates easily with ROS via the cv_bridge package (a minimal sketch appears at the end of this section). I will provide one or two introductory lectures on this in the coming weeks.
- PCL: The Point Cloud Library "is a standalone, large scale, open project for 2D/3D image and point cloud processing." It is nicely integrated with ROS via the perception_pcl and pcl_ros packages. If you are interested in using a 3D camera such as a Kinect, this is how to do it.
- Camera Calibration: The lenses on cameras tend to distort images. In order to accurately use a camera for perception, it is often desirable to use a calibration procedure to remove these distortions. Additionally, geometric information of the camera is required to determine how to map pixel coordinates of an object to real-world coordinates. Most of the tag tracking tools require calibrated cameras. Camera calibration is easily done in ROS using the camera_calibration package. The YAML files produced by this calibration are then fully compatible with the ROS image_pipeline.
- Motion Planning
- Motion planning for a robot arm in ROS is accomplished almost exclusively with MoveIt!. Many of your demonstrations can likely be accomplished with quite simple motion planning strategies. However, if your team chooses to leverage MoveIt!, you will certainly have significantly enhanced capabilities (at the expense of higher complexity). I will lecture on this in the coming weeks.
- Examples
- Here are a few good pick-and-place Baxter demos that you may find useful. These may be good starting places, but they are far from perfect. I strongly encourage you to use these for inspiration, not solutions. If you use code from any examples online, you must cite your sources.
- Jon Rovira Pick-Place Demo: A few years ago, an undergraduate working with me put together a quick demo that allowed Baxter to find and grasp a red cube on a table (he was using a Rubik's cube, red-side-up). It worked reliably, but was sensitive to the height of the table.
- Visual Servoing Baxter Example: This is an example from Active Robots that is quite similar to what we are trying to do. Do not plan on copying this code and having it work.
- Baxter Pick and Learn: This example is from VUB/ULB universities in Belgium. "It uses a shape recognition algorithm to identify similar shapes and place them to a location previously demonstrated by the operator." I have not studied or used this example, but it could be helpful.
- 2014 Final Projects: There were two interesting final projects in 2014. These may give you some inspiration, but they are not necessarily the best code examples to follow :)
- Baxter part sorting by mass: https://github.com/sherifm/baxter_sort
- Baxter stocking stuffer: https://github.com/ChuChuIgbokwe/ME495-Final-Project-Baxter-Stocking-Stuffer
- 2015 Final Projects: Here are some of the final projects from the part sorting competition in 2015.
- Group 1: GitHub repo
- Group 2: Documentation, GitHub repo, video
- Group 4: GitHub repo, video
- Group 5: GitHub repo, video
- 2016 Final Projects: Here are the final projects from this course last year:
- Group 1: Robots of Catan
- Group 2: Baxter Checkers
- Group 3: Shell Game Baxter Controlled
- Group 4: Table Setting
- Group 5: Shell Game User Controlled
- Group 6: Baxter Barista
- Simulators
- Gazebo: Rethink has released a baxter_simulator package that could be very useful for developing much of your demonstrations. It is not currently on apt-get for Kinetic, but it does build successfully on the kinetic-devel branch. Check out the main documentation page for more. I know the Gazebo simulator for Sawyer is under active development, but as far as I know, it has not yet been released.
- V-REP: If you'd like to investigate using V-REP instead of or in addition to Gazebo, feel free to do so. Check out Jon Rovira's demo for getting started. Both Sawyer and Baxter have similar support.
- State Machines
- A good, ROS-y way of implementing the behaviors that you want is likely through actions (you certainly don't need actions; they are somewhat complicated). ROS provides the SMACH package to quickly build a hierarchical state machine that is implemented via actions (a minimal SMACH sketch appears at the end of this section). Here is a Baxter Demo Manager state machine a student wrote a few years ago using SMACH. You could also use a behavior tree implementation such as behavior_tree or pi_trees.
- Setup and Calibration
- One big concern should be how to quickly guarantee that the environment is configured properly for your code. You could certainly rely on manual calibrations (jigs, tape measures, etc.). Alternatively, you could write calibration routines that allow your system to adjust based on the current environment setup. Obviously the automatic solution is more advanced, but a manual procedure will likely work fine. Be sure that you know how to get calibrated quickly so that you don't waste your time during the competition.
- Trajectory Action Server
- The Baxter/Sawyer software provides a Joint Trajectory Action Server that allows the robot to execute motions through a sequence of joint positions and possibly velocities/accelerations. While this interface is not the easiest to use, I've had much better luck achieving high precision motions when properly using this tool because the robot can do a much better job of accounting for its own internal dynamics if the trajectory is known ahead of time.
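For the Tags item above, here is a minimal sketch of consuming tag poses from ar_track_alvar, assuming its tracker node is already configured and publishing on /ar_pose_marker:

```python
import rospy
from ar_track_alvar_msgs.msg import AlvarMarkers

def callback(msg):
    # Each marker carries an integer id and a stamped pose estimate.
    for marker in msg.markers:
        p = marker.pose.pose.position
        rospy.loginfo('Tag %d at (%.3f, %.3f, %.3f)',
                      marker.id, p.x, p.y, p.z)

rospy.init_node('tag_listener')
rospy.Subscriber('/ar_pose_marker', AlvarMarkers, callback)
rospy.spin()
```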
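For the OpenCV item, here is a minimal cv_bridge sketch that converts images from Baxter's left hand camera into OpenCV arrays; the Canny edge detection is just a stand-in for your own processing:

```python
import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def callback(msg):
    # Convert the ROS Image message into an OpenCV BGR array.
    img = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
    edges = cv2.Canny(img, 50, 150)  # stand-in for your own processing
    cv2.imshow('edges', edges)
    cv2.waitKey(1)

rospy.init_node('camera_viewer')
rospy.Subscriber('/cameras/left_hand_camera/image', Image, callback)
rospy.spin()
```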
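For the State Machines item, here is a minimal SMACH sketch with two hypothetical demo phases; the perception and motion logic are stubbed out, and smach_ros provides the glue for tying states to real ROS actions:

```python
import rospy
import smach

class Locate(smach.State):
    def __init__(self):
        smach.State.__init__(self, outcomes=['found', 'not_found'])
    def execute(self, userdata):
        rospy.loginfo('Looking for the part...')
        return 'found'  # stand-in for real perception logic

class Grasp(smach.State):
    def __init__(self):
        smach.State.__init__(self, outcomes=['done'])
    def execute(self, userdata):
        rospy.loginfo('Grasping the part...')
        return 'done'  # stand-in for real motion logic

rospy.init_node('demo_state_machine')
sm = smach.StateMachine(outcomes=['succeeded', 'aborted'])
with sm:
    smach.StateMachine.add('LOCATE', Locate(),
                           transitions={'found': 'GRASP',
                                        'not_found': 'aborted'})
    smach.StateMachine.add('GRASP', Grasp(),
                           transitions={'done': 'succeeded'})
outcome = sm.execute()
rospy.loginfo('State machine finished: %s', outcome)
```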
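For the Trajectory Action Server item, here is a minimal client sketch (start the server first with `rosrun baxter_interface joint_trajectory_action_server.py`); the two waypoints are placeholders that a real demo would compute from a planner or IK:

```python
import rospy
import actionlib
from control_msgs.msg import (FollowJointTrajectoryAction,
                              FollowJointTrajectoryGoal)
from trajectory_msgs.msg import JointTrajectoryPoint

rospy.init_node('trajectory_client_demo')
client = actionlib.SimpleActionClient(
    'robot/limb/left/follow_joint_trajectory', FollowJointTrajectoryAction)
client.wait_for_server()

goal = FollowJointTrajectoryGoal()
goal.trajectory.joint_names = ['left_s0', 'left_s1', 'left_e0', 'left_e1',
                               'left_w0', 'left_w1', 'left_w2']
# Placeholder waypoints: all joints to 0.2 rad by t=2 s, then -0.2 rad
# by t=4 s.
for t, angle in [(2.0, 0.2), (4.0, -0.2)]:
    point = JointTrajectoryPoint()
    point.positions = [angle] * 7
    point.time_from_start = rospy.Duration(t)
    goal.trajectory.points.append(point)

goal.trajectory.header.stamp = rospy.Time.now()
client.send_goal(goal)
client.wait_for_result()
```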