We would like people to work in teams of two. Alternatives (more or fewer people) will be considered on a case-by-case basis. Please find a teammate (for example, on the class forum) and set up a time before or on Thursday, Feb 2, to meet with Dieter and Peter. Please have (at least) 2 ideas when you come to the meeting.
We have one TurtleBot already, and a second is on the way. We have several independent Kinect cameras, and can get more based on demand (right, Dieter?).
Some ideas follow. None of them is precisely defined, but they should help stimulate thinking about robotics projects in a variety of areas:
SLAM on the TurtleBot
There are existing tools in ROS for running SLAM on the TurtleBot by pretending that a strip of Kinect points is a laser scan. However, the results are not very reliable. Work on improving the SLAM capabilities of the TurtleBot. Some ideas:
Use color information.
Use the full frame of data as opposed to just a strip.
Use visual features.
Use shape features.
Use additional sensors. I am investigating adding a laser scanner to one of the TurtleBots, which may afford a way to create more accurate maps, or to compare Kinect- and laser-based mapping.
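To make the "strip of Kinect points as a laser scan" trick concrete: it is essentially a pinhole-camera conversion from one row of the depth image to bearing/range pairs. Here is a minimal sketch; the focal length fx and principal point cx used below are hypothetical calibration values (the real ones come from the camera's calibration info).

```python
import math

def depth_row_to_scan(depth_row, fx, cx):
    """Convert one row of a depth image (depth in meters along the optical
    axis) into (angle, range) pairs, mimicking a planar laser scan.
    fx, cx: horizontal focal length and principal point in pixels
    (hypothetical values; use the camera's actual calibration)."""
    scan = []
    for u, z in enumerate(depth_row):
        if z <= 0.0:                       # Kinect reports no-return as 0
            continue
        angle = math.atan2(u - cx, fx)     # bearing of this pixel's ray
        rng = z / math.cos(angle)          # slant range along that ray
        scan.append((angle, rng))
    return scan
```

Improving on this (using the full frame, color, etc.) is exactly where the project ideas above come in.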
Localization of the TurtleBot against an existing map
We have existing laser maps of the Allen Center. Implementing localization with the Kinect (with a particle filter, say) in such a map would be interesting.
Use an existing laser map to aid creation of a corresponding 3D Kinect map.
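The particle-filter idea can be sketched in one dimension to show the predict/weight/resample cycle. Everything here is a toy stand-in: `expected_range(x)` replaces ray-casting into a real map of the Allen Center, and the noise model is made up.

```python
import math
import random

def mcl_step(particles, motion, measurement, expected_range, noise=0.2):
    """One predict/weight/resample cycle of Monte Carlo localization on a
    1-D corridor. `particles` are candidate positions; `expected_range(x)`
    returns the range the sensor should see from pose x (a stand-in for
    ray-casting into an actual map)."""
    # 1. Predict: apply the odometry motion, with noise.
    moved = [x + motion + random.gauss(0.0, noise) for x in particles]
    # 2. Weight: how well does each particle explain the measurement?
    weights = []
    for x in moved:
        err = measurement - expected_range(x)
        weights.append(math.exp(-0.5 * (err / noise) ** 2))
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # 3. Resample proportionally to weight.
    return random.choices(moved, weights=weights, k=len(moved))
```

The real project is in the measurement model: scoring a Kinect frame (or pseudo-scan) against the laser map.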
Freehand Kinect mapping
We are among several groups to have done work on full 6-degree-of-freedom 3D mapping: RGB-D Mapping. KinectFusion is another recent, impressive example: project page.
Investigate alternative techniques, using a combination of visual and shape information.
Create an exploration strategy for finding new views of unseen areas of the 3D map.
Investigate reconstruction and rendering techniques for such 3D maps.
Given an existing 3D map, figure out how to do probabilistic localization and filtering.
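At the core of most of these mapping pipelines is frame-to-frame alignment (ICP and its variants). The closed-form inner step is easy to see in 2-D: given matched point pairs, recover the least-squares rotation and translation. This is only a sketch; a full ICP loop would re-match nearest neighbors and iterate, and the 3-D/6-DOF version uses an SVD instead of the scalar angle below.

```python
import math

def align_2d(src, dst):
    """Least-squares rigid alignment of 2-D point sets with known
    correspondences (the inner step of an ICP loop). Returns
    (theta, tx, ty) such that rotating src by theta and translating by
    (tx, ty) best matches dst."""
    n = len(src)
    sx = sum(p[0] for p in src) / n; sy = sum(p[1] for p in src) / n
    dx = sum(p[0] for p in dst) / n; dy = sum(p[1] for p in dst) / n
    # Cross-covariance terms of the centered point sets.
    a = b = 0.0
    for (x, y), (u, v) in zip(src, dst):
        x -= sx; y -= sy; u -= dx; v -= dy
        a += x * u + y * v        # cosine component
        b += x * v - y * u        # sine component
    theta = math.atan2(b, a)
    c, s = math.cos(theta), math.sin(theta)
    # Translation maps the rotated source centroid onto the target centroid.
    tx = dx - (c * sx - s * sy)
    ty = dy - (s * sx + c * sy)
    return theta, tx, ty
```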
Kinect Segmentation (e.g. Fully Connected CRFs)
The Kinect exists in its current form largely because depth information makes segmentation (separating a person or object from the background, say) much easier than it is in color images alone.
There has been some recent work on “Efficient Inference in Fully Connected CRFs with Gaussian Edge Potentials” (Krahenbuhl & Koltun, NIPS 2011) which was applied to traditional 2D images, for which code is publicly available. It would be exciting to apply their technique to RGB-D (Kinect) frames.
One resource could be the labeled dataset of Kinect data available here: NYU Depth Dataset.
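To fix intuition for what the paper speeds up: mean-field inference in a fully connected CRF repeatedly mixes each pixel's label distribution with those of all other pixels, weighted by a Gaussian kernel on features. The brute-force O(N^2) version below is a toy (a handful of "pixels", 1-D features, made-up weights); Krahenbuhl & Koltun's contribution is computing the inner sum in roughly linear time with fast Gaussian filtering, which is what makes full images tractable. Extending the feature vector with depth is the RGB-D angle.

```python
import math

def dense_crf_meanfield(unary, feats, sigma=1.0, w=2.0, iters=5):
    """Brute-force mean-field inference for a tiny fully connected CRF
    with a Gaussian edge kernel and attractive (Potts-style) smoothing.
    unary[i][l] is the log-score of label l at pixel i; feats[i] is that
    pixel's feature vector (e.g. position + color + depth)."""
    n, nl = len(unary), len(unary[0])
    # Pairwise Gaussian kernel between every pair of pixel features.
    k = [[math.exp(-sum((a - b) ** 2 for a, b in zip(feats[i], feats[j]))
                   / (2 * sigma ** 2)) for j in range(n)] for i in range(n)]

    def softmax(v):
        m = max(v)
        e = [math.exp(x - m) for x in v]
        s = sum(e)
        return [x / s for x in e]

    q = [softmax(row) for row in unary]     # initialize from the unaries
    for _ in range(iters):
        new_q = []
        for i in range(n):
            # Each label's score = unary + kernel-weighted votes of others.
            scores = [unary[i][l] + w * sum(k[i][j] * q[j][l]
                      for j in range(n) if j != i) for l in range(nl)]
            new_q.append(softmax(scores))
        q = new_q
    return q
```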
Kinect Object Recognition
There has been work here on object recognition from Kinect data, including a large dataset of many objects: dataset page.
Find ways of improving the accuracy or speed of object recognition on this dataset, or try out alternative techniques.
Investigate object detection in scenes. There is a scenes dataset on the page, or the actual physical objects (which we have) could be placed in novel scenes to test detection algorithms.
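A useful baseline for the "alternative techniques" idea is nearest-neighbor classification over whatever features you extract from the RGB-D frames; the interesting work is in the descriptor, not the classifier. A minimal sketch (the feature vectors here are placeholders, not a proposed representation):

```python
def knn_classify(query, dataset, k=3):
    """Classify a feature vector by majority vote among its k nearest
    labeled examples. `dataset` is a list of (feature_vector, label)
    pairs; features could be any descriptor computed from an RGB-D
    view of an object."""
    dists = sorted((sum((a - b) ** 2 for a, b in zip(query, f)), lbl)
                   for f, lbl in dataset)
    votes = {}
    for _, lbl in dists[:k]:
        votes[lbl] = votes.get(lbl, 0) + 1
    return max(votes, key=votes.get)
```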
Human-Robot Interaction
There are many ways to explore the possibilities of human-robot interaction.
Implement person-following on the TurtleBot.
Using either the Kinect on the TurtleBot, or a second Kinect with a view of the person, enable a person to point at objects or locations for the robot to drive to. This will involve both perception and planning.
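The control side of person-following is simple once perception hands you the person's position; the hard part is the tracking. A toy proportional controller, with made-up gains and limits to be tuned on the real robot, that could feed a velocity command:

```python
import math

def follow_cmd(person_x, person_z, target_dist=1.2,
               k_lin=0.6, k_ang=1.5, max_lin=0.5, max_ang=1.0):
    """Proportional person-following controller. (person_x, person_z) is
    the tracked person's position in the camera frame (x right, z forward,
    meters). Returns (linear, angular) velocities. All gains and limits
    are hypothetical values for illustration."""
    clamp = lambda v, lim: max(-lim, min(lim, v))
    # Drive to hold the target distance; back up if too close.
    linear = clamp(k_lin * (person_z - target_dist), max_lin)
    # Turn toward the person (positive angular = counter-clockwise).
    angular = clamp(-k_ang * math.atan2(person_x, person_z), max_ang)
    return linear, angular
```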
Manipulator Planning (Simulated)
Though we don’t have access to real-world manipulators (arms/hands) for the course, ROS includes some powerful manipulator simulation capabilities.
Implement a planner to create a stack of simulated blocks (without knocking it over!).
Investigate planning in restricted spaces.
Grasp planning and evaluation are an important part of manipulation. Learning and testing grasps would be interesting.
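A common starting point for the planning ideas above is a sampling-based planner such as RRT. The sketch below works in a 2-D configuration space with a black-box collision test; planning for a simulated arm has the same shape, just in joint space with the simulator doing the collision checking. The workspace bounds, step size, and goal bias here are arbitrary choices.

```python
import math
import random

def rrt(start, goal, collides, step=0.5, iters=2000, goal_tol=0.5):
    """Bare-bones RRT in a 2-D configuration space over [0, 10] x [0, 10].
    `collides(p)` tests a configuration. Returns a path from start to
    (near) goal as a list of points, or None if no path was found."""
    nodes = [start]
    parent = {0: None}
    for _ in range(iters):
        # Sample a random configuration, biased toward the goal.
        sample = goal if random.random() < 0.1 else \
                 (random.uniform(0, 10), random.uniform(0, 10))
        # Extend the nearest tree node a small step toward the sample.
        i = min(range(len(nodes)), key=lambda j:
                (nodes[j][0] - sample[0]) ** 2 + (nodes[j][1] - sample[1]) ** 2)
        nx, ny = nodes[i]
        d = math.hypot(sample[0] - nx, sample[1] - ny)
        if d < 1e-9:
            continue
        t = min(1.0, step / d)
        new = (nx + t * (sample[0] - nx), ny + t * (sample[1] - ny))
        if collides(new):
            continue
        parent[len(nodes)] = i
        nodes.append(new)
        if math.hypot(new[0] - goal[0], new[1] - goal[1]) < goal_tol:
            path, j = [], len(nodes) - 1     # walk back to the root
            while j is not None:
                path.append(nodes[j])
                j = parent[j]
            return path[::-1]
    return None
```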
Other cool things to do with a robot
Scribbler: By attaching a pen to a robot, it can be made to write text or reproduce images (on butcher paper, say). Do this!
Obstacle course: Set up and navigate an obstacle course. This would involve a combination of mapping, navigation, and planning.
Touch flags or other objects in some order: Similar to an obstacle course, this will involve a combination of perception and planning.
Multi-robot coordination (Simulated): Investigate multi-robot coordination with limited communication range and sharing sensor data (for example, exploration strategies).
Object tracking and interception: Track a rolling or bouncing ball (or another object). Move the robot to intercept or retrieve it.
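For the interception idea, one simple model is worth writing down: assume the ball moves at constant velocity and the robot drives in a straight line at a fixed speed from its current position; then the earliest meeting point falls out of a quadratic in time. This ignores rolling friction, bounces, and robot acceleration, all of which the real project would have to handle.

```python
import math

def intercept_point(ball_pos, ball_vel, robot_speed):
    """Earliest point where a robot at the origin, moving in a straight
    line at `robot_speed`, can meet a constant-velocity ball. Solves
    |p + v*t| = s*t for the smallest positive t; returns the intercept
    point, or None if the robot can never catch the ball."""
    px, py = ball_pos
    vx, vy = ball_vel
    a = vx * vx + vy * vy - robot_speed ** 2
    b = 2 * (px * vx + py * vy)
    c = px * px + py * py
    if abs(a) < 1e-12:                 # equal speeds: equation is linear
        ts = [-c / b] if abs(b) > 1e-12 else []
    else:
        disc = b * b - 4 * a * c
        if disc < 0:
            return None
        r = math.sqrt(disc)
        ts = [(-b - r) / (2 * a), (-b + r) / (2 * a)]
    ts = [t for t in ts if t > 0]
    if not ts:
        return None
    t = min(ts)
    return (px + vx * t, py + vy * t)
```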