We present a labeled training set, i.e., a set of images of objects labeled with the 2-d location of the grasping point in each image. Collecting real-world data of this sort is cumbersome, and manual labeling is prone to errors. Thus, we instead chose to generate, and learn from, synthetic data that is automatically labeled with the correct grasps.
In detail, we generate synthetic images, together with the corresponding correct grasp labels, using a computer graphics ray tracer, as this produces more realistic images than simpler rendering methods. One advantage of using synthetic images is that once a synthetic model of an object has been created, a large number of training examples can be automatically generated by rendering the object under different (randomly chosen) lighting conditions, camera positions and orientations, etc.
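As a rough illustration of this kind of automatic labeling (this is not the authors' code and does not reflect the actual data format), the sketch below renders one object model under randomly sampled camera poses and lighting, and projects a known 3-D grasp point on the model into each image to obtain the 2-D grasp-point label. The render_view call, the camera intrinsics, and the example grasp point are assumptions made purely for illustration.

import numpy as np

def look_at(camera_pos, target, up=np.array([0.0, 0.0, 1.0])):
    # Build a world-to-camera rotation whose z-axis points from the camera to the target.
    z = target - camera_pos
    z = z / np.linalg.norm(z)
    x = np.cross(z, up)
    x = x / np.linalg.norm(x)
    y = np.cross(z, x)
    # Rows are the camera axes expressed in world coordinates.
    return np.stack([x, y, z])

def project_point(p_world, camera_pos, R, focal, cx, cy):
    # Pinhole projection of a 3-D world point to 2-D pixel coordinates.
    p_cam = R @ (p_world - camera_pos)
    u = focal * p_cam[0] / p_cam[2] + cx
    v = focal * p_cam[1] / p_cam[2] + cy
    return np.array([u, v])

# Hypothetical 3-D grasp point on the object model (e.g., the middle of a mug handle).
grasp_point_3d = np.array([0.05, 0.00, 0.12])

rng = np.random.default_rng(0)
examples = []
for i in range(1000):
    # Randomly sample a camera position on a sphere around the object,
    # plus a random light direction and intensity.
    theta = rng.uniform(0, 2 * np.pi)
    phi = rng.uniform(np.pi / 6, np.pi / 3)
    radius = rng.uniform(0.6, 1.0)
    camera_pos = radius * np.array([np.cos(theta) * np.cos(phi),
                                    np.sin(theta) * np.cos(phi),
                                    np.sin(phi)])
    light = {"direction": rng.normal(size=3), "intensity": rng.uniform(0.5, 1.5)}

    R = look_at(camera_pos, target=np.zeros(3))
    # image = render_view(camera_pos, R, light)   # hypothetical call into a ray tracer
    label_2d = project_point(grasp_point_3d, camera_pos, R, focal=500.0, cx=320.0, cy=240.0)
    examples.append((camera_pos, light, label_2d))  # pair each rendered image with its 2-D label

Because the label is computed from the same geometry that drives the renderer, every generated image comes with a correct 2-D grasp-point label at no manual cost.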
Each dataset below consists of:
- Explanation of the data format: README
- Files to read the data into MATLAB: getCorrectedValuesStanfordData.m, getXYZfromDepth_fast.m
Any report/publication that uses the data above should cite:
Robotic Grasping of Novel Objects,
Ashutosh Saxena, Justin Driemeyer, Justin Kearns, Andrew Y. Ng.
In Neural Information Processing Systems (NIPS) 19, 2006.

Learning to Grasp Novel Objects using Vision,
Ashutosh Saxena, Justin Driemeyer, Justin Kearns, Chioma Osondu, Andrew Y. Ng.
In 10th International Symposium on Experimental Robotics (ISER), 2006.
Note: Use of this data is restricted to research purposes only.
For use of this data for any commercial purpose, please contact the authors.