Robotic Grasping on the Stanford Artificial Intelligence Robot

Abstract

The STanford Artificial Intelligence Robot (STAIR) project aims to construct and develop robots that can work in real-world environments such as homes and offices, where they will interact regularly with people. Working in such places requires manipulation, and one fundamental manipulation skill is grasping; robotic grasping has therefore long been a focus of the STAIR project. Because STAIR must operate in dynamic and unknown environments, no model of the world or of the objects to be grasped is available in advance; the robot must instead perceive everything through its visual sensors. This makes grasping a very difficult problem, because data from visual sensors is noisy and incomplete. A robust grasping system for STAIR that uses only visual input is therefore presented here.
One approach used in the past to solve the grasping problem is to separate it into two components: the first finds feasible robotic arm/hand grasp configurations, and the second selects the best of these configurations as the final grasp to execute. This is known as the comparative approach. We build upon and generalize this approach. For the first component, which we denote “search,” a previously developed image-based classifier is used to find plausible grasping points, and a search strategy is then applied to find feasible grasp configurations. For the second component, which we denote “selection,” we observe that certain properties of a candidate grasp configuration are indicative of good or bad grasps, and that these properties remain consistent across different objects. Moreover, these properties can be computed easily and reliably even when the 3-D point cloud from the visual sensors is incomplete. We therefore present a selection algorithm that computes these properties as features for each candidate grasp configuration and uses a trained logistic classifier to produce a quality score for each candidate; the grasp with the maximum score is then executed. Extensive experiments using this grasping system were performed on STAIR. Reasonable performance was found on all three tasks assessed: grasping single objects of various appearances, shapes, and sizes; grasping objects in cluttered environments; and applying the algorithm on a separate robotic platform to unload items from a dishwasher.
Performance was comparable across these cases, which indicates that the grasping system is effective in its targeted environments and generalizes to other robots.
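
To make the selection step concrete, the following is a minimal Python sketch of the scoring-and-argmax structure described above. It is not the thesis's implementation: the feature names (gripper_opening, support_points, approach_clearance) and the weights w, b are hypothetical placeholders; only the overall shape (compute features, score with a logistic classifier, execute the highest-scoring candidate) follows the abstract.

import numpy as np

def grasp_features(candidate):
    # Geometric properties of a candidate grasp configuration. The abstract
    # notes that such properties are indicative of grasp quality and can be
    # computed reliably from an incomplete point cloud; the three below are
    # hypothetical placeholders, not the thesis's actual feature set.
    return np.array([
        candidate["gripper_opening"],     # hypothetical: hand aperture at the grasp
        candidate["support_points"],      # hypothetical: point-cloud support near the contacts
        candidate["approach_clearance"],  # hypothetical: free space along the approach vector
    ])

def grasp_score(candidate, w, b):
    # Quality score in (0, 1) from a trained logistic classifier:
    # sigmoid(w . features + b).
    z = float(w @ grasp_features(candidate)) + b
    return 1.0 / (1.0 + np.exp(-z))

def select_grasp(candidates, w, b):
    # Return the candidate grasp with the maximum quality score,
    # which is the grasp the robot then executes.
    return max(candidates, key=lambda c: grasp_score(c, w, b))

In the thesis, the classifier parameters w and b would be learned from labeled grasp examples; here they are assumed to be given.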

Description

Type of resource: text
Date created: 2008-05-20

Creators/Contributors

Author: Wong, Lawson L.S.
Advisor: Ng, Andrew Y.
Department: Stanford University. Department of Computer Science.

Subjects

Subject: Robot hands
Subject: Robots > Control systems
Genre: Thesis

Bibliographic information

Access conditions

Use and reproduction
User agrees that, where applicable, content will not be used to identify or to otherwise infringe the privacy or confidentiality rights of individuals. Content distributed via the Stanford Digital Repository may be subject to additional license and use restrictions applied by the depositor.
License
This work is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported license (CC BY-NC 3.0).

Preferred citation

Wong, Lawson L.S. (2008). Robotic Grasping on the Stanford Artificial Intelligence Robot. Stanford Digital Repository. Available at http://purl.stanford.edu/kn730cj8320

Collection

Undergraduate Theses, School of Engineering
