Integrating Vision and Touch for Object Recognition Tasks

Allen, Peter K.

A robotic system for object recognition is described that uses passive stereo vision and active exploratory tactile sensing. The complementary nature of these sensing modalities allows the system to discover the underlying 3-D structure of the objects to be recognized. This structure is embodied in rich, hierarchical, viewpoint-independent 3-D models of the objects, which include curved surfaces, concavities, and holes. The vision processing provides sparse 3-D data about regions of interest that are then actively explored by the tactile sensor mounted on the end of a six-degree-of-freedom manipulator. A robust, hierarchical procedure has been developed to integrate the visual and tactile data into accurate 3-D surface and feature primitives. This integration of vision and touch provides geometric measures of the surfaces and features that are used in a matching phase to find model objects that are consistent with the sensory data. Methods for verification of these hypotheses are presented, including the sensing of visually occluded areas with the tactile sensor. A number of experiments have been performed using real sensors and real, noisy data to demonstrate the utility of these methods and the ability of such a system to recognize objects that would be difficult for a system using vision alone.
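
The abstract describes a sense, integrate, match, and verify control flow. The short Python sketch below is only an illustration of that flow under assumed interfaces; every name in it (recognize, Primitive, vision.sense, touch.explore, touch.probe, models.candidates, models.occluded_areas, integrate) is hypothetical and is not taken from the report itself.

    # Illustrative sketch only; all names are hypothetical, not the report's implementation.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Primitive:
        """An integrated 3-D surface or feature primitive (surface patch, hole, or cavity)."""
        kind: str        # e.g. "surface", "hole", "cavity"
        measures: dict   # geometric measures used in the matching phase

    def recognize(vision, touch, models) -> List[str]:
        """Sense-integrate-match-verify flow outlined in the abstract.

        Assumed interfaces:
          vision.sense()            -> (sparse 3-D points, regions of interest)
          touch.explore(region)     -> tactile trace for one region
          touch.probe(area)         -> contact data for an occluded area, or None
          models.candidates(prims)  -> model names consistent with the primitives
          models.occluded_areas(m)  -> visually occluded areas predicted by model m
        """
        # 1. Passive stereo vision: sparse 3-D data and regions of interest.
        points, regions = vision.sense()

        # 2. Active tactile exploration of each region with the manipulator-mounted sensor.
        traces = [touch.explore(r) for r in regions]

        # 3. Integrate visual and tactile data into surface/feature primitives.
        primitives = integrate(points, traces)

        # 4. Match geometric measures of the primitives against the model database.
        hypotheses = models.candidates(primitives)

        # 5. Verify each hypothesis, e.g. by probing visually occluded areas.
        return [m for m in hypotheses
                if all(touch.probe(a) is not None for a in models.occluded_areas(m))]

    def integrate(points, traces) -> List[Primitive]:
        """Placeholder for the hierarchical vision/touch integration step."""
        return [Primitive(kind="surface", measures={"points": points, "trace": t})
                for t in traces]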

More About This Work

Academic Units: Computer Science
Publisher: Department of Computer Science, Columbia University
Series: Columbia University Computer Science Technical Reports, CUCS-240-86
Published Here: November 2, 2011