Cooperative integration of vision and touch

Allen, Peter K.

Vision and touch have proved to be powerful sensing modalities in humans. In order to build robots capable of complex behavior, analogues of human vision and taction need to be created. In addition, strategies for intelligent use of these sensors in tasks such as object recognition need to be developed. Two overriding principles dictate a good strategy for cooperative use of these sensors: 1) the sensors should complement each other in the kind and quality of data they report, and 2) each sensor system should be used in the most robust manner possible. We demonstrate this with a contour-following algorithm that recovers the shape of surfaces of revolution from sparse tactile sensor data. The absolute location of an object in depth can be found more accurately through touch than through vision, but the global properties of where to actively explore with the hand are better found through vision.
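The paper's own contour-following algorithm is not reproduced in this record, but the core geometric idea it describes can be sketched: if vision supplies a rough axis of revolution, each sparse tactile contact point contributes one sample of the generating curve, namely its height along the axis and its radial distance from it. The function name and the assumption that an axis estimate is already available are illustrative, not from the paper.

```python
import numpy as np

def revolution_profile(points, axis_point, axis_dir):
    """Project sparse contact points onto an assumed axis of revolution.

    Returns (heights, radii): for each contact, its coordinate along the
    axis and its distance from the axis, sorted by height. Together these
    sample the generating curve of the surface of revolution.
    """
    d = np.asarray(axis_dir, dtype=float)
    d = d / np.linalg.norm(d)                      # unit axis direction
    rel = np.asarray(points, dtype=float) - np.asarray(axis_point, dtype=float)
    h = rel @ d                                    # height along the axis
    radial = rel - np.outer(h, d)                  # component perpendicular to axis
    r = np.linalg.norm(radial, axis=1)             # radius at each contact
    order = np.argsort(h)
    return h[order], r[order]
```

For a cylinder of radius 1 about the z-axis, contacts anywhere on the surface all yield radius 1, so even a handful of sparse touches recovers the constant profile; a denser set of touches would sample a varying profile (e.g. a vase) in the same way.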


Also Published In

Sensor fusion II : human and machine strategies : 6-9 November 1989, Philadelphia, Pennsylvania
Society of Photo-optical Instrumentation Engineers

More About This Work

Academic Units
Computer Science
Proceedings of SPIE--The International Society for Optical Engineering, 1198
Published Here
November 8, 2012