Hand-eye coordination for grasping moving objects
- Allen, Peter K.
- Computer Science
- Series: Proceedings of SPIE--The International Society for Optical Engineering
- Book/Journal Title: Sensor Fusion III: 3D Perception and Recognition
- Book Author: Schenker, Paul S.
- Publisher: Society of Photo-optical Instrumentation Engineers
- Publisher Location: Bellingham, Wash.
- Most robotic grasping tasks assume a stationary or fixed object. In this paper, we explore the requirements for grasping a moving object. This task requires proper coordination among at least three separate subsystems: dynamic vision sensing, real-time arm control, and grasp control. As with humans, our system first visually tracks the object's 3-D position. Because the object is in motion, tracking must be done dynamically so that the motion of the robotic arm can be coordinated as it follows the object. The dynamic vision system feeds a real-time arm control algorithm that plans a trajectory. The arm control algorithm is implemented in two steps: 1) filtering and prediction, and 2) kinematic transformation computation. Once the object's trajectory is being tracked, the hand must intercept the object to actually grasp it. We present three different strategies for intercepting the object, along with results from the tracking algorithm.
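The abstract's "filtering and prediction" step can be illustrated with a minimal sketch. This is not the paper's actual implementation; it assumes an alpha-beta filter (a fixed-gain tracker commonly used for this kind of real-time prediction) smoothing noisy 1-D position measurements and predicting the object's position one step ahead, which is what an arm controller would aim for. The function name and gain values are illustrative choices, not taken from the paper.

```python
def alpha_beta_filter(measurements, dt, alpha=0.85, beta=0.3):
    """Return (smoothed positions, one-step-ahead predictions).

    alpha and beta are the fixed position and velocity correction
    gains of a standard alpha-beta tracker (values here are
    illustrative, not from the paper).
    """
    x, v = measurements[0], 0.0       # initial position estimate, velocity
    smoothed, predicted = [], []
    for z in measurements:
        x_pred = x + v * dt           # predict position at this step
        r = z - x_pred                # innovation: measurement residual
        x = x_pred + alpha * r        # correct the position estimate
        v = v + (beta / dt) * r       # correct the velocity estimate
        smoothed.append(x)
        predicted.append(x + v * dt)  # where the arm should aim next
    return smoothed, predicted

if __name__ == "__main__":
    # Object moving at 0.5 m/s, sampled every 0.1 s (noise-free here).
    zs = [0.05 * k for k in range(20)]
    sm, pr = alpha_beta_filter(zs, dt=0.1)
    print(round(pr[-1], 3))  # prediction converges toward the true next position, 1.0
```

For constant-velocity motion the predictions converge to the true next position, so the arm controller lags the object less than it would if it simply chased raw measurements.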
- Suggested Citation:
- Peter K. Allen, Aleksandar Timcenko, Billibon Yoshimi, Paul Michelman, 1991, Hand-eye coordination for grasping moving objects, Columbia University Academic Commons, https://doi.org/10.7916/D88D05KB.