Active Vision for Manipulation

The vast majority of robotic grasping algorithms use a single image of the object. As a result, their performance depends heavily on the camera viewpoint, which greatly limits their success. Active vision lets robots collect information more intelligently by controlling how their sensors move, allowing them to interact quickly and reliably with unknown objects under uncertain conditions. Our work aims to find the most efficient next viewpoint from which to collect data for vision-based grasping. In this project, we designed two algorithms, and implemented several more, for selecting the next best viewpoint, and rigorously tested them on a benchmark object set. We then compared their performance to each other as well as to optimal and random strategies. This allowed us to draw rigorous conclusions about which objects are difficult to grasp and which approaches are most promising for grasping them.
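The core loop of a next-best-view method can be sketched as follows. This is a minimal illustration, not the algorithms from the paper: candidate viewpoints are sampled on a sphere around the object, each is scored by a utility function, and the highest-scoring view is selected. The simple distance-to-visited-views utility below is a hypothetical stand-in for the heuristic and data-driven scores studied in our work.

```python
import math

def candidate_viewpoints(radius=0.5, n_azimuth=8, n_elevation=3):
    """Sample candidate camera positions on a view sphere around the object."""
    views = []
    for i in range(n_azimuth):
        az = 2 * math.pi * i / n_azimuth
        for j in range(1, n_elevation + 1):
            el = (math.pi / 2) * j / (n_elevation + 1)
            views.append((radius * math.cos(el) * math.cos(az),
                          radius * math.cos(el) * math.sin(az),
                          radius * math.sin(el)))
    return views

def next_best_view(views, utility):
    """Greedy selection: return the candidate viewpoint with highest utility."""
    return max(views, key=utility)

views = candidate_viewpoints()
visited = [views[0]]  # viewpoints already observed

def exploration_utility(v):
    # Hypothetical stand-in utility: prefer views far from any visited view.
    # A real method would instead estimate expected information gain or
    # predicted grasp-quality improvement from this viewpoint.
    return min(math.dist(v, u) for u in visited)

nbv = next_best_view(views, exploration_utility)
```

In practice the utility function is where the methods differ: heuristic approaches score views by quantities such as unobserved volume, while data-driven approaches learn the score from grasping outcomes.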

Our related papers on this research:

Aiding Grasp Synthesis for Novel Objects Using Heuristic-Based and Data-Driven Active Vision Methods,
S. Natarajan, G. Brown, B. Calli,
Frontiers in Robotics and AI, 2021.
[Paper] [Video]