This project develops robotic manipulation algorithms and a human-robot collaboration architecture that aim to improve efficiency and profitability in the recycling industry, while re-creating recycling jobs to be safer, cleaner, and more meaningful. The specific goal is to improve sorting: the separation of mixed waste into plastics, paper, metal, glass, and non-recyclables.
Collaborators — Yale University · Boston University · University of Washington
Industry Partner — Casella Waste Systems
Metal cutting operations in metal recycling scrapyards are labor-intensive, difficult, and dangerous: skilled workers cut decommissioned structures with gas torches. In this project, we develop a human-robot collaboration workflow for robotic metal scrap cutting in unstructured scrapyards, combining worker expertise with robot autonomy for greater productivity and safety.
Industry Partner — European Metal Recycling Ltd.
Our lab participates in the Public Interest Technology – University Network (PIT-UN), creating a web portal that presents current environmental robotics research, grant opportunities, industry solutions, and several existing sustainability frameworks. Our goal is to spark interest in and awareness of the potential of robotics technology while developing solutions that consider the problems' social context.
Within-hand manipulation provides significant dexterity and flexibility for robots operating in dynamic, unstructured environments. Our recent work features a 2-DOF robot gripper that uses a simple mechanism to change the effective friction of its finger surfaces. We examine robust, autonomous, vision-based manipulation strategies that leverage the advantages of such switching mechanisms, and we also explore haptic-based recognition using them.
Collaborators — Yale University · Imperial College London
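To illustrate the idea of friction switching, here is a toy sketch (all names and the mode table are hypothetical, not the lab's actual controller): a two-finger gripper with switchable finger friction can pivot an object about one sticking finger while the other slides, slide it along both fingers, or hold it firmly.

```python
from enum import Enum

class Friction(Enum):
    HIGH = "high"   # sticking contact: the finger grips the object
    LOW = "low"     # sliding contact: the object can slip along the finger

def select_modes(motion):
    """Toy friction-mode selector for a 2-finger variable-friction gripper.
    This lookup table is illustrative only; real mode selection would
    depend on contact geometry and the desired in-hand object motion."""
    table = {
        "rotate_cw":  (Friction.HIGH, Friction.LOW),   # pivot about finger 1
        "rotate_ccw": (Friction.LOW, Friction.HIGH),   # pivot about finger 2
        "slide":      (Friction.LOW, Friction.LOW),    # translate within hand
        "hold":       (Friction.HIGH, Friction.HIGH),  # rigid grasp
    }
    return table[motion]
```

The point of the sketch is that a single binary actuator per finger already yields a useful discrete set of in-hand motion primitives.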
This project investigates the control of non-traditional robotic systems that lack proprioceptive configuration sensors, such as soft/continuum robots and inexpensive 3D-printed systems. We develop vision-based control algorithms that operate with limited prior information about the robot model or system parameters and minimal online information from internal configuration measurements.
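One well-known family of model-free approaches fitting this setting is visual servoing with an online Jacobian estimate. The sketch below (our own minimal illustration, not the project's actual algorithm; all function names are hypothetical) drives observed image features to a target using a Broyden rank-1 update in place of a known robot model, with a toy linear "camera" standing in for real feature measurements.

```python
import numpy as np

def broyden_visual_servo(measure, q0, target, steps=200, alpha=0.1, gain=0.5):
    """Model-free visual servoing: estimate the feature Jacobian online
    with a Broyden rank-1 update instead of relying on a robot model."""
    q = q0.copy()
    f = measure(q)
    J = np.eye(len(f), len(q))           # crude initial Jacobian guess
    for _ in range(steps):
        err = target - f
        dq = gain * np.linalg.pinv(J) @ err   # step toward target features
        q = q + dq
        f_new = measure(q)
        df = f_new - f
        denom = dq @ dq
        if denom > 1e-12:
            # Broyden update: correct J along the direction just explored
            J = J + alpha * np.outer(df - J @ dq, dq) / denom
        f = f_new
    return q, f

# toy "camera": features are an unknown linear map of the joint angles
A = np.array([[1.5, 0.3], [-0.2, 0.9]])
measure = lambda q: A @ q
target = np.array([0.4, -0.3])
q_final, f_final = broyden_visual_servo(measure, np.zeros(2), target)
```

Because the Jacobian is learned from observed motion, the controller never needs the map `A` (the stand-in for kinematics and camera parameters) in closed form.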
Rapid advances in robotic manipulation require tools for systematically assessing the manipulation performance of given systems and drawing meaningful comparisons. We present benchmarking tools and datasets such as the YCB Object and Model Set, the ZeroWaste dataset, and the Household Cloth Object Set, along with benchmarking protocols such as the adapted Box and Blocks Test and RB2. Our lab has also contributed to organizing the Robotic Grasping and Manipulation Challenge.
Collaborators — Yale University · Carnegie Mellon University · UC Berkeley
This project develops an ensemble learning methodology that combines multiple existing robotic grasp synthesis algorithms to obtain success rates significantly higher than those of the individual algorithms. The methodology treats the grasping algorithms as “experts” that provide grasp “opinions”. An ensemble convolutional neural network, trained as a Mixture of Experts model, integrates these opinions to determine the final grasping decision. The architecture is evaluated with open-source algorithms (e.g., GQCNN 4.0, GGCNN, a custom GGCNN variant, …) on publicly available grasping datasets (e.g., the Cornell Dataset, the Jacquard Dataset, …), and the approach is also tested on a real robot (a Franka Emika Panda arm).
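The blending step can be sketched in a few lines. This toy example (our illustration only; in the project the gating is a trained CNN, not fixed logits, and all names here are hypothetical) weights per-pixel grasp-quality maps from two disagreeing "experts" and picks the pixel with the highest blended quality.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def ensemble_grasp(quality_maps, gate_logits):
    """Mixture-of-experts blending: weight each expert's grasp-quality
    map by a gating weight, then grasp at the blended maximum."""
    w = softmax(gate_logits)                          # one weight per expert
    blended = sum(wi * q for wi, q in zip(w, quality_maps))
    idx = np.unravel_index(np.argmax(blended), blended.shape)
    return tuple(int(i) for i in idx), blended

# two toy 4x4 expert maps that disagree on the best grasp pixel
q1 = np.zeros((4, 4)); q1[1, 2] = 0.9
q2 = np.zeros((4, 4)); q2[3, 0] = 0.8
best, blended = ensemble_grasp([q1, q2], gate_logits=np.array([2.0, 0.0]))
```

Here the gating favors the first expert, so the blended maximum follows its vote; a learned gate would instead condition those logits on the input image.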
Active vision allows robots to collect information more intelligently by controlling how their sensors move. Our work uses active vision to find the most efficient ways to collect data for vision-based grasping, providing grasp planning algorithms with better, more suitable data.
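A common baseline for this kind of viewpoint selection is a greedy next-best-view policy. The sketch below (a minimal illustration under our own assumptions, not the lab's actual method) scores each candidate viewpoint by the occupancy uncertainty, measured as binary entropy, of the scene cells it would observe, and moves the sensor to the highest-scoring view.

```python
import numpy as np

def binary_entropy(p):
    """Entropy of per-cell occupancy probabilities (uncertainty measure)."""
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def next_best_view(occupancy, visible_cells):
    """Greedy active vision: pick the candidate viewpoint whose visible
    cells carry the most total occupancy uncertainty."""
    gains = [binary_entropy(occupancy[cells]).sum() for cells in visible_cells]
    return int(np.argmax(gains))

# toy scene: 6 cells with per-cell occupancy probability
occ = np.array([0.5, 0.5, 0.9, 0.1, 0.95, 0.05])
views = [np.array([0, 1]),   # sees the two most uncertain cells
         np.array([2, 3]),
         np.array([4, 5])]
chosen = next_best_view(occ, views)
```

With these numbers the policy selects the first view, since cells at probability 0.5 are maximally uncertain; richer objectives (e.g., expected grasp-quality improvement) follow the same select-observe-update loop.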