ENVIRONMENTAL ROBOTICS

Robotic Waste Sorting

This project develops robotic manipulation algorithms and a human-robot collaboration architecture that aim to improve efficiency and profitability in the recycling industry, while re-creating recycling jobs to be safer, cleaner, and more meaningful. The specific goal is to improve the sorting and separation of mixed waste into plastics, paper, metal, glass, and non-recyclables.
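
As a rough illustration of the perception side of this task, the sketch below builds a five-way material classifier from a pretrained backbone. The backbone choice (ResNet-18), class labels, and interfaces are assumptions for illustration, not our deployed system.

```python
# Hypothetical sketch: an image-based material classifier of the kind a
# mixed-waste sorting pipeline might use. Model choice and labels are
# illustrative assumptions, not the project's actual system.
import torch
import torch.nn as nn
from torchvision import models

WASTE_CLASSES = ["plastic", "paper", "metal", "glass", "non-recyclable"]

def build_classifier() -> nn.Module:
    # Start from an ImageNet-pretrained backbone and replace the final
    # layer with a 5-way head for the waste categories above.
    net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    net.fc = nn.Linear(net.fc.in_features, len(WASTE_CLASSES))
    return net

def classify(net: nn.Module, image: torch.Tensor) -> str:
    # image: (3, H, W) tensor, already normalized to the backbone's stats.
    net.eval()
    with torch.no_grad():
        logits = net(image.unsqueeze(0))
    return WASTE_CLASSES[int(logits.argmax(dim=1))]
```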

Collaborators — Yale University  ·  Boston University  ·  University of Washington
Industry Partner — Casella Waste Systems

Robotic Metal Scrap Cutting

Metal cutting operations in metal recycling scrapyards are labor-intensive, difficult, and dangerous. They are performed by skilled workers who use gas torches on decommissioned structures. In this project, we develop a human-robot collaboration workflow for robotic metal scrap cutting in unstructured scrapyards. Our workflow combines worker expertise with robot autonomy for greater productivity and safety.
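
The sketch below illustrates one way such a human-in-the-loop workflow could be structured in code: the worker marks and approves a cut path, and the robot executes only what it can verify as reachable, deferring back to the worker otherwise. All class and method names are hypothetical placeholders, not our actual software stack.

```python
# Hypothetical sketch of a human-in-the-loop cutting workflow. The robot
# object and its methods are assumed placeholder interfaces.
from dataclasses import dataclass

@dataclass
class CutPath:
    waypoints: list          # 3D points along the torch path, worker-specified
    approved: bool = False   # worker sign-off before autonomous execution

def execute_cut(robot, path: CutPath) -> bool:
    """Run one cut, deferring to the worker whenever autonomy fails."""
    if not path.approved:
        return False                      # never cut without worker approval
    if not robot.is_reachable(path.waypoints):
        robot.request_worker_input(path)  # hand the decision back to the expert
        return False
    robot.follow_path(path.waypoints)     # autonomous torch-cutting motion
    return True
```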

Building Inspection for Sustainability

To detect inefficiencies in the older building stock and help city governments shape their climate resilience plans, we are developing a lizard-like robot that can navigate intricate spaces and map them. This project is conducted in collaboration with WPI’s Soft Robotics Laboratory. A news piece about our work can be found here.

Partner — City of Worcester

Environmental Robotics Research Portal

Our lab participates in the Public Interest Technology – University Network (PIT-UN) to create a web portal presenting current environmental robotics research, grant opportunities, industry solutions, and existing sustainability frameworks. Our goal is to spark interest in and awareness of the potential of robotics technology while developing solutions that consider the problems’ social context.

Visit our web portal on environmental robotics here!

MANIPULATION RESEARCH

Dexterous Manipulation with Controlled Sliding

Within-hand manipulation provides significant dexterity and flexibility for robots operating in dynamic, unstructured environments. Our recent work features a 2-DOF robot gripper that uses a simple mechanism to change the effective friction of its finger surfaces. We examine robust and autonomous vision-based manipulation strategies that leverage the advantages of such switching mechanisms. Haptic-based recognition is also explored using these mechanisms.
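
As an illustration, the sketch below reduces the friction-switching idea to a simple rule that picks a friction mode per finger for a few within-hand primitives; the primitives and mode assignments are simplified assumptions, not our controller.

```python
# Illustrative sketch (not the lab's controller): choosing a friction mode
# per finger for a within-hand manipulation primitive, given a gripper that
# can switch each finger surface between high and low effective friction.
from enum import Enum

class FrictionMode(Enum):
    HIGH = "high"   # object sticks to the finger: rigid-body rotation
    LOW = "low"     # object slides on the finger: controlled sliding

def select_modes(primitive: str) -> tuple[FrictionMode, FrictionMode]:
    """Pick a (left, right) finger friction mode per manipulation primitive."""
    if primitive == "rotate":        # both fingers grip; object pivots with them
        return FrictionMode.HIGH, FrictionMode.HIGH
    if primitive == "slide_left":    # object slides along the left finger
        return FrictionMode.LOW, FrictionMode.HIGH
    if primitive == "slide_right":   # object slides along the right finger
        return FrictionMode.HIGH, FrictionMode.LOW
    raise ValueError(f"unknown primitive: {primitive}")
```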

Collaborators — Yale University  ·  Imperial College London

Encoderless Robots

This project investigates the control of non-traditional robotic systems that lack proprioceptive configuration sensors, such as soft/continuum robots or inexpensive 3D-printed systems. We use vision-based algorithms to control such systems with limited prior information about the robot model or system parameters and minimal online information from internal configuration measurements.
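
One standard technique consistent with this setting is image-based visual servoing with a Jacobian estimated online, for example via a Broyden update; it is sketched below under assumed read_features/send_input interfaces and illustrates the general idea rather than our exact algorithm.

```python
# Sketch of model-free visual servoing for a robot without configuration
# sensors: the input-to-feature Jacobian is refined online from data by a
# Broyden rank-one update, so no prior robot model is needed.
import numpy as np

def broyden_step(J, u, dy):
    """Rank-one Jacobian update from the observed feature change dy after input u."""
    u = u.reshape(-1, 1)
    dy = dy.reshape(-1, 1)
    return J + ((dy - J @ u) @ u.T) / (u.T @ u + 1e-9)

def servo(read_features, send_input, y_target, J0, gain=0.2, iters=100):
    """Drive image features y toward y_target using only camera feedback."""
    J = J0.copy()
    y = read_features()
    for _ in range(iters):
        e = y_target - y
        u = gain * np.linalg.pinv(J) @ e   # pseudoinverse control step
        send_input(u)
        y_new = read_features()
        J = broyden_step(J, u, y_new - y)  # refine the model from data
        y = y_new
    return y
```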

Benchmarking for Robotic Manipulation

The rapid advances in robotic manipulation require tools for systematically assessing the manipulation performance of a given system so that meaningful comparisons can be drawn. We present benchmarking tools and datasets such as the YCB Object and Model Set, the ZeroWaste dataset, and the Household Cloth Object Set, along with benchmarking protocols such as the adapted Box and Blocks Test and RB2. Our lab has also been contributing to the organization of the Robotic Grasping and Manipulation Challenge.
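
As a small example of the statistical care such protocols require, the sketch below scores repeated grasp trials with a success rate and a 95% Wilson confidence interval; the trial counts are made up, and the scoring is generic rather than tied to any one of the benchmarks above.

```python
# Minimal sketch of benchmark-style scoring: success rate over repeated
# trials with a 95% Wilson confidence interval, so comparisons between
# systems account for the number of trials behind each number.
import math

def wilson_interval(successes: int, trials: int, z: float = 1.96):
    """Success rate and 95% Wilson score interval for a binomial outcome."""
    if trials == 0:
        return 0.0, 0.0, 1.0
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half = z * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2)) / denom
    return p, max(0.0, center - half), min(1.0, center + half)

rate, low, high = wilson_interval(successes=42, trials=50)  # made-up counts
print(f"success rate {rate:.2f} (95% CI {low:.2f}-{high:.2f})")
```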

Collaborators — Yale University  ·  Carnegie Mellon University  ·  UC Berkeley

Ensemble Learning for Grasping

This project develops an ensemble learning methodology that combines multiple existing robotic grasp synthesis algorithms to obtain success rates significantly better than those of the individual algorithms. The methodology treats grasping algorithms as “experts” providing grasp “opinions”. An Ensemble Convolutional Neural Network is trained using a Mixture of Experts model that integrates these opinions to determine the final grasping decision. This architecture is tested with open-source algorithms (e.g., GQCNN 4.0, GGCNN, a custom variant of GGCNN, …) on publicly available grasping datasets (e.g., the Cornell Dataset, the Jacquard Dataset, …). The approach is also tested on a real robot (a Franka Emika Panda arm).
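
The sketch below illustrates the Mixture of Experts idea at a high level: each expert contributes a grasp-quality map, and a small gating network weights the experts before fusion. The layer sizes, input choice, and fusion rule are illustrative assumptions, not our published architecture.

```python
# Sketch of a Mixture-of-Experts fusion for grasping: each grasp algorithm
# ("expert") outputs a quality map over the image, and a gating CNN weights
# the experts per pixel before the maps are combined.
import torch
import torch.nn as nn

class GraspMoE(nn.Module):
    def __init__(self, n_experts: int):
        super().__init__()
        # Gating network: looks at the depth image and emits one weight
        # map per expert (softmax-normalized across experts).
        self.gate = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_experts, 3, padding=1),
        )

    def forward(self, depth: torch.Tensor, expert_maps: torch.Tensor):
        # depth: (B, 1, H, W); expert_maps: (B, n_experts, H, W), the
        # grasp-quality "opinions" from the individual algorithms.
        weights = torch.softmax(self.gate(depth), dim=1)
        fused = (weights * expert_maps).sum(dim=1)  # (B, H, W) combined map
        return fused  # the final grasp is taken at the argmax of this map
```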

Active Vision for Manipulation

Active vision allows robots to collect information more intelligently by controlling how their sensors move. Our work uses active vision to find the most efficient ways to collect data for vision-based grasping, providing grasp planning algorithms with better, more suitable input.
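
A common formulation of this idea is greedy next-best-view selection, sketched below with abstract scoring functions; the loop structure is the point, and all interfaces shown are hypothetical.

```python
# Sketch of a greedy next-best-view loop, one common active-vision scheme:
# from a set of candidate camera poses, move to the one whose view is
# expected to reduce grasp uncertainty the most, minus a travel penalty.
def next_best_view(candidates, expected_info_gain, move_cost, alpha=0.1):
    """Pick the viewpoint maximizing expected gain minus weighted cost."""
    return max(candidates,
               key=lambda v: expected_info_gain(v) - alpha * move_cost(v))

def explore_for_grasp(camera, candidates, info_gain, cost, done, max_views=5):
    """Collect views until the grasp planner is confident or budget runs out."""
    views = []
    for _ in range(max_views):
        v = next_best_view(candidates, info_gain, cost)
        camera.move_to(v)                # reposition the sensor
        views.append(camera.capture())   # accumulate observations
        if done(views):                  # planner signals enough information
            break
    return views
```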

Vision-based Control of Continuum Robots

Vision-based control is beneficial for closing the loop when robot models are inaccurate or unknown. It is also useful for incorporating task-relevant information. The aim of this project is to control the entire shape of continuum robots using purely image information, without any prior knowledge of the robot-camera model.

Left Image: Initial shape of the origami continuum manipulator is represented by the purple clothoid curve and the target curve is shown in pink. Right Image: The final shape of the origami manipulator is represented by the dashed purple curve. The reference shape is denoted by the pink curve. The markers along the curves in both images represent the points at which we sample our shape feature vector.
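
The sketch below illustrates one shape-servoing step consistent with this setup: sample points along the detected backbone curve to form the shape feature vector, compare against the same samples on the target curve, and map the error to actuator commands through an online-estimated Jacobian. The sampling density and update law are illustrative assumptions.

```python
# Sketch of a model-free shape-servoing step for a continuum robot: the
# shape feature vector is a fixed number of points sampled along the
# image-space backbone curve, and the actuation update uses an estimated
# Jacobian rather than a prior robot-camera model.
import numpy as np

def shape_features(curve_pts: np.ndarray, n: int = 8) -> np.ndarray:
    """Resample a (M, 2) image-space curve at n evenly spaced indices."""
    idx = np.linspace(0, len(curve_pts) - 1, n).astype(int)
    return curve_pts[idx].reshape(-1)          # stacked (2n,) feature vector

def shape_servo_step(J_hat, current_curve, target_curve, gain=0.3):
    """One control step: actuator increment that reduces the shape error."""
    e = shape_features(target_curve) - shape_features(current_curve)
    return gain * np.linalg.pinv(J_hat) @ e    # update via estimated Jacobian
```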

Collaborators — Soft Robotics Laboratory, Worcester Polytechnic Institute