Vision-based Control of Continuum Robots

Continuum robots mark a significant shift in robotics owing to their flexible mechanics and hyper-redundant degrees of freedom, and they hold great promise for medical and assistive applications. Most existing control literature focuses on regulating the end-effector pose of continuum robots, which fails to exploit the redundancy of continuum manipulators. In contrast, this research project develops novel control strategies that regulate the entire shape of the robot using visual cues from an external camera observing it. Closing the loop with vision lets us adapt to changes in system behavior in real time and allows task-based constraints to be incorporated into the control loop. To that end, we propose a completely model-free, adaptive image-based visual servoing strategy that controls the viewed shape of an extensible continuum manipulator without requiring any proprioceptive feedback from the robot itself.
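To give a flavor of what "model-free" means here, the sketch below shows one common way such controllers are built: the image Jacobian relating actuator motion to feature motion is estimated online with a rank-one (Broyden-style) update, and a simple proportional servoing law uses the current estimate. This is an illustrative sketch under generic assumptions, not the exact update law from our papers; the function names and gains are hypothetical.

```python
import numpy as np

def broyden_update(J, dq, ds, alpha=0.1):
    """Rank-one Broyden update of the estimated image Jacobian.

    J     : current estimate mapping actuator increments to feature increments
    dq    : actuator increment applied at the last step
    ds    : observed change in the image feature vector
    alpha : update gain in (0, 1]
    """
    denom = dq @ dq
    if denom < 1e-9:  # skip the update for a near-zero actuator step
        return J
    return J + alpha * np.outer(ds - J @ dq, dq) / denom

def servo_step(J, s, s_des, gain=0.5):
    """One proportional IBVS step: dq = -gain * pinv(J) @ (s - s_des)."""
    return -gain * np.linalg.pinv(J) @ (s - s_des)
```

Because the Jacobian estimate improves only along directions that have actually been excited, the transient behavior of such schemes depends on the motion history, which motivates the tracking formulation discussed further below.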

In the image, the backbone shape of the continuum manipulator is parametrized using clothoids. In each image tile, the current shape is depicted by the blue curve and the desired shape by the pink curve. The controller servos the robot from its current (initial) shape to the desired (final) shape in the image, and the achieved final shape is shown by the dashed purple curve. The tiles make it clear that the controller drives the robot to its desired shape with minimal error. The orange points mark locations along the clothoid where we sample image features; these features define an image-space error that drives the robot toward the final shape.
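A clothoid is a curve whose curvature grows linearly with arc length, and its Cartesian coordinates are given by the Fresnel integrals. As a rough illustration of how feature points can be sampled along such a curve (the scaling, parameter names, and number of samples here are arbitrary choices, not the ones used in our experiments), one can integrate the Fresnel integrals numerically and evaluate them at evenly spaced parameter values:

```python
import numpy as np

def fresnel_numeric(t, n=2000):
    """Numerically evaluate the Fresnel integrals
    C(t) = int_0^t cos(pi*u^2/2) du and S(t) = int_0^t sin(pi*u^2/2) du
    with the composite trapezoid rule."""
    u = np.linspace(0.0, t, n)
    du = u[1] - u[0]
    fc = np.cos(np.pi * u**2 / 2)
    fs = np.sin(np.pi * u**2 / 2)
    c = du * (fc.sum() - 0.5 * (fc[0] + fc[-1]))
    s = du * (fs.sum() - 0.5 * (fs[0] + fs[-1]))
    return c, s

def sample_clothoid(a, t_max, n_pts=8):
    """Sample n_pts points along the clothoid x = a*C(t), y = a*S(t),
    whose curvature grows linearly with arc length."""
    pts = []
    for t in np.linspace(0.0, t_max, n_pts):
        c, s = fresnel_numeric(t)
        pts.append((a * c, a * s))
    return np.array(pts)
```

Stacking the sampled points into a single feature vector and comparing against the corresponding samples of the desired clothoid yields the image-space error used for servoing.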

Although the completely model-free approach achieves promising results in terms of shape convergence, its transient behavior may be unpredictable because the system Jacobian is adapted online. This is undesirable when the robot operates in cluttered environments. To address this, we formulate an adaptive image-based visual tracking controller that grows the robot from its minimal-length state to the final shape. As the robot grows, the controller closely tracks the desired shape curve in the image with minimal deviation throughout the transient stage. The video below provides an overview of our proposed method and demonstrates the experiments carried out on the real robot.
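The key structural difference from pure regulation is that tracking follows a moving reference: the desired feature vector advances along the desired shape curve as the robot grows, and the control law combines feed-forward on the reference velocity with proportional feedback on the image-space error. A minimal sketch of one such step, under generic assumptions (the function name, signature, and gain are illustrative, not taken from our papers):

```python
import numpy as np

def tracking_step(J, s, s_ref, s_ref_dot, gain=1.0):
    """One image-based tracking step: feed-forward on the moving
    reference velocity plus proportional feedback on the error,
        dq = pinv(J) @ (s_ref_dot - gain * (s - s_ref)).

    J         : (estimated) image Jacobian
    s         : current image feature vector
    s_ref     : current point on the desired feature trajectory
    s_ref_dot : velocity of the reference trajectory
    """
    return np.linalg.pinv(J) @ (s_ref_dot - gain * (s - s_ref))
```

Keeping the reference moving slowly relative to the feedback bandwidth keeps the image-space deviation small throughout the growth transient, which is precisely the behavior sought in cluttered environments.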

Further, we are interested in extending the convergence of the proposed image-based controllers to the full 3D workspace of the robot and in applying the proposed control algorithms to practical assistive tasks such as opening drawers and retrieving objects. Such tasks require the robot to interact with its environment. This interaction introduces additional uncertainty into the robot's model, which is typically developed under no-load conditions, and demands additional constraints on both the control formulation and the task planning front.

Papers related to this project:

[1] Shape Control of Variable Length Continuum Robots using Clothoid-based Visual Servoing [Paper] [Video]

2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)

[2] Grow-to-Shape Control of Variable Length Continuum Robots via Adaptive Visual Servoing [Paper] [Video coming soon]

2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)