Authors:
Zhu Ruiqi, Zhang Dandan, Lo Benny
Abstract
In recent years, autonomy has been widely introduced into surgical robotic systems to assist surgeons in carrying out complex tasks and to reduce their workload during surgical operations [1]. Most existing methods rely on learning from demonstration [2], which requires a collection of Minimally Invasive Surgery (MIS) manoeuvres from expert surgeons. However, collecting such a dataset to regress a template trajectory can be tedious and may place a significant burden on the expert surgeons. In this paper, we propose a semi-autonomous control framework for robotic surgery and evaluate it in a simulated environment. We apply deep reinforcement learning to train an agent for autonomous control of simple but repetitive manoeuvres. Compared to learning from demonstration, deep reinforcement learning can learn a new policy by altering the goal through a modified reward function, rather than by collecting a new dataset for each new goal. In addition to autonomous control, we also created a handheld controller for manual precision control; the user can seamlessly switch to manual control at any time by moving the handheld controller. Finally, our method was evaluated in a customized simulated environment to demonstrate its efficiency compared to fully manual control.
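The abstract describes a switching scheme in which a trained reinforcement-learning policy controls the instrument until the user moves the handheld controller. The paper itself does not give implementation details, so the following is only a minimal sketch of that idea under assumed names: `policy`, `controller_velocity`, and `MOTION_THRESHOLD` are hypothetical placeholders, not the authors' code.

```python
# Minimal sketch (not the authors' implementation) of the semi-autonomous
# switching described in the abstract: the RL policy drives the tool until
# the handheld controller moves, at which point manual commands take over.
import numpy as np

MOTION_THRESHOLD = 0.005  # assumed controller-motion threshold (arbitrary units)

def select_action(policy, observation, controller_velocity, manual_command):
    """Return the manual command if the handheld controller is moving,
    otherwise the action proposed by the trained RL policy."""
    if np.linalg.norm(controller_velocity) > MOTION_THRESHOLD:
        return manual_command       # user takes over: precision manual control
    return policy(observation)      # autonomous control from the learned policy
```

In this sketch, changing the task would amount to retraining `policy` with a modified reward function, rather than recording a new set of expert demonstrations, which is the trade-off the abstract highlights.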
Publisher
The Hamlyn Centre, Imperial College London, London, UK
Cited by
4 articles.