Vision-Guided Autonomous Surgical Subtasks via Surgical Robots with Artificial Intelligence

Abstract

The introduction of automation into surgery may redefine the role of surgeons in the operating room. While most of the manipulation would be performed autonomously by surgical robots, surgeons could focus on decision-making. This would drastically reduce the burden on surgeons, allowing them instead to interpret the rich information provided by the system, and would enhance surgical outcomes. To introduce automation into surgery, surgical robots must have: 1) high precision, 2) motion-planning capability, and 3) scene understanding. Surgical robots are currently designed, in most cases, as cable-driven systems for safety and for benefits such as low inertia. However, cable-driven systems suffer from low precision because of cable stretch and long cable transmission chains, so a new control scheme for cable-driven surgical robots is needed to overcome this limitation. Surgery is a complicated task consisting of multiple subtasks, and motion planners must be developed to accomplish the intermediate steps. The objects manipulated in surgery are mostly soft tissue, which makes it challenging to model the dynamics between the tool and the tissue; the motion planner must handle these unknown dynamics while accomplishing each task. The surgical environment is further complicated by the many blood-covered anatomical structures. Surgeons rely on visual feedback from an endoscopic camera or other imaging devices, which provide rich information. Although these devices are useful for understanding the surrounding anatomy, their images are high-dimensional and difficult to process algorithmically into high-level information. Therefore, vision-based perception algorithms that understand the relevant anatomy must be developed.
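To make the precision problem concrete, the hedged sketch below shows how cable stretch can make a joint under-shoot its command, and how a data-driven residual fit from commanded-versus-measured angles can correct a purely model-based command. Everything here (the tanh stretch model, the polynomial residual, the noise level) is an illustrative assumption, not the dissertation's actual robot model or controller.

```python
import numpy as np

rng = np.random.default_rng(0)

def plant(q_cmd):
    """Toy cable-driven joint: the actual angle lags the command by a
    nonlinear 'cable stretch' term plus sensor noise (assumed model)."""
    stretch = 0.08 * np.tanh(2.0 * q_cmd)
    return q_cmd - stretch + rng.normal(0.0, 1e-3)

# --- collect calibration data: commanded vs. measured joint angles ---
q_cmds = np.linspace(-1.0, 1.0, 200)
q_meas = np.array([plant(q) for q in q_cmds])

# --- fit a data-driven residual on top of the (identity) kinematic model ---
residual = np.poly1d(np.polyfit(q_cmds, q_cmds - q_meas, deg=5))

def hybrid_command(q_des):
    """Model-based command plus learned correction for cable stretch."""
    return q_des + residual(q_des)

# --- compare tracking error with and without the learned correction ---
q_des = 0.7
print("model-only error:", abs(plant(q_des) - q_des))
print("hybrid error:    ", abs(plant(hybrid_command(q_des)) - q_des))
```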

This dissertation addresses the three problems above. In chapter two, a hybrid control scheme that combines model-based and data-driven methods is introduced to improve the precision of cable-driven surgical robots and their robustness to hand-eye calibration errors. Convergence of the controller is shown both theoretically and experimentally on the Raven IV, and its efficacy in clinical tasks is demonstrated through autonomous needle transfer and tissue debridement. In chapter three, learning-based path-planning algorithms are proposed for autonomous soft-tissue manipulation. These algorithms learn the dynamics relating the motion of a surgical tool to the resulting tissue deformation, and an internal controller uses the learned dynamics to manipulate the tissue. The performance of the developed algorithms is verified in a purpose-built simulation and in robot experiments with the Raven IV. In chapter four, a semantic segmentation algorithm for optical coherence tomography (OCT) images is presented for automated lens extraction. The algorithm uses deep learning to interpret cross-sectional views of the eye anatomy, and it is incorporated into the Intraocular Robotic Interventional and Surgical System (IRISS) to realize semi-autonomous lens removal. Experimental results on seven ex vivo pig eyes verified the efficacy of the developed framework.
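As an illustration of the chapter-three idea, the sketch below fits a local linear map from tool motion to the displacement of a tracked tissue point, then inverts that learned map in a simple receding-horizon loop. The linear-dynamics assumption, the toy Jacobian, and the one-step inverse controller are all simplifications for exposition; the dissertation's actual planner, simulator, and Raven IV interface are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Unknown "true" map from tool motion to tissue-point displacement (assumed)
J_true = np.array([[0.8, 0.1],
                   [-0.2, 0.6]])

def tissue_step(x, u):
    """Toy soft-tissue response: tracked point moves by J_true @ u plus noise."""
    return x + J_true @ u + rng.normal(0.0, 1e-3, size=2)

# --- 1) learn the dynamics from random exploratory tool motions ---
U, dX = [], []
x = np.zeros(2)
for _ in range(100):
    u = rng.uniform(-0.05, 0.05, size=2)
    x_next = tissue_step(x, u)
    U.append(u)
    dX.append(x_next - x)
    x = x_next
J_hat_T, *_ = np.linalg.lstsq(np.array(U), np.array(dX), rcond=None)
J_hat = J_hat_T.T  # least squares solved dX ≈ U @ J^T

# --- 2) steer the tissue point to a target using the learned dynamics ---
target = np.array([0.10, -0.05])
for _ in range(50):
    u = np.linalg.pinv(J_hat) @ (target - x)   # one-step inverse "plan"
    u = np.clip(u, -0.05, 0.05)                # respect tool-motion limits
    x = tissue_step(x, u)
print("final tracking error:", np.linalg.norm(target - x))
```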

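For chapter four's perception component, a minimal PyTorch sketch of a per-pixel segmentation network for OCT B-scans is shown below. The encoder-decoder architecture, the three-class label set, and the input size are illustrative assumptions; the dissertation's actual network, training data, and IRISS integration are not specified here.

```python
import torch
import torch.nn as nn

class TinyOCTSegNet(nn.Module):
    """Toy encoder-decoder producing per-pixel class logits for an OCT B-scan."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, n_classes, 2, stride=2),
        )

    def forward(self, x):
        # x: (batch, 1, H, W) grayscale B-scan -> (batch, n_classes, H, W)
        return self.decoder(self.encoder(x))

model = TinyOCTSegNet()
scan = torch.randn(1, 1, 128, 256)   # dummy OCT B-scan
logits = model(scan)
labels = logits.argmax(dim=1)        # per-pixel class map
print(labels.shape)                  # torch.Size([1, 128, 256])
```

Such a network would typically be trained with a per-pixel cross-entropy loss against annotated scans; the segmented anatomy can then be handed to the surgical system as the high-level scene information described above.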