Visual force estimation in robot-assisted minimally invasive surgery
Abstract/Contents
- Abstract
- Tissue handling and force sensitivity are important skills for surgeons to possess to conduct safe and effective surgery. In robot-assisted minimally invasive surgery (RMIS), where a surgeon teleoperates a multi-arm robot equipped with endoscopic tools without haptic feedback, surgeons rely heavily on visual feedback to estimate the amount of force they are applying to tissue. Thus, good tissue handling and force sensitivity in RMIS are difficult to achieve. To develop general surgical skill, researchers and surgical educators have provided objective performance measurement and multi-sensory training; however, attempts to do the same for tissue handling have proven challenging. This dissertation presents work toward teaching and evaluating surgeon tissue handling ability in RMIS by studying how humans and machines can learn to estimate force visually. First, I present an experiment to understand how different forms of prior haptic experience inform a teleoperator's ability to perform visual force estimation for a previously learned task and an unseen task. The results show that, for a retraction task on silicone samples, human teleoperators relied on a proprioception heuristic rather than a visually informed learned representation of tissue stiffness. However, when performing visual force estimation during an unseen silicone palpation task, teleoperators who had previously performed the silicone retraction task manually achieved the best speed-accuracy trade-off, suggesting that because they learned the visual force estimation task under the same motion scaling as other manipulations encountered in daily living, they successfully used that prior experience to improve their performance on the unseen task. Having shown that human teleoperators are able to learn to estimate forces visually after training with haptic and visual feedback, I investigate whether neural network systems can perform a similar form of visual force estimation.
I present a multimodal neural network and a mock tissue manipulation dataset for performing visual force estimation. I evaluate the multimodal network and its unimodal variants on their generalization over different viewpoints, unseen tools, and unseen materials, as well as the contribution of different state inputs to network performance. I found that vision-based force estimation neural networks can generalize over changes in viewpoint and robot configuration, as well as to unseen tools, while running faster than existing recurrent neural networks. As expected, in networks that relied on robot state inputs, kinematic information was less useful for estimating force than joint torque or force information; including both types of inputs yielded the best performance. Following this, I show how the neural network-based force estimates perform when used for real-time kinesthetic haptic feedback on an RMIS robot. I present a novel approach that models the teleoperation dynamics and measures stability using a passivity-based metric. Networks that used robot state inputs were more transparent but less stable than a network that used only visual inputs. Because accurate end-effector force sensing for RMIS tools is not readily available, the studies above are limited to one-handed teleoperated manipulation. To address this hardware limitation, I present an open-source three-degree-of-freedom force sensor for bimanual RMIS research applications. I describe the theoretical principles behind the sensor design, as well as the manufacturing approaches that enable the sensor to be easily built and modified by other researchers in the field. I characterize the performance of the force sensor both as a standalone sensor and in a dual-jaw setup mounted on an existing RMIS tool.
I found that the sensor achieved an accuracy below the threshold of human haptic perception over the range of forces typical of tissue manipulation. The sensor design proved robust to manufacturing variation, maintaining the desired accuracy across two separate builds. When used in a dual-jaw configuration, the sensor was also capable of measuring grip force in the range used for delicate tissue manipulation.
Description
Type of resource | text |
---|---|
Form | electronic resource; remote; computer; online resource |
Extent | 1 online resource. |
Place | California |
Place | [Stanford, California] |
Publisher | [Stanford University] |
Copyright date | ©2022 |
Publication date | 2022 |
Issuance | monographic |
Language | English |
Creators/Contributors
Author | Chua, Zonghe |
---|---|
Degree committee member | Bohg, Jeannette, 1981- |
Degree committee member | Nisky, Ilana |
Degree committee member | Okamura, Allison |
Thesis advisor | Bohg, Jeannette, 1981- |
Thesis advisor | Nisky, Ilana |
Thesis advisor | Okamura, Allison |
Associated with | Stanford University, Department of Mechanical Engineering |
Subjects
Genre | Theses |
---|---|
Genre | Text |
Bibliographic information
Statement of responsibility | Zonghe Chua. |
---|---|
Note | Submitted to the Department of Mechanical Engineering. |
Thesis | Thesis (Ph.D.)--Stanford University, 2022. |
Location | https://purl.stanford.edu/dh588yv0611 |
Access conditions
- Copyright
- © 2022 by Zonghe Chua
- License
- This work is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported license (CC BY-NC 3.0).