Human-robot collaboration in challenging environments
- Being able to use robots for exploration and deployment in venues that are beyond human reach, or simply inhospitable, has been a longstanding ambition of scientists, engineers, and explorers across numerous fields. Robot technology promises to relieve humans of many inherently dangerous tasks, such as working in toxic or otherwise unsafe environments, and of tasks that are simply onerous, such as those involving repetitive motion or prolonged strain in confined spaces. Beyond the replacement of humans in such tasks, one of the main aims of robotics research has been to deploy robots in places that are inaccessible to humans and incompatible with their presence. Remotely operated robots are needed, for instance, to maintain space stations, operate high-altitude facilities, and prepare infrastructure for human settlement in space, such as on Mars. Another environment that is largely inaccessible to humans is the deep sea, where the demand for manipulation (i.e., physical interaction with the environment) arises from a wide range of potential applications, including the exploration of marine archaeological and natural sites; the inspection, maintenance, and repair of existing infrastructure; and, eventually, the construction of artificial installations providing humans access to extended operations at depth. Central to accomplishing such tasks is the establishment of robotic manipulation capabilities in unstructured environments. Such flexible manipulation is one of the major challenges in robotics today: currently, robotic manipulation succeeds only in strictly controlled environments, where operations are highly repetitive and no humans are in proximity. In unstructured environments, robots have mostly served as passive mobility platforms for observation and mapping.
Field deployment of dexterous manipulation capabilities in unknown environments is a special challenge because it requires what we term intelligence -- the capacity to perceive the local context and to make decisions based on these observations -- a characteristic well beyond the abilities of current autonomous systems. The aim of this dissertation is to present a framework for human-robot collaboration that, through a combination of whole-body control, compliant skills, and human interfaces, enables the deployment of dexterous robotic manipulation in unstructured field environments. This synergy is created by distributing the workload between the human pilot and the humanoid robot in a way that leverages their inherent and complementary abilities: on the human's part, higher-level cognition, perception, and decision-making; on the robot's, computation, controllable accuracy, and repeatability. We demonstrate this synergistic capacity through two field deployments in the deep sea. Throughout this work, I use Ocean One, a humanoid underwater robot built at the Stanford Robotics Laboratory, for illustration and validation. While Ocean One is designed mechanically for sea operations, the presented framework generalizes to other domains -- on land, in the air, or in space. I present the constraint-consistent whole-body control architecture implemented on Ocean One, including a detailed explanation of all tasks and their hierarchy, the handling of constraints such as collision avoidance (including self-collision) and joint limits, and the resolution of actuation redundancy. The augmented object and virtual linkage models enable a further level of abstraction, allowing the direct control of manipulated objects through one or more robots. By exploiting this architecture, the robot attains functional autonomy, where a small set of human inputs suffices to control the high-DoF robot.
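Hierarchical, constraint-consistent whole-body control of the kind described above is commonly realized through successive null-space projection, in which each lower-priority task acts only in the freedom left over by the tasks above it. The following is a minimal velocity-level sketch of that general technique; the function name, the velocity-level formulation, and the toy dimensions are assumptions for illustration, not the dissertation's actual operational-space implementation.

```python
import numpy as np

def task_priority_velocities(jacobians, task_vels, n_joints):
    """Resolve prioritized tasks via successive null-space projection.

    jacobians : list of (m_i x n_joints) task Jacobians, highest priority first.
    task_vels : list of desired task-space velocities, shape (m_i,) each.
    Returns the joint velocity vector realizing the tasks in priority order.
    """
    q_dot = np.zeros(n_joints)
    N = np.eye(n_joints)  # accumulated null-space projector of higher tasks
    for J, x_dot in zip(jacobians, task_vels):
        J_proj = J @ N                      # restrict task to remaining freedom
        J_pinv = np.linalg.pinv(J_proj)     # damped/LS inverse would be used in practice
        q_dot += J_pinv @ (x_dot - J @ q_dot)  # track residual task velocity
        N = N @ (np.eye(n_joints) - J_pinv @ J_proj)  # shrink the null space
    return q_dot
```

With two compatible single-axis tasks on a 3-DoF system, both are tracked exactly; if the second task conflicts with the first, it is simply projected out, which is what guarantees that, e.g., a collision-avoidance task cannot be violated by a lower-priority reaching task.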
I introduce the interfaces that connect the human pilot to the robotic system at varying levels of control, decreasing the demand for human intervention and increasing the expressiveness of robot autonomy. These interfaces range from haptic teleoperation in avatar mode, to haptic interaction at the level of the augmented object, to shared autonomy through skills supplemented with constraining haptic interaction, to simulation-mediated manipulation. Completing the picture is a presentation of the live vision system augmenting the pilot's perception and the graphical user interface that permits detailed control and assessment of Ocean One's operation. I validate the controller and the human interfaces in both simulation and practical deployments, progressing from computer simulations through laboratory experiments to true in-the-field demonstrations at the Stanford Aquatic Center. As the ultimate test of the methods described here, we sent Ocean One on two challenging sea deployments. On its maiden mission -- into the Mediterranean -- Ocean One explored and recovered archaeological remnants from the ruins of the Lune, Louis XIV's two-decked, 54-gun flagship that sank in 1664 at about 100-meter depth off the coast of Toulon, France. In its second expedition, Ocean One assisted a team of human divers in investigating underwater volcanic structures off the coast of Santorini, Greece.
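A spectrum of control levels like the one above is often implemented as an arbiter that blends the pilot's command with the autonomous skill's command, while a haptic "virtual fixture" resists pilot motion that violates a constraint. The sketch below shows that generic pattern only; the function name, gains, and the linear blending rule are assumptions for illustration and do not reproduce Ocean One's actual interface.

```python
import numpy as np

def shared_control_step(pilot_vel, auto_vel, alpha, guide_dir=None, k_haptic=20.0):
    """One step of a toy shared-autonomy arbiter.

    pilot_vel : pilot's commanded end-effector velocity, shape (3,)
    auto_vel  : velocity proposed by the autonomous skill, shape (3,)
    alpha     : autonomy level in [0, 1]; 0 = pure teleoperation, 1 = full autonomy
    guide_dir : optional unit vector of a virtual-fixture guidance axis;
                off-axis pilot motion is opposed by a haptic feedback force.
    Returns (commanded velocity, haptic force rendered to the pilot).
    """
    pilot_vel = np.asarray(pilot_vel, dtype=float)
    auto_vel = np.asarray(auto_vel, dtype=float)
    a = float(np.clip(alpha, 0.0, 1.0))
    cmd = (1.0 - a) * pilot_vel + a * auto_vel  # linear arbitration of the two inputs
    force = np.zeros(3)
    if guide_dir is not None:
        d = np.asarray(guide_dir, dtype=float)
        d = d / np.linalg.norm(d)
        off_axis = pilot_vel - (pilot_vel @ d) * d  # component violating the fixture
        force = -k_haptic * off_axis                # spring-like push back onto the axis
    return cmd, force
```

Sliding `alpha` between 0 and 1 moves the operator along the spectrum from avatar-mode teleoperation to skill-level supervision, and the constraining force is what lets the pilot "feel" the autonomy's intent rather than merely observe it.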
|Type of resource
|electronic resource; remote; computer; online resource
|1 online resource.
|Degree committee member
|Cutkosky, Mark R
|Stanford University, Department of Mechanical Engineering.
|Statement of responsibility
|Submitted to the Department of Mechanical Engineering.
|Thesis (Ph.D.)--Stanford University, 2018.
- © 2018 by Gerald Brantner
- This work is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported license (CC BY-NC).