Perception for control and control for perception of vision-based autonomous aerial robots



Abstract
The mission of this thesis is to develop visual perception and feedback control algorithms for autonomous aerial robots equipped with an onboard camera. We introduce lightweight algorithms that parse images from the robot's camera directly into feedback signals for control laws that improve perception quality. We emphasize the co-design, analysis, and implementation of the perception, planning, and control tasks to ensure that the entire autonomy pipeline is suitable for aerial robots with real-world constraints. The methods presented in this thesis leverage both perception for control and control for perception: the former uses perception to inform the robot how to act, while the latter uses robotic control to improve the robot's perception of the world. Perception in this work refers to the processing of raw sensor measurements and the estimation of state values, while control refers to the planning of useful robot motions and control inputs based on these state estimates. The major capability we enable is a robot's ability to sense unmeasured scene geometry, as well as the three-dimensional (3D) robot pose, from images acquired by its onboard camera. Our algorithms specifically enable a UAV with an onboard camera to use control to reconstruct the 3D geometry of its environment in both a sparse and a dense sense, estimate its own global pose with respect to the environment, and estimate the relative poses of other UAVs and dynamic objects of interest in the scene. All methods are implemented on real robots with real-world sensory, power, communication, and computation constraints to demonstrate the need for tightly coupled, fast perception and control in robot autonomy. Depth estimation at specific pixel locations is often considered a perception-specific task for a single robot; we instead control the robot to steer its sensor to improve this depth estimation. 
First, we develop an active perception controller that maneuvers a quadrotor with a downward-facing camera according to the gradient of maximum uncertainty reduction for a sparse subset of image features. This allows us to actively build a 3D point cloud representation of the scene quickly, enabling fast situational awareness for the aerial robot. Our method reduces uncertainty more quickly than state-of-the-art approaches with approximately an order of magnitude less computation time. Second, we autonomously control the focus mechanism on a camera lens to build metric-scale, dense depth maps suitable for robotic localization and navigation. Compared to the depth data from an off-the-shelf RGB-D sensor (Microsoft Kinect), our Depth-from-Focus method recovers depth for 88% of the pixels with no RGB-D measurements in the near-field regime (0.0 to 0.5 meters), making it a suitable complementary sensor for RGB-D. We demonstrate dense sensing in a ground-robot localization application and with AirSim, an advanced aerial robot simulator. We then consider applications where groups of aerial robots with monocular cameras seek to estimate their pose, or position and orientation, in the environment. Examples include formation control, target tracking, drone racing, and pose graph optimization. Here, we employ ideas from control theory to perform the pose estimation. We first propose the tight coupling of pairwise relative pose estimation with cooperative control methods for distributed formation control using quadrotors with downward-facing cameras, target tracking in a heterogeneous robot system, and relative pose estimation for competitive drone racing. We experimentally validate all methods with real-time perception and control implementations. Finally, we develop a distributed pose graph optimization method for networks of robots with noisy relative pose measurements. 
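The idea of steering a camera along the gradient of maximum uncertainty reduction can be illustrated with a toy sketch. Everything below is an assumption for illustration only (the measurement model, function names, and gains are not from the thesis): feature depth uncertainty is modeled as growing with camera-to-feature range, and the camera descends the numerical gradient of total uncertainty.

```python
import numpy as np

# Illustrative sketch (not the thesis implementation): steer a camera along
# the gradient that most reduces uncertainty over a sparse set of 3D features.

def posterior_variance(cam_pos, feats, pixel_sigma=1.0):
    """Toy model: depth variance of each feature grows with range."""
    d = np.linalg.norm(feats - cam_pos, axis=1)   # camera-to-feature ranges
    return (pixel_sigma * d) ** 2                 # variance ~ (sigma * range)^2

def uncertainty(cam_pos, feats):
    """Total map uncertainty: sum of per-feature variances."""
    return posterior_variance(cam_pos, feats).sum()

def gradient_step(cam_pos, feats, step=0.1, eps=1e-4):
    """Move opposite the numerical gradient of total uncertainty."""
    g = np.zeros(3)
    for i in range(3):
        dp = np.zeros(3)
        dp[i] = eps
        g[i] = (uncertainty(cam_pos + dp, feats)
                - uncertainty(cam_pos - dp, feats)) / (2 * eps)
    return cam_pos - step * g / (np.linalg.norm(g) + 1e-9)

feats = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.5, 0.5, 0.0]])
pos = np.array([0.0, 0.0, 2.0])                   # initial camera position
for _ in range(50):
    pos = gradient_step(pos, feats)
```

Under this toy model the controller drives the camera toward viewpoints that shrink the total feature uncertainty; the thesis method applies the same gradient-following principle to a proper uncertainty measure for sparse image features on a real quadrotor.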
Unlike existing pose graph optimization methods, our method is inspired by control-theoretic approaches to distributed formation control. We leverage tools from Lyapunov theory and multi-agent consensus to derive a relative pose estimation algorithm with provable performance guarantees. Our method also reaches consensus 13x faster than a state-of-the-art centralized strategy and reaches solutions that are approximately 6x more accurate than decentralized pose estimation methods. While the computation times of our method and the benchmark distributed method are similar for small networks, ours outperforms the benchmark by a factor of 100 on networks with large numbers of robots (> 1000). Our approach is easy to implement and fast, making it suitable as a distributed backend in a SLAM application. Our methods will ultimately allow micro aerial vehicles to perform more complicated tasks. Our focus on tightly coupled perception and control leads to algorithms that are streamlined for real aerial robots with real constraints. These robots will be more flexible for applications including infrastructure inspection, automated farming, and cinematography. Our methods will also enable more robot-to-robot collaboration, since we present effective ways to estimate the relative pose between robots. Multi-robot systems will be an important part of the robotic future, as they are robust to the failure of individual robots and allow complex computation to be distributed amongst the agents. Most of all, our methods allow robots to be more self-sufficient by utilizing their onboard camera and by accurately estimating the world's structure. We believe these methods will enable aerial robots to better understand our 3D world.
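The consensus-style pose estimation described above can be sketched in a minimal form. This is an illustrative example under simplifying assumptions, not the thesis algorithm: poses are reduced to 2D positions, each edge carries a noisy relative position measurement, and each robot repeatedly nudges its estimate toward agreement with its neighbors' measurements (a Lyapunov-stable linear consensus update), with one robot fixed as an anchor to remove the translational ambiguity.

```python
import numpy as np

# Illustrative sketch (not the thesis algorithm): consensus-based position
# estimation from noisy relative measurements z[(i, j)] ~ p_j - p_i.

def consensus_step(est, edges, z, gain=0.2):
    """One distributed update: each edge pulls its endpoints toward
    agreement with the measured relative position."""
    new = est.copy()
    for i, j in edges:
        err = est[j] - est[i] - z[(i, j)]  # residual on edge (i, j)
        new[i] += gain * err               # robot i reduces the residual
        new[j] -= gain * err               # robot j moves symmetrically
    new[0] = est[0]                        # anchor robot 0 fixes the gauge
    return new

true = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # a ring of four robots
rng = np.random.default_rng(0)
z = {(i, j): true[j] - true[i] + 0.01 * rng.normal(size=2) for i, j in edges}

est = np.zeros_like(true)                  # all robots start at the origin
for _ in range(200):
    est = consensus_step(est, edges, z)
```

The update uses only local, per-edge information, which is what makes this family of methods naturally distributed; the thesis extends such consensus updates to full relative poses with provable convergence guarantees.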

Description

Type of resource text
Form electronic resource; remote; computer; online resource
Extent 1 online resource.
Place California
Place [Stanford, California]
Publisher [Stanford University]
Copyright date 2020; ©2020
Publication date 2020
Issuance monographic
Language English

Creators/Contributors

Author Cristofalo, Eric
Degree supervisor Montijano, Eduardo
Degree supervisor Schwager, Mac
Thesis advisor Montijano, Eduardo
Thesis advisor Schwager, Mac
Thesis advisor Pavone, Marco, 1980-
Thesis advisor Rock, Stephen M
Degree committee member Pavone, Marco, 1980-
Degree committee member Rock, Stephen M
Associated with Stanford University, Department of Aeronautics and Astronautics

Subjects

Genre Theses
Genre Text

Bibliographic information

Statement of responsibility Eric Cristofalo.
Note Submitted to the Department of Aeronautics and Astronautics.
Thesis Thesis Ph.D. Stanford University 2020.
Location electronic resource

Access conditions

Copyright
© 2020 by Eric Cristofalo
License
This work is licensed under a Creative Commons Attribution Non Commercial 3.0 Unported license (CC BY-NC).
