Multimodal representations for vision, language, and embodied AI


Abstract/Contents

Abstract
Recent years have seen remarkable growth and advances in artificial intelligence research. Much of this progress has been made on three fronts: computer vision, natural language processing, and robotics. For example, image recognition is widely considered the holy grail of computer vision, whereas language modeling and translation have been fundamental tasks in natural language processing. However, many practical applications and tasks require going beyond these domain-specific problems and instead involve all three domains together. An autonomous system not only needs to recognize objects in an image, but also to interpret natural language descriptions or commands and understand how they relate to its visual observations. Furthermore, a robot needs to use this information for decision-making and for determining which physical actions to take in order to complete a task.

In the first part of this dissertation, I present a method for learning how to relate natural language and 3D shapes, such that the system can draw connections between words like "round" in a text description and the corresponding geometric attributes of a 3D object. To relate the two modalities, we rely on a cross-modal embedding space for multimodal reasoning and learn this space without fine-grained, attribute-level categorical annotations. By learning how to relate these two modalities, we can perform tasks such as text-to-shape retrieval and shape manipulation, and also enable new tasks such as text-to-shape generation.

In the second part of this dissertation, we allow the agent to be embodied and explore a task that relies on all three domains (computer vision, natural language, and robotics): robot navigation by following natural language instructions. Rather than relying on a fixed dataset of images or 3D objects, the agent is now situated in a physical environment and captures its own visual observations of the space using an onboard camera. To draw connections between vision, language, and the robot's physical state, we propose a system that performs planning and control using a topological map. This fundamental abstraction allows the agent to relate parts of the language instruction with relevant spatial regions of the environment and to relate a stream of visual observations with physical movements and actions.
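Note: the abstract's text-to-shape retrieval relies on a joint (cross-modal) embedding space in which text descriptions and 3D shapes are encoded as vectors and compared directly. The Python sketch below is a minimal illustration of that retrieval step only, not the dissertation's implementation: the text and shape encoders are hypothetical stand-ins (in practice they would be learned networks), and retrieval is done by cosine similarity between a query embedding and precomputed shape embeddings.

# Minimal sketch of text-to-shape retrieval in a shared embedding space.
# Assumes embeddings have already been produced by learned encoders
# (hypothetical here); only the similarity-based ranking step is shown.
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    """Normalize vectors to unit length so dot products equal cosine similarity."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def retrieve_shapes(text_embedding, shape_embeddings, top_k=5):
    """Rank shapes by cosine similarity to a text query in the joint space.

    text_embedding:   (d,) vector from a (hypothetical) text encoder
    shape_embeddings: (n, d) matrix, one row per 3D shape
    Returns indices of the top_k most similar shapes.
    """
    q = l2_normalize(text_embedding)
    s = l2_normalize(shape_embeddings)
    similarities = s @ q                      # cosine similarity per shape
    return np.argsort(-similarities)[:top_k]  # highest similarity first

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, n = 128, 1000                          # embedding dim, number of shapes
    query = rng.normal(size=d)                # stand-in for encoding "a round table"
    shapes = rng.normal(size=(n, d))          # stand-in for encoded shape dataset
    print(retrieve_shapes(query, shapes, top_k=5))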

Description

Type of resource text
Form electronic resource; remote; computer; online resource
Extent 1 online resource.
Place [Stanford, California]
Publisher [Stanford University]
Copyright date ©2021
Publication date 2021
Issuance monographic
Language English

Creators/Contributors

Author Chen, Kevin
Degree supervisor Sadigh, Dorsa
Degree supervisor Savarese, Silvio
Thesis advisor Sadigh, Dorsa
Thesis advisor Savarese, Silvio
Thesis advisor Guibas, Leonidas J
Degree committee member Guibas, Leonidas J
Associated with Stanford University, Department of Electrical Engineering

Subjects

Genre Theses
Genre Text

Bibliographic information

Statement of responsibility Kevin Chen.
Note Submitted to the Department of Electrical Engineering.
Thesis Thesis (Ph.D.)--Stanford University, 2021.
Location https://purl.stanford.edu/qw400zc0878

Access conditions

Copyright
© 2021 by Kevin Chen
License
This work is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported license (CC BY-NC 3.0).
