Understanding the interactions between text and visualizations

Abstract
Visualizations and text are commonly used together in applications ranging from communicative documents to interactive tools for analyzing and exploring data. However, much about the relationship between visualizations and text remains unexplored. This thesis focuses on three problems related to communicating the connections between the two representations, and on ways to surface these connections to address those problems.

We begin by asking how readers integrate information between visualizations and text when the two representations emphasize different aspects of the underlying data. Through a user study, we find that readers can miss information presented in the text because they rely more heavily on the visualizations for their takeaways. Based on the study results, we provide guidelines for authoring effective visualization-text pairs that doubly emphasize the intended aspects of the underlying data.

Second, identifying references between visualizations and text, which are often spatially separated, is a mentally taxing process. The cognitive burden disrupts the flow of reading as readers traverse back and forth between visualizations and text in an attempt to mentally link their contents. We present an interactive document reader that extends existing PDF documents based on automatically extracted references between visualizations and text. Specifically, it facilitates document reading by highlighting the references and dynamically positioning visualizations close to the relevant text. Our user study shows that the interface helps readers integrate visualizations into their flow of reading by helping them identify references more quickly and more accurately.

Finally, when using a natural language interface for visualizations, users are often not informed about how the system operated on a visualization based on its interpretation of a text query. This lack of transparency leads users to question the system because they cannot easily verify the correctness of its outputs. We present a chart question answering system that generates visual explanations clarifying how it used the input question and visualization to obtain an answer. A user study reveals that our visual explanations significantly improve transparency and achieve levels of trust close to those of human-generated explanations.

Description

Type of resource text
Form electronic resource; remote; computer; online resource
Extent 1 online resource.
Place [Stanford, California]
Publisher [Stanford University]
Copyright date 2021; ©2021
Publication date 2021
Issuance monographic
Language English

Creators/Contributors

Author Kim, Dae Hyun
Degree supervisor Agrawala, Maneesh
Thesis advisor Agrawala, Maneesh
Thesis advisor Landay, James A., 1967-
Thesis advisor Setlur, Vidya
Degree committee member Landay, James A., 1967-
Degree committee member Setlur, Vidya
Associated with Stanford University, Computer Science Department

Subjects

Genre Theses
Genre Text

Bibliographic information

Statement of responsibility Dae Hyun Kim.
Note Submitted to the Computer Science Department.
Thesis Thesis (Ph.D.), Stanford University, 2022.
Location https://purl.stanford.edu/xk655mq8748

Access conditions

Copyright
© 2021 by Dae Hyun Kim
License
This work is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported license (CC BY-NC 3.0).
