Nonverbal cues in avatar-mediated virtual environments

Abstract/Contents

Abstract
This research focuses on the individual and joint contributions of two nonverbal channels (i.e., face and body) in avatar-mediated virtual environments. A total of 140 dyads (280 participants) were randomly assigned to communicate with each other via avatar-mediated platforms that varied in their capacity to deliver nonverbal cues (i.e., avatars that displayed both face and body cues, only face cues, only body cues, or avatars that remained static). Dyads that could see their partner's facial movements mapped onto their avatars reported higher levels of interpersonal attraction and formed more accurate impressions of their partners. Furthermore, dyads with access to their partner's facial movements described their interaction experiences more positively, although only when they could also see their partner's bodily gestures. Dyads that could see only their partner's bodily movements, but not their facial movements, described their experiences more negatively than those who interacted with a static avatar that portrayed no nonverbal cues. Behaviorally, dyads that could see their partner's bodily movements moved their own bodies more, although this trend was significant only when they could also see their partner's facial movements. In contrast, the extent of dyads' facial movement was not influenced by the visibility of their partner's facial movements or bodily gestures. The study also found that dyads showed higher levels of bodily and facial nonverbal synchrony when their partner's bodily and facial movements, respectively, were available. Finally, the present study employed two machine learning algorithms to explore whether nonverbal cues automatically tracked during the dyadic interactions could predict interpersonal attraction through an inductive process. Results showed that (1) these classifiers could distinguish individuals reporting high versus low interpersonal attraction at an accuracy rate approximately 14-17% higher than the majority-class baseline and that (2) the main features predicting interpersonal attraction were related to synchrony of smiling behavior.
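For readers unfamiliar with the two methodological terms in the final sentences, the following is a minimal illustrative sketch in Python, not drawn from the thesis itself: it shows one common way to compute a majority-class baseline for classifier accuracy and a windowed-correlation measure of smiling synchrony. The function names, window size, and example data are hypothetical.

import numpy as np
from collections import Counter

def majority_class_baseline(labels):
    # Accuracy of always predicting the most frequent class.
    counts = Counter(labels)
    return counts.most_common(1)[0][1] / len(labels)

def mean_windowed_synchrony(a, b, window=30):
    # Mean Pearson correlation over non-overlapping windows of two
    # time series, e.g., per-frame smile intensities of two partners.
    corrs = []
    for start in range(0, len(a) - window + 1, window):
        wa = np.asarray(a[start:start + window], dtype=float)
        wb = np.asarray(b[start:start + window], dtype=float)
        if wa.std() > 0 and wb.std() > 0:
            corrs.append(np.corrcoef(wa, wb)[0, 1])
    return float(np.mean(corrs)) if corrs else 0.0

# Hypothetical data: a 60/40 class split gives a 0.60 baseline, so a
# classifier 14-17 points above it would score roughly 0.74-0.77.
labels = [1] * 60 + [0] * 40
print(majority_class_baseline(labels))

rng = np.random.default_rng(0)
shared = rng.standard_normal(300)
smile_a = shared + 0.5 * rng.standard_normal(300)
smile_b = shared + 0.5 * rng.standard_normal(300)
print(mean_windowed_synchrony(smile_a, smile_b))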

Description

Type of resource text
Form electronic resource; remote; computer; online resource
Extent 1 online resource.
Place [Stanford, California]
Publisher [Stanford University]
Copyright date ©2019
Publication date 2019
Issuance monographic
Language English

Creators/Contributors

Author Oh, Catherine Suyoun
Degree supervisor Bailenson, Jeremy
Thesis advisor Bailenson, Jeremy
Thesis advisor Hancock, Jeff
Thesis advisor Reeves, Byron, 1949-
Thesis advisor Zaki, Jamil, 1980-
Degree committee member Hancock, Jeff
Degree committee member Reeves, Byron, 1949-
Degree committee member Zaki, Jamil, 1980-
Associated with Stanford University, Department of Communication.

Subjects

Genre Theses
Genre Text

Bibliographic information

Statement of responsibility Catherine Suyoun Oh.
Note Submitted to the Department of Communication.
Thesis Thesis (Ph.D.)--Stanford University, 2019.
Location electronic resource

Access conditions

Copyright
© 2019 by Catherine Suyoun Oh
License
This work is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported license (CC BY-NC 3.0).
