Probing the Pragmatic Information Encoded in Neural Networks’ Word Embedding Space


Abstract/Contents

Abstract
It is frequently argued that natural language understanding models built on deep learning techniques do not admit of analytic understanding, because one cannot explicitly tell which pieces of semantic, pragmatic, or other linguistic knowledge these models capture and use. In this project, we explore different pre-trained word embeddings and different neural network architectures, and we specifically analyze how well they predict the implicature strength ratings of utterances containing "some" in the Switchboard corpus of telephone dialogues. The results provide evidence that several hand-annotated contextual features previously identified as influencing inference strength become less significant once the neural network predictions are included as an additional predictor, and that the neural model captures these qualitative effects. From this we conclude that at least some pragmatic information is already encoded in the pre-trained word vectors, and that the neural network can learn this knowledge to predict human inference strength judgments for sentences containing "some".
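To make the setup concrete, the sketch below shows one minimal way such a prediction model could be wired up in PyTorch: frozen pre-trained word vectors feed a bidirectional LSTM whose final states are mapped to a single scalar inference-strength rating. This is an illustrative assumption, not the thesis's exact architecture; the class name, hyperparameters, and the random stand-in embeddings are all hypothetical.

```python
# Hypothetical sketch (not the thesis's exact model): regress an utterance
# containing "some" onto a scalar inference-strength rating using frozen
# pre-trained word embeddings.
import torch
import torch.nn as nn

class InferenceStrengthRegressor(nn.Module):
    def __init__(self, pretrained_embeddings: torch.Tensor, hidden_size: int = 128):
        super().__init__()
        # Freeze the pre-trained vectors so the probe tests what they already encode.
        self.embedding = nn.Embedding.from_pretrained(pretrained_embeddings, freeze=True)
        self.encoder = nn.LSTM(pretrained_embeddings.size(1), hidden_size,
                               batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_size, 1)  # single scalar rating

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        embedded = self.embedding(token_ids)               # (batch, seq, dim)
        _, (h_n, _) = self.encoder(embedded)                # final hidden states
        sentence = torch.cat([h_n[-2], h_n[-1]], dim=-1)    # both LSTM directions
        return self.head(sentence).squeeze(-1)              # (batch,) ratings

# Toy usage with random vectors standing in for real pre-trained embeddings.
vocab_size, dim = 1000, 300
model = InferenceStrengthRegressor(torch.randn(vocab_size, dim))
ratings = model(torch.randint(0, vocab_size, (4, 12)))  # four 12-token utterances
print(ratings.shape)  # torch.Size([4])
```

The predicted ratings from a model of this kind can then be entered as an additional predictor in a regression over the hand-annotated contextual features, which is how the abstract's claim about those features becoming less significant would be tested.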

Description

Type of resource: Text
Date created: August 29, 2019

Creators/Contributors

Author: Chen, Yuxing
Degree granting institution: Stanford University, Symbolic Systems Program
Primary advisor: Degen, Judith
Advisor: Potts, Christopher

Subjects

Subject: Scalar Implicature
Subject: Neural Network Models
Subject: Word Embeddings
Subject: Pragmatics
Subject: Symbolic Systems Program
Genre: Thesis

Bibliographic information

Access conditions

Use and reproduction
User agrees that, where applicable, content will not be used to identify or to otherwise infringe the privacy or confidentiality rights of individuals. Content distributed via the Stanford Digital Repository may be subject to additional license and use restrictions applied by the depositor.
License
This work is licensed under a Creative Commons Attribution Non Commercial 3.0 Unported license (CC BY-NC).

Preferred citation

Chen, Yuxing. (2019). Probing the Pragmatic Information Encoded in Neural Networks’ Word Embedding Space. Stanford Digital Repository. Available at: https://purl.stanford.edu/zd075wt9031

Collection

Master's Theses, Symbolic Systems Program, Stanford University


