Causal Distillation for Language Models
Abstract/Contents
- Abstract
- Distillation efforts have led to language models that are more compact and efficient without serious drops in performance. The standard approach to distillation trains a student model against two objectives: a task-specific objective (e.g., language modeling) and an imitation objective that encourages the hidden states of the student model to be similar to those of the larger teacher model. In this paper, we show that it is beneficial to augment distillation with a third objective that encourages the student to imitate the causal computation process of the teacher through interchange intervention training (IIT). IIT pushes the student model to become a causal abstraction of the teacher model: a simpler model with the same causal structure. IIT is fully differentiable, easily implemented, and combines flexibly with other objectives. Compared with standard distillation of BERT, distillation via IIT results in lower perplexity on Wikipedia (masked language modeling) and marked improvements on the GLUE benchmark (natural language understanding), SQuAD (question answering), and CoNLL-2003 (named entity recognition).
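The interchange intervention at the heart of IIT can be illustrated with a short sketch. The toy feed-forward teacher and student, the chosen layer alignment, the `n_units` split, and the KL objective below are illustrative assumptions, not the thesis's exact configuration (which distills BERT-style transformers):

```python
# A minimal sketch of interchange intervention training (IIT): forward a "base"
# input through a model, but splice in part of the hidden state the model
# computes for a "source" input at an aligned layer; then train the student to
# reproduce the teacher's counterfactual behavior under the same intervention.
import torch
import torch.nn as nn
import torch.nn.functional as F

def forward_with_interchange(model, base, source, layer_idx, n_units):
    """Forward `base` through `model`, replacing the first `n_units`
    dimensions of its hidden state at `layer_idx` with those computed
    for `source` at the same layer (the interchange intervention)."""
    h_base, h_src = base, source
    for i, layer in enumerate(model):
        h_base, h_src = layer(h_base), layer(h_src)
        if i == layer_idx:
            h_base = torch.cat([h_src[:, :n_units], h_base[:, n_units:]], dim=-1)
    return h_base

torch.manual_seed(0)
dim = 16
# Toy stand-ins: a 4-layer teacher and a smaller 2-layer student.
make = lambda n: nn.Sequential(*[nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
                                 for _ in range(n)])
teacher, student = make(4), make(2)

base, source = torch.randn(8, dim), torch.randn(8, dim)

# Counterfactual teacher output under an interchange at teacher layer 2,
# aligned (by assumption) with student layer 1.
with torch.no_grad():
    t_out = forward_with_interchange(teacher, base, source, layer_idx=2, n_units=8)
s_out = forward_with_interchange(student, base, source, layer_idx=1, n_units=8)

# IIT objective: push the student's counterfactual output distribution toward
# the teacher's; one simple choice is a KL divergence.
iit_loss = F.kl_div(F.log_softmax(s_out, dim=-1),
                    F.softmax(t_out, dim=-1),
                    reduction="batchmean")
iit_loss.backward()  # differentiable end to end, so it composes with the
                     # task-specific and hidden-state imitation losses
```

Because the loss is an ordinary differentiable term, it can simply be summed with the language-modeling and hidden-state imitation losses during training, which is what makes the third objective easy to bolt onto standard distillation.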
Description
| Type of resource | text |
| --- | --- |
| Date created | May 20, 2022 |
| Date modified | December 5, 2022 |
| Publication date | July 22, 2022 |
Creators/Contributors
| Author | Wu, Zhengxuan |
| --- | --- |
Subjects
| Subject | Natural language processing (Computer science) |
| --- | --- |
| Genre | Text |
| Genre | Thesis |
Bibliographic information
Access conditions
- Use and reproduction
- User agrees that, where applicable, content will not be used to identify or to otherwise infringe the privacy or confidentiality rights of individuals. Content distributed via the Stanford Digital Repository may be subject to additional license and use restrictions applied by the depositor.
- License
- This work is licensed under a Creative Commons Zero v1.0 Universal license (CC0).
Preferred citation
- Preferred citation
- Wu, Z. (2022). Causal Distillation for Language Models. Stanford Digital Repository. Available at https://purl.stanford.edu/kb552zc2656
Collection
Master's Theses, Symbolic Systems Program, Stanford University
Contact information
- Contact
- wuzhengx@stanford.edu