Towards Fairness in the Wild: Controlling Disparities in Machine Learning Systems with Human Interaction


Abstract/Contents

Abstract
The problem of fairness in machine learning (ML) systems has become increasingly important as algorithmic decision making permeates society, from social media applications with billions of users to the criminal justice system. We consider two aspects of fairness in ML systems that have received relatively little attention in the literature. First, many real-world ML systems are “online” and are fed data continuously over time, which can amplify initial disparities. Second, the particular demographic groups across which we seek fairness depend on the specific setting and cannot capture every group that may suffer under an unfair system. We therefore present two new methods for understanding fairness in machine learning systems that interact with humans: 1. we develop a technique based on distributionally robust optimization to control the losses faced by minority groups at each time step of an online ML system, and demonstrate improved minority-group satisfaction on a real-world text autocomplete task, and 2. we present a clustering algorithm that groups data by a model’s expected loss, allowing human users to determine which high-loss groups raise fairness concerns for their particular system. Together, these approaches allow us to address fairness in ML not merely as a small number of high-stakes cases, but as an inherently social problem.
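
As a rough illustration of the first approach, the sketch below shows a worst-case training objective in the spirit of distributionally robust optimization, written in Python/PyTorch. It averages the losses of the worst alpha-fraction of examples (a CVaR-style objective), which upper-bounds the average loss of any subpopulation making up at least an alpha fraction of the data; the thesis and the related ICML paper use a chi-square-divergence formulation, so this is an illustrative stand-in rather than the exact method, and the names used here (worst_case_loss, alpha) are hypothetical.

import torch

def worst_case_loss(per_example_losses: torch.Tensor, alpha: float = 0.2) -> torch.Tensor:
    # Average loss over the worst alpha-fraction of examples (CVaR-style).
    # For any group comprising at least an alpha fraction of the data, its
    # average loss is at most this value, so minimizing it controls the loss
    # of every sufficiently large group, even when group labels are unobserved.
    k = max(1, int(alpha * per_example_losses.numel()))
    worst_k, _ = torch.topk(per_example_losses, k)
    return worst_k.mean()

# Hypothetical usage inside a standard training loop:
# losses = torch.nn.functional.cross_entropy(logits, labels, reduction="none")
# objective = worst_case_loss(losses, alpha=0.2)  # instead of losses.mean()
# objective.backward()

Minimizing this objective, rather than the plain average loss, keeps the model from trading off a small group's performance for the majority's, which is the failure mode that the online, repeated setting can otherwise amplify.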

Description

Type of resource text
Date created May 28, 2018

Creators/Contributors

Author Srivastava, Megha
Primary advisor Liang, Percy
Degree granting institution Stanford University, Department of Computer Science

Subjects

Subject artificial intelligence
Subject bias
Subject fairness
Subject machine learning
Subject human interaction
Genre Thesis

Bibliographic information

Related Publication Hashimoto, Tatsunori, Srivastava, Megha, Namkoong, Hongseok, and Liang, Percy. Fairness Without Demographics in Repeated Loss Minimization. In International Conference on Machine Learning (ICML), to appear, 2018.
Location https://purl.stanford.edu/dw390tb2855

Access conditions

Use and reproduction
User agrees that, where applicable, content will not be used to identify or to otherwise infringe the privacy or confidentiality rights of individuals. Content distributed via the Stanford Digital Repository may be subject to additional license and use restrictions applied by the depositor.
License
This work is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported license (CC BY-NC 3.0).

Preferred citation

Srivastava, Megha. (2018). Towards Fairness in the Wild: Controlling Disparities in Machine Learning Systems with Human Interaction. Stanford Digital Repository. Available at: https://purl.stanford.edu/dw390tb2855

Collection

Undergraduate Theses, School of Engineering


