A complexity-theoretic perspective on fairness

Abstract/Contents

Abstract
Algorithms make predictions about people constantly. The spread of such prediction systems---from precision medicine to targeted advertising to predictive policing---has raised concerns that algorithms may perpetrate unfair discrimination, especially against individuals from minority groups. While it is easy to speculate on the risks of unfair prediction, devising an effective definition of algorithmic fairness is challenging. Most existing definitions tend toward one of two extremes---individual fairness notions provide theoretically appealing protections but present practical challenges at scale, whereas group fairness notions are tractable but offer only marginal protections. In this thesis, we propose and study a new notion---multi-calibration---that strengthens the guarantees of group fairness while avoiding the obstacles associated with individual fairness. Multi-calibration requires that predictions be well-calibrated, not simply over the population as a whole but simultaneously over a rich collection C of subpopulations. We specify this collection---which parameterizes the strength of the multi-calibration guarantee---in terms of a class of computationally bounded functions. Multi-calibration protects every subpopulation that can be identified within the chosen computational bound. Despite this demanding requirement, we show a generic reduction from learning a multi-calibrated predictor to (agnostic) learning over the chosen class C. This reduction establishes the feasibility of multi-calibration: taking C to be a learnable class, we can achieve multi-calibration efficiently (both statistically and computationally). To better understand the requirement of multi-calibration, we turn our attention from fair prediction to fair ranking. We establish an equivalence between a semantic notion of domination-compatibility in rankings and the technical notion of multi-calibration in predictors---although conceived from different vantage points, these concepts encode the same notion of evidence-based fairness. This alternative characterization illustrates how multi-calibration affords qualitatively different protections than standard group notions of fairness.
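As a point of reference, the following is a rough sketch of the multi-calibration condition as it is commonly stated in the literature; the notation (predictor $p$, true outcomes $y^*$, tolerance $\alpha$) is a paraphrase and may differ from the thesis's formal definitions. A predictor $p : \mathcal{X} \to [0,1]$ is $\alpha$-multi-calibrated with respect to a collection $\mathcal{C}$ of subpopulations if, for every $S \in \mathcal{C}$ and every value $v$ in the (suitably discretized) range of $p$ (typically restricted to categories $\{x \in S : p(x) = v\}$ of non-negligible probability mass),

\[ \Bigl|\, \mathbb{E}\bigl[\, y^* - p(x) \;\big|\; x \in S,\ p(x) = v \,\bigr] \Bigr| \;\le\; \alpha . \]

That is, among the members of $S$ who receive prediction $v$, the average true outcome must lie within $\alpha$ of $v$. Ordinary calibration is recovered as the special case where $\mathcal{C}$ contains only the full population.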

Description

Type of resource text
Form electronic resource; remote; computer; online resource
Extent 1 online resource
Place [Stanford, California]
Publisher [Stanford University]
Copyright date ©2020
Publication date 2020
Issuance monographic
Language English

Creators/Contributors

Author Kim, Michael Pum-Shin
Degree supervisor Reingold, Omer
Thesis advisor Reingold, Omer
Thesis advisor Valiant, Gregory
Thesis advisor Zou, James
Degree committee member Valiant, Gregory
Degree committee member Zou, James
Associated with Stanford University, Computer Science Department

Subjects

Genre Theses
Genre Text

Bibliographic information

Statement of responsibility Michael Pum-Shin Kim
Note Submitted to the Computer Science Department
Thesis Ph.D., Stanford University, 2020
Location electronic resource

Access conditions

Copyright
© 2020 by Michael Pum-Shin Kim
License
This work is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported license (CC BY-NC).
