STRETCHING HUMAN LAWS TO APPLY TO MACHINES: THE DANGERS OF A “COLORBLIND” COMPUTER
Abstract/Contents
- Abstract
Automated decision making has become widespread in recent years, largely due to advances in machine learning methods. As a result of this trend, automated systems are increasingly used in high-impact applications, such as university admissions decisions. The weightiness of these decisions has prompted the realization that, like humans, machines must also comply with the law. But the human decision-making process is quite different from the computational decision-making process, which creates a mismatch between the laws and the decision makers they were designed to apply to. This mismatch can lead to counterproductive outcomes. I take antidiscrimination laws in university admissions as a case example, with a particular focus on Title VI of the Civil Rights Act of 1964.
Description
| Type of resource | text |
|---|---|
| Date created | 2019 |
Creators/Contributors
| Author | Harned, Zach |
|---|---|
Subjects
| Subject | Machine learning |
|---|---|
| Subject | Artificial intelligence |
| Subject | Discrimination |
| Subject | Employment |
| Subject | Title VII |
| Genre | Thesis |
Bibliographic information
Access conditions
- Use and reproduction
- User agrees that, where applicable, content will not be used to identify or to otherwise infringe the privacy or confidentiality rights of individuals. Content distributed via the Stanford Digital Repository may be subject to additional license and use restrictions applied by the depositor.
- License
- This work is licensed under a Creative Commons Attribution Non Commercial 3.0 Unported license (CC BY-NC).
Collection
Master's Theses, Symbolic Systems Program, Stanford University
View other items in this collection in SearchWorks
Contact information
- Contact
- zharned@stanford.edu