Injecting Transparency into the AI Revolution: Mandated Documentation Demonstrating Results to Third Parties


Abstract/Contents

Abstract
The artificial intelligence (AI) revolution has brought about positive change, but it has also uncovered major issues with algorithmic bias and error – as seen in discriminatory predictive algorithms in the criminal justice system. Many call for transparency to solve these issues, but neural networks (NNs) – a prominent form of AI – are impervious to most traditional transparency approaches. This thesis therefore investigates how society can compel developers to incorporate transparency into machine learning (ML) systems during the development process. Transparency is defined here as developers’ ability to justify system outcomes on sensitive groups by explaining decisions made during development. To investigate this question, the thesis analyzes two forms of documentation proposed to inject transparency into algorithmic systems: 1) data protection impact assessments (DPIAs) under the EU’s General Data Protection Regulation and 2) Google’s model cards. The analysis is supplemented by interviews with six experts. From this analysis, the thesis surfaces four main challenges facing documentation that pushes for transparency: 1) dangers of direct transparency to the public, 2) implementation difficulties, 3) tensions with other norms (especially privacy), and 4) a lack of positive incentives. The thesis then recommends a framework to address these challenges. In this framework, legally mandated documentation requirements, accompanied by standardized measurement of results, serve as the starting point for developers to incorporate transparency into ML systems. This formal documentation, however, must be accompanied by efforts to establish broader norms that emphasize the importance of incorporating transparency throughout the ML development process. These conclusions strongly suggest that policy-makers should enact legislation requiring such documentation. Adopting these recommendations would help address the algorithmic error and bias issues plaguing AI and help push the AI revolution to the next level.
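For readers unfamiliar with the second form of documentation named in the abstract, the following is a minimal, hypothetical sketch (in Python) of the kind of information a model card collects. The section names follow the headings commonly associated with Google’s model card proposal; every class name, value, and metric figure below is an invented placeholder for illustration, not material from the thesis.

```python
# Hypothetical sketch of a model card as a plain data structure.
# Section names mirror the headings commonly used in Google's model
# card proposal; all concrete values below are invented placeholders.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class ModelCard:
    model_details: str                    # who built the model, version, license
    intended_use: str                     # in-scope and out-of-scope applications
    factors: List[str]                    # sensitive groups evaluation is disaggregated on
    metrics: List[str]                    # e.g. false positive rate, false negative rate
    evaluation_data: str                  # datasets used for evaluation
    training_data: str                    # datasets used for training
    quantitative_analysis: Dict[str, Dict[str, float]] = field(default_factory=dict)
    ethical_considerations: str = ""
    caveats_and_recommendations: str = ""


# Toy, made-up example: disaggregated results let a third party see how
# the system behaves on each sensitive group.
card = ModelCard(
    model_details="Hypothetical risk-scoring model, v0.1",
    intended_use="Illustration only; not a real deployed system",
    factors=["group_a", "group_b"],
    metrics=["false_positive_rate"],
    evaluation_data="Placeholder held-out evaluation set",
    training_data="Placeholder training set",
    quantitative_analysis={
        "false_positive_rate": {"group_a": 0.12, "group_b": 0.31},
    },
)

for metric, by_group in card.quantitative_analysis.items():
    for group, value in by_group.items():
        print(f"{metric} for {group}: {value:.2f}")
```

The disaggregated metrics are the part most relevant to the abstract’s definition of transparency: they show a third party how the system performs on each sensitive group, which the developer can then be asked to justify.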

Description

Type of resource: text
Date created: May 2019

Creators/Contributors

Author: Pan, Christina Ashley
Primary advisor: Edwards, Paul
Primary advisor: Christin, Angele

Subjects

Subject: artificial intelligence
Subject: machine learning
Subject: AI
Subject: ML
Subject: algorithmic bias
Subject: algorithmic harm
Subject: transparency
Subject: documentation
Subject: Science Technology and Society
Subject: Humanities and Sciences
Genre: Thesis

Bibliographic information

Access conditions

Use and reproduction
User agrees that, where applicable, content will not be used to identify or to otherwise infringe the privacy or confidentiality rights of individuals. Content distributed via the Stanford Digital Repository may be subject to additional license and use restrictions applied by the depositor.
License
This work is licensed under a Creative Commons Attribution 3.0 Unported license (CC BY).

Preferred citation

Preferred Citation
Pan, Christina Ashley. (2019). Injecting Transparency into the AI Revolution: Mandated Documentation Demonstrating Results to Third Parties. Stanford Digital Repository. Available at: https://purl.stanford.edu/dy769ps6339

Collection

Stanford University, Program in Science, Technology and Society, Honors Theses


