Free, Open-Source, and Anonymous: Why deep learning regulators are in deep water


Abstract/Contents

Abstract
Deep learning models have been instrumental in driving recent breakthroughs in artificial intelligence. Beyond autonomous navigation and game-playing, some deep learning applications, such as facial recognition and deep fakes (counterfeit audio and video created by algorithms), pose challenges to individual privacy and US national security. The Director of National Intelligence's 2019 Worldwide Threat Assessment explicitly mentions the growing threat from deep fakes and machine learning systems. Yet because most deep learning software, datasets, and academic papers are publicly accessible, individuals can easily obtain the technical prerequisites to develop deep learning models, build surveillance systems, and create instruments of mass deception. Thousands of deep fakes of politicians and celebrities have already been shared on the internet. The first step toward combating this threat is understanding how deep learning spreads and how its proliferation differs from that of other dual-use technologies, or technologies with both civilian and military applications. This thesis seeks to fill this crucial gap by examining how all three critical components of deep learning (data, software, and hardware) have become accessible to internet users. We analyze how existing approaches to mitigating risks from dual-use technology, including establishing norms, limiting supply, and controlling exports, may fail to delay or prevent deep learning threats. After examining how deep fake technology spread from academia to individuals, we present an original experimental study of how technical countermeasures could confuse a facial recognition deep learning model and mitigate risks from deep fakes. Our experiments indicate how US policymakers could leverage both technical and policy mechanisms to delay or undermine malicious deep learning systems.
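The abstract describes experiments in which technical countermeasures confuse a facial recognition model, without specifying the method used. As a minimal sketch only, assuming a PyTorch classifier and the canonical Fast Gradient Sign Method (Goodfellow et al., 2015) rather than the thesis's actual countermeasure, the following illustrates how a small, bounded perturbation can push an image toward misclassification; the model, image size, and epsilon below are illustrative assumptions.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.03):
    """Return a copy of `image` nudged to increase the model's loss (FGSM)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step in the direction that most increases the loss, bounded per pixel by epsilon.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Hypothetical usage with a toy stand-in for a face recognition model:
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 10))
x = torch.rand(1, 3, 64, 64)   # a random 64x64 RGB "face"
y = torch.tensor([3])          # its ground-truth identity label
x_adv = fgsm_perturb(model, x, y)

Because the perturbation is bounded per pixel, the adversarial image can remain nearly indistinguishable to humans while shifting the classifier's prediction, which is what makes this class of countermeasure relevant to surveillance systems.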

Description

Type of resource text
Date created May 2019

Creators/Contributors

Author Milich, Andrew Burke
Advisor Zegart, Amy

Subjects

Subject deep learning
Subject ai
Subject computer science
Subject stanford
Subject center for international security and cooperation
Subject cisac
Subject proliferation
Subject adversarial learning
Subject national security
Genre Thesis

Bibliographic information

Access conditions

Use and reproduction
User agrees that, where applicable, content will not be used to identify or to otherwise infringe the privacy or confidentiality rights of individuals. Content distributed via the Stanford Digital Repository may be subject to additional license and use restrictions applied by the depositor.
License
This work is licensed under a Creative Commons Attribution Non Commercial 3.0 Unported license (CC BY-NC).

Preferred citation

Preferred Citation
Milich, Andrew Burke. (2019). Free, open-source, and anonymous: Why deep learning regulators are in deep water. Stanford Digital Repository. Available at: https://purl.stanford.edu/fp343bx6008.

Collection

Stanford University, Center for International Security and Cooperation, Interschool Honors Program in International Security Studies, Theses

