Fake news risk : modeling management decisions to combat disinformation

Abstract/Contents

Abstract
Strategically using information to affect the views of a population is not a new phenomenon; it dates back to the earliest development of political systems. Some of the earliest examples still in existence today were produced by the Greek writer Herodotus, who penned an inaccurate account of historical events in the Persian empire approximately 500 years before the birth of Jesus Christ. Political propaganda is nearly as old as politics. Similarly, fake news, as a specific tactic of propaganda, is not confined to the information age, and examples that pre-date the digital revolution are easy to find. However, the speed of distribution and the number of people that can be reached by leveraging the modern information infrastructure are unprecedented. These factors combine to produce an increased risk from fake news that must be addressed. The internet has enabled the distribution of vast amounts of information to an incredibly large population virtually instantaneously and at comparatively low cost. While the development of this capability has produced enormous economic growth and provided great benefit to the world, it has also exposed the same population to increased risk. The rapid distribution of fake news can cause contagion, manipulate markets, spark conflict, or fracture strategic relationships. The scourge of fake news is even more problematic given the current limitations of fact-checking methodologies, which are unable to keep pace with the increased volume of fake news production. Both public and private organizations are struggling in the search for methods to combat fake news. Probabilistic Risk Analysis (PRA) can be leveraged to quantitatively describe the risk associated with fake news. This thesis presents a method for modeling management decisions designed to combat fake news in an online network. It leverages established infectious disease modeling to describe online virality and implements countermeasure regimes to inform opposition decision making. The model was informed by two online surveys of a representative sample of the U.S. voting population that endeavored to measure the impact of limited but targeted fake news. The results point to both the potential effectiveness and the limitations of fake news leveraged as part of a targeted influence campaign. The online survey results also point to the dangers of the use of modified video and audio, known as "deep fakes," in the fake news of the future. Technological improvements, including expertly crafted deep fakes, online microtargeting, smart trolls, and the potential use of artificial intelligence for content production, suggest that fake news will persist as a scourge in the future and could present a viable threat to democratic self-governance.
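
The thesis itself is not reproduced in this record. As a rough illustration of the class of model the abstract names (infectious disease modeling of online virality with a countermeasure regime), the sketch below runs a simple SIR-style contagion of a story through an online population and adds an extra removal rate standing in for a countermeasure such as fact-checking or labeling. The function name, parameter names, and all numerical values are hypothetical choices for illustration, not the model described in the thesis.

    # Illustrative sketch only: an SIR-style spread model with an added
    # countermeasure removal rate. All parameters here are assumptions.

    def simulate_fake_news_spread(
        population=1_000_000,      # size of the online audience (hypothetical)
        initially_exposed=100,     # accounts already sharing the story
        share_rate=0.35,           # per-step transmission rate (assumed)
        loss_of_interest=0.10,     # rate at which sharers stop on their own (assumed)
        countermeasure_rate=0.05,  # extra removal from fact-checking/labeling (assumed)
        steps=120,
    ):
        """Discrete-time SIR-style spread of a single fake news story.

        S: users who have not yet seen the story
        I: users actively sharing it
        R: users who stopped sharing (lost interest or saw a correction)
        """
        s = population - initially_exposed
        i = float(initially_exposed)
        r = 0.0
        history = []
        for _ in range(steps):
            new_infections = share_rate * i * s / population
            new_removals = (loss_of_interest + countermeasure_rate) * i
            s -= new_infections
            i += new_infections - new_removals
            r += new_removals
            history.append((s, i, r))
        return history

    if __name__ == "__main__":
        with_cm = simulate_fake_news_spread(countermeasure_rate=0.05)
        without_cm = simulate_fake_news_spread(countermeasure_rate=0.0)
        print(f"Peak active sharers without countermeasures: "
              f"{max(i for _, i, _ in without_cm):,.0f}")
        print(f"Peak active sharers with countermeasures:    "
              f"{max(i for _, i, _ in with_cm):,.0f}")

Comparing the two runs shows the qualitative effect a countermeasure regime has in this class of model: raising the removal rate lowers and delays the peak of active sharers, which is the kind of management lever the abstract describes evaluating with PRA.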

Description

Type of resource text
Form electronic resource; remote; computer; online resource
Extent 1 online resource
Place California
Place [Stanford, California]
Publisher [Stanford University]
Copyright date 2020; ©2020
Publication date 2020
Issuance monographic
Language English

Creators/Contributors

Author Trammell, Travis Ira
Degree supervisor Paté-Cornell, M. Elisabeth (Marie Elisabeth)
Thesis advisor Paté-Cornell, M. Elisabeth (Marie Elisabeth)
Thesis advisor Persily, Nathaniel
Thesis advisor Shachter, Ross D
Degree committee member Persily, Nathaniel
Degree committee member Shachter, Ross D
Associated with Stanford University, Department of Management Science and Engineering.

Subjects

Genre Theses
Genre Text

Bibliographic information

Statement of responsibility Travis Ira Trammell III
Note Submitted to the Department of Management Science and Engineering
Thesis Thesis Ph.D. Stanford University 2020
Location electronic resource

Access conditions

Copyright
© 2020 by Travis Ira Trammell
