"Fairness in AI and Its Long-Term Implications on Society"


Abstract/Contents

Abstract
The successful deployment of artificial intelligence (AI) in diverse settings has produced numerous positive outcomes for individuals and society. However, AI systems have also been shown to harm parts of the population through biased predictions. AI fairness focuses on mitigating such biases to ensure that AI decision-making does not discriminate against certain groups. We take a closer look at AI fairness and analyze how a lack of it can deepen biases over time and act as a social stressor. More specifically, we discuss how biased models can produce worse real-world outcomes for certain groups; these outcomes are then reflected in the data used to train new AI models, amplifying the bias in a feedback loop. If such issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest. We examine current strategies for improving AI fairness, assess their limitations for real-world deployment, and explore potential paths forward to ensure we reap AI's benefits without causing society's collapse.
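
The feedback loop described in the abstract can be made concrete with a toy simulation. This is a minimal illustrative sketch, not the paper's model: the group names, rates, approval threshold, and the 0.95 decay applied to unobserved groups are all invented for demonstration. Two groups have identical true qualification rates, but the model's initial estimate is slightly biased against group B; because outcome labels are observed only for approved cases, group B receives no new data and its estimate drifts further down with each retraining round.

import random

random.seed(0)
TRUE_RATE = {"A": 0.7, "B": 0.7}   # identical underlying qualification rates
belief = {"A": 0.70, "B": 0.60}    # model's estimate starts biased against B
THRESHOLD = 0.65                   # groups below this estimate get no approvals

for round_ in range(5):
    observed = {"A": [], "B": []}
    for group in ("A", "B"):
        if belief[group] >= THRESHOLD:             # biased decision rule
            for _ in range(1000):
                success = random.random() < TRUE_RATE[group]
                observed[group].append(success)    # labels exist only for approvals
    for group in ("A", "B"):
        if observed[group]:                        # retrain on observed outcomes
            belief[group] = sum(observed[group]) / len(observed[group])
        else:                                      # no data: stale estimate decays
            belief[group] *= 0.95                  # arbitrary stand-in for drift
    print(f"round {round_}: " + ", ".join(f"{g}={belief[g]:.3f}" for g in belief))

Even though the true rates are identical, the estimates diverge because data collection is itself shaped by the model's decisions: the selective-labels dynamic that drives the feedback loop the abstract describes.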

Description

Type of resource text
Date modified September 18, 2023
Publication date September 18, 2023

Creators/Contributors

Author Bohdal, Ondrej
Author Hospedales, Timothy
Author Torr, Philip H.S.
Author Barez, Fazl

Subjects

Subject AI fairness
Subject AI safety
Subject AI risks
Subject cascading risks
Subject biased AI models
Genre Text
Genre Article

Bibliographic information

Access conditions

Use and reproduction
User agrees that, where applicable, content will not be used to identify or to otherwise infringe the privacy or confidentiality rights of individuals. Content distributed via the Stanford Digital Repository may be subject to additional license and use restrictions applied by the depositor.
License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International license (CC BY-NC-ND 4.0).

Preferred citation

Preferred citation
Bohdal, O., Hospedales, T., Torr, P., and Barez, F. (2023). "Fairness in AI and Its Long-Term Implications on Society" in Intersections, Reinforcements, Cascades: Proceedings of the 2023 Stanford Existential Risks Conference. The Stanford Existential Risks Initiative. Stanford Digital Repository. Available at https://purl.stanford.edu/pj287ht2654. https://doi.org/10.25740/pj287ht2654.

Collection

Intersections, Reinforcements, Cascades: The Proceedings of the 3rd Annual Stanford Existential Risks Conference
