"Fairness in AI and Its Long-Term Implications on Society"
- Successful deployment of artificial intelligence (AI) in various settings has led to numerous positive outcomes for individuals and society. However, AI systems have also been shown to harm parts of the population through biased predictions. AI fairness focuses on mitigating such biases to ensure AI decision making is not discriminatory towards certain groups. We take a closer look at AI fairness and analyze how a lack of it can deepen biases over time and act as a social stressor. More specifically, we discuss how biased models can lead to more negative real-world outcomes for certain groups, outcomes that may then be amplified when new AI models are trained on increasingly biased data, resulting in a feedback loop. If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest. We examine current strategies for improving AI fairness, assess their limitations in terms of real-world deployment, and explore potential paths forward to ensure we reap AI’s benefits without causing society’s collapse.
- September 18, 2023
- Torr, Philip H.S.
- biased AI models
- Use and reproduction
- User agrees that, where applicable, content will not be used to identify or to otherwise infringe the privacy or confidentiality rights of individuals. Content distributed via the Stanford Digital Repository may be subject to additional license and use restrictions applied by the depositor.
- This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International license (CC BY-NC-ND 4.0).
- Preferred citation
- Bohdal, O., Hospedales, T., Torr, P., and Barez, F. (2023). "Fairness in AI and Its Long-Term Implications on Society" in Intersections, Reinforcements, Cascades: Proceedings of the 2023 Stanford Existential Risks Conference. The Stanford Existential Risks Initiative. Stanford Digital Repository. Available at https://purl.stanford.edu/pj287ht2654. https://doi.org/10.25740/pj287ht2654.
Intersections, Reinforcements, Cascades: The Proceedings of the 3rd Annual Stanford Existential Risks Conference