Artificial Intelligence & Malicious Steganography
Abstract/Contents
- Abstract
- Fears about Artificial Intelligence (AI) often evoke HAL refusing to "open the pod bay doors" in the film 2001: A Space Odyssey. AI is already used in libraries, usually to augment searching, but some threats are difficult to spot. For instance, "malicious steganography" is the practice of concealing messages in images, audio tracks, video clips, or text files to evade detection by security systems. When using AI, librarians need to worry about potential breaches of their very expensive data sets, as well as patron privacy.
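The abstract's definition of concealing messages in images can be illustrated with a minimal sketch of least-significant-bit (LSB) embedding, the classic textbook steganography technique (the article may discuss other methods). The function names `embed`/`extract` and the flat list of pixel values are illustrative assumptions, not from the article:

```python
def embed(pixels, message):
    """Hide message bytes in the least significant bit of each pixel value."""
    # Flatten the message into bits, most significant bit first per byte.
    bits = [(byte >> i) & 1 for byte in message.encode() for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for message")
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit  # overwrite only the lowest bit
    return stego


def extract(pixels, n_chars):
    """Recover n_chars bytes by reading back the pixels' least significant bits."""
    bits = [p & 1 for p in pixels[: n_chars * 8]]
    data = bytes(
        int("".join(map(str, bits[i : i + 8])), 2) for i in range(0, len(bits), 8)
    )
    return data.decode()
```

Because each pixel value changes by at most 1, the altered image is visually indistinguishable from the original, which is exactly why such payloads are hard for security systems to detect.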
Description
Type of resource | text |
---|---|
Date modified | August 10, 2021; December 5, 2022; March 15, 2023 |
Publication date | June 8, 2020; June 1, 2018 |
Creators/Contributors
Author | Smith, Felicia |
Subjects
Subject | Artificial Intelligence |
---|---|
Subject | Threats |
Subject | Libraries |
Subject | Steganography |
Genre | Text |
Genre | Article |
Bibliographic information
Access conditions
- Use and reproduction
- User agrees that, where applicable, content will not be used to identify or to otherwise infringe the privacy or confidentiality rights of individuals. Content distributed via the Stanford Digital Repository may be subject to additional license and use restrictions applied by the depositor.
- License
- This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International license (CC BY-NC 4.0).
Preferred citation
- Preferred citation
- Smith, Felicia A. (2018). Artificial Intelligence & Steganography. Stanford Digital Repository. Available at: https://purl.stanford.edu/wq122sz5135
Collection
Stanford Libraries staff presentations, publications, and research
View other items in this collection in SearchWorks
Contact information
- Contact
- felicias@stanford.edu