Dataset: CNN2h -- Video Search Using Image Queries
Abstract
- We present the CNN2h dataset for evaluating systems that search video using image queries. It contains 2 hours of video and 139 image queries with annotated ground truth, based on video frames extracted at 10 frames per second. The annotations comprise: i) 2,951 pairs of matching image queries and video frames, and ii) 21,412 pairs of non-matching image queries and video frames, verified to contain no visual similarity. Please see the "README" file for a description of the files included here.
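Since the ground truth is defined over frames sampled at 10 frames per second, converting between frame indices and video timestamps is simple arithmetic. A minimal sketch (the function names and zero-based indexing are illustrative assumptions, not part of the dataset's README):

```python
FPS = 10  # ground-truth frames are extracted at 10 frames per second

def frame_to_timestamp(frame_index: int, fps: int = FPS) -> float:
    """Return the video timestamp (in seconds) of a zero-based frame index."""
    return frame_index / fps

def timestamp_to_frame(seconds: float, fps: int = FPS) -> int:
    """Return the zero-based frame index nearest to a timestamp in seconds."""
    return round(seconds * fps)

# At 10 fps, the 2 hours of video yield 2 * 3600 * 10 = 72,000 sampled frames.
total_frames = 2 * 3600 * FPS
```

For example, `timestamp_to_frame(7200.0)` gives 72,000, the frame count at the two-hour mark.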
Description
- Type of resource: software, multimedia
- Date created: June 2014
Creators/Contributors
- Author: Araujo, Andre
- Author: Makar, Mina
- Author: Chandrasekhar, Vijay
- Author: Chen, David
- Author: Tsai, Sam
- Author: Chen, Huizhong
- Author: Angst, Roland
- Author: Girod, Bernd
Subjects
- Subject: video search
- Subject: image-based retrieval
- Subject: visual search
- Genre: Dataset
Bibliographic information
Access conditions
- Use and reproduction: User agrees that, where applicable, content will not be used to identify or to otherwise infringe the privacy or confidentiality rights of individuals. Content distributed via the Stanford Digital Repository may be subject to additional license and use restrictions applied by the depositor.
Preferred citation
- A. Araujo, M. Makar, V. Chandrasekhar, D. Chen, S. Tsai, H. Chen, R. Angst and B. Girod, "Efficient Video Search Using Image Queries," in Proc. ICIP, 2014. http://dx.doi.org/10.1109/ICIP.2014.7025623
Collection
Research Datasets for Image, Video, and Multimedia Systems Group at Stanford
Contact information
- Contact: afaraujo@stanford.edu