Image, Video and Multimedia Systems - Stanford University

Datasets

- Stanford Mobile Visual Search Dataset
- Stanford Streaming Mobile Augmented Reality Dataset
- San Francisco Landmark Dataset
- Names 100 Dataset
- Compact Descriptors for Visual Search Patches Dataset
- CNN 2-Hours Videos Dataset
- Google I/O Dataset

Stanford Mobile Visual Search Data Set


We propose the Stanford Mobile Visual Search data set. The data set contains camera-phone images of products, CDs, books, outdoor landmarks, business cards, text documents, museum paintings, and video clips. It has several key characteristics lacking in existing data sets: rigid objects, widely varying lighting conditions, perspective distortion, foreground and background clutter, realistic ground-truth reference data, and query data collected from heterogeneous low- and high-end camera phones. We hope that the data set will help push research forward in the field of mobile visual search. The data set is described in our 2011 paper at the ACM Multimedia Systems Conference.
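As a rough illustration of how a query and a reference image from the data set might be compared, the sketch below matches one camera-phone query against one reference image using SIFT features and a ratio test. The file names are placeholders, and this is only a minimal example, not the matching pipeline used in the paper.

```python
# Minimal sketch: match one query photo against one reference image with local
# features, as is typical in mobile visual search experiments.
# File names are hypothetical; point them at images from the downloaded data set.
import cv2

query = cv2.imread("query_cd_cover_iphone.jpg", cv2.IMREAD_GRAYSCALE)
reference = cv2.imread("reference_cd_cover.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()                        # requires an OpenCV build with SIFT
kp_q, desc_q = sift.detectAndCompute(query, None)
kp_r, desc_r = sift.detectAndCompute(reference, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
knn = matcher.knnMatch(desc_q, desc_r, k=2)     # two nearest neighbors per query descriptor

# Lowe's ratio test to keep only distinctive matches.
good = [pair[0] for pair in knn
        if len(pair) == 2 and pair[0].distance < 0.8 * pair[1].distance]
print(f"{len(good)} putative feature matches")
```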

Download Data Set

Citation: V. Chandrasekhar, D. Chen, S. Tsai, N.-M. Cheung, H. Chen, G. Takacs, Y. Reznik, R. Vedantham, R. Grzeszczuk, J. Bach, and B. Girod, "The Stanford mobile visual search dataset", ACM Multimedia Systems Conference (MMSys), February 2011.



[Example images: reference images shown alongside query images captured with a Motorola Droid, Nokia 5800, Apple iPhone, Palm Pre, Nokia E63, and Canon G11.]




Stanford Streaming Mobile Augmented Reality Data Set


We introduce the Stanford Streaming MAR dataset. The dataset contains 23 different objects of interest, divided into four categories: Books, CD covers, DVD covers, and Common Objects. We first record one video for each object in which the object remains static while the camera moves. These videos are recorded with a hand-held mobile phone and exhibit varying amounts of camera motion, glare, blur, zoom, rotation, and perspective change. Each video is 100 frames long, recorded at 30 fps at a resolution of 640 x 480. For each video, we provide a clean database image (no background clutter) of the corresponding object of interest.

We also provide 5 additional videos of moving objects recorded with a moving camera. These videos help in studying the effect of background clutter when there is relative motion between the object and the background. Finally, we record 4 videos that contain multiple objects from the dataset. Each video is 200 frames long and contains 3 objects of interest, which the camera captures one after the other.

We provide the ground-truth localization information for 14 videos, where we manually define a bounding quadrilateral around the object of interest in each video frame. This localization information is used in the calculation of the Jaccard index.
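A minimal sketch of the Jaccard index computation between two bounding quadrilaterals is shown below, using the shapely library for polygon intersection and union; the corner coordinates are made up purely for illustration.

```python
# Minimal sketch: Jaccard index (intersection over union) between a ground-truth
# bounding quadrilateral and a detected one, using shapely for polygon geometry.
# The corner coordinates below are illustrative, not taken from the dataset.
from shapely.geometry import Polygon

gt_quad  = Polygon([(120, 80), (410, 95), (400, 360), (115, 340)])   # ground truth
det_quad = Polygon([(130, 90), (420, 100), (405, 350), (125, 345)])  # detector output

intersection = gt_quad.intersection(det_quad).area
union = gt_quad.union(det_quad).area
jaccard = intersection / union if union > 0 else 0.0
print(f"Jaccard index: {jaccard:.3f}")
```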

1. Static single object:
1.a. Books: Automata Theory, Computer Architecture, OpenCV, Wang Book.
1.b. CD Covers: Barry White, Chris Brown, Janet Jackson, Rascal Flatts, Sheryl Crow.
1.c. DVD Covers: Finding Nemo, Monsters Inc, Mummy Returns, Private Ryan, Rush Hour, Shrek, Titanic, Toy Story.
1.d. Common Objects: Bleach, Glade, Oreo, Polish, Tide, Tuna.

2. Moving object, moving camera:
Barry White Moving, Chris Brown Moving, Titanic Moving, Titanic Moving - Second, Toy Story Moving.

3. Multiple objects:
3.a. Multiple Objects 1: Polish, Wang Book, Monsters Inc.
3.b. Multiple Objects 2: OpenCV, Barry White, Titanic.
3.c. Multiple Objects 3: Monsters Inc, Toy Story, Titanic.
3.d. Multiple Objects 4: Wang Book, Barry White, OpenCV.

Download Data Set

Citation: Mina Makar, Sam Tsai, Vijay Chandrasekhar, David Chen and Bernd Girod, "Interframe Coding of Canonical Patches for Low Bit-Rate Mobile Augmented Reality," Special Issue of the International Journal of Semantic Computing, vol. 7, no. 1, pp. 5-24, March 2013.







San Francisco Landmark Data Set for Mobile Landmark Recognition


We present the San Francisco Landmark Dataset, which contains a database of 1.7 million images of buildings in San Francisco with ground-truth labels, geotags, and calibration data, as well as a challenging query set of 803 cell-phone images taken with a variety of different camera phones. The data was originally acquired by vehicle-mounted cameras with wide-angle lenses capturing spherical panoramic images. For every visible building in each panorama, a set of overlapping perspective images is generated. More details about the dataset generation process and a set of recognition experiments on this dataset are presented in our 2011 paper at the IEEE Conference on Computer Vision and Pattern Recognition. We provide this dataset to facilitate further research in the important area of landmark recognition with mobile devices.
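The sketch below illustrates the kind of projection involved in rendering a perspective view from an equirectangular (spherical) panorama. The field of view, yaw, and file names are illustrative assumptions, not the parameters used to build the database.

```python
# Minimal sketch: render one pinhole perspective view from an equirectangular
# panorama. Parameters and file names are hypothetical.
import cv2
import numpy as np

def perspective_from_equirect(pano, out_w=640, out_h=480, hfov_deg=60.0, yaw_deg=0.0):
    H, W = pano.shape[:2]
    f = (out_w / 2.0) / np.tan(np.radians(hfov_deg) / 2.0)   # focal length in pixels

    # Camera rays for every output pixel (camera looks along +z, image y points down).
    u, v = np.meshgrid(np.arange(out_w), np.arange(out_h))
    x = (u - out_w / 2.0) / f
    y = (v - out_h / 2.0) / f
    z = np.ones_like(x)

    # Rotate the rays by the requested yaw about the vertical axis.
    yaw = np.radians(yaw_deg)
    x_r = x * np.cos(yaw) + z * np.sin(yaw)
    z_r = -x * np.sin(yaw) + z * np.cos(yaw)

    # Convert rays to longitude/latitude, then to panorama pixel coordinates.
    lon = np.arctan2(x_r, z_r)                       # [-pi, pi]
    lat = np.arctan2(-y, np.sqrt(x_r**2 + z_r**2))   # [-pi/2, pi/2]
    map_x = ((lon + np.pi) / (2 * np.pi) * W).astype(np.float32)
    map_y = ((np.pi / 2 - lat) / np.pi * H).astype(np.float32)

    return cv2.remap(pano, map_x, map_y, interpolation=cv2.INTER_LINEAR)

pano = cv2.imread("panorama_equirectangular.jpg")    # hypothetical file name
view = perspective_from_equirect(pano, yaw_deg=30.0)
cv2.imwrite("perspective_view.jpg", view)
```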

Download Data Set

Citation: D. Chen, G. Baatz, K. Koeser, S. Tsai, R. Vedantham, T. Pylvanainen, K. Roimela, X. Chen, J. Bach, M. Pollefeys, B. Girod, and R. Grzeszczuk, "City-scale landmark identification on mobile devices", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2011.







Names 100 Dataset


We present the Names100 dataset, which contains 80,000 unconstrained human face images, covering 100 popular names with 800 images per name. The dataset can be used to study the relation between a person's first name and his/her facial appearance, and to train name classifiers that may be used for practical applications such as gender and age recognition.

Download Data Set

Citation: H. Chen, A. Gallagher, B. Girod, "What's in a Name: First Names as Facial Attributes," IEEE Conference on Computer Vision and Pattern Recognition, 2013.




Compact Descriptors for Visual Search Patches Dataset


MPEG is currently developing a standard titled Compact Descriptors for Visual Search (CDVS) for descriptor extraction and compression. In this work, we develop comprehensive patch-level experiments for a direct comparison of low-bitrate descriptors for visual search. To evaluate different compression schemes, we propose a data set of matching pairs of image patches drawn from the MPEG-CDVS image-level data sets.
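As a rough illustration of a patch-level experiment, the sketch below estimates a descriptor's true-positive rate at a fixed false-positive rate from distances over matching and non-matching pairs. The descriptors here are random stand-ins, not features extracted from the actual patch data set.

```python
# Minimal sketch: patch-level evaluation of a descriptor over matching and
# non-matching pairs. Descriptors are synthetic stand-ins for illustration.
import numpy as np

rng = np.random.default_rng(0)
dim, n_pairs = 128, 1000

# Hypothetical descriptors: matching pairs are perturbed copies,
# non-matching pairs are independent.
a_match = rng.normal(size=(n_pairs, dim))
b_match = a_match + 0.3 * rng.normal(size=(n_pairs, dim))
a_non = rng.normal(size=(n_pairs, dim))
b_non = rng.normal(size=(n_pairs, dim))

d_match = np.linalg.norm(a_match - b_match, axis=1)   # distances for matching pairs
d_non = np.linalg.norm(a_non - b_non, axis=1)          # distances for non-matching pairs

# True-positive rate at the distance threshold that yields a 1% false-positive rate.
threshold = np.quantile(d_non, 0.01)
tpr = np.mean(d_match < threshold)
print(f"TPR at 1% FPR: {tpr:.3f}")
```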

Download Data Set

Citation: V. Chandrasekhar, G. Takacs, D. Chen, S. Tsai, M. Makar, and B. Girod, "Feature matching performance of compact descriptors for visual search", IEEE Data Compression Conference (DCC), March 2014.




CNN 2-Hours Videos Dataset


We present the CNN2h dataset, which can be used to evaluate systems that search videos using image queries. It contains 2 hours of video and 139 image queries with annotated ground truth, based on video frames extracted at 10 frames per second. The annotations include:
- 2,951 pairs of matching image queries and video frames
- 21,412 pairs of non-matching image queries and video frames, verified to contain no visual similarities
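A minimal sketch of extracting frames at roughly 10 frames per second with OpenCV, mirroring the frame rate used for the ground-truth annotations, is shown below; the video file name is hypothetical.

```python
# Minimal sketch: extract frames from a video at roughly 10 frames per second.
# The input file name is hypothetical.
import os
import cv2

os.makedirs("frames", exist_ok=True)
cap = cv2.VideoCapture("cnn_news_video.mp4")
native_fps = cap.get(cv2.CAP_PROP_FPS) or 30.0   # fall back if FPS is unreported
step = max(1, round(native_fps / 10.0))          # keep every `step`-th frame

index = saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if index % step == 0:
        cv2.imwrite(f"frames/frame_{saved:06d}.jpg", frame)
        saved += 1
    index += 1
cap.release()
print(f"Saved {saved} frames")
```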

Download Data Set

Citation: A. Araujo, M. Makar, V. Chandrasekhar, D. Chen, S. Tsai, H. Chen, R. Angst, and B. Girod, "Efficient video search using image queries", IEEE International Conference on Image Processing (ICIP), October 2014.




The Google I/O Dataset


The Google I/O Dataset contains slide and spoken-text data crawled from 209 presentations at the Google I/O conferences (2010-2012), together with 275 manually labeled ground-truth relevance judgments. The dataset is particularly suitable for studying information retrieval using multi-modal data.

Download Data Set

Citation: H. Chen, M. Cooper, D. Joshi, and B. Girod, "Multi-modal language models for lecture video retrieval", ACM International Conference on Multimedia (ACM MM), November 2014.


©2013-2014 IVMS, Stanford University