
Artstor Museum Image Data

Creators: ARTSTOR
Publication Date: 2025

Explore Artstor’s collections of high-quality images, curated from leading museums and archives around the world. Artstor’s diverse collections are rights-cleared for education and research, and include Open Access content as well as rare materials not available elsewhere. Artstor gives access to 865,914 items in 308 collections.

Instagram Influencer Marketing Dataset

Creators: Kim, Seungbae; Jiang, Jyun-Yu; Nakada, Masaki; Han, Jinyoung; Wang, Wei
Publication Date: 2020

This dataset contains 33,935 Instagram influencers classified into nine categories: beauty, family, fashion, fitness, food, interior, pet, travel, and other. The dataset is 262 GB in size and includes both metadata in JSON format and images in JPEG format. Three hundred posts were collected per influencer, for a total of 10,180,500 Instagram posts. The dataset includes two types of files: post metadata and image files. Post metadata files are in JSON format and contain the following information: caption, usertags, hashtags, timestamp, sponsorship, likes, comments, etc. Image files are in JPEG format; the dataset contains 12,933,406 image files, since a post can have more than one image. If a post has only one image, the JSON file and the corresponding image file share the same name. If a post has more than one image, however, the JSON file and the corresponding image files have different names. Therefore, a JSON-Image_mapping file is also provided that lists the image files corresponding to each post's metadata.
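A minimal sketch of how a post's image files might be resolved under this scheme, assuming single-image posts share their JSON file's base name and multi-image posts appear in the JSON-Image_mapping file (the file names, mapping structure, and `.jpeg` extension below are illustrative assumptions, not the dataset's actual schema):

```python
def resolve_images(post_name, mapping):
    """Return the image files for a post: a multi-image post is looked up
    in the JSON-Image mapping; a single-image post shares its JSON name."""
    if post_name in mapping:
        return mapping[post_name]        # multi-image post: listed explicitly
    return [post_name + ".jpeg"]         # single-image post: same base name

# Toy mapping standing in for the real JSON-Image_mapping file.
mapping = {"post_b": ["img_17.jpeg", "img_18.jpeg"]}
print(resolve_images("post_a", mapping))  # ['post_a.jpeg']
print(resolve_images("post_b", mapping))  # ['img_17.jpeg', 'img_18.jpeg']
```

Consult the dataset's documentation for the exact key names and file layout.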

If you use this dataset, please cite it accordingly. The data can be accessed via the website linked below.

“Multimodal Post Attentive Profiling for Influencer Marketing,” Seungbae Kim, Jyun-Yu Jiang, Masaki Nakada, Jinyoung Han and Wei Wang. In Proceedings of The Web Conference (WWW ’20), ACM, 2020.

ImageNet Large Scale Visual Recognition Challenge

Creators: Russakovsky, Olga; Deng, Jia; Su, Hao; Krause, Jonathan; Satheesh, Sanjeev; Ma, Sean; Huang, Zhiheng; Karpathy, Andrej; Khosla, Aditya; Bernstein, Michael; Berg, Alexander C.; Fei-Fei, Li
Publication Date: 2009

ImageNet is an image database organized according to the WordNet hierarchy (currently only the nouns), in which each node of the hierarchy is depicted by hundreds or thousands of images. The project has been instrumental in advancing computer vision and deep learning research. It contains data from 2012 until 2017. The dataset includes over 14 million images; the largest subset, the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), covers 1,281,167 training images, 50,000 validation images, and 100,000 test images. In total, the dataset has a size of 167 GB. The data is available free of charge to researchers for non-commercial use on the data provider’s website.

For access to the full ImageNet dataset and other commonly used subsets, please log in or request access on the data provider’s website. In doing so, you will need to agree to ImageNet’s terms of access. For this reason, no data preview can be provided here.

When reporting results of the challenges or using the datasets, please cite:

Olga Russakovsky*, Jia Deng*, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg and Li Fei-Fei. (* = equal contribution) ImageNet Large Scale Visual Recognition Challenge. IJCV, 2015.

File Descriptions

1) ILSVRC/ contains the image data and ground truth for the train and validation sets, and the image data for the test set.

  • The image annotations are saved in XML files in PASCAL VOC format. Users can parse the annotations using the PASCAL Development Toolkit.
  • Annotations are organized by synset (for example, “Persian cat”, “mountain bike”, or “hot dog”), each identified by a WordNet ID (wnid) such as n00141669. Each annotation file name corresponds directly to an image name: for example, n02123394/n02123394_28.xml contains the bounding boxes for the image n02123394_28.JPEG.
  • You can download all the bounding boxes of a particular synset from http://www.image-net.org/api/download/imagenet.bbox.synset?wnid=[wnid]
  • The training images are under the folders with the names of their synsets. The validation images are all in the same folder. The test images are also all in the same folder.
  • The ImageSet folder contains text files specifying the lists of images used for the main localization task.
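Since the annotations are PASCAL VOC-style XML, they can be read with the standard library rather than the PASCAL Development Toolkit. A sketch, using an abridged in-memory annotation (the real files contain additional fields such as image size):

```python
import xml.etree.ElementTree as ET

# Abridged PASCAL VOC-style annotation, as found under ILSVRC/.
xml_text = """
<annotation>
  <filename>n02123394_28</filename>
  <object>
    <name>n02123394</name>
    <bndbox><xmin>24</xmin><ymin>13</ymin><xmax>200</xmax><ymax>180</ymax></bndbox>
  </object>
</annotation>"""

root = ET.fromstring(xml_text)
boxes = []
for obj in root.iter("object"):  # one <object> per annotated box
    bb = obj.find("bndbox")
    coords = tuple(int(bb.findtext(k)) for k in ("xmin", "ymin", "xmax", "ymax"))
    boxes.append((obj.findtext("name"), coords))
print(boxes)  # [('n02123394', (24, 13, 200, 180))]
```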

2) LOC_sample_submission.csv shows the correct format of the submission file. It contains two columns:

  • ImageId: the id of the test image, for example ILSVRC2012_test_00000001
  • PredictionString: the prediction string is a space-delimited sequence of 5 integers per prediction. For example, 1000 240 170 260 240 means label 1000 with a bounding box of coordinates (x_min, y_min, x_max, y_max). We accept up to 5 predictions per image. For example, if you submit 862 42 24 170 186 862 292 28 430 198 862 168 24 292 190 862 299 238 443 374 862 160 195 294 357 862 3 214 135 356, which contains 6 bounding boxes, we will only take the first 5 into consideration.
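A sketch of splitting a PredictionString into (label, box) pairs, with the first-five truncation described above (function name and return shape are our own, for illustration):

```python
def parse_predictions(pred_string, max_preds=5):
    """Split a PredictionString into (label, (x_min, y_min, x_max, y_max))
    tuples, keeping at most max_preds predictions."""
    tokens = pred_string.split()
    preds = []
    for i in range(0, len(tokens), 5):          # 5 tokens per prediction
        label = tokens[i]                       # class label or synset id
        coords = tuple(map(int, tokens[i + 1:i + 5]))
        preds.append((label, coords))
    return preds[:max_preds]                    # only the first 5 count

s = "862 42 24 170 186 862 292 28 430 198"
print(parse_predictions(s))
# [('862', (42, 24, 170, 186)), ('862', (292, 28, 430, 198))]
```

Because the label is kept as a string, the same helper works for the synset-id labels in LOC_train_solution.csv and LOC_val_solution.csv.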

3) LOC_train_solution.csv and LOC_val_solution.csv: This information is already available in ILSVRC/, but it is provided here in CSV format to be consistent with LOC_sample_submission.csv. Each file contains two columns:

  • ImageId: the id of the train/val image, for example n02017213_7894 or ILSVRC2012_val_00048981
  • PredictionString: the prediction string is a space-delimited sequence of a synset id followed by 4 integers. For example, n01978287 240 170 260 240 means label n01978287 with a bounding box of coordinates (x_min, y_min, x_max, y_max). Repeated entries represent multiple boxes in the same image: n04447861 248 177 417 332 n04447861 171 156 251 175 n04447861 24 133 115 254

4) LOC_synset_mapping.txt: The mapping between the 1,000 synset ids and their descriptions. For example, line 1 reads n01440764 tench, Tinca tinca, meaning that class 1 has the synset id n01440764 and contains the fish tench.
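Since the class number is simply the line's position in the file, the mapping can be built in a few lines. A sketch over a toy two-line excerpt (when reading the real file, replace the list with its lines):

```python
# Each line of LOC_synset_mapping.txt is "<wnid> <description>";
# the 1-based line index is the class number.
lines = [
    "n01440764 tench, Tinca tinca",
    "n01443537 goldfish, Carassius auratus",
]  # toy excerpt of the real file

synset_to_class = {}
for class_idx, line in enumerate(lines, start=1):
    wnid, _, description = line.partition(" ")  # split at the first space
    synset_to_class[wnid] = (class_idx, description)

print(synset_to_class["n01440764"])  # (1, 'tench, Tinca tinca')
```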

Flickr30k

Creators: Young, Peter; Lai, Alice; Hodosh, Micah; Hockenmaier, Julia
Publication Date: 2014

The Flickr30k dataset consists of 31,783 images, each accompanied by five human-generated captions, adding up to 158,915 captions in total. The images predominantly depict people engaged in everyday activities and events, and the dataset serves as a benchmark for sentence-based image description tasks. It has been further enhanced by the Flickr30k Entities extension, which adds 244,000 coreference chains linking mentions of the same entities across the different captions for an image, and associates them with 276,000 manually annotated bounding boxes. This augmentation facilitates tasks such as phrase localization and grounded language understanding.
