Updated December 2021

Machine learning datasets

A list of machine learning datasets from across the web.


A multitask benchmarking framework comprising complementary data modalities at city scale, registered across different representations, and enriched with human- and machine-generated annotations. 27,745 high-resolution 360° images with human-curated annotations; 3D point clouds from aerial and street-level LiDAR and from Structure-from-Motion and Multi-view Stereo reconstructions, geo-anchored using high-precision, survey-grade ground control points. Full aerial image coverage at 7.5 cm/px resolution. Manually labeled 2D/3D object annotations for up to 39 semantic categories.
A dataset of building footprints to support social good applications. The dataset contains 516M building detections, across an area of 19.4M km2 (64% of the African continent).
Facebook AI and Matterport have collaborated on the release of the largest-ever 3D dataset of indoor spaces, made up of accurately scaled residential and commercial environments. The dataset consists of 3D meshes and textures of 1,000 Matterport spaces.
The Unsplash Dataset is created from the work of 250,000+ contributing photographers and billions of searches across thousands of applications, uses, and contexts. The Lite version has 25,000 images; the Full version has 3,000,000+ images.
Image
A large-scale dataset of 3D building models, containing 513K annotated mesh primitives grouped into 292K semantic part components across 2K building models.
A photorealistic synthetic dataset for holistic indoor scene understanding. 77,400 images of 461 indoor scenes with detailed per-pixel labels and corresponding ground truth geometry.
Image
An ImageNet replacement for self-supervised pretraining without humans. PASS contains 1.4 million distinct images.
A dataset of Amazon products with metadata, catalog images, and 3D models. 147,702 products and 398,212 unique catalog images in high resolution.
Image
Unlimited Road-scene Synthetic Annotation (URSA) Dataset, a synthetic dataset containing upwards of 1,000,000 images.
https://github.com/HDCVLab/EDFace-Celeb-1M
The Casual Conversations dataset is designed to help researchers evaluate their computer vision and audio models for accuracy across a diverse set of ages, genders, apparent skin tones and ambient lighting conditions. Casual Conversations is composed of over 45,000 videos (3,011 participants) and is intended to be used for assessing the performance of already trained models.
A large dataset aimed at teaching AI to code. It consists of some 14M code samples and about 500M lines of code in more than 55 different programming languages, from modern ones like C++, Java, Python, and Go to legacy languages like COBOL, Pascal, and FORTRAN.
The Mapillary Vistas Dataset is the most diverse publicly available dataset of manually annotated training data for semantic segmentation of street scenes. 25,000 images pixel-accurately labeled into 152 object categories, 100 of those instance-specific.
The podcast dataset contains about 100k podcasts, filtered to contain only episodes that the creator tags as being in English, and further filtered by a language filter applied to the creator-provided title and description.
With object trajectories and corresponding 3D maps for over 100,000 segments, each 20 seconds long and mined for interesting interactions, our new motion dataset contains more than 570 hours of unique data.
TextOCR provides ~1M high quality word annotations on TextVQA images allowing application of end-to-end reasoning on downstream tasks such as visual question answering or image captioning.
Contains spoken English commands for setting timers, setting alarms, unit conversions, and simple math. The dataset contains ~2,200 spoken audio commands from 95 speakers, representing 2.5 hours of continuous audio.
Question answering
CaseHOLD contains 53,000 multiple choice questions, each with a prompt from a judicial decision and multiple potential holdings, one of which is the correct holding that could be cited.
Contract Understanding Atticus Dataset (CUAD) v1 is a corpus of 13,000+ labels in 510 commercial legal contracts that have been manually labeled under the supervision of experienced lawyers to identify 41 types of legal clauses considered important in contract review in connection with a corporate transaction, including mergers & acquisitions, etc.
Image
WebFace260M is a new million-scale face benchmark, constructed to help the research community close the data gap with industry.
A billion-word corpus of Danish text, freely distributed with attribution.
Self-driving
The ONCE dataset is a large-scale autonomous driving dataset with 2D & 3D object annotations. Includes 1 million LiDAR frames and 7 million camera images.
Image
Adverse Conditions Dataset with Correspondences for training and testing semantic segmentation methods on adverse visual conditions. It comprises a large set of 4006 images which are evenly distributed between fog, nighttime, rain, and snow.
Image
A Dataset of Sky Images and their Irradiance values. The SkyCam dataset is a collection of sky images from a variety of locations with diverse topological characteristics (Swiss Jura, Plateau and Pre-Alps regions), from both single- and stereo-camera setups coupled with high-accuracy pyranometers. The dataset was collected at high frequency, with a data sample every 10 seconds.
Image
A dataset for automatic mapping of buildings, woodlands, water and roads from aerial images.
A dataset of “in the wild” portrait videos. The videos are diverse real-world samples in terms of the source generative model, resolution, compression, illumination, aspect ratio, frame rate, motion, pose, cosmetics, occlusion, content, and context. They originate from various sources such as news articles, forums, apps, and research presentations, totaling 142 videos, 32 minutes, and 17 GB.
Self-driving
A novel dataset covering seasonal and challenging perceptual conditions for autonomous driving.
This dataset contains 11,842,186 computer generated building footprints in all Canadian provinces and territories.
Medical
MedMNIST is a collection of 10 pre-processed open medical datasets. MedMNIST is standardized for classification tasks on lightweight 28×28 images and requires no background knowledge.
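As an illustration only (not the official MedMNIST code), a minimal sketch of the kind of lightweight classifier the 28×28 format targets, written in PyTorch and assuming images are already available as tensors; the class count below is a placeholder, since it varies per sub-dataset.

```python
# Illustrative sketch only (not the official MedMNIST code): a small CNN for
# single-channel 28x28 classification, assuming data is already loaded as
# tensors of shape [N, 1, 28, 28] with integer class labels.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SmallCNN(num_classes=9)        # placeholder: class count varies per sub-dataset
images = torch.rand(8, 1, 28, 28)      # dummy batch standing in for MedMNIST images
labels = torch.randint(0, 9, (8,))
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()
```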
Image
Cube++ is a novel dataset collected for computational color constancy. It has 4890 raw 18-megapixel images, each containing a SpyderCube color target in their scenes, manually labelled categories, and ground truth illumination chromaticities.
Image
Large-scale Person Re-ID Dataset. SYSU-30k contains 29,606,918 images.
Smithsonian Open Access, where you can download, share, and reuse millions of the Smithsonian’s images—right now, without asking. With new platforms and tools, you have easier access to more than 3 million 2D and 3D digital items.
Image
The Objectron dataset is a collection of short, object-centric video clips, which are accompanied by AR session metadata that includes camera poses, sparse point clouds and characterization of the planar surfaces in the surrounding environment. Includes 15,000 annotated videos and 4M annotated images.
Medical
MedICaT is a dataset of medical images, captions, subfigure-subcaption annotations, and inline textual references. Consists of: 217,060 figures from 131,410 open access papers; 7,507 subcaption and subfigure annotations for 2,069 compound figures; inline references for ~25K figures in the ROCO dataset.
CLUE: A Chinese Language Understanding Evaluation Benchmark. CLUE is an open-ended, community-driven project that brings together 9 tasks spanning several well-established single-sentence/sentence-pair classification tasks, as well as machine reading comprehension, all on original Chinese text.
Ruralscapes Dataset for Semantic Segmentation in UAV Videos. Ruralscapes is a dataset with 20 high quality (4K) videos portraying rural areas.
Image
Fashionpedia is a dataset which consists of two parts: (1) an ontology built by fashion experts containing 27 main apparel categories, 19 apparel parts, 294 fine-grained attributes and their relationships; (2) a dataset with 48k everyday and celebrity event fashion images annotated with segmentation masks and their associated per-mask fine-grained attributes, built upon the Fashionpedia ontology.
The Social Bias Inference Corpus (SBIC) contains 150k structured annotations of social media posts, covering over 34k implications about a thousand demographic groups.
Medical
COVID-19 severity score assessment project and database, containing 4,703 chest X-rays (CXR) of COVID-19 patients.
Medical
MaskedFace-Net is a dataset of human faces with a correctly or incorrectly worn mask (137,016 images) based on the dataset Flickr-Faces-HQ (FFHQ).
Image
A holistic dataset for movie understanding. 1.1K Movies, 60K trailers.
Image
ETH-XGaze, consisting of over one million high-resolution images of varying gaze under extreme head poses.
Image
The largest product recognition dataset, containing 10,000 products frequently bought by online customers on JD.com.
Image
HAA500, a manually annotated human-centric atomic action dataset for action recognition on 500 classes with over 591k labeled frames.
The dataset contains over 16.5k (16,557) images with full pixel-level segmentation labels.
Image
Human-centric Video Analysis in Complex Events. The HiEve dataset includes the largest number of poses to date (>1M), the largest number of complex-event action labels (>56k), and one of the largest numbers of long-term trajectories (average trajectory length >480).
Image
AViD is a large-scale video dataset with 467k videos and 887 action classes. The collected videos have a creative-commons license.
GoEmotions, the largest manually annotated dataset of 58k English Reddit comments, labeled for 27 emotion categories or Neutral.
Question answering
DoQA is a dataset for accessing Domain Specific FAQs via conversational QA that contains 2,437 information-seeking question/answer dialogues (10,917 questions in total) on three different domains: cooking, travel and movies.
Medical
BIMCV-COVID19+: a large annotated dataset of RX and CT images of COVID19 patients. This first iteration of the database includes 1380 CX, 885 DX and 163 CT studies.
Image
MSeg: A Composite Dataset for Multi-domain Semantic Segmentation. More than 220,000 object masks in more than 80,000 images.
Image
Violin (VIdeO-and-Language INference), consists of 95,322 video-hypothesis pairs from 15,887 video clips, spanning over 582 hours of video (YouTube and TV shows).
Question answering
ClarQ: A large-scale and diverse dataset for Clarification Question Generation. Consists of ~2M examples distributed across 173 domains of stackexchange.
Image
KeypointNet is a large-scale and diverse 3D keypoint dataset that contains 83,231 keypoints and 8,329 3D models from 16 object categories, by leveraging numerous human annotations, based on ShapeNet models.
Image
TAO is a federated dataset for Tracking Any Object, containing 2,907 high resolution videos, captured in diverse environments, which are half a minute long on average.
A large-scale video dataset, featuring clips from movies with detailed captions. Over 3,000 diverse movies from a variety of genres, countries and decades.
Self-driving
DDAD (Dense Depth for Autonomous Driving) is a new autonomous driving benchmark from TRI (Toyota Research Institute) for long range (up to 250m) and dense depth estimation in challenging and diverse urban conditions. It contains monocular videos and accurate ground-truth depth (across a full 360 degree field of view) generated from high-density LiDARs mounted on a fleet of self-driving cars operating in a cross-continental setting.
Self-driving
PandaSet combines Hesai’s best-in-class LiDAR sensors with Scale AI’s high-quality data annotation. PandaSet features data collected using a forward-facing LiDAR with image-like resolution (PandarGT) as well as a mechanical spinning LiDAR (Pandar64). The collected data was annotated with a combination of cuboid and segmentation annotation (Scale 3D Sensor Fusion Segmentation). 48,000 camera images and 16,000 LiDAR sweeps.
Image
Dataset for text in driving videos. The dataset is 20 times larger than the existing largest dataset for text in videos. Our dataset comprises 1000 video clips of driving without any bias towards text and with annotations for text bounding boxes and transcriptions in every frame. Each video is from the BDD100K dataset.
Audio
VGG-Sound is an audio-visual correspondent dataset consisting of short clips of audio sounds, extracted from videos uploaded to YouTube. 200,000+ videos, 550+ hours, 310+ classes.
We introduce RISE, the first large-scale video dataset for Recognizing Industrial Smoke Emissions. Our dataset contains 12,567 clips with 19 distinct views from cameras on three sites that monitored three different industrial facilities.
NLP
A dataset of ~4,000 TLDRs written about AI research papers hosted on the OpenReview publishing platform. SciTLDR includes at least two high-quality TLDRs for each paper.
Image
Yoga-82: A New Dataset for Fine-grained Classification of Human Poses. A dataset for yoga pose classification with 3 level hierarchy based on body pose. It is constructed from web images and consists of 82 yoga poses.
Question answering
AmbigQA, a new open-domain question answering task which involves predicting a set of question-answer pairs, where every plausible answer is paired with a disambiguated rewrite of the original question. A dataset covering 14,042 questions from NQ-open.
A new challenge set for multimodal classification, focusing on detecting hate speech in multimodal memes.
Smarthome has been recorded in an apartment equipped with 7 Kinect v1 cameras. It contains 31 daily living activities and 18 subjects. The videos were clipped per activity, resulting in a total of 16,115 video samples.
Question answering
Dataset is built upon the TV drama "Another Miss Oh" and it contains 16,191 QA pairs from 23,928 various length video clips, with each QA pair belonging to one of four difficulty levels. We provide 217,308 annotated images with rich character-centered annotations.
Mapillary Street-Level Sequences (MSLS) is the largest, most diverse dataset for place recognition, containing 1.6 million images in a large number of short sequences.
Medical
The COVID-CT-Dataset has 275 CT images containing clinical findings of COVID-19.
Medical
A database of COVID-19 cases with chest X-ray or CT images.
Medical
A dataset with 16,756 chest radiography images across 13,645 patient cases. The current COVIDx dataset is constructed from other open source chest radiography datasets.
Open Images V6 expands the annotation of the Open Images dataset with a large set of new visual relationships, human action annotations, and image-level labels. This release also adds localized narratives, a completely new form of multimodal annotations that consist of synchronized voice, text, and mouse traces over the objects being described. In Open Images V6, these localized narratives are available for 500k of its images. It also includes localized narratives annotations for the full 123k images of the COCO dataset.
A challenging multi-agent seasonal dataset collected by a fleet of Ford autonomous vehicles at different days and times during 2017-18. Each log in the dataset is time-stamped and contains raw data from all the sensors, calibration values, pose trajectory, ground truth pose, and 3D maps.
Image
P-DESTRE is a multi-session dataset of videos of pedestrians in outdoor public environments, fully annotated at the frame level.
A Multi-view Multi-source Benchmark for Drone-based Geo-localization, annotating 1,652 buildings at 72 universities around the world.
Question answering
KnowIT VQA is a video dataset with 24,282 human-generated question-answer pairs about The Big Bang Theory. The dataset combines visual, textual and temporal coherence reasoning together with knowledge-based questions, which require knowledge gained from watching the series to be answered.
Image
PANDA is the first gigaPixel-level humAN-centric viDeo dAtaset, for large-scale, long-term, and multi-object visual analysis. The scenes may contain 4k head counts with over 100× scale variation. PANDA provides enriched and hierarchical ground-truth annotations, including 15,974.6k bounding boxes, 111.8k fine-grained attribute labels, 12.7k trajectories, 2.2k groups and 2.9k interactions.
Image
SVIRO is a Synthetic dataset for Vehicle Interior Rear seat Occupancy detection and classification. The dataset consists of 25,000 sceneries across ten different vehicles, and we provide several simulated sensor inputs and ground truth data.
An update to the popular All the News dataset published in 2017. This dataset contains 2.7 million articles from 26 different publications from January 2016 to April 1, 2020.
Image
A novel in-the-wild stereo image dataset, comprising 49,368 image pairs contributed by users of the Holopix™ mobile social platform.
Image
MoVi is the first human motion dataset to contain synchronized pose, body meshes and video recordings. Dataset contains 9 hours of motion capture data, 17 hours of video data from 4 different points of view (including one hand-held camera), and 6.6 hours of IMU data.
Image
A large-scale unconstrained crowd counting dataset with 4,372 images and 1.51 million annotations. In comparison to existing datasets, the proposed dataset is collected under a variety of diverse scenarios and environmental conditions.
Question answering
Break is a question understanding dataset, aimed at training models to reason over complex questions. It features 83,978 natural language questions, annotated with a new meaning representation, Question Decomposition Meaning Representation (QDMR). Each example has the natural question along with its QDMR representation.
The first dataset for computer vision research on dressed humans with a specific geometry representation for the clothes. It contains ~2 million images of 40 male and 40 female subjects performing 70 actions.
Image
The AU-AIR dataset is the first multi-modal UAV dataset for object detection. It bridges computer vision and robotics for UAVs by providing multi-modal data from different on-board sensors, and pushes forward the development of computer vision and robotic algorithms targeted at autonomous aerial surveillance. >2 hours of raw video, 32,823 labelled frames, 132,034 object instances.
Open-source dataset for autonomous driving in wintry weather. The CADC dataset aims to promote research to improve self-driving in adverse weather conditions. This is the first public dataset to focus on real world driving data in snowy weather conditions. It features: 56,000 camera images, 7,000 LiDAR sweeps, 75 scenes of 50-100 frames each.
NLP
A billion-scale bitext data set for training translation models. CCMatrix is the largest data set of high-quality, web-based bitexts for training translation models with more than 4.5 billion parallel sentences in 576 language pairs pulled from snapshots of the CommonCrawl public data set.
Image
A collection of high resolution synthetic overhead imagery for building segmentation. Synthinel-1 consists of 2,108 synthetic images generated in nine distinct building styles within a simulated city. These images are paired with "ground truth" annotations that segment each of the buildings. Synthinel also has a subset dataset called Synth-1, which contains 1,640 images spread across six styles.
Question answering
TyDi QA is a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs. The languages of TyDi QA are diverse with regard to their typology -- the set of linguistic features that each language expresses -- such that we expect models performing well on this set to generalize across a large number of the languages in the world. It contains language phenomena that would not be found in English-only corpora.
Agriculture-Vision: a large-scale aerial farmland image dataset for semantic segmentation of agricultural patterns. We collected 94,986 high-quality aerial images from 3,432 farmlands across the US, where each image consists of RGB and Near-infrared (NIR) channels with resolution as high as 10 cm per pixel.
NLP
A Reading Comprehension Dataset with Questions Requiring Coreferential Reasoning. A crowdsourced dataset containing more than 24K span-selection questions that require resolving coreference among entities in about 4.7K English paragraphs from Wikipedia.
The inD dataset is a new dataset of naturalistic vehicle trajectories recorded at German intersections. Using a drone, typical limitations of established traffic data collection methods like occlusions are overcome. Traffic was recorded at four different locations. The trajectory for each road user and its type is extracted.
Generated human image dataset. We provide our generated images and make a large-scale synthetic dataset called DG-Market. This dataset is generated by our DG-Net and consists of 128,307 images (613MB), about 10 times larger than the training set of original Market-1501.
Image
ImageMonkey is a free, public open source dataset. ImageMonkey provides a platform where users can drop their photos, tag them with a label, and put them into public domain. Contains over 100,000 images.
Image
A new dataset for natural language based fashion image retrieval. Unlike previous fashion datasets, we provide natural language annotations to facilitate the training of interactive image retrieval systems, as well as the commonly used attribute based labels.
Question answering
TVQA is a large-scale video QA dataset based on 6 popular TV shows (Friends, The Big Bang Theory, How I Met Your Mother, House M.D., Grey's Anatomy, Castle). It consists of 152.5K QA pairs from 21.8K video clips, spanning over 460 hours of video. TVQA+ contains 310.8k bounding boxes, linking depicted objects to visual concepts in questions and answers.
Self-driving
We leverage a simulated driving environment to create a dataset for anomaly segmentation, which we call StreetHazards. It contains 5,125 training images and 1,500 test images containing 250 anomaly types.
Fallen People Data Set (FPDS), a novel benchmark for detecting fallen people lying on the floor. It consists of 6982 images, with a total of 5023 falls and 2275 non falls corresponding to people in conventional situations.
Question answering
QASC is a question-answering dataset with a focus on sentence composition. It consists of 9,980 8-way multiple-choice questions about grade school science (8,134 train, 926 dev, 920 test), and comes with a corpus of 17M sentences.
Image
ObjectNet is a large real-world test set for object recognition with controls, where object backgrounds, rotations, and imaging viewpoints are random. Collected to intentionally show objects from new viewpoints on new backgrounds. A 50,000-image test set, same size as ImageNet's, with controls for rotation, background, and viewpoint. 313 object classes, 113 of which overlap with ImageNet.
Image
JRDB is the largest benchmark dataset for 2D-3D person tracking, including: over 60K frames (67 minutes) of sensor data captured from 5 stereo cameras and two LiDAR sensors; 54 sequences from different locations, during day and night time, indoors and outdoors in a university campus environment; and around 2 million high-quality 2D bounding box annotations on 360° cylindrical video streams generated from the 5 stereo cameras.
Image
A dataset for assessing building damage from satellite imagery. With over 850,000 building polygons from six different types of natural disaster around the world, covering a total area of over 45,000 square kilometers, the xBD dataset is one of the largest and highest quality public datasets of annotated high-resolution satellite imagery.
NLP
The Benchmark of Linguistic Minimal Pairs. BLiMP is a challenge set for evaluating what language models (LMs) know about major grammatical phenomena in English. BLiMP consists of 67 sub-datasets, each containing 1000 minimal pairs isolating specific contrasts in syntax, morphology, or semantics. The data is automatically generated according to expert-crafted grammars.
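In practice, BLiMP is usually scored by checking whether a language model assigns higher probability to the acceptable sentence of each minimal pair. Below is a rough sketch of that comparison using a generic Hugging Face causal LM; the model name and the example pair are illustrative assumptions, not taken from the BLiMP files.

```python
# Rough sketch: compare LM log-probabilities of the two sentences in a minimal
# pair. The model name and the example pair below are illustrative; BLiMP's
# own files provide the actual (acceptable, unacceptable) sentence pairs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def sentence_logprob(sentence: str) -> float:
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    # out.loss is the mean negative log-likelihood over predicted tokens, so
    # multiply by the number of predicted positions to get a total log-prob.
    n_predicted = enc["input_ids"].shape[1] - 1
    return -out.loss.item() * n_predicted

acceptable = "These casseroles disgust Kayla."      # illustrative minimal pair
unacceptable = "These casseroles disgusts Kayla."
print(sentence_logprob(acceptable) > sentence_logprob(unacceptable))  # ideally True
```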
Image
A Large-Scale Logo Dataset for Scalable Logo Classification. Our resulting logo dataset contains 167,140 images with 10 root categories and 2,341 categories.
Self-driving
SemanticKITTI is based on the KITTI Vision Benchmark and we provide semantic annotation for all sequences of the Odometry Benchmark. The dataset contains 28 classes including classes distinguishing non-moving and moving objects.
Image
This project introduces a novel video dataset, named HACS (Human Action Clips and Segments). It consists of two kinds of manual annotations. HACS Clips contains 1.55M 2-second clip annotations; HACS Segments has complete action segments (from action start to end) on 50K videos. The large-scale dataset is effective for pretraining action recognition and localization models, and also serves as a new benchmark for temporal action localization.
Self-driving
A radar-centric automotive dataset based on radar, lidar and camera data for the purpose of 3D object detection.
Image
SEN12MS is a dataset consisting of 180,748 corresponding image triplets containing Sentinel-1 dual-pol SAR data, Sentinel-2 multi-spectral imagery, and MODIS-derived land cover maps.
Image
The VisDrone2019 dataset is collected by the AISKYEYE team at the Lab of Machine Learning and Data Mining, Tianjin University, China. The benchmark dataset consists of 288 video clips formed by 261,908 frames and 10,209 static images, captured by various drone-mounted cameras, covering a wide range of aspects including location (taken from 14 different cities separated by thousands of kilometers in China), environment (urban and country), objects (pedestrians, vehicles, bicycles, etc.), and density (sparse and crowded scenes).
A list of datasets for skin image analysis, from the 'Visual Diagnosis of Dermatological Disorders: Human and Machine Performance' paper.
NLP
OPIEC is an Open Information Extraction (OIE) corpus, constructed from the entire English Wikipedia. It contains more than 341M triples. Each triple from the corpus is composed of rich meta-data: each token from the subj/obj/rel along with NLP annotations (POS tag, NER tag, ...), the provenance sentence (along with its dependency parse and sentence order relative to the article), original (golden) links contained in the Wikipedia articles, space/time, etc.
Self-driving
The dataset features 2D semantic segmentation, 3D point clouds, 3D bounding boxes, and vehicle bus data. Dataset includes more than 40,000 frames with semantic segmentation image and point cloud labels, of which more than 12,000 frames also have annotations for 3D bounding boxes. In addition, we provide unlabelled sensor data (approx. 390,000 frames) for sequences with several loops, recorded in three cities. A2D2 is around 2.3 TB in total.
Image
The BigEarthNet is a new large-scale Sentinel-2 benchmark archive, consisting of 590,326 Sentinel-2 image patches. To construct the BigEarthNet, 125 Sentinel-2 tiles acquired between June 2017 and May 2018 over the 10 countries (Austria, Belgium, Finland, Ireland, Kosovo, Lithuania, Luxembourg, Portugal, Serbia, Switzerland) of Europe were initially selected. All the tiles were atmospherically corrected by the Sentinel-2 Level 2A product generation and formatting tool (sen2cor). Then, they were divided into 590,326 non-overlapping image patches. Each image patch was annotated by the multiple land-cover classes (i.e., multi-labels) that were provided from the CORINE Land Cover database of the year 2018.
Facebook, Microsoft, Amazon Web Services, and the Partnership on AI have created the Deepfake Detection Challenge to encourage research into deepfake detection. Dataset consists of around 5000 videos, both original and manipulated. To build the dataset, the researchers crowdsourced videos from people while "ensuring a variability in gender, skin tone and age".
Image
The WiderPerson dataset is a pedestrian detection benchmark dataset in the wild, of which images are selected from a wide range of scenarios, no longer limited to the traffic scenario. We choose 13,382 images and label about 400K annotations with various kinds of occlusions. We randomly select 8000/1000/4382 images as training, validation and testing subsets.
Image
3D60 is a collective dataset generated in the context of various 360 vision research works. It comprises multi-modal (i.e. color, depth and normal) omnidirectional stereo renders (i.e. horizontal and vertical) of scenes from realistic and synthetic large-scale 3D datasets (Matterport3D, Stanford2D3D, SunCG). Contains 224,406 spherical panoramas.
Question answering
CommonsenseQA is a new multiple-choice question answering dataset that requires different types of commonsense knowledge to predict the correct answers. It contains 12,102 questions with one correct answer and four distractor answers. The dataset is provided in two major training/validation/testing set splits: "Random split", which is the main evaluation split, and "Question token split".
The Oxford Radar RobotCar Dataset is a radar extension to The Oxford RobotCar Dataset. We provide data from a Navtech CTS350-X Millimetre-Wave FMCW radar and Dual Velodyne HDL-32E LIDARs with optimised ground truth radar odometry for 280 km of driving around Oxford, UK (in addition to all sensors in the original Oxford RobotCar Dataset).
The Total-Text consists of 1,555 images with three different text orientations: Horizontal, Multi-Oriented, and Curved, making it one of a kind.
NLP
ArT is a combination of Total-Text, SCUT-CTW1500 and Baidu Curved Scene Text, which were collected with the motive of introducing the arbitrary-shaped text problem to the scene text community. There is a total of 10,166 images in the ArT dataset. The ArT dataset was collected with text shape diversity in mind, hence all existing text shapes (i.e. horizontal, multi-oriented, and curved) are well represented, which makes it a unique dataset.
Image
The DeepFake Forensics (Celeb-DF) dataset contains real and DeepFake synthesized videos with visual quality on par with those circulated online. The Celeb-DF dataset includes 408 original videos collected from YouTube with subjects of different ages, ethnic groups and genders, and 795 DeepFake videos synthesized from these real videos.
The Exclusively Dark (ExDARK) dataset is a collection of 7,363 low-light images captured in conditions ranging from very low-light environments to twilight (i.e., 10 different conditions), with 12 object classes (similar to PASCAL VOC) annotated at both the image class level and with local object bounding boxes.
Schema-Guided Dialogue (SGD) dataset, containing over 16k multi-domain conversations spanning 16 domains. Our dataset exceeds the existing task-oriented dialogue corpora in scale, while also highlighting the challenges associated with building large-scale virtual assistants. It provides a challenging testbed for a number of tasks including language understanding, slot filling, dialogue state tracking and response generation.
Image
The Smartphone Image Denoising Dataset (SIDD) consists of ~30,000 noisy images from 10 scenes under different lighting conditions, captured using five representative smartphone cameras, along with their generated ground truth images.
Self-driving
A new dataset recorded in Brno, Czech Republic. It offers data from four WUXGA cameras, two 3D LiDARs, inertial measurement unit, infrared camera and especially differential RTK GNSS receiver with centimetre accuracy which, to the best knowledge of the authors, is not available from any other public dataset so far. In addition, all the data are precisely timestamped with sub-millisecond precision to allow wider range of applications. At the time of publishing of the paper, it contains recordings of more than 350 km of rides in varying environments.
Image
Dataset of Human Eye Fixation over Crowd Videos. CrowdFix includes 434 videos with diverse crowd scenes, containing a total of 37,493 frames and 1,249 seconds. The diverse content refers to different crowd activities under three distinct categories - Sparse, Dense Free Flowing and Dense Congested. All videos are at 720p resolution and 30 Hz frame rate.
Self-driving
The INTERACTION dataset contains naturalistic motions of various traffic participants in a variety of highly interactive driving scenarios. Using drones and traffic cameras, trajectories were captured in several countries, including the US, Germany, and China.
DIODE (Dense Indoor and Outdoor DEpth) is a dataset that contains diverse high-resolution color images with accurate, dense, wide-range depth measurements. It is the first public dataset to include RGBD images of indoor and outdoor scenes obtained with one sensor suite.
100,000 Faces Generated by AI. We have built an original machine learning dataset, and used StyleGAN (an amazing resource by NVIDIA) to construct a realistic set of 100,000 faces. Our dataset has been built by taking 29,000+ photos of 69 different models over the last 2 years in our studio.
Image
Objects365 is a brand new dataset, designed to spur object detection research with a focus on diverse objects in the wild: 365 categories, 600K images, 10 million bounding boxes.
FaceForensics++ is a forensics dataset consisting of 1000 original video sequences that have been manipulated with four automated face manipulation methods: Deepfakes, Face2Face, FaceSwap and NeuralTextures. The data has been sourced from 977 YouTube videos and all videos contain a trackable, mostly frontal face without occlusions, which enables automated tampering methods to generate realistic forgeries. As we provide binary masks, the data can be used for image and video classification as well as segmentation. In addition, we provide 1000 Deepfakes models to generate and augment new data.
We introduce a large-scale dataset called TabFact(website: https://tabfact.github.io/), which consists of 117,854 manually annotated statements with regard to 16,573 Wikipedia tables, their relations are classified as ENTAILED and REFUTED.
Image
CURE-TSD: Challenging Unreal and Real Environments for Traffic Sign Detection. The video sequences in the CURE-TSD dataset are grouped into two classes: real data and unreal data. Real data correspond to processed versions of sequences acquired from real world. Unreal data corresponds to synthesized sequences generated in a virtual environment. There are 49 real sequences and 49 unreal sequences that do not include any specific challenge. We have 34 training videos and 15 test videos in both real and unreal sequences that are challenge-free. There are 300 frames in each video sequence. There are 49 challenge-free real video sequences processed with 12 different types of effects and 5 different challenge levels. Moreover, there are 49 synthesized video sequences processed with 11 different types of effects and 5 different challenge levels. In total, there are 5,733 video sequences, which include around 1.72 million frames.
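The quoted totals follow directly from the counts above; a quick arithmetic check:

```python
# Quick check of the sequence/frame totals quoted above.
challenge_free = 49 + 49                    # real + unreal challenge-free sequences
real_challenged = 49 * 12 * 5               # 49 real sequences x 12 effects x 5 levels
unreal_challenged = 49 * 11 * 5             # 49 unreal sequences x 11 effects x 5 levels
total_sequences = challenge_free + real_challenged + unreal_challenged
print(total_sequences)                      # 5733
print(total_sequences * 300)                # 1,719,900 frames (~1.72 million)
```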
The Urban Modelling Group at University College Dublin (UCD) scanned a major area of Dublin city centre (around 5.6 km², including partially covered areas) with a helicopter-mounted ALS device in 2015. The core focus area, however, was around 2 km², which contains the densest LiDAR point cloud and imagery. The flight altitude was mostly around 300 m and the total survey was performed in 41 flight path strips. The dataset is made up of over 260 million laser scanning points labelled into 100,000 objects.
Self-driving
A*3D dataset is a step forward to make autonomous driving safer for pedestrians and the public in the real world. 230K human-labeled 3D object annotations in 39,179 LiDAR point cloud frames and corresponding frontal-facing RGB images. Captured at different times (day, night) and in different weather conditions (sun, cloud, rain).
A dataset consisting of 502 dialogs with 12,000 annotated utterances between a user and an assistant discussing movie preferences in natural language. It was collected using a Wizard-of-Oz methodology between two paid crowd-workers, where one worker plays the role of an 'assistant', while the other plays the role of a 'user'.
QMUL-OpenLogo contains 27,083 images from 352 logo classes, built by aggregating and refining 7 existing datasets and establishing an open logo detection evaluation protocol.
The dataset consists of 13,215 task-based dialogs, including 5,507 spoken and 7,708 written dialogs created with two distinct procedures. Each conversation falls into one of six domains: ordering pizza, creating auto repair appointments, setting up ride service, ordering movie tickets, ordering coffee drinks and making restaurant reservations.
Self-driving
The Waymo Open Dataset is comprised of high resolution sensor data collected by Waymo self-driving cars in a wide variety of conditions. We are releasing this dataset publicly to aid the research community in making advancements in machine perception and self-driving technology. The Waymo Open Dataset currently contains lidar and camera data from 1,000 segments of 20s each, collected at 10Hz (200,000 frames) in diverse geographies and conditions; labels for 4 object classes (Vehicles, Pedestrians, Cyclists, Signs); 12M 3D bounding box labels with tracking IDs on lidar data; 1.2M 2D bounding box labels with tracking IDs on camera data; and more.
Self-driving
A comprehensive, large-scale dataset featuring the raw sensor camera and LiDAR inputs as perceived by a fleet of multiple, high-end, autonomous vehicles in a bounded geographic area. The dataset includes high quality, human-labelled 3D bounding boxes of traffic agents and an underlying HD spatial semantic map. It contains over 55,000 human-labeled 3D annotated frames; data from 7 cameras and up to 3 lidars; a drivable surface map; and an underlying HD spatial semantic map. The semantic map provides context to reason about the presence and motion of the agents in the scenes. The provided map has over 4,000 lane segments (2,000 road segment lanes and about 2,000 junction lanes), 197 pedestrian crosswalks, 60 stop signs, 54 parking zones, 8 speed bumps, and 11 speed humps.
NLP
Open WebText – an open source effort to reproduce OpenAI’s WebText dataset. This distribution was created by Aaron Gokaslan and Vanya Cohen of Brown University. The dataset was created by extracting all Reddit post URLs from the Reddit submissions dataset. These links were deduplicated, filtered to exclude non-HTML content, and then shuffled randomly. The links were then distributed to several machines in parallel for download, and all web pages were extracted using the newspaper python package. Documents were hashed into sets of 5-grams and all documents that had a similarity threshold of greater than 0.5 were removed. The remaining documents were tokenized, and documents with fewer than 128 tokens were removed. This left 38GB of text data (40GB using SI units) from 8,013,769 documents.
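The 5-gram deduplication step described above can be sketched roughly as follows; this is a simplified illustration, and the exact similarity measure and the parallel/hashing machinery of the original pipeline are not reproduced here.

```python
# Simplified illustration of the near-duplicate filtering described above:
# represent each document as a set of word 5-grams and drop any document whose
# 5-gram similarity to an already-kept document exceeds 0.5. The exact
# similarity measure used by Open WebText is assumed to be set overlap here.

def five_gram_set(text):
    tokens = text.split()
    return {tuple(tokens[i:i + 5]) for i in range(len(tokens) - 4)}

def similarity(a, b):
    if not a or not b:
        return 0.0
    return len(a & b) / min(len(a), len(b))

def dedupe(documents):
    kept, kept_grams = [], []
    for doc in documents:
        grams = five_gram_set(doc)
        if all(similarity(grams, g) <= 0.5 for g in kept_grams):
            kept.append(doc)
            kept_grams.append(grams)
    return kept

# The original pipeline also removed documents with fewer than 128 tokens, e.g.:
# survivors = [d for d in dedupe(docs) if len(d.split()) >= 128]
```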
Image
LVIS is a new dataset for long tail object instance segmentation. 1000+ Categories: found by data-driven object discovery in 164k images. More than 2.2 million high quality instance segmentation masks.
Question answering
CODAH is an adversarially-constructed evaluation dataset with 2.8k questions for testing common sense. CODAH forms a challenging extension to the SWAG dataset, which tests commonsense knowledge using sentence-completion questions that describe situations observed in video.
Taco is an open image dataset of waste in the wild. It contains photos of litter taken under diverse environments, from tropical beaches to London streets. These images are manually labeled and segmented according to a hierarchical taxonomy to train and evaluate object detection algorithms.
A diverse street-level imagery dataset with bounding box annotations for detecting and classifying traffic signs around the world. 100,000 high-resolution images from all over the world with bounding box annotations of over 300 classes of traffic signs. The fully annotated set of the Mapillary Traffic Sign Dataset (MTSD) includes a total of 52,453 images with 257,543 traffic sign bounding boxes. The additional, partially annotated dataset contains 47,547 images with more than 80,000 signs that are automatically labeled with correspondence information from 3D reconstruction.
Self-driving
Argoverse is a research collection with three distinct types of data. The first is a dataset with sensor data from 113 scenes observed by our fleet, with 3D tracking annotations on all objects. The second is a dataset of 300,000-plus scenarios observed by our fleet, wherein each scenario contains motion trajectories of all observed objects. The third is a set of HD maps of several neighborhoods in Pittsburgh and Miami, to add rich context for all of the data mentioned above.
Question answering
The dataset contains rigorously annotated and validated videos, questions and answers, as well as annotations for the complexity level of each question and answer. Social-IQ brings novel challenges to the field of artificial intelligence which sparks future research in social intelligence modeling, visual reasoning, and multimodal question answering. 1,250 videos, 7,500 questions, 33,000 correct answers, 22,500 incorrect answers.
Question answering
DROP is a crowdsourced, adversarially-created, 96k-question benchmark, in which a system must resolve references in a question, perhaps to multiple input positions, and perform discrete operations over them (such as addition, counting, or sorting). These operations require a much more comprehensive understanding of the content of paragraphs than what was necessary for prior datasets.
SuperGLUE, a new benchmark styled after GLUE with a new set of more difficult language understanding tasks, improved resources, and a new public leaderboard. The constituent datasets, as cited by the benchmark, are: The CommitmentBank: Investigating projection in naturally occurring discourse; Choice of plausible alternatives: An evaluation of commonsense causal reasoning; Looking beyond the surface: A challenge set for reading comprehension over multiple sentences; The PASCAL recognising textual entailment challenge (and its second, third, and fifth editions); WiC: The Word-in-Context Dataset for Evaluating Context-Sensitive Meaning Representations; and The Winograd schema challenge.
The Human Activity Knowledge Engine (HAKE) aims at promoting human activity/action understanding. As a large-scale knowledge base, HAKE is built upon existing activity datasets, and supplies human instance action labels and corresponding body part level atomic action labels (Part States). The dataset contains 104K+ images, 154 activity classes, and 677K+ human instances.
Image
PedX is a large-scale multi-modal collection of pedestrians at complex urban intersections. The dataset provides high-resolution stereo images and LiDAR data with manual 2D and automatic 3D annotations. The data was captured using two pairs of stereo cameras and four Velodyne LiDAR sensors.
Image
The Replica Dataset is a dataset of high quality reconstructions of a variety of indoor spaces. Each reconstruction has clean dense geometry, high resolution and high dynamic range textures, glass and mirror surface information, planar segmentation as well as semantic class and instance segmentation.
Image
A large-scale vehicle ReID dataset in the wild (VERI-Wild) is captured from a large CCTV surveillance system consisting of 174 cameras across one month (30× 24h) under unconstrained scenarios. The cameras are distributed in a large urban district of more than 200km2. After data cleaning and annotation, 416,314 vehicle images of 40,671 identities are collected.
The Semantic Drone Dataset focuses on semantic understanding of urban scenes for increasing the safety of autonomous drone flight and landing procedures. The imagery depicts more than 20 houses from nadir (bird's eye) view acquired at an altitude of 5 to 30 meters above ground. A high resolution camera was used to acquire images at a size of 6000x4000px (24Mpx). The training set contains 400 publicly available images and the test set is made up of 200 private images.
This is the second version of the Google Landmarks dataset, which contains images annotated with labels representing human-made and natural landmarks. The dataset can be used for landmark recognition and retrieval experiments. This version of the dataset contains approximately 5 million images, split into 3 sets of images: train, index and test.
A large dataset of almost two million annotated vehicles for training and evaluating object detection methods. 200,000 images. 1,990,000 annotated vehicles. 5 Megapixel resolution.
The Unsupervised Llamas dataset was annotated by creating high definition maps for automated driving including lane markers based on Lidar. The automated vehicle can be localized against these maps and the lane markers are projected into the camera frame. The 3D projection is optimized by minimizing the difference between already detected markers in the image and projected ones. Further improvements can likely be achieved by using better detectors, optimizing difference metrics, and adding some temporal consistency. Over 100,000 annotated images. Annotations of over 100 meters. Resolution of 1276 x 717 pixels.
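The projection step described above (3D lane markers from the map into the camera frame) is a standard pinhole projection; a minimal sketch follows, with intrinsics and pose values that are illustrative assumptions rather than the dataset's actual calibration.

```python
# Minimal pinhole-projection sketch: map/world-frame lane-marker points into
# pixel coordinates. The intrinsics K and the pose (R, t) are illustrative
# assumptions, not the dataset's actual calibration.
import numpy as np

K = np.array([[1000.0,    0.0, 638.0],      # fx,  0, cx  (roughly a 1276x717 image)
              [   0.0, 1000.0, 358.5],      #  0, fy, cy
              [   0.0,    0.0,   1.0]])
R = np.eye(3)                                # world-to-camera rotation (assumed)
t = np.array([0.0, 1.5, 0.0])                # world-to-camera translation (assumed)

def project(points_world):
    """Project Nx3 world points to Nx2 pixel coordinates."""
    cam = points_world @ R.T + t             # into camera coordinates
    cam = cam[cam[:, 2] > 0]                 # keep only points in front of the camera
    pix = cam @ K.T
    return pix[:, :2] / pix[:, 2:3]          # perspective divide

markers = np.array([[0.0, 1.5, 10.0],        # two marker points in metres
                    [3.7, 1.5, 10.0]])
print(project(markers))
```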
Image
Open Images is a dataset of ~9M images annotated with image-level labels, object bounding boxes, object segmentation masks, and visual relationships. It contains a total of 16M bounding boxes for 600 object classes on 1.9M images, making it the largest existing dataset with object location annotations. Open Images V5 features segmentation masks for 2.8 million object instances in 350 categories. Unlike bounding-boxes, which only identify regions in which an object is located, segmentation masks mark the outline of objects, characterizing their spatial extent to a much higher level of detail.
Image
Mid-Air is a multi-modal synthetic dataset for low altitude drone flights in unstructured environments. It contains synchronized data captured by multiple sensors for a total of 54 trajectories and more than 420k video frames simulated in various climate conditions.
Medical
We have made the CQ500 dataset of 491 scans with 193,317 slices publicly available so that others can compare and build upon the results we have achieved in the paper. We provide anonymized DICOMs for all 491 scans and the corresponding radiologists' reads. The scans in the CQ500 dataset were generously provided by the Centre for Advanced Research in Imaging, Neurosciences and Genomics (CARING), New Delhi, IN. The reads were done by three radiologists with 8, 12 and 20 years of experience in cranial CT interpretation, respectively.
Question answering
TextVQA requires models to read and reason about text in images to answer questions about them. Specifically, models need to incorporate a new modality of text present in the images and reason over it to answer TextVQA questions. Dataset contains 28,408 images from OpenImages, 45,336 questions, 453,360 ground truth answers.
Medical
The MRNet dataset consists of 1,370 knee MRI exams performed at Stanford University Medical Center. The dataset contains 1,104 (80.6%) abnormal exams, with 319 (23.3%) ACL tears and 508 (37.1%) meniscal tears; labels were obtained through manual extraction from clinical reports.
Image
It is a versatile benchmark of four tasks including clothes detection, pose estimation, segmentation, and retrieval. It has 801K clothing items where each item has rich annotations such as style, scale, viewpoint, occlusion, bounding box, dense landmarks and masks. There are also 873K Commercial-Consumer clothes pairs.
While early work in computer vision addressed related clothing recognition tasks, these are not designed with fashion insiders’ needs in mind, possibly due to the research gap in fashion design and computer vision. To address this, we first propose a fashion taxonomy built by fashion experts, informed by product description from the internet. To capture the complex structure of fashion objects and ambiguity in descriptions obtained from crawling the web, our standardized taxonomy contains 46 apparel objects (27 main apparel items and 19 apparel parts), and 92 related fine-grained attributes. Secondly, a total of around 50K clothing images (10K with both segmentation and fine-grained attributes, 40K with apparel instance segmentation) in daily-life, celebrity events, and online shopping are labeled by both domain experts and crowd workers for fine-grained segmentation.
With over 238,200 person instances manually labeled in over 47,300 images, EuroCity Persons is nearly one order of magnitude larger than person datasets used previously for benchmarking. Diversity is gained by recording this dataset throughout Europe. All objects were annotated with tight bounding boxes delineating their full extent. If objects were partly occluded, their full extents were estimated (this is useful for later processing steps such as tracking) and the level of occlusion was annotated.
Mozilla crowdsources the largest dataset of human voices available for use, including 18 different languages, adding up to almost 1,400 hours of recorded voice data from more than 42,000 contributors.
The Diversity in Faces (DiF) is a large and diverse dataset that seeks to advance the study of fairness and accuracy in facial recognition technology. The first of its kind available to the global research community, DiF provides a dataset of annotations of 1 million human facial images.
Question answering
Natural Questions (NQ), a new, large-scale corpus for training and evaluating open-domain question answering systems, and the first to replicate the end-to-end process in which people find answers to questions. NQ is large, consisting of 300,000 naturally occurring questions, along with human annotated answers from Wikipedia pages, to be used in training QA systems. We have additionally included 16,000 examples where answers (to the same questions) are provided by 5 different annotators, useful for evaluating the performance of the learned QA systems.
Dataset contents: (1) Wikipedia (wiki2019zh), 1 million well-formed Chinese entries; (2) news corpus (news2016zh), 2.5 million news articles, including keywords and descriptions; (3) encyclopedia question and answer (baike2018qa), 1.5 million questions and answers with question types; (4) community Q&A, JSON version (webtext2019zh), 4.1 million high-quality community Q&A pairs, suitable for training very large models; (5) translation corpus (translation2019zh), 5.2 million pairs of Chinese and English sentences.
Question answering
The ActivityNet-QA dataset contains 58,000 human-annotated QA pairs on 5,800 videos derived from the popular ActivityNet dataset. The dataset provides a benchmark for testing the performance of VideoQA models on long-term spatio-temporal reasoning.
The 10kGNAD dataset is intended to solve part of this problem as the first German topic classification dataset. It consists of 10,273 German-language news articles from an Austrian online newspaper, categorized into nine topics. These articles are a previously unused part of the One Million Posts Corpus.
Facebook BISON (Binary Image Selection) dataset complements the COCO Captions dataset. BISON-COCO is not a training dataset, but rather an evaluation dataset that can be used to test existing models’ ability for pairing visual content with appropriate text descriptions.
Medical
MIMIC-CXR is a large, publicly available database comprising de-identified chest radiographs from patients admitted to the Beth Israel Deaconess Medical Center between 2011 and 2016. The dataset contains 371,920 chest x-rays associated with 227,943 imaging studies. Each imaging study can pertain to one or more images, but is most often associated with two images: a frontal view and a lateral view. Images are provided with 14 labels derived from a natural language processing tool applied to the corresponding free-text radiology reports.
Medical
CheXpert is a large public dataset for chest radiograph interpretation, consisting of 224,316 chest radiographs of 65,240 patients.
Question answering
The dataset consists of 22M questions about various day-to-day images. Each image is associated with a scene graph of the image's objects, attributes and relations, a new cleaner version based on Visual Genome.
SPEED consists of synthetic as well as actual camera images of a mock-up of the Tango spacecraft from the PRISMA mission. The synthetic images are created by fusing OpenGL-based renderings of the spacecraft’s 3D model with actual images of the Earth captured by the Himawari-8 meteorological satellite. The dataset contains over 12,000 images with a resolution of 1920×1200 pixels.
A new large-scale scene text dataset, namely Large-scale Street View Text with Partial Labeling (LSVT), with 30,000 training images and 20,000 testing images with full annotations, and 400,000 training images with weak annotations, which are referred to as partial labels.
Flickr-Faces-HQ (FFHQ) is a high-quality image dataset of human faces, originally created as a benchmark for generative adversarial networks (GAN). The dataset consists of 70,000 high-quality PNG images at 1024×1024 resolution and contains considerable variation in terms of age, ethnicity and image background. It also has good coverage of accessories such as eyeglasses, sunglasses, hats, etc.
Image
Danbooru2018 is a large-scale anime image database with 3.33m+ images annotated with 99.7m+ tags; It can be useful for machine learning purposes such as image recognition and generation.
Image
Flickr1024 is a large stereo dataset, which consists of 1,024 high-quality image pairs and covers diverse scenarios. This dataset can be employed for stereo image super-resolution (SR).
Audio
AVSpeech is a new, large-scale audio-visual dataset comprising speech video clips with no interfering background noise. The segments are 3-10 seconds long, and in each clip the audible sound in the soundtrack belongs to a single speaking person, visible in the video. In total, the dataset contains roughly 4700 hours of video segments, from a total of 290k YouTube videos, spanning a wide variety of people, languages and face poses.
QuAC, a dataset for Question Answering in Context that contains 14K information-seeking QA dialogs (100K questions in total). Question Answering in Context is a dataset for modeling, understanding, and participating in information seeking dialog. Data instances consist of an interactive dialog between two crowd workers: (1) a student who poses a sequence of freeform questions to learn as much as possible about a hidden Wikipedia text, and (2) a teacher who answers the questions by providing short excerpts (spans) from the text. QuAC introduces challenges not found in existing machine comprehension datasets: its questions are often more open-ended, unanswerable, or only meaningful within the dialog context.
Image
The Vehicle-1M dataset is constructed by the National Laboratory of Pattern Recognition, Institute of Automation, University of Chinese Academy of Sciences (NLPR, CASIA). This dataset involves vehicle images captured across day and night, from the head or rear, by multiple surveillance cameras installed in several cities in China. There are 936,051 images in total from 55,527 vehicles and 400 vehicle models. Each image is attached with a vehicle ID label denoting its real-world identity as well as a vehicle model label indicating the make, model and year of the vehicle (e.g., "Audi-A6-2013").
This corpus provides 200-dimension vector representations, a.k.a. embeddings, for over 8 million Chinese words and phrases, which are pre-trained on large-scale high-quality data. These vectors, capturing semantic meanings for Chinese words and phrases, can be widely applied in many downstream Chinese processing tasks (e.g., named entity recognition and text classification) and in further research.
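A rough sketch of loading such pre-trained vectors, assuming the common word2vec-style text format (one token per line followed by its 200 floats, with an optional header line); the file name is a placeholder, not the corpus's actual distribution name.

```python
# Rough sketch: load pre-trained vectors in the common word2vec-style text
# format ("<word> <v1> ... <v200>" per line, optionally preceded by a
# "<vocab_size> <dim>" header). The file name is a placeholder; the real
# corpus is large, so a vocabulary limit is applied.
import numpy as np

def load_vectors(path, limit=50_000):
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for i, line in enumerate(f):
            parts = line.rstrip().split(" ")
            if i == 0 and len(parts) == 2:        # skip optional header line
                continue
            if len(vectors) >= limit:
                break
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

# embeddings = load_vectors("chinese_embeddings_200d.txt")  # placeholder path
# print(embeddings["北京"].shape)                            # expected: (200,)
```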
A large, high-diversity, one-shot database for generic object tracking in the wild. The dataset contains more than 10,000 video segments of real-world moving objects and over 1.5 million manually labeled bounding boxes. The dataset is backboned by WordNet and it covers a majority of 560+ classes of real-world moving objects and 80+ classes of motion patterns. The test set embodies 84 object classes and 32 motion classes with only 180 video segments, allowing for efficient evaluation.
Question answering
OpenBookQA, modeled after open book exams for assessing human understanding of a subject. The open book that comes with our questions is a set of 1329 elementary level science facts. Roughly 6000 questions probe an understanding of these facts and their application to novel situations.
Image
A Large-Scale Dataset and Benchmark for Object Tracking in the Wild. >30K video sequences, >14M bounding boxes. Diversity ensured by YouTube.
Visual Commonsense Reasoning (VCR) is a new task and large-scale dataset for cognition-level visual understanding. It contains 290k multiple-choice questions, 290k correct answers and rationales (one per question), and 110k images. Counterfactual choices are obtained with minimal bias via a new Adversarial Matching approach. Answers are 7.5 words on average; rationales are 16 words. Human agreement is high (>90%). The dataset is scaffolded on top of 80 object categories from COCO.
YouTube-8M is a large-scale labeled video dataset that consists of millions of YouTube video IDs and associated labels from a diverse vocabulary of 4700+ visual entities. It comes with precomputed state-of-the-art audio-visual features from billions of frames and audio segments, designed to fit on a single hard disk.
NLP
CMU-MOSEI is the largest in-the-wild dataset for multimodal sentiment analysis and emotion recognition in NLP. It consists of 23,500 sentences from more than 1000 YouTube identities and 200 topics. Sentences are annotated for sentiment and emotion intensity. The dataset also contains unsupervised data (unannotated sentences).
Question answering
RecipeQA is a dataset for multimodal comprehension of cooking recipes. It consists of over 36K question-answer pairs automatically generated from approximately 20K unique recipes with step-by-step instructions and images. Each question in RecipeQA involves multiple modalities such as titles, descriptions or images, and working towards an answer requires (i) joint understanding of images and text, (ii) capturing the temporal flow of events, and (iii) making sense of procedural knowledge.
A dataset of Chinese text with about 1 million Chinese characters annotated by experts in over 30 thousand street view images.
CORNELL NEWSROOM is a large dataset for training and evaluating summarization systems. It contains 1.3 million articles and summaries written by authors and editors in the newsrooms of 38 major publications. The summaries are obtained from search and social metadata between 1998 and 2017 and use a variety of summarization strategies combining extraction and abstraction.
Stanford Question Answering Dataset (SQuAD) is a new reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage. With 100,000+ question-answer pairs on 500+ articles, SQuAD is significantly larger than previous reading comprehension datasets. SQuAD2.0 combines the 100,000 questions in SQuAD1.1 with over 50,000 new, unanswerable questions written adversarially by crowdworkers to look similar to answerable ones.
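If the Hugging Face datasets library is installed, the corpus can be pulled with the "squad_v2" identifier; a minimal sketch:
from datasets import load_dataset

squad = load_dataset("squad_v2")            # splits: "train" and "validation"
example = squad["train"][0]
print(example["question"])
print(example["answers"])                   # unanswerable questions have an empty answers["text"] list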
MMID is a large-scale, massively multilingual dataset of images paired with the words they represent, collected at the University of Pennsylvania. By far the largest dataset of its kind, it covers 98 languages (including English) with up to 10,000 words per language (and many more for English).
The dataset contains over 100k videos of driving experience, each running 40 seconds at 30 frames per second. The total image count is 800 times larger than Baidu ApolloScape (released March 2018), 4,800 times larger than Mapillary and 8,000 times larger than KITTI.
Question answering
Situations With Adversarial Generations is a large-scale dataset for this task of grounded commonsense inference, unifying natural language inference and physically grounded reasoning. The dataset consists of 113k multiple choice questions about grounded situations. Each question is a video caption from LSMDC or ActivityNet Captions, with four answer choices about what might happen next in the scene. The correct answer is the (real) video caption for the next event in the video; the three incorrect answers are adversarially generated and human verified, so as to fool machines but not humans.
The highD dataset is a new dataset of naturalistic vehicle trajectories recorded on German highways. Using a drone, typical limitations of established traffic data collection methods, such as occlusions, are overcome by the aerial perspective. Traffic was recorded at six different locations and includes more than 110,500 vehicles.
Self-driving
comma.ai presents comma2k19, a dataset of over 33 hours of commute on California's Highway 280: 2,019 segments, each 1 minute long, on a 20 km section of highway between San Jose and San Francisco. comma2k19 is a fully reproducible and scalable dataset.
Dataset consists of 5,711 images with 6,884 high-quality annotated person instances. Can be found on Supervisely under "Datasets library".
Audio
The Voices Obscured in Complex Environmental settings (VOiCES) corpus presents audio recorded in acoustically challenging conditions. Source Material: a total of 15 hours (3,903 audio files).
WikiHow is a new large-scale dataset using the online WikiHow (http://www.wikihow.com/) knowledge base. Please refer to the paper for more information regarding the dataset and its properties. Each article consists of multiple paragraphs and each paragraph starts with a sentence summarizing it. By merging the paragraphs to form the article and the paragraph outlines to form the summary, the resulting version of the dataset contains more than 200,000 long-sequence pairs.
Self-driving
An autonomous driving dataset and benchmark for optical flow: >1000 frames at 2560x1080 with diverse lighting and weather scenarios, reference data with error bars for optical flow, evaluation masks for dynamic objects, and a specific robustness evaluation on challenging scenes. The dataset includes 110,500 vehicles, 44,500 driven kilometers, and 147 driven hours.
Question answering
VQA is a dataset containing open-ended questions about images. These questions require an understanding of vision and language. It contains 265,016 images (COCO and abstract scenes), at least 3 questions (5.4 questions on average) per image, 10 ground truth answers per question.
DTLD contains more than 230,000 annotated traffic lights in camera images with a resolution of 2 megapixels. The dataset was recorded in 11 cities in Germany with a frequency of 15 Hz. Due to additional annotation attributes such as the traffic light pictogram, orientation or relevancy, 344 unique classes exist. In addition to camera images and labels, we provide stereo information in the form of disparity images, allowing stereo-based detection and depth-dependent evaluations.
Image
A large fine-grained vehicle data set BoxCars116k, with 116k images of vehicles from various viewpoints taken by numerous surveillance cameras.
NLP
The Multi-Genre Natural Language Inference (MultiNLI) corpus is a crowd-sourced collection of 433k sentence pairs annotated with textual entailment information. The corpus is modeled on the SNLI corpus, but differs in that it covers a range of genres of spoken and written text, and supports a distinctive cross-genre generalization evaluation.
Self-driving
ApolloScape is an order of magnitude bigger and more complex than existing similar datasets such as Kitti and CityScapes. ApolloScape offers 10 times more high-resolution images with pixel-by-pixel annotations, and includes 26 different recognizable objects such as cars, bicycles, pedestrians and buildings. The dataset offers several levels of scene complexity with increasing number of pedestrians and vehicles, up to 100 vehicles in a given scene, as well as a wider set of challenging environments such as heavy weather or extreme lighting conditions.
Question answering
DVQA: Understanding Data Visualizations via Question Answering, a dataset that tests many aspects of bar chart understanding in a question answering framework. Contains over 3 million image-question pairs about bar charts. It tests three forms of diagram understanding: a) structure understanding; b) data retrieval; and c) reasoning.
Self-driving
The nuScenes dataset is a large-scale autonomous driving dataset. It features a full sensor suite (1x LIDAR, 5x RADAR, 6x camera, IMU, GPS), 1,000 scenes of 20 s each, 1,440,000 camera images, 400,000 lidar sweeps, and two diverse cities: Boston and Singapore.
Medical
MURA (musculoskeletal radiographs) is a large dataset of bone X-rays that can be used to train algorithms tasked with detecting abnormalities in X-rays. MURA is believed to be the world’s largest public radiographic image dataset with 40,561 labeled images.
Image
A large-scale scene text dataset based on MSCOCO. COCO-Text V2.0 contains 63,686 images with 239,506 annotated text instances. A segmentation mask is annotated for every word, allowing fine-level detection. Three attributes are labeled for every word: machine-printed vs. handwritten, legible vs. illegible, and English vs. non-English.
Image
A photorealistic synthetic dataset for street scene parsing. The images in the dataset do not follow a driven path through a single virtual world. Instead, an entirely unique scene was procedurally generated for each of the 25,000 images. As a result, the dataset contains a wide range of variations and unique combinations of features.
Image
CULane is a large-scale challenging dataset for academic research on traffic lane detection. It was collected by cameras mounted on six different vehicles driven by different drivers in Beijing. More than 55 hours of video were collected and 133,235 frames were extracted. The dataset is divided into 88,880 frames for the training set, 9,675 for the validation set, and 34,680 for the test set. The test set is split into a normal category and 8 challenging categories.
NLP
The MultiWOZ dataset is a fully-labeled collection of human-human written conversations spanning multiple domains and topics. At a size of 10k dialogues, it is at least one order of magnitude larger than all previous annotated task-oriented corpora. The dialogues are set between a tourist and a clerk at an information center and span 7 domains.
Question answering
CoQA is a large-scale dataset for building Conversational Question Answering systems. CoQA contains 127k questions with answers, obtained from 8k conversations about text passages from seven diverse domains.
Spider is a large-scale complex and cross-domain semantic parsing and text-to-SQL dataset. Spider consists of 10,181 questions and 5,693 unique complex SQL queries on 200 databases with multiple tables covering 138 different domains.
We make available Conceptual Captions, a new dataset consisting of ~3.3M images annotated with captions. In contrast with the curated style of other image caption annotations, Conceptual Caption images and their raw descriptions are harvested from the web, and therefore represent a wider variety of styles. More precisely, the raw descriptions are harvested from the Alt-text HTML attribute associated with web images. To arrive at the current version of the captions, we have developed an automatic pipeline that extracts, filters, and transforms candidate image/caption pairs, with the goal of achieving a balance of cleanliness, informativeness, fluency, and learnability of the resulting captions.
Image
Dense human pose estimation aims at mapping all human pixels of an RGB image to the 3D surface of the human body. We introduce DensePose-COCO, a large-scale ground-truth dataset with image-to-surface correspondences manually annotated on 50K COCO images.
Image
Composed of 74 video sequences of 5 minutes each, with more than 500,000 frames captured and annotated. The labeling contains drivers' gaze fixations and their temporal integration, providing task-specific saliency maps. Geo-referenced locations, driving speed and course complete the set of released data.
Question answering
HotpotQA is a question answering dataset featuring natural, multi-hop questions, with strong supervision for supporting facts to enable more explainable question answering systems. The dataset is composed of 113,000 QA pairs based on Wikipedia.
Tencent ML — Images is the largest open-source multi-label image dataset, including 17,609,752 training and 88,739 validation image URLs which are annotated with up to 11,166 categories.
Medical
A collaborative research project from Facebook AI Research (FAIR) and NYU Langone Health to investigate the use of AI to make MRI scans up to 10 times faster. The dataset includes more than 1.5 million anonymous MRI images of the knee, drawn from 10,000 scans, and raw measurement data from nearly 1,600 scans.
Question answering
DuReader 2.0 is a large-scale open-domain Chinese dataset for Machine Reading Comprehension (MRC) and Question Answering (QA). It contains more than 300K questions, 1.4M evidence documents and corresponding human-generated answers.
The WebLogo-2M dataset is a weakly labelled (at image level rather than object bounding box level) logo detection dataset. The dataset was constructed automatically by sampling the Twitter stream data. It contains 194 unique logo classes and over 2 million logo images.
Audio
We introduce the Free Music Archive (FMA), an open and easily accessible dataset suitable for evaluating several tasks in MIR, a field concerned with browsing, searching, and organizing large music collections. The community's growing interest in feature and end-to-end learning is however restrained by the limited availability of large audio datasets. The FMA aims to overcome this hurdle by providing 917 GiB and 343 days of Creative Commons-licensed audio from 106,574 tracks from 16,341 artists and 14,854 albums, arranged in a hierarchical taxonomy of 161 genres. It provides full-length and high-quality audio, pre-computed features, together with track- and user-level metadata, tags, and free-form text such as biographies.
Twitter100k dataset is characterized by two aspects: 1) it has 100,000 image-text pairs randomly crawled from Twitter and thus has no constraint in the image categories; 2) text in Twitter100k is written in informal language by the users.
Image
CITYCAM aims to understand the city by analyzing the vehicles. We collected and annotated 60,000 frames with rich information, leading to about 900,000 annotated objects.
The Quick Draw Dataset is a collection of 50 million drawings across 345 categories, contributed by players of the game Quick, Draw!. The drawings were captured as timestamped vectors, tagged with metadata including what the player was asked to draw and in which country the player was located. You can browse the recognized drawings on quickdraw.withgoogle.com/data.
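The simplified drawings are commonly distributed as newline-delimited JSON, one record per drawing; a small parsing sketch under that assumption (the file name is illustrative):
import json

with open("cat.ndjson") as f:               # illustrative file name for one category
    for line in f:
        record = json.loads(line)
        strokes = record["drawing"]         # each stroke is a pair of x and y coordinate lists
        print(record["word"], len(strokes), "strokes")
        break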
The Vehicle Make and Model Recognition dataset (VMMRdb) is large in scale and diversity, containing 9,170 classes consisting of 291,752 images and covering models manufactured between 1950 and 2016. VMMRdb contains images taken by different users with different imaging devices and from multiple view angles, ensuring a wide range of variations that account for scenarios encountered in real life. The cars are not well aligned, and some images contain irrelevant background. The data was gathered by crawling web pages related to vehicle sales on craigslist.com, including 712 areas covering all 412 sub-domains corresponding to US metro areas.
Image
Places contains more than 10 million images comprising 400+ unique scene categories. The dataset features 5000 to 30,000 training images per class, consistent with real-world frequencies of occurrence.
Image
UTKFace dataset is a large-scale face dataset with long age span (range from 0 to 116 years old). The dataset consists of over 20,000 face images with annotations of age, gender, and ethnicity. The images cover large variation in pose, facial expression, illumination, occlusion, resolution, etc. This dataset could be used on a variety of tasks, e.g., face detection, age estimation, age progression/regression, landmark localization, etc.
Audio
VoxCeleb is an audio-visual dataset consisting of short clips of human speech, extracted from interview videos uploaded to YouTube. It contains data from 7,000+ speakers, 1 million+ utterances, 2,000+ hours. VoxCeleb consists of both audio and video. Each segment is at least 3 seconds long.
This dataset contains 13,427 camera images at a resolution of 1280x720 pixels and contains about 24,000 annotated traffic lights. The annotations include bounding boxes of traffic lights as well as the current state (active light) of each traffic light.
Fashion-MNIST is a dataset of Zalando's article images—consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes.
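Because Fashion-MNIST is intended as a drop-in replacement for MNIST, existing loaders work unchanged; for example, with torchvision:
import torchvision
import torchvision.transforms as T

train = torchvision.datasets.FashionMNIST(
    root="data", train=True, download=True, transform=T.ToTensor()
)
image, label = train[0]
print(image.shape, label)                   # torch.Size([1, 28, 28]) and a class index 0-9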
Large corpus of uncompressed and compressed sentences from news articles. Contains over 200,000 sentence compression pairs.
YouTube-BoundingBoxes is a large-scale data set of video URLs with densely-sampled high-quality single-object bounding box annotations. The data set consists of approximately 380,000 15-20s video segments extracted from 240,000 different publicly visible YouTube videos, automatically selected to feature objects in natural settings without editing or post-processing, with a recording quality often akin to that of a hand-held cell phone camera.
NLP
Reddit Comments from 2005-12 to 2017-03. Downloaded from https://files.pushshift.io/comments.
Question answering
NarrativeQA is a dataset built to encourage deeper comprehension of language. This dataset involves reasoning over reading entire books or movie scripts. This dataset contains approximately 45K question answer pairs in free form text. There are two modes of this dataset (1) reading comprehension over summaries and (2) reading comprehension over entire books/scripts.
Image
ScanNet is an RGB-D video dataset containing 2.5 million views in more than 1500 scans, annotated with 3D camera poses, surface reconstructions, and instance-level semantic segmentations. To collect this data, we designed an easy-to-use and scalable RGB-D capture system that includes automated surface reconstruction and crowdsourced semantic annotation.
Audio
A large-scale and high-quality dataset of annotated musical notes. The NSynth Dataset is an audio dataset containing ~300k musical notes, each with a unique pitch, timbre, and envelope. Each note is annotated with three additional pieces of information based on a combination of human evaluation and heuristic algorithms: the method of sound production for the note's instrument, the high-level family of which the note's instrument is a member and sonic qualities of the note.
Image
A dataset for scene parsing. There are 20,210 images in the training set, 2,000 images in the validation set, and 3,000 images in the testing set. All the images are exhaustively annotated with objects. Many objects are also annotated with their parts. For each object there is additional information about whether it is occluded or cropped, and other attributes.
Question answering
A dataset of questions from Quora aimed at determining if pairs of question text actually correspond to semantically equivalent queries. Over 400,000 lines of potential question duplicate pairs.
NLP
The Yelp dataset contains data about businesses, reviews, and user data for use in personal, educational, and academic purposes. Available in both JSON and SQL files.
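The JSON release is newline-delimited (one business, review, or user object per line), so it can be streamed record by record; a minimal sketch with an illustrative file name:
import json

with open("yelp_reviews.json", encoding="utf-8") as f:   # illustrative file name
    for line in f:
        review = json.loads(line)
        print(review["stars"], review["text"][:80])      # field names as used in the review file
        break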
AudioSet consists of an expanding ontology of 632 audio event classes and a collection of 2,084,320 human-labeled 10-second sound clips drawn from YouTube videos. The ontology is specified as a hierarchical graph of event categories, covering a wide range of human and animal sounds, musical instruments and genres, and common everyday environmental sounds.
Image
HASY is a publicly available, free-of-charge dataset of single symbols similar to MNIST. It contains 168,233 instances of 369 classes.
Image
A multi-view stereo / 3D reconstruction benchmark covering a variety of indoor and outdoor scenes. Ground truth geometry has been obtained using a high-precision laser scanner. Contains 13 / 12 DSLR datasets for training / testing, 5 / 5 multi-cam rig videos for training / testing, 27 / 20 frames for two-view stereo training / testing.
There are about 208,000 jokes in this database scraped from three sources (reddit, stupidstuff.org, wocka.com).
Image
The main focus of this dataset is testing. It contains data recorded under real-world driving situations. Its aims are: to compile and provide standard data which can be used for evaluation; to establish accepted evaluation protocols, data and measures; and to boost algorithm development for driving applications using computer vision techniques. The WildDash dataset does not offer enough material to train algorithms by itself.
The Oxford RobotCar Dataset contains over 100 repetitions of a consistent route through Oxford, UK, captured over a period of over a year. The dataset captures many different combinations of weather, traffic and pedestrians, along with longer term changes such as construction and roadworks.
A set of datasets for automatic text understanding and reasoning.
Image
Recipe1M, a new large-scale, structured corpus of over one million cooking recipes and 13 million food images. As the largest publicly available collection of recipe data, Recipe1M affords the ability to train high-capacity models on aligned, multi-modal data.
Image
A large-scale dataset that collects images and videos of various types of agents (not just pedestrians, but also bicyclists, skateboarders, cars, buses, and golf carts) navigating a real-world outdoor environment such as a university campus. Pedestrians are labeled in pink, bicyclists in red, skateboarders in orange, and cars in green. 60 videos of 8 distinct scenes.
It provides 100,000 images containing 30,000 traffic-sign instances. These images cover large variations in illuminance and weather conditions. Each traffic-sign in the benchmark is annotated with a class label, its bounding box and pixel mask.
Image
The MF2 training dataset is the largest (in number of identities) publicly available facial recognition dataset, with 4.7 million faces, 672K identities, and their respective bounding boxes. All images were obtained from Flickr (Yahoo's dataset) and are licensed under Creative Commons.
Image
The dataset consists of 24,966 densely labelled frames split into 10 parts for convenience. The class labels are compatible with the CamVid and CityScapes datasets.
Medical
MIMIC is an openly available dataset developed by the MIT Lab for Computational Physiology, comprising deidentified health data associated with ~40,000 critical care patients. It includes demographics, vital signs, laboratory tests, medications, and more. The latest version of MIMIC is MIMIC-III v1.4, which comprises over 58,000 hospital admissions for 38,645 adults and 7,875 neonates. The data spans June 2001 - October 2012. The database, although de-identified, still contains detailed information regarding the clinical care of patients, so must be treated with appropriate care and respect.
It provides pixel-perfect ground truth for scene understanding problems such as semantic segmentation, instance segmentation, and object detection, and also for geometric computer vision problems such as optical flow, depth estimation, camera pose estimation, and 3D reconstruction. A set of 5M rendered RGB-D images from over 15K trajectories in synthetic layouts with random but physically simulated object poses.
NLP
Microsoft Machine Reading Comprehension (MS MARCO) is a new large scale dataset for reading comprehension and question answering. In MS MARCO, all questions are sampled from real anonymized user queries. The context passages, from which answers in the dataset are derived, are extracted from real web documents using the most advanced version of the Bing search engine. The answers to the queries are human generated if they could summarize the answer. It contains 1,010,916 user queries and 182,669 natural language answers.
Image
The SYNTHetic collection of Imagery and Annotations (SYNTHIA) is a dataset generated with the purpose of aiding semantic segmentation and related scene understanding problems in the context of driving scenarios. SYNTHIA consists of a collection of photo-realistic frames rendered from a virtual city and comes with precise pixel-level semantic annotations. It contains 200,000+ HD images from video streams and 20,000+ HD images from independent snapshots. Scene diversity: European style town, modern city, highway and green areas. Variety of dynamic objects: cars, pedestrians and cyclists.
Question answering
The purpose of the NewsQA dataset is to help the research community build algorithms that are capable of answering questions requiring human-level comprehension and reasoning skills. Leveraging CNN articles from the DeepMind Q&A Dataset, we prepared a crowd-sourced machine reading comprehension dataset of 120K Q&A pairs.
Image
The dataset contains 367,888 face annotations for 8,277 subjects, divided into 3 batches. It contains bounding boxes, the estimated pose (yaw, pitch, and roll), locations of twenty-one keypoints, and gender information generated by a pre-trained neural network. The second part contains 3,735,476 annotated video frames extracted from a total of 22,075 videos for 3,107 subjects.
Image
7 and a quarter hours of largely highway driving.
Image
SpaceNet is an online repository of freely available satellite imagery, co-registered map data to train algorithms, and a series of public challenges designed to accelerate innovation in machine learning using geospatial data. This first of its kind open innovation project for the geospatial industry is a collaboration between CosmiQ Works, DigitalGlobe and NVIDIA. In the first year, over 5,700 km2 of very high-resolution imagery and more than 520,000 vectors were released through SpaceNet on AWS.
Image
The Comprehensive Cars (CompCars) dataset contains data from two scenarios, including images from web-nature and surveillance-nature. The web-nature data contains 163 car makes with 1,716 car models. There are a total of 136,726 images capturing the entire cars and 27,618 images capturing the car parts. The full car images are labeled with bounding boxes and viewpoints. Each car model is labeled with five attributes, including maximum speed, displacement, number of doors, number of seats, and type of car.
Image
ShapeNet is an ongoing effort to establish a richly-annotated, large-scale dataset of 3D shapes. ShapeNet is organized according to the WordNet hierarchy. Each meaningful concept in WordNet, possibly described by multiple words or word phrases, is called a "synonym set" or "synset". There are more than 100,000 synsets in WordNet, the majority of them being nouns (80,000+).
Image
WIDER FACE dataset is a face detection benchmark dataset, of which images are selected from the publicly available WIDER dataset. We choose 32,203 images and label 393,703 faces with a high degree of variability in scale, pose and occlusion as depicted in the sample images. WIDER FACE dataset is organized based on 61 event classes.
Image
WIDER is a dataset for complex event recognition from static images. As of v0.1, it contains 61 event categories and around 50,574 images annotated with event class labels. We provide a split of 50% for training and 50% for testing.
Image
LSUN contains around one million labeled images for each of 10 scene categories and 20 object categories.
Image
CelebFaces Attributes Dataset (CelebA) is a large-scale face attributes dataset with more than 200K celebrity images, each with 40 attribute annotations. The images in this dataset cover large pose variations and background clutter. CelebA has large diversities, large quantities, and rich annotations.
Visual Genome is a dataset, a knowledge base, an ongoing effort to connect structured image concepts to language. It contains 108,077 images, 5.4 million region descriptions, 1.7 million visual question answers, and 3.8 million object instances.
Two datasets using news articles for Q&A research. Each dataset contains many documents (90k and 197k, respectively), and each document is accompanied by approximately 4 questions on average. Each question is a sentence with one missing word/phrase which can be found in the accompanying document/context.
Image
Large-scale dataset that contains a diverse set of stereo video sequences recorded in street scenes from 50 different cities, with high-quality pixel-level annotations of 5,000 frames in addition to a larger set of 20,000 weakly annotated frames.
Image
ActivityNet is a new large-scale video benchmark for human activity understanding. ActivityNet aims at covering a wide range of complex human activities that are of interest to people in their daily living. In its current version, ActivityNet provides samples from 203 activity classes with an average of 137 untrimmed videos per class and 1.41 activity instances per video, for a total of 849 video hours.
Audio
Large-scale (1000 hours) corpus of read English speech.
Faces from the list of the 100,000 most popular actors as listed on the IMDb website, with date of birth, name, gender and all related images (automatically) crawled from their profiles. 460,723 face images from 20,284 celebrities from IMDb and 62,328 from Wikipedia, thus 523,051 in total.
NLP
The SNLI corpus (version 1.0) is a collection of 570k human-written English sentence pairs manually labeled for balanced classification with the labels entailment, contradiction, and neutral, supporting the task of natural language inference (NLI), also known as recognizing textual entailment (RTE).
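With the Hugging Face datasets library, the corpus is available under the "snli" identifier; a minimal loading sketch:
from datasets import load_dataset

snli = load_dataset("snli")
ex = snli["train"][0]
print(ex["premise"], "|", ex["hypothesis"], "| label:", ex["label"])
# label ids: 0 = entailment, 1 = neutral, 2 = contradiction (-1 where no gold label exists)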
Image
COCO is a large-scale object detection, segmentation, and captioning dataset. It contains: 330K images (>200K labeled), 1.5 million object instances, 80 object categories.
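The annotations are usually accessed through the pycocotools API; a minimal sketch, assuming a standard instances annotation file has been downloaded (the path is illustrative):
from pycocotools.coco import COCO

coco = COCO("annotations/instances_val2017.json")        # illustrative path
cat_ids = coco.getCatIds(catNms=["person"])
img_ids = coco.getImgIds(catIds=cat_ids)
anns = coco.loadAnns(coco.getAnnIds(imgIds=img_ids[:1]))
print(len(img_ids), "images contain people;", len(anns), "annotations on the first of them")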
This dataset contains a list of photos and videos. This list is compiled from data available on Yahoo! Flickr. All the photos and videos provided in the list are licensed under one of the Creative Commons copyright licenses.
TCIA is a service which de-identifies and hosts a large archive of medical images of cancer accessible for public download. The data are organized as “Collections”, typically patients related by a common disease (e.g. lung cancer), image modality (MRI, CT, etc) or research focus. DICOM is the primary file format used by TCIA for image storage.
Image
This dataset is a set of additional annotations for PASCAL VOC 2010. It goes beyond the original PASCAL object detection task by providing segmentation masks for each body part of the object.
Image
Pedestrian Attribute Recognition At Far Distance dataset. The PETA dataset consists of 19000 images, with resolution ranging from 17-by-39 to 169-by-365 pixels. Those 19000 images include 8705 persons, each annotated with 61 binary and 4 multi-class attributes.
Image
An image caption corpus consisting of 158,915 crowd-sourced captions describing 31,783 images. This is an extension of the Flickr 8k Dataset. The new images and captions focus on people involved in everyday activities and events.
Image
We introduce a challenging data set of 101 food categories, with 101,000 images. For each class, 250 manually reviewed test images are provided as well as 750 training images. On purpose, the training images were not cleaned, and thus still contain some amount of noise. This comes mostly in the form of intense colors and sometimes wrong labels. All images were rescaled to have a maximum side length of 512 pixels.
Self-driving
A novel dataset captured from a VW station wagon for use in mobile robotics and autonomous driving research. In total, 6 hours of traffic scenarios were recorded at 10-100 Hz. The scenarios are diverse, capturing real-world traffic situations, and range from freeways through rural areas to inner-city scenes with many static and dynamic objects.
Image
Stanford Cars dataset contains 16,185 images of 196 classes of cars. The data is split into 8,144 training images and 8,041 testing images, where each class has been split roughly in a 50-50 split. Classes are typically at the level of Make, Model, Year, e.g. 2012 Tesla Model S or 2012 BMW M3 coupe.
Image
The Paris500k dataset consists of 501,356 geotagged images collected from Flickr and Panoramio. The dataset was collected from a geographic bounding box rather than using keyword queries, so the images have a "natural" distribution. The dataset is very challenging due to the presence of duplicates and near-duplicates, as well as a large fraction of unrelated images, such as photos of parties, pets, etc.
The purpose of the project is to make available a standard training and test setup for language modeling experiments.
A dataset for sentiment analysis that includes fine grained sentiment labels for 215,154 phrases in the parse trees of 11,855 sentences and presents new challenges for sentiment compositionality.
Image
PASCAL VOC (2012 version) has 20 classes. The train/val data has 11,530 images containing 27,450 ROI annotated objects and 6,929 segmentations.
The German Traffic Sign Benchmark is a multi-class, single-image classification challenge held at the IJCNN 2011. The dataset contains: more than 40 classes, more than 50,000 images in total.
SVHN is a real-world image dataset for developing machine learning and object recognition algorithms with minimal requirement on data preprocessing and formatting. It can be seen as similar to MNIST (e.g., the images are of small cropped digits), but incorporates an order of magnitude more labeled data (over 600,000 digit images) and comes from a significantly harder, unsolved, real world problem (recognizing digits and numbers in natural scene images). SVHN is obtained from house numbers in Google Street View images.
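torchvision ships a loader for the cropped-digits format; a minimal sketch:
import torchvision
import torchvision.transforms as T

svhn = torchvision.datasets.SVHN(root="data", split="train", download=True, transform=T.ToTensor())
image, label = svhn[0]
print(image.shape, label)                   # torch.Size([3, 32, 32]) and the digit label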
NLP
This is a dataset for binary sentiment classification containing substantially more data than previous benchmark datasets. We provide a set of 25,000 highly polar movie reviews for training, and 25,000 for testing. There is additional unlabeled data for use as well. Raw text and already processed bag of words formats are provided.
Image
This dataset is just like the CIFAR-10, except it has 100 classes containing 600 images each. There are 500 training images and 100 testing images per class. The 100 classes in the CIFAR-100 are grouped into 20 superclasses. Each image comes with a “fine” label (the class to which it belongs) and a “coarse” label (the superclass to which it belongs).
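A minimal loading sketch with torchvision, which returns the fine label for each image:
import torchvision
import torchvision.transforms as T

cifar = torchvision.datasets.CIFAR100(root="data", train=True, download=True, transform=T.ToTensor())
image, fine_label = cifar[0]
print(image.shape, fine_label, cifar.classes[fine_label])  # 3x32x32 tensor, fine label index and name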
Image
ImageNet is an image database organized according to the WordNet hierarchy (currently only the nouns), in which each node of the hierarchy is depicted by hundreds and thousands of images.
You can find more datasets at the UCI machine learning repository, Quantum stat NLP database and Kaggle datasets.
© 2021 Nikola Plesa | Privacy | Datasets | Annotation tools
hello@datasetlist.com