The VSUMM (Video SUMMarization) dataset consists of 50 videos from the Open Video Project. All videos are in MPEG-1 format (30 fps, 352 x 240 pixels), in color and with sound. The videos are distributed among several genres (documentary, educational, ephemeral, historical, lecture), and their duration varies from 1 to 4 minutes, for approximately 75 minutes of video in total. The dataset also contains 250 user summaries, created manually by 50 users, each handling 5 videos; each video therefore has 5 summaries created by 5 different users.
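As an illustrative sketch only (not the official VSUMM evaluation protocol), an automatic summary can be scored against one of the five user summaries by greedily matching keyframes under a feature distance; the distance function and the 0.5 threshold below are assumptions for the example.

    import numpy as np

    def summary_match(auto_frames, user_frames, distance, threshold=0.5):
        # auto_frames / user_frames: lists of keyframe feature vectors (hypothetical inputs)
        matched = 0
        remaining = list(user_frames)
        for a in auto_frames:
            if not remaining:
                break
            dists = [distance(a, u) for u in remaining]
            best = int(np.argmin(dists))
            if dists[best] < threshold:      # close enough: count a match, consume the user keyframe
                matched += 1
                remaining.pop(best)
        precision = matched / max(len(auto_frames), 1)
        recall = matched / max(len(user_frames), 1)
        f1 = 2 * precision * recall / max(precision + recall, 1e-9)
        return precision, recall, f1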
The All I Have Seen (AIHS) dataset was created to study the properties of total visual input in humans: for around two weeks, Nebojsa Jojic wore a camera ...
Tags: similarity, scene, summary, user, indoor, outdoor, video, 3d, clustering, study

The Video Summarization (SumMe) dataset consists of 25 videos, each annotated with at least 15 human summaries (390 in total). The data consists of vide...
Tags: video, benchmark, summary, event, human, groundtruth, action

The Weather and Illumination Database (WILD) is an extensive database of high quality images of an outdoor urban scene, acquired every hour over all sea...
Tags: urban, estimation, depth, weather, time, newyork, webcam, video, illumination, change, static, camera, light

This is a subset of the dataset introduced in the SIGGRAPH Asia 2009 paper, Webcam Clip Art: Appearance and Illuminant Transfer from Time-lapse Sequence...
Tags: urban, nature, time, webcam, video, illumination, change, static, camera, light

Many different labeled video datasets have been collected over the past few years, but it is hard to compare them at a glance. So we have created a hand...
Tags: video, object, benchmark, classification, recognition, detection, action

This dataset consists of 51 oral presentations recorded with 2 ambient visual sensors (web-cams), 3 First Person View (FPV) cameras (1 on the presenter and 2 on ra...
Tags: video, quality, kinect, multi-sensor, presentation, analysis

The Freiburg-Berkeley Motion Segmentation Dataset (FBMS-59) is an extension of the BMS dataset with 33 additional video sequences. A total of 720 frames...
Tags: motion, benchmark, video, object, pedestrian, segmentation, tracking, groundtruth

The Oxford RobotCar Dataset contains over 100 repetitions of a consistent route through Oxford, UK, captured over a period of more than a year. The dataset c...
Tags: driving, street, urban, time, recognition, autonomous, video, segmentation, robot, classification, detection, car, year

The automotive multi-sensor (AMUSE) dataset consists of inertial and other complementary sensor data combined with monocular, omnidirectional, high fram...
Tags: urban, api, image, video, inertial, streetside, traffic, city

The VidPairs dataset contains 133 pairs of images, taken from 1080p HD (~2 megapixel) official movie trailers. Each pair consists of images of the same ...
Tags: matching, dense, video, flow, description, patch, pair, optical

The GaTech VideoStab dataset consists of N videos for the task of video stabilization. The accompanying code is implemented in the YouTube video editor for stabilizatio...
Tags: video, camera, path, stabilization

The SPHERE human skeleton movements dataset was created using a Kinect camera, which measures distances and provides a depth map of the scene instead of ...
Tags: motion, skeleton, kinect, movement, depth, human, action, video, behavior

The Yotta dataset consists of 70 images for semantic labeling given in 11 classes. It also contains multiple videos and camera matrices for 14km of driv...
Tags: urban, reconstruction, video, segmentation, 3d, classification, camera, semantic

The QMUL Junction dataset captures a busy traffic scenario for research on activity analysis and behavior understanding. Video length: 1 hour (90000 frame...
Tags: video, motion, pedestrian, crowd, counting, tracking, detection, behavior

The Berkeley Video Segmentation Dataset (BVSD) contains videos for (boundary) segmentation, provided as separate train and test sets.
Tags: video, segmentation, benchmark

The Fish4Knowledge project (groups.inf.ed.ac.uk/f4k/) is pleased to announce the availability of 2 subsets of our tropical coral reef fish video and e...
Tags: motion, nature, recognition, fish, video, water, classification, animal, camera

Background Models Challenge (BMC) is a complete dataset and competition for the comparison of background subtraction algorithms. The main topics concer...
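As a minimal sketch of the task BMC evaluates (this uses standard OpenCV background subtraction, not the BMC baselines or scoring code; "sequence.avi" is a placeholder path), a foreground mask can be produced per frame and then compared against the ground truth:

    import cv2

    cap = cv2.VideoCapture("sequence.avi")  # placeholder path to one sequence
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=True)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)       # 0 = background, 255 = foreground, 127 = shadow
        # the per-frame masks would then be scored against ground truth, e.g. with a pixel-level F-measure
    cap.release()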
Tags: motion, background, video, modeling, segmentation, change, surveillance, detection

This web page contains video data and ground truth for 16 dances with two different dance patterns. The style of dancing is inspired by Scottish Ceilidh...
Tags: motion, dance, analysis, background, action, video, chemistry, pattern

The Video Segmentation Benchmark (VSB100) provides ground truth annotations for the Berkeley Video Dataset, which consists of 100 HD quality videos divi...
Tags: video, object, segmentation, motion, pedestrian, benchmark, tracking, groundtruth

The HandNet dataset contains depth images of 10 participants' hands non-rigidly deforming in front of a RealSense RGB-D camera. This dataset includes 2...
Tags: rgbd, hand, articulation, video, segmentation, classification, pose, fingertip, detection

The GaTech VideoContext dataset consists of over 100 groundtruth annotated outdoor videos with over 20000 frames for the task of geometric context eval...
Tags: urban, nature, outdoor, video, segmentation, supervised, classification, context, unsupervised, geometry, semantic

The multiple foreground video co-segmentation dataset consists of four sets, each with a video pair and two foreground objects in common. The datase...
Tags: video, segmentation, co-segmentation

The multi-modal/multi-view datasets were created in cooperation between the University of Surrey and Double Negative within the EU FP7 IMPART project. Th...
Tags: rgbd, color, dynamic, multi-view, action, outdoor, video, 3d, face, emotion, lidar, human, indoor, multi-mode, model

The UCF Person and Car VideoSeg dataset consists of six videos with groundtruth for video object segmentation. Surfing, jumping, skiing, sliding, big ...
Tags: video, object, segmentation, motion, model, camera, groundtruth

The dataset captures 25 people preparing 2 mixed salads each and contains over 4h of annotated accelerometer and RGB-D video data. Annotated activities ...
Tags: video, activity, classification, tracking, recognition, detection, action

The procedural texture perceptual similarity dataset contains a list of procedural textures along with their pairwise distances, as defined by a percept...
Tags: study, benchmark, procedural, texture

The domain-specific personal video highlights dataset comes from the paper [1], which describes a fully automatic method to train a domain-specific highlight ranker f...
Tags: saliency, domain, wearable, human, recognition, action, video, summarization

The video co-segmentation dataset contains 4 video sets with 11 videos in total, with 5 frames of each video labeled with pixel-level ground-tr...
Tags: video, segmentation, co-segmentation, dataset

The UrbanStreet dataset used in the paper can be downloaded here [188M]. It contains 18 stereo sequences of pedestrians taken from a stereo rig mounted...
Tags: urban, human, recognition, video, pedestrian, segmentation, tracking, multitarget, detection

The Cholec80 dataset contains 80 videos of cholecystectomy surgeries performed by 13 surgeons. The videos are captured at 25 fps. The dataset is labeled...
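For illustration only (not the official Cholec80 evaluation code), frame-level surgical-phase predictions for one video could be scored as follows; the label arrays are hypothetical inputs:

    import numpy as np

    def phase_metrics(pred_phases, gt_phases):
        # pred_phases / gt_phases: one integer phase label per frame (hypothetical inputs)
        pred = np.asarray(pred_phases)
        gt = np.asarray(gt_phases)
        accuracy = float((pred == gt).mean())
        recalls = {int(p): float((pred[gt == p] == p).mean()) for p in np.unique(gt)}
        return accuracy, recalls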
Tags: video, medicine, surgery, phase, tool, recognition

An indoor action recognition dataset which consists of 18 classes performed by 20 individuals. Each action is individually performed 8 times (4 dayt...
Tags: video, open-view, cross-view, recognition, indoor, action, multi-camera

The Traffic Video dataset consists of X videos from an overhead camera showing a street crossing with multiple traffic scenarios. The dataset can be down...
Tags: video, urban, traffic, road, overhead, tracking, view, detection

The Multicamera Human Action Video Data (MuHAVi) and Manually Annotated Silhouette Data (MAS) are two datasets consisting of selected action sequences for ...
Tags: video, segmentation, action, behavior, human, background

The GaTech VideoSeg dataset consists of two (waterski and yunakim?) video sequences for object segmentation. There exists no groundtruth segmentation ...
Tags: video, object, segmentation, motion, model, camera

The TVPR dataset includes 23 registration sessions. Each of the 23 folders contains the video of one registration session. Acquisitions have been perfor...
Tags: person, depth, recognition, indoor, top-view, video, clothing, gender, reidentification, identification, people

We present a new large-scale dataset that contains a diverse set of stereo video sequences recorded in street scenes from 50 different cities, with high...
Tags: urban, stereo, cities, person, video, weakly, segmentation, pedestrian, detection, car, semantic

The Webcam Interestingness dataset consists of 20 different webcam streams, with 159 images each. It is annotated with interestingness ground truth, acq...
Tags: video, interest, retrieval, classification, weather, ranking, webcam

The database of nude and non-nude videos contains a collection of 179 video segments collected from the following movies: Alpha Dog, Basic Instinct, Bef...
Tags: video, nude detection, movie

The Airport MotionSeg dataset contains 12 sequences of videos of an airport scenario with small and large moving objects and various speeds. It is chall...
Tags: video, segmentation, motion, airport, clustering, camera, zoom

The CMP Facade dataset consists of facade images assembled at the Center for Machine Perception, which includes 600 rectified images of facades from var...
Tags: urban, similarity, facade, recognition, segmentation, structure, classification, rectification, semantic

The High Definition Analytics (HDA) dataset is a multi-camera high-resolution image sequence dataset for research on high-definition surveillance: Pedes...
Tags: high-definition, benchmark, human, lisbon, indoor, video, re-identification, pedestrian, network, multiview, tracking, surveillance, camera, detection

The Video2GIF dataset contains over 100,000 pairs of GIFs and their source videos. The GIFs were collected from two popular GIF websites (makeagif.com, ...
Tags: gif, scene, summarization, summary, video highlight detection, understanding

At Udacity, we believe in democratizing education. How can we provide opportunity to everyone on the planet? We also believe in teaching really amazing ...
Tags: driving, street, urban, time, recognition, autonomous, video, segmentation, robot, classification, detection, car, synthetic

The Leeds Cows dataset by Derek Magee consists of 14 different video sequences showing a total of 18 cows walking from right to left in front of differe...
Tags: video, segmentation, detection, cow, animal, background

The Interactive Segmentation (IcgBench) dataset from Jakob Santner contains 243 images and 262 segmentations. Some images have multiple segmentations. Th...
Tags: interactive segmentation, user

It is composed of ADL (activities of daily living) and fall actions simulated by 11 volunteers. The people involved in the test are aged between 22 and 39, w...
Tags: wearable, kinect, fall detection - adl, depth, human, recognition, action, accelerometer, video

The current video database contains six types of human actions (walking, jogging, running, boxing, hand waving and hand clapping) performed several ti...
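As a hedged sketch (not tied to the official train/test protocol of this dataset), six-class action predictions can be summarized with a confusion matrix and overall accuracy; the label arrays are hypothetical inputs:

    import numpy as np

    ACTIONS = ["walking", "jogging", "running", "boxing", "hand waving", "hand clapping"]

    def confusion_and_accuracy(pred_labels, gt_labels, num_classes=len(ACTIONS)):
        cm = np.zeros((num_classes, num_classes), dtype=int)
        for g, p in zip(gt_labels, pred_labels):
            cm[g, p] += 1                          # rows: ground truth, columns: prediction
        accuracy = np.trace(cm) / max(cm.sum(), 1)
        return cm, accuracy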
Tags: video, segmentation, action classification

The Weizmann actions dataset by Blank, Gorelick, Shechtman, Irani, and Basri consists of ten different types of actions: bending, jumping jack, jumping,...
Tags: video, segmentation, action, action classification

The Pittsburgh Fast-food Image dataset (PFID) consists of 4545 still images, 606 stereo pairs, 303 360° videos for structure from motion, and 27 privacy-...
Tags: video, laboratory, classification, reconstruction, real, food, recognition

The Hollywood-2 dataset contains 12 classes of human actions and 10 classes of scenes distributed over 3669 video clips and approximately 20.1 hours of video...
Tags: video, segmentation, action classification

The TUG (Timed Up and Go test) dataset consists of actions performed three times by 20 volunteers. The people involved in the test are aged between 22 a...
Tags: wearable, kinect, time, human, recognition, action, depth image processing - tug, accelerometer, video

The ICG Multi-Camera and Virtual PTZ dataset contains the video streams and calibrations of several static Axis P1347 cameras and one panoramic video fr...
Tags: graz, outdoor, video, object, panorama, pedestrian, network, crowd, multiview, tracking, camera, multitarget, detection, calibration

The Babenko tracking dataset contains 12 video sequences for single object tracking. For each clip they provide (1) a directory with the original i...
Tags: face, video, single, occlusion, object tracking, animal

Gaze data on video stimuli for computer vision and visual analytics. Converted 318 video sequences from several different gaze tracking data sets with...
Tags: video, metadata, segmentation, gaze data, polygon annotation

The dataset contains 15 documentary films that are downloaded from YouTube, whose durations vary from 9 minutes to as long as 50 minutes, and the total ...
Tags: video, object, detection

Salient Montages is a human-centric video summarization dataset from the paper [1]. In [1], we present a novel method to generate salient montages...
Tags: video, saliency, wearable, montage, summarization, human

The JPL First-Person Interaction dataset (JPL-Interaction dataset) is composed of human activity videos taken from a first-person viewpoint. The dataset par...
Tags: video, motion, action, interactive, recognition, human

The xawAR16 dataset is a multi-RGBD camera dataset, generated inside an operating room (IHU Strasbourg), which was designed to evaluate tracking/relocal...
Tags: video, medicine, table, depth, operation, recognition, surgery

The Pornography database contains nearly 80 hours of 400 pornographic and 400 non-pornographic videos. For the pornographic class, we have browsed websi...
Tags: video, pornography, video shots, video frames

ShakeFive2 is a collection of 8 dyadic human interactions with accompanying skeleton metadata. The metadata is frame-based XML data containing the skelet...
Tags: video, human, kinect, interaction

The UMD Dynamic Scene Recognition dataset consists of 13 classes and 10 videos per class and is used to classify dynamic scenes. The dataset has been ...
Tags: video, motion, dynamic, classification, scene, recognition

Welcome to the homepage of the gvvperfcapeva datasets. This site serves as a hub to access a wide range of datasets that have been created for projects ...
Tags: face, reconstruction, depth, mesh, human, action, video, pose, multiview, tracking

This dataset contains 7 challenging volleyball activity classes annotated in 6 videos from professionals in the Austrian Volley League (season 2011/12)....
Tags: video, sport, analysis, activity recognition, volleyball, detection, action

The MSR RGB-D 7-Scenes dataset is a collection of tracked RGB-D camera frames. The dataset may be used for evaluation of methods for different a...
Tags: video, kinect, location, reconstruction, depth, tracking

These datasets were generated for the M2CAI challenges, a satellite event of MICCAI 2016 in Athens. Two datasets are available for two different challen...
Tags: video, medicine, workflow, surgery, recognition, challenge

The YouTube-Objects dataset is composed of videos collected from YouTube by querying for the names of 10 object classes. It contains between 9 and 24 vi...
Tags: video, object, flow, segmentation, detection, optical

The SegTrack dataset consists of six videos (five are used) with ground truth pixelwise segmentation (the sixth, penguin, is not usable). The dataset is used fo...
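A common (though not necessarily SegTrack's original) way to score a predicted object mask against pixelwise ground truth is per-frame intersection-over-union; a minimal sketch:

    import numpy as np

    def mask_iou(pred_mask, gt_mask):
        pred = np.asarray(pred_mask, dtype=bool)       # binary foreground masks of equal shape
        gt = np.asarray(gt_mask, dtype=bool)
        inter = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        return float(inter) / union if union else 1.0  # two empty masks count as a perfect match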
Tags: motion, video, object, proposal, flow, segmentation, stationary, model, camera, optical, groundtruth

The crowd datasets are collected from a variety of sources, such as UCF and data-driven crowd datasets. The sequences are diverse, representing dense cr...
Tags: video, pedestrian, scene, crowd, human, understanding, anomaly, detection

The MSR Action datasets are a collection of various 3D datasets for action recognition. See details at http://research.microsoft.com/en-us/um/people/zliu...
Tags: video, detection, 3d, action, reconstruction, recognition

The Mall dataset was collected from a publicly accessible webcam for crowd counting and profiling research. Ground truth: Over 60,000 pedestrians wer...
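As an illustrative sketch of how counting results on such data are typically reported (not the Mall benchmark's own code), mean absolute error and mean squared error over per-frame counts:

    import numpy as np

    def counting_errors(pred_counts, gt_counts):
        pred = np.asarray(pred_counts, dtype=float)    # one predicted count per frame (hypothetical input)
        gt = np.asarray(gt_counts, dtype=float)
        mae = float(np.abs(pred - gt).mean())
        mse = float(((pred - gt) ** 2).mean())
        return mae, mse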
Tags: video, pedestrian, crowd, counting, tracking, detection, indoor, webcam

The ICG Multi-Camera datasets consist of an Easy Data Set (just one person), a Medium Data Set (3-5 persons, used for the experiments) and a Hard Data Set (cro...
Tags: graz, indoor, video, object, pedestrian, multiview, tracking, camera, multitarget, detection, calibration

The Where Who Why (WWW) dataset provides 10,000 videos with over 8 million frames from 8,257 diverse scenes, therefore offering a superior comprehensive...
Tags: recognition, video, flow, pedestrian, crowd, surveillance, optical, detection

The Lane Level Localization dataset was collected on a highway in San Francisco with the following properties:
* Reasonable traffic
* Multiple lane h...
Tags: driving, benchmark, autonomous, video, road, gps, map, 3d, localization, car

The BEOID dataset includes object interactions ranging from preparing a coffee to operating a weight lifting machine and opening a door. The dataset is ...
Tags: video, object, egocentric, 3d, interaction, pose, tracking

The MOT Challenge is a framework for the fair evaluation of multiple people tracking algorithms. In this framework we provide:
- A large collection of...
Tags: multiple, benchmark, evaluation, http://motchallenge.net/, dataset, target, video, pedestrian, 3d, tracking, surveillance, people
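As a minimal sketch of the widely used MOTA score for multi-object tracking (the aggregate counts below are hypothetical inputs, not values produced by the MOT Challenge toolkit):

    def mota(false_negatives, false_positives, id_switches, num_gt_boxes):
        # MOTA = 1 - (FN + FP + IDSW) / GT, where GT is the total number of ground-truth boxes
        return 1.0 - (false_negatives + false_positives + id_switches) / float(num_gt_boxes)

    # e.g. mota(120, 80, 15, 2000) == 0.8925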