This web page contains video data and ground truth for 16 dances with two different dance patterns. The style of dancing is inspired by Scottish Ceilidh dancing, but the dances shown here are original patterns inspired by two chemical processes. This data was recorded on March 7, 2016 at the School of Informatics, University of Edinburgh, as organized by Lewis Hou (Science Ceilidh) and Robert Fisher, and filmed in high resolution by Robin Mair. The dancers were volunteers from the Science Ceilidh and the New Scotland Scottish Country Dance Society, and the filming was funded by the Royal Society of Chemistry. This dataset is interesting because there are very few video analysis datasets with highly structured behavior. In this case, the basic moves of all of the dancers are prescribed; however, the timing and positioning can vary by dancer, time and instance of the dance. The data is acquired using a ceiling camera as a set of 640x480 frames captured at an average of about 8 FPS. Typical frames from the two dances are shown here, along with an annotation of the dancers and a background frame where no dancers are in the performance area.
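Since the dataset provides a background frame with no dancers in the performance area, a simple way to localize the dancers is to difference each frame against that empty-scene frame. A minimal NumPy sketch (the threshold value and the toy frames below are illustrative assumptions, not part of the dataset):

```python
import numpy as np

def foreground_mask(frame, background, threshold=30):
    """Binary mask of pixels that differ from the empty-scene background frame."""
    # Cast to a signed type first so uint8 subtraction cannot wrap around.
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    # A pixel is foreground if any colour channel changed by at least `threshold`.
    return diff.max(axis=-1) >= threshold

# Toy 640x480 example: a flat background and one bright "dancer" region.
background = np.full((480, 640, 3), 100, dtype=np.uint8)
frame = background.copy()
frame[200:260, 300:340] = 220          # hypothetical dancer blob
mask = foreground_mask(frame, background)
print(mask.sum())                      # area of the detected region: 2400
```

In practice one would threshold per-channel against a real background frame from the dataset and clean the mask up with morphology; this sketch only shows the core differencing step.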
This dataset contains 7 challenging volleyball activity classes annotated in 6 videos from professionals in the Austrian Volley League (season 2011/12)...
video, sport, analysis, activity recognition, volleyball, detection, action

The SPHERE human skeleton movements dataset was created using a Kinect camera, which measures distances and provides a depth map of the scene instead of ...
motion, skeleton, kinect, movement, depth, human, action, video, behavior

Background Models Challenge (BMC) is a complete dataset and competition for the comparison of background subtraction algorithms. The main topics concer...
motion, background, video, modeling, segmentation, change, surveillance, detection

The Multicamera Human Action Video Data (MuHAVi) Manually Annotated Silhouette Data (MAS) are two datasets consisting of selected action sequences for ...
video, segmentation, action, behavior, human, background

JPL First-Person Interaction dataset (JPL-Interaction dataset) is composed of human activity videos taken from a first-person viewpoint. The dataset par...
video, motion, action, interactive, recognition, human

Welcome to the homepage of the gvvperfcapeva datasets. This site serves as a hub to access a wide range of datasets that have been created for projects ...
face, reconstruction, depth, mesh, human, action, video, pose, multiview, tracking

The Video Summarization (SumMe) dataset consists of 25 videos, each annotated with at least 15 human summaries (390 in total). The data consists of vide...
video, benchmark, summary, event, human, groundtruth, action

The Fish4Knowledge project (groups.inf.ed.ac.uk/f4k/) is pleased to announce the availability of 2 subsets of our tropical coral reef fish video and e...
motion, nature, recognition, fish, video, water, classification, animal, camera

The Airport MotionSeg dataset contains 12 sequences of videos of an airport scenario with small and large moving objects and various speeds. It is chall...
video, segmentation, motion, airport, clustering, camera, zoom

The Video Segmentation Benchmark (VSB100) provides ground truth annotations for the Berkeley Video Dataset, which consists of 100 HD quality videos divi...
video, object, segmentation, motion, pedestrian, benchmark, tracking, groundtruth

The QMUL Junction dataset is a busy traffic scenario for research on activity analysis and behavior understanding. Video length: 1 hour (90000 frame...
video, motion, pedestrian, crowd, counting, tracking, detection, behavior

The Berkeley Multimodal Human Action Database (MHAD) contains 11 actions performed by 7 male and 5 female subjects in the range 23-30 years of age excep...
recognition, motion, action, classification, multiview

The Leeds Cows dataset by Derek Magee consists of 14 different video sequences showing a total of 18 cows walking from right to left in front of differe...
video, segmentation, detection, cow, animal, background

An indoor action recognition dataset which consists of 18 classes performed by 20 individuals. Each action is individually performed 8 times (4 dayt...
video, open-view, cross-view, recognition, indoor, action, multi-camera

The GaTech VideoSeg dataset consists of two (waterski and yunakim?) video sequences for object segmentation. There is no groundtruth segmentation ...
video, object, segmentation, motion, model, camera

It is composed of ADL (activities of daily living) and fall actions simulated by 11 volunteers. The people involved in the test are aged between 22 and 39, w...
wearable, kinect, fall detection - adl, depth, human, recognition, action, accelerometer, video

The dataset consists of about 50 hours of kindergarten surveillance video, totaling approximately 100 video sequences (1000GB, 5...
segmentation, action, behavior, video surveillance, human, background

The multi-modal/multi-view datasets are created in a cooperation between University of Surrey and Double Negative within the EU FP7 IMPART project. Th...
rgbd, color, dynamic, multi-view, action, outdoor, video, 3d, face, emotion, lidar, human, indoor, multi-mode, model

The SegTrack dataset consists of six videos (five are used) with ground truth pixelwise segmentation (the 6th, penguin, is not usable). The dataset is used fo...
motion, video, object, proposal, flow, segmentation, stationary, model, camera, optical, groundtruth

The Weizmann actions dataset by Blank, Gorelick, Shechtman, Irani, and Basri consists of ten different types of actions: bending, jumping jack, jumping,...
video, segmentation, action, action classification

Penn-Fudan Pedestrian Detection and Segmentation
segmentation, motion, background, pedestrian, detection

The .enpeda.. Image Sequence Analysis Test Site (EISATS) offers sets of long bi- or trinocular image sequences recorded in the context of vision-based d...
motion, stereo, analysis, flow, segmentation, optical, semantic, vision

The MSR Action datasets are a collection of various 3D datasets for action recognition. See details at http://research.microsoft.com/en-us/um/people/zliu...
video, detection, 3d, action, reconstruction, recognition

LASIESTA is composed of many real indoor and outdoor sequences organized in different categories, each covering a specific challenge in moving ob...
motion, subtraction, dataset, background, object, stationary, foreground, camera, challenge, detection, groundtruth

The TUG (Timed Up and Go test) dataset consists of actions performed three times by 20 volunteers. The people involved in the test are aged between 22 a...
wearable, kinect, time, human, recognition, action, depth image processing - tug, accelerometer, video

The UCF Person and Car VideoSeg dataset consists of six videos with groundtruth for video object segmentation. Surfing, jumping, skiing, sliding, big ...
video, object, segmentation, motion, model, camera, groundtruth

Many different labeled video datasets have been collected over the past few years, but it is hard to compare them at a glance. So we have created a hand...
video, object, benchmark, classification, recognition, detection, action

The Freiburg-Berkeley Motion Segmentation Dataset (FBMS-59) is an extension of the BMS dataset with 33 additional video sequences. A total of 720 frames...
motion, benchmark, video, object, pedestrian, segmentation, tracking, groundtruth

This dataset consists of 51 oral presentations recorded with 2 ambient visual sensors (web-cams), 3 First Person View (FPV) cameras (1 on presenter and 2 on ra...
video, quality, kinect, multi-sensor, presentation, analysis

The dataset captures 25 people preparing 2 mixed salads each and contains over 4h of annotated accelerometer and RGB-D video data. Annotated activities ...
video, activity, classification, tracking, recognition, detection, action

The UMD Dynamic Scene Recognition dataset consists of 13 classes and 10 videos per class and is used to classify dynamic scenes. The dataset has been ...
video, motion, dynamic, classification, scene, recognition

Dataset A (former NLPR Gait Database) was created on Dec. 10, 2001, including 20 persons. Each person has 12 image sequences, 4 sequences for each of th...
motion, foot, human, recognition, gait, action, classification, biometry, pressure

The domain-specific personal videos highlight dataset from the paper [1] describes a fully automatic method to train domain-specific highlight ranker f...
saliency, domain, wearable, human, recognition, action, video, summarization

Salient Montages is a human-centric video summarization dataset from the paper [1]. In [1], we present a novel method to generate salient montages...
video, saliency, wearable, montage, summarization, human

The Longterm Pedestrian dataset consists of images from a stationary camera running 24 hours for 7 days at about 1 fps. It is used for adaptive detection ...
coffee, graz, background, indoor, illumination, change, pedestrian, robust, multitarget, detection

A Kinect dataset for hand detection in naturalistic driving settings as well as a challenging 19 dynamic hand gesture recognition dataset for human mach...
action

The VSUMM (Video SUMMarization) dataset consists of 50 videos from Open Video. All videos are in MPEG-1 format (30 fps, 352 x 240 pixels), in color and with s...
similarity, type, summary, user, video, static, keyframe, study

The Microsoft Research Cambridge-12 Kinect gesture dataset consists of sequences of human movements, represented as body-part locations, and the associa...
gesture, recognition, human, action, kinect

The xawAR16 dataset is a multi-RGBD camera dataset, generated inside an operating room (IHU Strasbourg), which was designed to evaluate tracking/relocal...
video, medicine, table, depth, operation, recognition, surgery

We wanted to have a collection of action recognition papers and results that everybody can use for reference. The site will work by the community princi...
recognition, benchmark, action, dataset

The Pornography database contains nearly 80 hours of 400 pornographic and 400 non-pornographic videos. For the pornographic class, we have browsed websi...
video, pornography, video shots, video frames

This dataset comprises 10 actions related to breakfast preparation, performed by 52 different individuals in 18 different kitchens.
action

ShakeFive2 is a collection of 8 dyadic human interactions with accompanying skeleton metadata. The metadata is frame-based XML data containing the skelet...
video, human, kinect, interaction

The Sheffield Kinect Gesture (SKIG) dataset contains 2160 hand gesture sequences (1080 RGB sequences and 1080 depth sequences) collected from 6 subjects. ...
illumination, gesture, kinect, depth, recognition, human, action

The Graz02 dataset by Andreas Opelt and Axel Pinz contains four categories of images: bikes, people, cars and a single background class. The annotation ...
pedestrian, clutter, object detection, graz, background, car, bike

The MSR RGB-D Dataset 7-Scenes dataset is a collection of tracked RGB-D camera frames. The dataset may be used for evaluation of methods for different a...
video, kinect, location, reconstruction, depth, tracking

The CHALEARN Multi-modal Gesture Challenge is a dataset of 700+ sequences for gesture recognition using images, Kinect depth, segmentation and skeleton dat...
gesture, skeleton, kinect, depth, human, recognition, action, illumination, segmentation

The dataset is designed to be realistic, natural and challenging for video surveillance domains in terms of its resolution, background clutter, diversit...
action

The dataset contains 2326 video sequences of 15 different sport actions and human body joint annotations for all sequences.
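Per-joint annotations like these are commonly scored with the PCK metric (percentage of correct keypoints): a predicted joint counts as correct when it lies within a distance threshold of the annotated joint. A minimal sketch; the joint coordinates and threshold below are made-up toy values, not taken from this dataset:

```python
import math

def pck(pred, truth, threshold):
    """Fraction of predicted joints within `threshold` pixels of ground truth."""
    ok = 0
    for (px, py), (tx, ty) in zip(pred, truth):
        if math.hypot(px - tx, py - ty) <= threshold:
            ok += 1
    return ok / len(truth)

# Toy skeleton with three joints (hypothetical pixel coordinates).
truth = [(100, 50), (120, 80), (140, 110)]
pred  = [(102, 51), (150, 80), (141, 108)]
print(pck(pred, truth, threshold=5))  # 2 of 3 joints within 5 px
```

Variants normalize the threshold by torso or head size per frame; the fixed pixel threshold here is the simplest form.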
action

This is a subset of the dataset introduced in the SIGGRAPH Asia 2009 paper, Webcam Clip Art: Appearance and Illuminant Transfer from Time-lapse Sequence...
urban, nature, time, webcam, video, illumination, change, static, camera, light

These datasets were generated for the M2CAI challenges, a satellite event of MICCAI 2016 in Athens. Two datasets are available for two different challen...
video, medicine, workflow, surgery, recognition, challenge

The YouTube-Objects dataset is composed of videos collected from YouTube by querying for the names of 10 object classes. It contains between 9 and 24 vi...
video, object, flow, segmentation, detection, optical

The CERTH image blur dataset consists of 2450 digital images, 1850 out of which are photographs captured by various camera models in different shooting ...
motion, quality, detection, image, defocus, blur

The crowd datasets are collected from a variety of sources, such as UCF and data-driven crowd datasets. The sequences are diverse, representing dense cr...
video, pedestrian, scene, crowd, human, understanding, anomaly, detection

The Mall dataset was collected from a publicly accessible webcam for crowd counting and profiling research. Ground truth: Over 60,000 pedestrians wer...
video, pedestrian, crowd, counting, tracking, detection, indoor, webcam

The Where Who Why (WWW) dataset provides 10,000 videos with over 8 million frames from 8,257 diverse scenes, therefore offering a superior comprehensive...
recognition, video, flow, pedestrian, crowd, surveillance, optical, detection

The VidPairs dataset contains 133 pairs of images, taken from 1080p HD (~2 megapixel) official movie trailers. Each pair consists of images of the same ...
matching, dense, video, flow, description, patch, pair, optical

The GaTech VideoStab dataset consists of N videos for the task of video stabilization. This code is implemented in the YouTube video editor for stabilizatio...
video, camera, path, stabilization

The Yotta dataset consists of 70 images for semantic labeling given in 11 classes. It also contains multiple videos and camera matrices for 14 km of driv...
urban, reconstruction, video, segmentation, 3d, classification, camera, semantic

The Berkeley Video Segmentation Dataset (BVSD) contains videos for segmentation (boundary?). Dataset train / Dataset test
video, segmentation, benchmark

The Olympic Sports Dataset contains YouTube videos of athletes practicing different sports.
action

The GaTech VideoContext dataset consists of over 100 groundtruth annotated outdoor videos with over 20000 frames for the task of geometric context eval...
urban, nature, outdoor, video, segmentation, supervised, classification, context, unsupervised, geometry, semantic

We introduce the Shelf dataset for multiple human pose estimation from multiple views. In addition we annotate the body joints in the Campus dataset fro...
motion, multiple, 3d, estimation, capture, pose, human, view

The multiple foreground video co-segmentation dataset consists of four sets, each with a video pair and two foreground objects in common. The datase...
video, segmentation, co-segmentation

Contains six types of human actions (walking, jogging, running, boxing, hand waving and hand clapping) performed several times by 25 subjects in four di...
action

The Daimler Urban Segmentation Dataset consists of video sequences recorded in urban traffic. The dataset consists of 5000 rectified stereo image pairs ...
segmentation, urban, motion, stereo, semantic, outdoor

The test sequences provide interested researchers a real-world multi-view test data set captured in the blue-c portals. The data is meant to be used for...
tracking, segmentation, camera, action, multiview

We present a new large-scale dataset that contains a diverse set of stereo video sequences recorded in street scenes from 50 different cities, with high...
urban, stereo, cities, person, video, weakly, segmentation, pedestrian, detection, car, semantic

The database of nude and non-nude videos contains a collection of 179 video segments collected from the following movies: Alpha Dog, Basic Instinct, Bef...
video, nude detection, movie

The Graz01 dataset by Andreas Opelt and Axel Pinz contains four types of images: bikes, people, background with no bikes, background with no people.
pedestrian, clutter, occlusion, object detection, graz, background, bike

Walk, Run, Jump, Gallop sideways, Bend, One-hand wave, Two-hands wave, Jump in place, Jumping Jack, Skip.
action

The dataset was captured by a Kinect device. There are 12 dynamic American Sign Language (ASL) gestures, and 10 people. Each person performs each gestur...
action

This dataset contains 5 different collective activities: crossing, walking, waiting, talking, and queueing, and 44 short video sequences, some of which w...
action

Hollywood-2 dataset contains 12 classes of human actions and 10 classes of scenes distributed over 3669 video clips and approximately 20.1 hours of video...
video, segmentation, action classification

The dataset consists of four temporally synchronized data modalities. These modalities include RGB videos, depth videos, skeleton positions, and inertia...
action

The ICG Multi-Camera and Virtual PTZ dataset contains the video streams and calibrations of several static Axis P1347 cameras and one panoramic video fr...
graz, outdoor, video, object, panorama, pedestrian, network, crowd, multiview, tracking, camera, multitarget, detection, calibration

The Weather and Illumination Database (WILD) is an extensive database of high quality images of an outdoor urban scene, acquired every hour over all sea...
urban, estimation, depth, weather, time, newyork, webcam, video, illumination, change, static, camera, light

The ICG Multi-Camera datasets consist of: Easy Data Set (just one person), Medium Data Set (3-5 persons, used for the experiments), Hard Data Set (cro...
graz, indoor, video, object, pedestrian, multiview, tracking, camera, multitarget, detection, calibration

Observations of several subjects setting a table in different ways. Contains videos, motion capture data, RFID tag readings,...
action

This dataset consists of seven meal-preparation activities, each performed by 10 subjects. Subjects perform the activities based on the given cooking re...
action

The Lane Level Localization dataset was collected on a highway in San Francisco with the following properties:
* Reasonable traffic
* Multiple lane h...
driving, benchmark, autonomous, video, road, gps, map, 3d, localization, car

The BEOID dataset includes object interactions ranging from preparing a coffee to operating a weight lifting machine and opening a door. The dataset is ...
video, object, egocentric, 3d, interaction, pose, tracking

UCF50 is an action recognition dataset with 50 action categories, consisting of realistic videos taken from YouTube.
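UCF50's videos are organized into 25 groups, and results are commonly reported as accuracy averaged over groups in a leave-one-group-out protocol. A minimal sketch of that group-wise averaging; the records below are made-up toy predictions, not dataset output:

```python
from collections import defaultdict

def grouped_accuracy(records):
    """Average per-group accuracy, as in leave-one-group-out reporting.

    records: iterable of (group_id, true_label, predicted_label) tuples.
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        hits[group] += (truth == pred)
    per_group = [hits[g] / totals[g] for g in totals]
    return sum(per_group) / len(per_group)

# Toy example: two groups with hypothetical class labels.
toy = [
    (1, "Basketball", "Basketball"),
    (1, "Biking", "Diving"),
    (2, "Diving", "Diving"),
]
print(grouped_accuracy(toy))  # (0.5 + 1.0) / 2 = 0.75
```

Averaging per group rather than over all clips keeps a single large group from dominating the reported number.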
action

The MOT Challenge (http://motchallenge.net/) is a framework for the fair evaluation of multiple people tracking algorithms. In this framework we provide:
- A large collection of...
multiple, benchmark, evaluation, dataset, target, video, pedestrian, 3d, tracking, surveillance, people

The Oxford RobotCar Dataset contains over 100 repetitions of a consistent route through Oxford, UK, captured over a period of over a year. The dataset c...
driving, street, urban, time, recognition, autonomous, video, segmentation, robot, classification, detection, car, year

The automotive multi-sensor (AMUSE) dataset consists of inertial and other complementary sensor data combined with monocular, omnidirectional, high fram...
urban, api, image, video, inertial, streetside, traffic, city

Collected from various sources, mostly from movies, and a small proportion from public databases, YouTube and Google videos. The dataset contains 6849 c...
action

This dataset consists of a set of actions collected from various sports which are typically featured on broadcast television channels such as the BBC an...
action

The HandNet dataset contains depth images of 10 participants' hands non-rigidly deforming in front of a RealSense RGB-D camera. This dataset includes 2...
rgbd, hand, articulation, video, segmentation, classification, pose, fingertip, detection

Fully annotated dataset of RGB-D video data and data from accelerometers attached to kitchen objects capturing 25 people preparing two mixed salads each...
action

The video co-segmentation dataset contains 4 video sets totaling 11 videos, with 5 frames of each video labeled with pixel-level ground-tr...
video, segmentation, co-segmentation, dataset

The UrbanStreet dataset used in the paper can be downloaded here [188M]. It contains 18 stereo sequences of pedestrians taken from a stereo rig mounted...
urban, human, recognition, video, pedestrian, segmentation, tracking, multitarget, detection

Scene Background Initialization (SBI) dataset: The SBI dataset has been assembled in order to evaluate and compare the results of background initializa...
change, detection, benchmark, background, foreground, initialization

The Cholec80 dataset contains 80 videos of cholecystectomy surgeries performed by 13 surgeons. The videos are captured at 25 fps. The dataset is labeled...
video, medicine, surgery, phase, tool, recognition

The Traffic Video dataset consists of X videos of an overhead camera showing a street crossing with multiple traffic scenarios. The dataset can be down...
video, urban, traffic, road, overhead, tracking, view, detection

The TVPR dataset includes 23 registration sessions. Each of the 23 folders contains the video of one registration session. Acquisitions have been perfor...
person, depth, recognition, indoor, top-view, video, clothing, gender, reidentification, identification, people

The Webcam Interestingness dataset consists of 20 different webcam streams, with 159 images each. It is annotated with interestingness ground truth, acq...
video, interest, retrieval, classification, weather, ranking, webcam

The All I Have Seen (AIHS) dataset is created to study the properties of total visual input in humans; for around two weeks Nebojsa Jojic wore a camera ...
similarity, scene, summary, user, indoor, outdoor, video, 3d, clustering, study

Dataset of 9,532 images of humans performing 40 different actions, annotated with bounding-boxes.
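Bounding-box annotations like these are typically matched against detections using intersection-over-union (IoU). A minimal sketch; the [x1, y1, x2, y2] box format and the example boxes are illustrative assumptions, not this dataset's documented format:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
    # Width/height of the overlap rectangle (zero if the boxes are disjoint).
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou([0, 0, 10, 10], [5, 0, 15, 10]))  # 50 / 150 = 0.333...
```

A common convention counts a detection as correct when IoU with a ground-truth box is at least 0.5.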
action

The High Definition Analytics (HDA) dataset is a multi-camera high-resolution image sequence dataset for research on high-definition surveillance: Pedes...
high-definition, benchmark, human, lisbon, indoor, video, re-identification, pedestrian, network, multiview, tracking, surveillance, camera, detection

The Stanford 40 Actions dataset contains images of humans performing 40 actions. In each image, we provide a bounding box of the person who is performin...
recognition, human, detection, action, boundingbox

At Udacity, we believe in democratizing education. How can we provide opportunity to everyone on the planet? We also believe in teaching really amazing ...
driving, street, urban, time, recognition, autonomous, video, segmentation, robot, classification, detection, car, synthetic

The current video database contains six types of human actions (walking, jogging, running, boxing, hand waving and hand clapping) performed several ti...
video, segmentation, action classification

The Pittsburgh Fast-food Image dataset (PFID) consists of 4545 still images, 606 stereo pairs, 303 360° videos for structure from motion, and 27 privacy-...
video, laboratory, classification, reconstruction, real, food, recognition

To evaluate our method we designed a new ground truth database of 50 images. The following zip-files contain: Data, Segmentation, Labelling - Lasso, Lab...
background, color, optimization, boundingbox, image segmentation

The Babenko tracking dataset contains 12 video sequences for single object tracking. For each clip they provide (1) a directory with the original i...
face, video, single, occlusion, object tracking, animal

This dataset features video sequences that were obtained using an R/C-controlled blimp equipped with an HD camera mounted on a gimbal. The collection repr...
action

Gaze data on video stimuli for computer vision and visual analytics. Converted 318 video sequences from several different gaze tracking data sets with...
video, metadata, segmentation, gaze data, polygon annotation

The dataset contains 15 documentary films that are downloaded from YouTube, whose durations vary from 9 minutes to as long as 50 minutes, and the total ...
video, object, detection