Salient Montages is a human-centric video summarization dataset from the paper [1]. In [1], we present a novel method to generate salient montages from unconstrained videos by finding montageable moments and identifying the salient people and actions to depict in each montage. Our method addresses the need for generating concise visualizations from the increasingly large number of videos being captured from portable devices. Our main contributions are (1) the process of finding salient people and moments to form a montage, and (2) the application of this method to videos taken in the wild where the camera moves freely. As such, we demonstrate results on head-mounted cameras, where the camera moves constantly, as well as on videos downloaded from YouTube. Our approach can operate on videos of any length; some will contain many montageable moments, while others may have none. We demonstrate that a novel montageability score can be used to retrieve results with relatively high precision, which allows us to present high quality montages to users. [1] M. Sun, A. Farhadi, B. Taskar, and S. Seitz. Salient Montages from Unconstrained Videos. ECCV, 2014.
The domain-specific personal videos highlight dataset from the paper [1] describes a fully automatic method to train domain-specific highlight ranker f...
saliency, domain, wearable, human, recognition, action, video, summarization
It is composed of ADL (activities of daily living) and fall actions simulated by 11 volunteers. The people involved in the test are aged between 22 and 39, w...
wearable, kinect, fall detection - adl, depth, human, recognition, action, accelerometer, video
The TUG (Timed Up and Go test) dataset consists of actions performed three times by 20 volunteers. The people involved in the test are aged between 22 a...
wearable, kinect, time, human, recognition, action, depth image processing - tug, accelerometer, video
The High Definition Analytics (HDA) dataset is a multi-camera high-resolution image sequence dataset for research on high-definition surveillance: Pedes...
high-definition, benchmark, human, lisbon, indoor, video, re-identification, pedestrian, network, multiview, tracking, surveillance, camera, detection
The Video Summarization (SumMe) dataset consists of 25 videos, each annotated with at least 15 human summaries (390 in total). The data consists of vide...
video, benchmark, summary, event, human, groundtruth, action
The crowd datasets are collected from a variety of sources, such as UCF and data-driven crowd datasets. The sequences are diverse, representing dense cr...
video, pedestrian, scene, crowd, human, understanding, anomaly, detection
Welcome to the homepage of the gvvperfcapeva datasets. This site serves as a hub to access a wide range of datasets that have been created for projects ...
face, reconstruction, depth, mesh, human, action, video, pose, multiview, tracking
The JPL First-Person Interaction dataset (JPL-Interaction dataset) is composed of human activity videos taken from a first-person viewpoint. The dataset par...
video, motion, action, interactive, recognition, human
The Multicamera Human Action Video Data (MuHAVi) Manually Annotated Silhouette Data (MAS) are two datasets consisting of selected action sequences for ...
video, segmentation, action, behavior, human, background
The SPHERE human skeleton movements dataset was created using a Kinect camera, which measures distances and provides a depth map of the scene instead of ...
motion, skeleton, kinect, movement, depth, human, action, video, behavior
The UrbanStreet dataset used in the paper can be downloaded here [188M]. It contains 18 stereo sequences of pedestrians taken from a stereo rig mounted...
urban, human, recognition, video, pedestrian, segmentation, tracking, multitarget, detection
ShakeFive2: a collection of 8 dyadic human interactions with accompanying skeleton metadata. The metadata is frame-based XML data containing the skelet...
video, human, kinect, interaction
The multi-modal/multi-view datasets are created in a cooperation between the University of Surrey and Double Negative within the EU FP7 IMPART project. Th...
rgbd, color, dynamic, multi-view, action, outdoor, video, 3d, face, emotion, lidar, human, indoor, multi-mode, model
Group emotion recognition in images - Happiness Intensity labels for groups of people in images. The images have been collected from Flickr using keyword...
emotion, wild, flickr, behavior, group, human, facial expression
The HandNet dataset contains depth images of 10 participants' hands non-rigidly deforming in front of a RealSense RGB-D camera. This dataset includes 2...
rgbd, hand, articulation, video, segmentation, classification, pose, fingertip, detection
The Buffy dataset contains images selected from the TV series Buffy the Vampire Slayer. We select a set of 452 images from the first two episodes for ...
segmentation, human, buffy, movie, object detection
The THUS10000 benchmark dataset comprises 10,000 images, each of which has an unambiguous salient object, and the object region is accurately annotate...
saliency, segmentation, salient object detection, attention, visual
The UCF Person and Car VideoSeg dataset consists of six videos with groundtruth for video object segmentation. Surfing, jumping, skiing, sliding, big ...
video, object, segmentation, motion, model, camera, groundtruth
The dataset captures 25 people preparing 2 mixed salads each and contains over 4h of annotated accelerometer and RGB-D video data. Annotated activities ...
video, activity, classification, tracking, recognition, detection, action
The video co-segmentation dataset contains 4 video sets comprising 11 videos in total, with 5 frames of each video labeled with pixel-level ground-tr...
video, segmentation, co-segmentation, dataset
The Cholec80 dataset contains 80 videos of cholecystectomy surgeries performed by 13 surgeons. The videos are captured at 25 fps. The dataset is labeled...
video, medicine, surgery, phase, tool, recognition
An indoor action recognition dataset which consists of 18 classes performed by 20 individuals. Each action is individually performed 8 times (4 dayt...
video, open-view, cross-view, recognition, indoor, action, multi-camera
The Traffic Video dataset consists of X videos of an overhead camera showing a street crossing with multiple traffic scenarios. The dataset can be down...
video, urban, traffic, road, overhead, tracking, view, detection
The GaTech VideoSeg dataset consists of two (waterski and yunakim?) video sequences for object segmentation. There exists no groundtruth segmentation ...
video, object, segmentation, motion, model, camera
The PIROPO database (People in Indoor ROoms with Perspective and Omnidirectional cameras) comprises multiple sequences recorded in two different indoor ...
perspective, human, indoor, room, surveillance, detection, fisheye, omnidirectional, people
The TVPR dataset includes 23 registration sessions. Each of the 23 folders contains the video of one registration session. Acquisitions have been perfor...
person, depth, recognition, indoor, top-view, video, clothing, gender, reidentification, identification, people
We present a new large-scale dataset that contains a diverse set of stereo video sequences recorded in street scenes from 50 different cities, with high...
urban, stereo, cities, person, video, weakly, segmentation, pedestrian, detection, car, semantic
The Webcam Interestingness dataset consists of 20 different webcam streams, with 159 images each. It is annotated with interestingness ground truth, acq...
video, interest, retrieval, classification, weather, ranking, webcam
The All I Have Seen (AIHS) dataset was created to study the properties of total visual input in humans: for around two weeks Nebojsa Jojic wore a camera ...
similarity, scene, summary, user, indoor, outdoor, video, 3d, clustering, study
A large dataset of geotagged face images collected from Flickr. The zip file contains text files containing URLs of the images. Face2GPS: Estimating G...
gender, face, geotagged, classification, age, localization, human
The database of nude and non-nude videos contains a collection of 179 video segments collected from the following movies: Alpha Dog, Basic Instinct, Bef...
video, nude detection, movie
The Airport MotionSeg dataset contains 12 sequences of videos of an airport scenario with small and large moving objects and various speeds. It is chall...
video, segmentation, motion, airport, clustering, camera, zoom
The Stanford 40 Actions dataset contains images of humans performing 40 actions. In each image, we provide a bounding box of the person who is performin...
recognition, human, detection, action, boundingbox
At Udacity, we believe in democratizing education. How can we provide opportunity to everyone on the planet? We also believe in teaching really amazing ...
driving, street, urban, time, recognition, autonomous, video, segmentation, robot, classification, detection, car, synthetic
The Leeds Cows dataset by Derek Magee consists of 14 different video sequences showing a total of 18 cows walking from right to left in front of differe...
video, segmentation, detection, cow, animal, background
The Video2GIF dataset contains over 100,000 pairs of GIFs and their source videos. The GIFs were collected from two popular GIF websites (makeagif.com, ...
gif, scene, summarization, summary, video highlight detection, understanding
The INRIA People dataset from Navneet Dalal and Bill Triggs [DalalCVPR2005] consists of training and testing data. The training contains 1805 images and...
pedestrian, sideview, boundingbox, frontview, object detection, human
The dataset consists of about 50 hours obtained from kindergarten surveillance videos, totaling approximately 100 video sequences (1000GB, 5...
segmentation, action, behavior, video surveillance, human, background
The current video database contains six types of human actions (walking, jogging, running, boxing, hand waving and hand clapping) performed several ti...
video, segmentation, action classification
The Weizmann actions dataset by Blank, Gorelick, Shechtman, Irani, and Basri consists of ten different types of actions: bending, jumping jack, jumping,...
video, segmentation, action, action classification
The Pittsburgh Fast-food Image dataset (PFID) consists of 4545 still images, 606 stereo pairs, 303 360° videos for structure from motion, and 27 privacy-...
video, laboratory, classification, reconstruction, real, food, recognition
The Hollywood-2 dataset contains 12 classes of human actions and 10 classes of scenes distributed over 3669 video clips and approximately 20.1 hours of video...
video, segmentation, action classification
It is composed of food intake movements, recorded with Kinect V1 (320×240 depth frame resolution), simulated by 35 volunteers for a total of 48 tests. The...
kinect, age, intake, pointcloud, human, tracking, monitoring, groundtruth, food, behavior
MICCAI 2015 Challenge on Liver Ultrasound Tracking, Munich, October 9, 2015 (full day). Outline: Ultrasound (US) imaging is a widely used medical imagi...
ultrasound, liver, benchmark, real, therapy, human, medical, tracking, organ
The ICG Multi-Camera and Virtual PTZ dataset contains the video streams and calibrations of several static Axis P1347 cameras and one panoramic video fr...
graz, outdoor, video, object, panorama, pedestrian, network, crowd, multiview, tracking, camera, multitarget, detection, calibration
The Weather and Illumination Database (WILD) is an extensive database of high quality images of an outdoor urban scene, acquired every hour over all sea...
urban, estimation, depth, weather, time, newyork, webcam, video, illumination, change, static, camera, light
The Babenko tracking dataset contains 12 video sequences for single object tracking. For each clip they provide (1) a directory with the original i...
face, video, single, occlusion, object tracking, animal
The PETS 2009 dataset contains 3 parts showing multi-view sequences containing pedestrians walking in an outdoor environment. The parts are used for per...
overlap, human, frontview, occlusion, multitarget, outdoor, pedestrian, tracking, detection
Gaze data on video stimuli for computer vision and visual analytics. Converted 318 video sequences from several different gaze tracking data sets with...
video, metadata, segmentation, gaze data, polygon annotation
The dataset contains 15 documentary films that are downloaded from YouTube, whose durations vary from 9 minutes to as long as 50 minutes, and the total ...
video, object, detection
We share our omnidirectional and panoramic image dataset (with annotations) to be used for human and car detection. Please reach us through: http://cvrg.i...
panorama, detection, car, omnidirection, recognition, human
The VSUMM (Video SUMMarization) dataset consists of 50 videos from Open Video. All videos are in MPEG-1 format (30 fps, 352 x 240 pixels), in color and with s...
similarity, type, summary, user, video, static, keyframe, study
The Database of Faces (ORL) contains ten different images of each of 40 distinct subjects. For some subjects, the images were taken at diffe...
illumination, face, recognition, human, expression
The Microsoft Research Cambridge-12 Kinect gesture dataset consists of sequences of human movements, represented as body-part locations, and the associa...
gesture, recognition, human, action, kinect
The xawAR16 dataset is a multi-RGBD camera dataset, generated inside an operating room (IHU Strasbourg), which was designed to evaluate tracking/relocal...
video, medicine, table, depth, operation, recognition, surgery
The Pornography database contains nearly 80 hours of 400 pornographic and 400 non-pornographic videos. For the pornographic class, we have browsed websi...
video, pornography, video shots, video frames
The UMD Dynamic Scene Recognition dataset consists of 13 classes and 10 videos per class and is used to classify dynamic scenes. The dataset has been ...
video, motion, dynamic, classification, scene, recognition
Dataset A (former NLPR Gait Database) was created on Dec. 10, 2001 and includes 20 persons. Each person has 12 image sequences, 4 sequences for each of th...
motion, foot, human, recognition, gait, action, classification, biometry, pressure
The Sheffield Kinect Gesture (SKIG) dataset contains 2160 hand gesture sequences (1080 RGB sequences and 1080 depth sequences) collected from 6 subjects. ...
illumination, gesture, kinect, depth, recognition, human, action
This dataset contains 7 challenging volleyball activity classes annotated in 6 videos from professionals in the Austrian Volley League (season 2011/12)....
video, sport, analysis, activity recognition, volleyball, detection, action
The MSR RGB-D Dataset 7-Scenes dataset is a collection of tracked RGB-D camera frames. The dataset may be used for evaluation of methods for different a...
video, kinect, location, reconstruction, depth, tracking
The CHALEARN Multi-modal Gesture Challenge is a dataset of more than 700 sequences for gesture recognition using images, Kinect depth, segmentation and skeleton dat...
gesture, skeleton, kinect, depth, human, recognition, action, illumination, segmentation
This is a subset of the dataset introduced in the SIGGRAPH Asia 2009 paper, Webcam Clip Art: Appearance and Illuminant Transfer from Time-lapse Sequence...
urban, nature, time, webcam, video, illumination, change, static, camera, light
These datasets were generated for the M2CAI challenges, a satellite event of MICCAI 2016 in Athens. Two datasets are available for two different challen...
video, medicine, workflow, surgery, recognition, challenge
The PASCAL VOC is augmented with segmentation annotations for semantic parts of objects. For example, for the person category, we provide segmentation ma...
part, human, recognition, object, pedestrian, segmentation, pascal, detection, semantic
The YouTube-Objects dataset is composed of videos collected from YouTube by querying for the names of 10 object classes. It contains between 9 and 24 vi...
video, object, flow, segmentation, detection, optical
The SegTrack dataset consists of six videos (five are used) with ground truth pixelwise segmentation (the 6th, penguin, is not usable). The dataset is used fo...
motion, video, object, proposal, flow, segmentation, stationary, model, camera, optical, groundtruth
We collected a video dataset, termed ChokePoint, designed for experiments in person identification/verification under real-world surveillance conditions...
face, real, human, recognition, world, pedestrian, identification, clustering, multiview, surveillance, detection, sequence
The MSR Action datasets are a collection of various 3D datasets for action recognition. See details at http://research.microsoft.com/en-us/um/people/zliu...
video, detection, 3d, action, reconstruction, recognition
The Mall dataset was collected from a publicly accessible webcam for crowd counting and profiling research. Ground truth: over 60,000 pedestrians wer...
video, pedestrian, crowd, counting, tracking, detection, indoor, webcam
The Where Who Why (WWW) dataset provides 10,000 videos with over 8 million frames from 8,257 diverse scenes, therefore offering a superior comprehensive...
recognition, video, flow, pedestrian, crowd, surveillance, optical, detection
Many different labeled video datasets have been collected over the past few years, but it is hard to compare them at a glance. So we have created a hand...
video, object, benchmark, classification, recognition, detection, action
This dataset consists of 51 oral presentations recorded with 2 ambient visual sensors (web-cams), 3 First Person View (FPV) cameras (1 on presenter and 2 on ra...
video, quality, kinect, multi-sensor, presentation, analysis
The Freiburg-Berkeley Motion Segmentation Dataset (FBMS-59) is an extension of the BMS dataset with 33 additional video sequences. A total of 720 frames...
motion, benchmark, video, object, pedestrian, segmentation, tracking, groundtruth
The VidPairs dataset contains 133 pairs of images, taken from 1080p HD (~2 megapixel) official movie trailers. Each pair consists of images of the same ...
matching, dense, video, flow, description, patch, pair, optical
The dataset contains 1000 images of 100 persons, with 10 images per person, and is freely available. All images were acquired by cropping ears from images fr...
person, pedestrian, ear, recognition, human, lighting, biometry
The GaTech VideoStab dataset consists of N videos for the task of video stabilization. This code is implemented in the YouTube video editor for stabilizatio...
video, camera, path, stabilization
The Yotta dataset consists of 70 images for semantic labeling given in 11 classes. It also contains multiple videos and camera matrices for 14km of driv...
urban, reconstruction, video, segmentation, 3d, classification, camera, semantic
The Berkeley Video Segmentation Dataset (BVSD) contains videos for segmentation (boundary?). Downloads: Dataset train, Dataset test.
video, segmentation, benchmark
The Fish4Knowledge project (groups.inf.ed.ac.uk/f4k/) is pleased to announce the availability of 2 subsets of our tropical coral reef fish video and e...
motion, nature, recognition, fish, video, water, classification, animal, camera
The database contains 798 images of 114 persons, with 7 images per person, and is freely available for research purposes. All images were taken in supervised...
face, person, human, lighting, recognition, illumination, pedestrian, biometry
The GaTech VideoContext dataset consists of over 100 groundtruth-annotated outdoor videos with over 20000 frames for the task of geometric context eval...
urban, nature, outdoor, video, segmentation, supervised, classification, context, unsupervised, geometry, semantic
We introduce the Shelf dataset for multiple human pose estimation from multiple views. In addition we annotate the body joints in the Campus dataset fro...
motion, multiple, 3d, estimation, capture, pose, human, view
The multiple foreground video co-segmentation dataset consists of four sets, each with a video pair and two foreground objects in common. The datase...
video, segmentation, co-segmentation
ChairGest is an open challenge / benchmark. The task consists in spotting and recognizing gestures from multiple synchronized sensors: 1 Kinect and 4 X...
gesture, detection, benchmark, kinect, recognition, human
The ICG Multi-Camera datasets consist of an Easy Data Set (just one person), a Medium Data Set (3-5 persons, used for the experiments), and a Hard Data Set (cro...
graz, indoor, video, object, pedestrian, multiview, tracking, camera, multitarget, detection, calibration
The Robotic 3D Scan Repository from Osnabrueck contains 23 different datasets showing a variety of 3D scans of objects, humans, cities, university camp...
lidar, scan, urban, reconstruction, human, laser, heat, aerial, germany, 3d, bremen, city, osnabrueck
The Lane Level Localization dataset was collected on a highway in San Francisco with the following properties:
* Reasonable traffic
* Multiple lane h...
driving, benchmark, autonomous, video, road, gps, map, 3d, localization, car
The FaceScrub dataset comprises a total of 107818 unconstrained face images of 530 celebrities crawled from the Internet, with about 200 images per pers...
face, celebrity, detection, people, recognition, human
The BEOID dataset includes object interactions ranging from preparing a coffee to operating a weight lifting machine and opening a door. The dataset is ...
video, object, egocentric, 3d, interaction, pose, tracking
We introduce a labeled dataset of categorized images for evaluating sketch-based image retrieval. Using Flickr, we downloaded about 3000 images for each...
saliency, internet, shape, sketch, visual, attention, group, retrieval, salient object detection
The MOT Challenge is a framework for the fair evaluation of multiple people tracking algorithms. In this framework we provide: - A large collection of...
multiple, benchmark, evaluation, http://motchallenge.net/, dataset, target, video, pedestrian, 3d, tracking, surveillance, people
The Oxford RobotCar Dataset contains over 100 repetitions of a consistent route through Oxford, UK, captured over a period of over a year. The dataset c...
driving, street, urban, time, recognition, autonomous, video, segmentation, robot, classification, detection, car, year
The automotive multi-sensor (AMUSE) dataset consists of inertial and other complementary sensor data combined with monocular, omnidirectional, high fram...
urban, api, image, video, inertial, streetside, traffic, city
The QMUL Junction dataset is a busy traffic scenario for research on activity analysis and behavior understanding. Video length: 1 hour (90000 frame...
video, motion, pedestrian, crowd, counting, tracking, detection, behavior
AWS hosts a variety of public datasets that anyone can access for free. Previously, large datasets such as satellite imagery or genomic data have requi...
space, human, recognition, image, amazon, satellite, segmentation, learning, deep, classification, biology, resolution
The Background Models Challenge (BMC) is a complete dataset and competition for the comparison of background subtraction algorithms. The main topics concer...
motion, background, video, modeling, segmentation, change, surveillance, detection
This web page contains video data and ground truth for 16 dances with two different dance patterns. The style of dancing is inspired by Scottish Ceilidh...
motion, dance, analysis, background, action, video, chemistry, pattern
The Video Segmentation Benchmark (VSB100) provides ground truth annotations for the Berkeley Video Dataset, which consists of 100 HD quality videos divi...
video, object, segmentation, motion, pedestrian, benchmark, tracking, groundtruth