This dataset consists of 51 oral presentations recorded with 2 ambient visual sensors (web-cams), 3 First Person View (FPV) cameras (1 on the presenter and 2 on randomly chosen audience members), and 1 presenter-facing Kinect sensor. Oral presentation has long been an effective method for delivering information to a group of participants. In the past couple of decades, technological advancements have revolutionized the way humans deliver presentations. Unfortunately, for a variety of reasons, the quality of presentations varies, which can affect their efficacy. Assessing the quality of a presentation usually requires painstaking manual analysis by experts, and expert feedback can definitely help people improve their presentation skills. Unfortunately, manual evaluation of presentation quality by experts is not cost effective and may not be available to most people.
The SPHERE human skeleton movements dataset was created using a Kinect camera, which measures distances and provides a depth map of the scene instead of ...
motion, skeleton, kinect, movement, depth, human, action, video, behavior
This web page contains video data and ground truth for 16 dances with two different dance patterns. The style of dancing is inspired by Scottish Ceilidh...
motion, dance, analysis, background, action, video, chemistry, pattern
This dataset contains 7 challenging volleyball activity classes annotated in 6 videos from professionals in the Austrian Volley League (season 2011/12)....
video, sport, analysis, activity recognition, volleyball, detection, action
The MSR RGB-D 7-Scenes dataset is a collection of tracked RGB-D camera frames. The dataset may be used for evaluation of methods for different a...
video, kinect, location, reconstruction, depth, tracking
It is composed of ADL (activity daily living) and fall actions simulated by 11 volunteers. The people involved in the test are aged between 22 and 39, w...
wearable, kinect, fall detection - adl, depth, human, recognition, action, accelerometer, video
ShakeFive2 is a collection of 8 dyadic human interactions with accompanying skeleton metadata. The metadata is frame-based XML data containing the skelet...
video, human, kinect, interaction
The TUG (Timed Up and Go test) dataset consists of actions performed three times by 20 volunteers. The people involved in the test are aged between 22 a...
wearable, kinect, time, human, recognition, action, depth image processing - tug, accelerometer, video
The QMUL Junction dataset is a busy traffic scenario for research on activity analysis and behavior understanding. Video length: 1 hour (90000 frame...
video, motion, pedestrian, crowd, counting, tracking, detection, behavior
The Berkeley Video Segmentation Dataset (BVSD) contains videos for segmentation (boundary?). Dataset train, Dataset test
video, segmentation, benchmark
The Video Summarization (SumMe) dataset consists of 25 videos, each annotated with at least 15 human summaries (390 in total). The data consists of vide...
video, benchmark, summary, event, human, groundtruth, action
The Fish4Knowledge project (groups.inf.ed.ac.uk/f4k/) is pleased to announce the availability of 2 subsets of our tropical coral reef fish video and e...
motion, nature, recognition, fish, video, water, classification, animal, camera
Background Models Challenge (BMC) is a complete dataset and competition for the comparison of background subtraction algorithms. The main topics concer...
motion, background, video, modeling, segmentation, change, surveillance, detection
The Video Segmentation Benchmark (VSB100) provides ground truth annotations for the Berkeley Video Dataset, which consists of 100 HD quality videos divi...
video, object, segmentation, motion, pedestrian, benchmark, tracking, groundtruth
The HandNet dataset contains depth images of 10 participants' hands non-rigidly deforming in front of a RealSense RGB-D camera. This dataset includes 2...
rgbd, hand, articulation, video, segmentation, classification, pose, fingertip, detection
The GaTech VideoContext dataset consists of over 100 groundtruth annotated outdoor videos with over 20000 frames for the task of geometric context eval...
urban, nature, outdoor, video, segmentation, supervised, classification, context, unsupervised, geometry, semantic
The multiple foreground video co-segmentation dataset, consisting of four sets, each with a video pair and two foreground objects in common. The datase...
video, segmentation, co-segmentation
ChairGest is an open challenge / benchmark. The task consists in spotting and recognizing gestures from multiple synchronized sensors: 1 Kinect and 4 X...
gesture, detection, benchmark, kinect, recognition, human
The multi-modal/multi-view datasets are created in a cooperation between University of Surrey and Double Negative within the EU FP7 IMPART project. Th...
rgbd, color, dynamic, multi-view, action, outdoor, video, 3d, face, emotion, lidar, human, indoor, multi-mode, model
The UCF Person and Car VideoSeg dataset consists of six videos with groundtruth for video object segmentation. Surfing, jumping, skiing, sliding, big ...
video, object, segmentation, motion, model, camera, groundtruth
The dataset captures 25 people preparing 2 mixed salads each and contains over 4h of annotated accelerometer and RGB-D video data. Annotated activities ...
video, activity, classification, tracking, recognition, detection, action
The domain-specific personal videos highlight dataset from the paper [1] describes a fully automatic method to train domain-specific highlight ranker f...
saliency, domain, wearable, human, recognition, action, video, summarization
The video co-segmentation dataset contains 4 video sets comprising 11 videos in total, with 5 frames of each video labeled with pixel-level ground-tr...
video, segmentation, co-segmentation, dataset
We present the 2017 DAVIS Challenge, a public competition specifically designed for the task of video object segmentation. Following the footsteps of ot...
code, quality, benchmark, video segmentation, object, segmentation, hd, tracking, resolution
The UrbanStreet dataset used in the paper can be downloaded here [188M]. It contains 18 stereo sequences of pedestrians taken from a stereo rig mounted...
urban, human, recognition, video, pedestrian, segmentation, tracking, multitarget, detection
The NYU-Depth V2 data set is comprised of video sequences from a variety of indoor scenes as recorded by both the RGB and Depth cameras from the Microso...
semantic segmentation, kinect, label, reconstruction, depth
The Cholec80 dataset contains 80 videos of cholecystectomy surgeries performed by 13 surgeons. The videos are captured at 25 fps. The dataset is labeled...
video, medicine, surgery, phase, tool, recognition
An indoor action recognition dataset which consists of 18 classes performed by 20 individuals. Each action is individually performed 8 times (4 dayt...
video, open-view, cross-view, recognition, indoor, action, multi-camera
The Traffic Video dataset consists of X videos of an overhead camera showing a street crossing with multiple traffic scenarios. The dataset can be down...
video, urban, traffic, road, overhead, tracking, view, detection
The Multicamera Human Action Video Data (MuHAVi) Manually Annotated Silhouette Data (MAS) are two datasets consisting of selected action sequences for ...
video, segmentation, action, behavior, human, background
This dataset package contains the software and data used for Detection-based Object Labeling on the RGB-D Scenes Dataset as implemented in the paper: ...
object, 3d, kinect, reconstruction, depth, recognition, indoor
The GaTech VideoSeg dataset consists of two (waterski and yunakim?) video sequences for object segmentation. There exists no groundtruth segmentation ...
video, object, segmentation, motion, model, camera
The TVPR dataset includes 23 registration sessions. Each of the 23 folders contains the video of one registration session. Acquisitions have been perfor...
person, depth, recognition, indoor, top-view, video, clothing, gender, reidentification, identification, people
We present a new large-scale dataset that contains a diverse set of stereo video sequences recorded in street scenes from 50 different cities, with high...
urban, stereo, cities, person, video, weakly, segmentation, pedestrian, detection, car, semantic
The Webcam Interestingness dataset consists of 20 different webcam streams, with 159 images each. It is annotated with interestingness ground truth, acq...
video, interest, retrieval, classification, weather, ranking, webcam
The All I Have Seen (AIHS) dataset was created to study the properties of total visual input in humans: for around two weeks, Nebojsa Jojic wore a camera ...
similarity, scene, summary, user, indoor, outdoor, video, 3d, clustering, study
The database of nude and non-nude videos contains a collection of 179 video segments collected from the following movies: Alpha Dog, Basic Instinct, Bef...
video, nude detection, movie
The Airport MotionSeg dataset contains 12 sequences of videos of an airport scenario with small and large moving objects and various speeds. It is chall...
video, segmentation, motion, airport, clustering, camera, zoom
The High Definition Analytics (HDA) dataset is a multi-camera High-Resolution image sequence dataset for research on High-Definition surveillance: Pedes...
high-definition, benchmark, human, lisbon, indoor, video, re-identification, pedestrian, network, multiview, tracking, surveillance, camera, detection
At Udacity, we believe in democratizing education. How can we provide opportunity to everyone on the planet? We also believe in teaching really amazing ...
driving, street, urban, time, recognition, autonomous, video, segmentation, robot, classification, detection, car, synthetic
The Leeds Cows dataset by Derek Magee consists of 14 different video sequences showing a total of 18 cows walking from right to left in front of differe...
video, segmentation, detection, cow, animal, background
The current video database containing six types of human actions (walking, jogging, running, boxing, hand waving and hand clapping) performed several ti...
video, segmentation, action classification
The Weizmann actions dataset by Blank, Gorelick, Shechtman, Irani, and Basri consists of ten different types of actions: bending, jumping jack, jumping,...
video, segmentation, action, action classification
The .enpeda.. Image Sequence Analysis Test Site (EISATS) offers sets of long bi- or trinocular image sequences recorded in the context of vision-based d...
motion, stereo, analysis, flow, segmentation, optical, semantic, vision
The Pittsburgh Fast-food Image dataset (PFID) consists of 4545 still images, 606 stereo pairs, 303 360° videos for structure from motion, and 27 privacy-...
video, laboratory, classification, reconstruction, real, food, recognition
Hollywood-2 dataset contains 12 classes of human actions and 10 classes of scenes distributed over 3669 video clips and approximately 20.1 hours of video...
video, segmentation, action classification
It is composed of food intake movements, recorded with Kinect V1 (320×240 depth frame resolution), simulated by 35 volunteers for a total of 48 tests. The...
kinect, age, intake, pointcloud, human, tracking, monitoring, groundtruth, food, behavior
The ICG Multi-Camera and Virtual PTZ dataset contains the video streams and calibrations of several static Axis P1347 cameras and one panoramic video fr...
graz, outdoor, video, object, panorama, pedestrian, network, crowd, multiview, tracking, camera, multitarget, detection, calibration
The Weather and Illumination Database (WILD) is an extensive database of high quality images of an outdoor urban scene, acquired every hour over all sea...
urban, estimation, depth, weather, time, newyork, webcam, video, illumination, change, static, camera, light
The Babenko tracking dataset contains 12 video sequences for single object tracking. For each clip they provide (1) a directory with the original i...
face, video, single, occlusion, object tracking, animal
Gaze data on video stimuli for computer vision and visual analytics. Converted 318 video sequences from several different gaze tracking data sets with...
video, metadata, segmentation, gaze data, polygon annotation
The dataset contains 15 documentary films that are downloaded from YouTube, whose durations vary from 9 minutes to as long as 50 minutes, and the total ...
video, object, detection
The Salient Montages is a human-centric video summarization dataset from the paper [1]. In [1], we present a novel method to generate salient montages...
video, saliency, wearable, montage, summarization, human
JPL First-Person Interaction dataset (JPL-Interaction dataset) is composed of human activity videos taken from a first-person viewpoint. The dataset par...
video, motion, action, interactive, recognition, human
The VSUMM (Video SUMMarization) dataset consists of 50 videos from Open Video. All videos are in MPEG-1 format (30 fps, 352 x 240 pixels), in color and with s...
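Because VSUMM quotes a fixed frame rate, timestamp/frame bookkeeping for summaries reduces to simple arithmetic. A minimal sketch, assuming the 30 fps rate above; the helper names are illustrative and not part of any dataset tooling:

```python
def seconds_to_frame(t_seconds, fps=30):
    """Map a timestamp to the nearest frame index at a fixed frame rate (assumed 30 fps)."""
    return round(t_seconds * fps)

def uniform_keyframes(n_frames, k):
    """Evenly spaced frame indices, a common baseline for static-keyframe summaries."""
    step = n_frames / k
    return [int(i * step) for i in range(k)]
```

For example, uniform_keyframes(100, 5) picks frames [0, 20, 40, 60, 80], which can then be compared against the user-supplied keyframe summaries.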
similarity, type, summary, user, video, static, keyframe, study
The Microsoft Research Cambridge-12 Kinect gesture dataset consists of sequences of human movements, represented as body-part locations, and the associa...
gesture, recognition, human, action, kinect
The xawAR16 dataset is a multi-RGBD camera dataset, generated inside an operating room (IHU Strasbourg), which was designed to evaluate tracking/relocal...
video, medicine, table, depth, operation, recognition, surgery
The Pornography database contains nearly 80 hours of 400 pornographic and 400 non-pornographic videos. For the pornographic class, we have browsed websi...
video, pornography, video shots, video frames
The UMD Dynamic Scene Recognition dataset consists of 13 classes and 10 videos per class and is used to classify dynamic scenes. The dataset has been ...
video, motion, dynamic, classification, scene, recognition
The Sheffield Kinect Gesture (SKIG) dataset contains 2160 hand gesture sequences (1080 RGB sequences and 1080 depth sequences) collected from 6 subjects.
illumination, gesture, kinect, depth, recognition, human, action
Welcome to the homepage of the gvvperfcapeva datasets. This site serves as a hub to access a wide range of datasets that have been created for projects ...
face, reconstruction, depth, mesh, human, action, video, pose, multiview, tracking
The CHALEARN Multi-modal Gesture Challenge is a dataset of +700 sequences for gesture recognition using images, Kinect depth, segmentation and skeleton dat...
gesture, skeleton, kinect, depth, human, recognition, action, illumination, segmentation
This is a subset of the dataset introduced in the SIGGRAPH Asia 2009 paper, Webcam Clip Art: Appearance and Illuminant Transfer from Time-lapse Sequence...
urban, nature, time, webcam, video, illumination, change, static, camera, light
These datasets were generated for the M2CAI challenges, a satellite event of MICCAI 2016 in Athens. Two datasets are available for two different challen...
video, medicine, workflow, surgery, recognition, challenge
The YouTube-Objects dataset is composed of videos collected from YouTube by querying for the names of 10 object classes. It contains between 9 and 24 vi...
video, object, flow, segmentation, detection, optical
The SegTrack dataset consists of six videos (five are used) with ground truth pixelwise segmentation (6th penguin is not usable). The dataset is used fo...
motion, video, object, proposal, flow, segmentation, stationary, model, camera, optical, groundtruth
The CERTH image blur dataset consists of 2450 digital images, 1850 out of which are photographs captured by various camera models in different shooting ...
motion, quality, detection, image, defocus, blur
The crowd datasets are collected from a variety of sources, such as UCF and data-driven crowd datasets. The sequences are diverse, representing dense cr...
video, pedestrian, scene, crowd, human, understanding, anomaly, detection
The MSR Action datasets are a collection of various 3D datasets for action recognition. See details at http://research.microsoft.com/en-us/um/people/zliu...
video, detection, 3d, action, reconstruction, recognition
The Mall dataset was collected from a publicly accessible webcam for crowd counting and profiling research. Ground truth: Over 60,000 pedestrians wer...
video, pedestrian, crowd, counting, tracking, detection, indoor, webcam
The Where Who Why (WWW) dataset provides 10,000 videos with over 8 million frames from 8,257 diverse scenes, therefore offering a superior comprehensive...
recognition, video, flow, pedestrian, crowd, surveillance, optical, detection
The NYU-Depth data set is comprised of video sequences from a variety of indoor scenes as recorded by both the RGB and Depth cameras from the Microsoft ...
semantic segmentation, kinect, label, reconstruction, depth
Many different labeled video datasets have been collected over the past few years, but it is hard to compare them at a glance. So we have created a hand...
video, object, benchmark, classification, recognition, detection, action
The Freiburg-Berkeley Motion Segmentation Dataset (FBMS-59) is an extension of the BMS dataset with 33 additional video sequences. A total of 720 frames...
motion, benchmark, video, object, pedestrian, segmentation, tracking, groundtruth
The VidPairs dataset contains 133 pairs of images, taken from 1080p HD (~2 megapixel) official movie trailers. Each pair consists of images of the same ...
matching, dense, video, flow, description, patch, pair, optical
The GaTech VideoStab dataset consists of N videos for the task of video stabilization. This code is implemented in the YouTube video editor for stabilizatio...
video, camera, path, stabilization
The ICG Multi-Camera datasets consist of: Easy Data Set (just one person), Medium Data Set (3-5 persons, used for the experiments), Hard Data Set (cro...
graz, indoor, video, object, pedestrian, multiview, tracking, camera, multitarget, detection, calibration
The Yotta dataset consists of 70 images for semantic labeling given in 11 classes. It also contains multiple videos and camera matrices for 14 km of driv...
urban, reconstruction, video, segmentation, 3d, classification, camera, semantic
The Lane Level Localization dataset was collected on a highway in San Francisco with the following properties:
* Reasonable traffic
* Multiple lane h...
driving, benchmark, autonomous, video, road, gps, map, 3d, localization, car
The BEOID dataset includes object interactions ranging from preparing a coffee to operating a weight lifting machine and opening a door. The dataset is ...
video, object, egocentric, 3d, interaction, pose, tracking
The MOT Challenge is a framework for the fair evaluation of multiple people tracking algorithms. In this framework we provide:
- A large collection of...
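MOTChallenge ground truth and tracker results are exchanged as comma-separated text, one bounding box per line, beginning with frame number, track id, box left/top/width/height, and a confidence value; later columns vary between benchmark releases. A minimal reader sketch under that assumption (verify the column layout against the release you download):

```python
import csv
from io import StringIO

def parse_mot(text):
    """Read MOT-style CSV rows into dicts: frame, track id, bbox (left, top, w, h), confidence.
    Release-specific trailing columns (class, visibility, ...) are ignored here."""
    rows = []
    for rec in csv.reader(StringIO(text)):
        if not rec:
            continue  # skip blank lines
        rows.append({
            "frame": int(rec[0]),
            "id": int(rec[1]),
            "bbox": tuple(float(v) for v in rec[2:6]),
            "conf": float(rec[6]),
        })
    return rows
```

Keeping the parse this small makes it easy to adapt when a benchmark year adds or reinterprets the trailing columns.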
multiple, benchmark, evaluation, http://motchallenge.net/, dataset, target, video, pedestrian, 3d, tracking, surveillance, people
For the first few decades of the field's existence, computer vision was focused on algorithmic, logical approaches to perception. But it was only wi...
object, 3d, kinect, reconstruction, depth, recognition, indoor
The Oxford RobotCar Dataset contains over 100 repetitions of a consistent route through Oxford, UK, captured over a period of over a year. The dataset c...
driving, street, urban, time, recognition, autonomous, video, segmentation, robot, classification, detection, car, year
The automotive multi-sensor (AMUSE) dataset consists of inertial and other complementary sensor data combined with monocular, omnidirectional, high fram...
urban, api, image, video, inertial, streetside, traffic, city