Description

The Video Segmentation Benchmark (VSB100) provides ground truth annotations for the Berkeley Video Dataset, which consists of 100 HD-quality videos divided into train and test folders containing 40 and 60 videos, respectively. Each video was annotated by four different persons. The annotation comes with the evaluation software used for the experiments reported in Galasso et al.

Downloads

Download training set (40 videos, external link to Berkeley server)
Download test set (60 videos, external link to Berkeley server)
Local links for train and test set: Train set | Test set

Download annotation
Full frame annotation - General Benchmark:
  Training set [Full Res] [Half Res]
  Test set [Full Res] [Half Res]
Download evaluation code (ver 1.3)
Denser annotations for the training set by Khoreva et al., GCPR 2014: [Full Res] [Half Res]

Subtasks

Motion - objects which undergo significant motion; objects such as snow and fire are excluded.
  Training set [Full Res] [Half Res]
  Test set [Full Res] [Half Res]
Non-rigid motion - subset of the Motion subtask; objects which undergo significant articulated motion.
  Training set [Full Res] [Half Res]
  Test set [Full Res] [Half Res]
Camera motion - sequences where the camera moves.
  Training set [Full Res] [Half Res]
  Test set [Full Res] [Half Res]

Mirror links of VSB100: [Mirror 1] [Mirror 2]

Terms of use

The dataset is provided for research purposes only; any commercial use is prohibited. When using the dataset in your research work, please cite the following papers:

F. Galasso, N.S. Nagaraja, T.J. Cardenas, T. Brox, B. Schiele. A Unified Video Segmentation Benchmark: Annotation, Metrics and Analysis. International Conference on Computer Vision (ICCV), 2013.

P. Sundberg, T. Brox, M. Maire, P. Arbelaez, J. Malik. Occlusion Boundary Detection and Figure/Ground Assignment from Optical Flow. Computer Vision and Pattern Recognition (CVPR), 2011.
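As a quick sanity check after downloading and extracting the videos, the short Python sketch below counts the entries in the train and test folders against the advertised 40/60 split. The root path and the folder names "Train" and "Test" are assumptions for illustration and may need to be adjusted to the actual archive layout.

    # Sanity check: compare the number of extracted entries against the
    # advertised VSB100 split (40 training videos, 60 test videos).
    # The root path and folder names below are assumptions; adjust them
    # to match the layout of the extracted archives.
    from pathlib import Path

    EXPECTED = {"Train": 40, "Test": 60}

    def check_split(root: Path) -> None:
        for split, expected in EXPECTED.items():
            split_dir = root / split
            if not split_dir.is_dir():
                print(f"{split}: not found at {split_dir}")
                continue
            # Count visible entries (videos or per-video folders).
            n = sum(1 for entry in split_dir.iterdir()
                    if not entry.name.startswith("."))
            status = "OK" if n == expected else f"expected {expected}"
            print(f"{split}: {n} entries ({status})")

    check_split(Path("VSB100"))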

Related datasets