Description

The TU Berlin Multi-Object and Multi-Camera Tracking Dataset (MOCAT) is a synthetic dataset for training and testing detection and tracking systems in a virtual world. One of its key advantages is the complete and accurate ground truth, including pixel-accurate object masks. All sequences are rendered three times, each with different illumination settings, which allows the influence of illumination on the algorithm under test to be measured directly. For each sequence, 8 to 10 camera views with partly overlapping fields of view are available, including camera calibration information. The ground truth also contains the world position of each object, so multi-camera tracking performance can be evaluated as well. All sequences contain vehicles, animals, and pedestrians as objects to detect and track.

Erik Bochinski, Volker Eiselein, Thomas Sikora, "Training a Convolutional Neural Network for Multi-Class Object Detection Using Solely Virtual World Data," IEEE International Conference on Advanced Video and Signal-Based Surveillance (AVSS), 2016, pp. 278-285.
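
Because each camera view comes with calibration information and the ground truth provides object world positions, per-camera results can be related to a common world frame. The minimal sketch below shows a standard pinhole projection of a ground-truth world position into one camera view; the intrinsic/extrinsic values and variable names are illustrative assumptions, not the dataset's actual annotation format.

    import numpy as np

    def project_to_image(X_world, K, R, t):
        """Project a 3D world point to pixel coordinates via the
        pinhole model x ~ K [R | t] X_world."""
        X_cam = R @ X_world + t      # world -> camera coordinates
        x = K @ X_cam                # camera -> homogeneous pixel coordinates
        return x[:2] / x[2]          # dehomogenize

    # Hypothetical calibration and ground-truth values; the real ones
    # are read from the per-camera calibration files of the dataset.
    K = np.array([[1000.0,    0.0, 960.0],
                  [   0.0, 1000.0, 540.0],
                  [   0.0,    0.0,   1.0]])   # intrinsics
    R = np.eye(3)                             # rotation world -> camera
    t = np.array([0.0, 0.0, 10.0])            # translation world -> camera
    obj_world = np.array([1.5, 0.2, 2.0])     # ground-truth world position

    print(project_to_image(obj_world, K, R, t))  # pixel location in this view
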

Related datasets