Researchers at Utrecht University have created the Utrecht Multi-Person Motion (UMPM) benchmark to evaluate human motion capture algorithms for multiple subjects, in a similar way as HumanEva does for a single subject. It includes 10 different multi-person scenarios, including interaction, each with 1 to 4 persons.

Per scenario, the dataset provides:

* Four synchronized color video sequences of 30-60 s at 644 × 484 pixels and 50 fps
* Motion capture (MoCap) data at 100 fps for at most 2 persons:
  * 37 marker positions per subject (corrected)
  * 15 virtual marker positions obtained by interpolation
  * 15 virtual marker positions obtained by inverse kinematics
* Calibration data
* Background images

To advance the state of the art in human motion estimation, the UMPM benchmark is made available to the research community. The video recordings, C3D files, calibration parameters, background images, documentation, and software of the UMPM benchmark can be downloaded from http://www.projects.science.uu.nl/umpm/.

Please acknowledge the following publication when using the UMPM benchmark in your research:

N.P. van der Aa, X. Luo, G.-J. Giezeman, R.T. Tan, and R.C. Veltkamp. Utrecht Multi-Person Motion (UMPM) benchmark: a multi-person dataset with synchronized video and motion capture data for evaluation of articulated human motion and interaction. In Proceedings of the Workshop on Human Interaction in Computer Vision (HICV), in conjunction with ICCV 2011.

For further questions, please contact UMPM@science.uu.nl.
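
The MoCap data are distributed as C3D files, which any standard C3D reader can parse. Below is a minimal sketch in Python, assuming the third-party ezc3d library (an assumption, not part of the UMPM software); the file name is hypothetical and stands in for any C3D file from the benchmark.

    # Minimal sketch: reading UMPM marker data from a C3D file.
    # Assumptions: ezc3d is installed (pip install ezc3d) and
    # "umpm_scenario.c3d" is a placeholder for an actual benchmark file.
    import ezc3d

    c3d = ezc3d.c3d("umpm_scenario.c3d")  # hypothetical file name

    rate = c3d["parameters"]["POINT"]["RATE"]["value"][0]   # MoCap rate (100 fps)
    labels = c3d["parameters"]["POINT"]["LABELS"]["value"]  # marker names
    points = c3d["data"]["points"]  # array of shape (4, n_markers, n_frames)

    print(f"{len(labels)} markers at {rate} Hz over {points.shape[2]} frames")
    # x, y, z coordinates of the first marker in the first frame:
    x, y, z = points[0:3, 0, 0]
    print(labels[0], x, y, z)

For a two-person scenario, one would expect 37 real markers per subject plus the virtual markers, all sampled at 100 fps.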