This benchmark provides tools to evaluate 3D interest point detection algorithms against human-generated ground truth. Using a web-based subjective experiment, human subjects marked 3D interest points on a set of 3D models. The models are organized into two datasets: Dataset A and Dataset B. Dataset A consists of 24 models, each hand-marked by 23 human subjects. Dataset B is larger, with 43 models, and contains all the models in Dataset A; 16 human subjects marked every model in this larger set. Some of the models are standard models widely used in 3D shape research and have also served as test objects for researchers working on the best-view problem; examples include the Armadillo, David's head, the Utah teapot, and the Bunny. Some of the models were chosen from The Stanford 3D Scanning Repository and others from the Watertight Models Track of SHREC 2007.