Description

The Change Detection dataset presented here contains 1000 pairs of 800x600 images, each pair consisting of one reference image and one test image, along with the 1000 corresponding 800x600 ground truth masks. The images were rendered using the realistic rendering engine of the serious game Virtual Battle Space 2, developed by Bohemia Interactive Simulations (see the product webpage for more details). The dataset consists of 100 different scenes containing several objects (trees, buildings, ...) and moderate ground relief. Each scene was rendered under various conditions:

  • Viewpoints: To enable analysis of the effect of viewpoint differences on detection performance, each scene was rendered from five different viewpoints. The cameras were distributed at steps of 10 degrees on a circle of radius 100 meters, at approximately 250 meters altitude, with a fixed tilt of about -70 degrees. All images were acquired at a ground resolution of about 50 cm per pixel. The drawing below illustrates the setting of the viewpoints. [Figure: Setting of the viewpoints]
  • Hard Shadows: To enable the evaluation of a change detection algorithm independently of any shadow attenuation method, each scene, under each viewpoint, was rendered both with and without hard shadows. Note that the ground truth is unchanged by the presence or absence of shadows.
  • Changes: For each scene, in each of the previous cases, a reference image and a test image were rendered. The reference and test images differ in the absence and presence, respectively, of one significant change, but also in the direction of sunlight, resulting in changes in illumination and hard shadows.

For every pair of reference and test images, a ground truth mask was generated by the rendering engine, providing an accurate and objective localization of the true change.
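The viewpoint layout described above can be sketched numerically. The snippet below is a minimal illustration, not part of the dataset's tooling: it places five cameras at 10-degree steps on a circle of radius 100 m at a constant 250 m altitude. The circle center, starting angle, and coordinate frame are arbitrary assumptions here, since the dataset does not specify them.

```python
import math

def viewpoint_positions(radius_m=100.0, height_m=250.0, step_deg=10.0, n_views=5):
    """Sketch of the camera layout: n_views cameras at step_deg increments
    on a circle of the given radius, all at the same altitude.
    (Center, start angle, and axes are illustrative assumptions.)"""
    positions = []
    for i in range(n_views):
        theta = math.radians(i * step_deg)
        x = radius_m * math.cos(theta)  # east-west offset from circle center
        y = radius_m * math.sin(theta)  # north-south offset from circle center
        positions.append((x, y, height_m))
    return positions

cams = viewpoint_positions()
# Consecutive cameras sit 2 * r * sin(step/2) apart, about 17.4 m
# for a 100 m radius and 10-degree steps.
spacing = math.dist(cams[0], cams[1])
```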
Below are samples of a pair of reference and test images, from the same viewpoint and rendered with hard shadows, along with the corresponding ground truth mask. [Figures: reference image, test image, ground truth mask]

Along with the images, the camera poses for each viewpoint are provided in a text file, whose format is described in the following. Each line of the file contains the camera pose of a single viewpoint. On a given line, the parameters are separated by one comma (',') followed by one space, and the last parameter is directly followed by a line break. The parameters are always listed in the same order: scene index, view index, coordinates Above Sea Level (ASL, in meters) of the camera's optical center, coordinates Above Ground Level (AGL, in meters) of the camera's optical center, coordinates ASL (in meters) of the ground point intersected by the optical axis, coordinates AGL (in meters) of the ground point intersected by the optical axis, the tangent of the half horizontal field of view (fov = tan(TotalHorizontalFOV/2)), and finally the bank angle, which in practice is always zero. These parameters are sufficient to define estimates of the direct and inverse localisation functions.

For more information about this dataset, visit Change Detection Benchmark in Aerial Imagery. This dataset is hosted on Computer Vision Online and can be downloaded from here (AICDDataset, ~1.7 GB).

Credit: Nicolas Bourdis, Denis Marraud, Hichem Sahbi
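The pose-file format described above can be read with a few lines of Python. This is a hedged sketch, not an official parser: the file name is hypothetical, and since the description does not restate how many coordinate fields each of the four position entries occupies, the numeric tail of each line is returned as-is rather than unpacked by name. The last helper converts the fov parameter into a horizontal focal length in pixels for the 800-pixel-wide images, using the standard pinhole relation f = (W/2) / tan(hfov/2).

```python
def parse_pose_line(line):
    """Parse one line of the camera-pose file: fields separated by ", ",
    starting with two integer indices (scene, view), followed by floats
    (camera and ground-point coordinates, fov = tan(TotalHorizontalFOV/2),
    and the bank angle, always zero in practice)."""
    fields = line.rstrip("\n").split(", ")
    scene_idx, view_idx = int(fields[0]), int(fields[1])
    numbers = [float(v) for v in fields[2:]]
    fov = numbers[-2]   # tangent of the half horizontal field of view
    bank = numbers[-1]  # bank angle, zero in practice
    return scene_idx, view_idx, numbers, fov, bank

def focal_length_px(fov, image_width_px=800):
    """Horizontal focal length in pixels implied by fov = tan(hfov/2)."""
    return (image_width_px / 2.0) / fov
```

For example, a synthetic line "3, 2, 10.0, 20.0, 250.0, 0.5, 0.0" (illustrative values only) parses to scene 3, view 2, with fov = 0.5, which for an 800-pixel-wide image implies a horizontal focal length of 800 pixels.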

Related Papers

  • [link]
  • [link]
  • [link]