Description

The data were collected over a series of specifically designed trials. Our hope was to cover most of the types of sensory interactions that a Pioneer might reasonably be expected to encounter: passing by visible objects, pushing visible objects, crashing into walls, and so on. Many of these interactions are repeated throughout the dataset. The data were collected to serve as the basis for work in learning and conceptual development. Our first goal was for the robot, on its own, to cluster these experiences by their dynamics into groups of experiences with a common outcome.

Each data file contains time series data in which each row corresponds to a single observation of the sensor array. Each row also includes two additional variables, 'id' and 'description', which give the number of the experience the observation belongs to and a description of that experience, respectively. Observations within an experience are taken every 100 ms. The data are stored in three text files: one for experiences in which the Pioneer was moving in a straight line, one in which it was turning in place, and one in which it was raising or lowering its gripper.

The description variable is a string of symbols that breaks down as follows:

"u" or "o" - unobstructed or obstructed
"x.xs" - activity lasted x.x seconds
activity - the activity and speed, if applicable, e.g. move100 = move forward at 100 mm/sec
visual - objects in the visual array, listed in sequence. "cAHEAD" indicates an object visible to channel c directly AHEAD of the Pioneer.
visual.X - a visual description followed by a '.' and one character indicates that something special happens with the visible object: .V means the object Vanishes from sight during the activity, .D indicates that the object is Discovered (becomes visible) during the activity, and .P indicates that the object is Pushed.
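A description string in this format can be split into its fields with a short parser. The sketch below is an assumption inferred from the format notes and the example string given later in this description: in particular, it treats a '-' immediately before digits as a negative speed rather than a field separator, and assumes visual channels are named a, b, or c. It may need adjustment against the actual files.

```python
import re

# Assumed field layout: flag-duration-activity[speed][-visualtoken...]
DESC_RE = re.compile(
    r"^(?P<flag>[uo])-"                        # u = unobstructed, o = obstructed
    r"(?P<seconds>\d+(?:\.\d+)?)s-"            # duration, e.g. 3.5s
    r"(?P<activity>[a-z]+)"                    # activity name, e.g. move, retr
    r"(?P<speed>-?\d+)?"                       # optional signed speed in mm/sec
    r"(?P<visual>(?:-[a-c][A-Z]+(?:\.[VDP])?)*)$"  # visual tokens, e.g. aRIGHT.D
)

def parse_description(desc):
    """Split a Pioneer experience description into its component fields."""
    m = DESC_RE.match(desc)
    if m is None:
        raise ValueError("unparseable description: %r" % desc)
    return {
        "obstructed": m.group("flag") == "o",
        "seconds": float(m.group("seconds")),
        "activity": m.group("activity"),
        "speed": int(m.group("speed")) if m.group("speed") else None,
        "visual": [v for v in m.group("visual").split("-") if v],
    }
```

For instance, parse_description("u-3.5s-retr-100-aRIGHT.D") yields an unobstructed 3.5-second activity named "retr" with speed -100 and the single visual token "aRIGHT.D".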
An example: "u-3.5s-retr-100-aRIGHT.D" denotes an unobstructed retreat (move) at -100 mm/sec for 3.5 seconds, with an object being discovered in channel A.

It should be noted that, particularly with respect to the visual channels, the description may not be 100% accurate. Since the visual channels respond to colors they are trained on (visual a = red, visual b = yellow, visual c = blue), it was possible, though infrequent, for some extraneous object in the environment to generate a response in a visual channel that was not supposed to show activity in a particular trial.

Rows are separated by carriage returns, columns by commas.
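Given the comma-separated layout, the rows of one file can be grouped into experiences by the 'id' column. The sketch below assumes each file begins with a header row naming the sensor columns plus 'id' and 'description'; if the actual files are headerless, the column names would need to be supplied explicitly.

```python
import csv
from collections import defaultdict

def load_experiences(path):
    """Group sensor observations by experience id.

    Returns (experiences, descriptions): experiences maps each id to the
    list of its sensor rows (one row per 100 ms observation), and
    descriptions maps each id to its description string.
    """
    experiences = defaultdict(list)
    descriptions = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            eid = row.pop("id")
            descriptions[eid] = row.pop("description")
            experiences[eid].append(row)
    return experiences, descriptions
```

From there, each experience is a short multivariate time series suitable for the clustering task described above.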

Related Papers

Related Datasets