Center for Machine Learning and Intelligent Systems


Pedestrian in Traffic Dataset
Download: Data Folder, Data Set Description

Abstract: This dataset contains 45 pedestrian tracks recorded from a vehicle driving in a town in southern Germany. The data is particularly well-suited for multi-agent motion prediction tasks.

Data Set Characteristics: Multivariate, Sequential, Time-Series

Number of Instances: 45 (pedestrian tracks)

Attribute Characteristics: Integer, Real

Number of Attributes: 14

Date Donated: July 2019

Associated Tasks: Classification, Regression, Causal-Discovery

Missing Values? Yes


Source:

- Claudia Blaiotta, Bosch Center for Artificial Intelligence, Robert Bosch GmbH, claudia.blaiotta '@'
- Date: July, 2019

Data Set Information:

The raw data was acquired from a vehicle equipped with multiple sensors while driving for approximately five hours in an urban area in southern Germany. The sensor set included a mono-RGB camera, a stereo-RGB camera, an inertial measurement system with differential GPS, and a lidar system. The preprocessed data available from this repository consists of 45 pedestrian tracks (in world coordinates) together with a semantic map of the static environment. For each track and at each time-step, not only is the agent's position provided, but also body and head orientation attributes, as well as the positions of all other agents in the scene and their types (e.g. car, cyclist, pedestrian). Additional details about the preprocessing pipeline can be found in [1]. More information on the data format is provided in the next section.

Attribute Information:

Pedestrian tracks are stored in the file tracks.csv. Each row of this file contains 14 comma-separated attributes, with missing values denoted by 'None'. The attributes are, in order:
• oid: unique agent id (int),
• timestamp: time in seconds (float),
• x: x component of position vector (float),
• y: y component of position vector (float),
• body_roll: roll body angle in degrees (float),
• body_pitch: pitch body angle in degrees (float),
• body_yaw: yaw body angle in degrees (float),
• head_roll: roll head angle in degrees (float),
• head_pitch: pitch head angle in degrees (float),
• head_yaw: yaw head angle in degrees (float),
• other_oid: list of ids of agents currently present in the scene ([list of int]),
• other_class: list of other agents’ class labels ([list of int]),
• other_x: list of other agents’ x coordinates ([list of float]),
• other_y: list of other agents’ y coordinates ([list of float]).
Labels used to identify agent types are available in agent_class_label_info.csv.
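The attributes above can be read with the Python standard library. A minimal loading sketch follows; note that the exact CSV quoting of the four list-valued columns is an assumption (they are taken here to be bracketed lists inside quoted fields), and the sample row is fabricated for illustration only.

```python
# Sketch of parsing rows of tracks.csv into typed Python values.
# Column names follow the attribute list above; the quoting convention for
# the list-valued columns is an assumption, not documented by the dataset.
import ast
import csv
import io

LIST_COLUMNS = {"other_oid", "other_class", "other_x", "other_y"}
FLOAT_COLUMNS = {"timestamp", "x", "y", "body_roll", "body_pitch", "body_yaw",
                 "head_roll", "head_pitch", "head_yaw"}

def parse_row(row):
    """Convert one raw CSV row (dict of strings) into typed values."""
    out = {}
    for key, value in row.items():
        if value == "None":                      # missing values are 'None'
            out[key] = None
        elif key == "oid":
            out[key] = int(value)                # unique agent id
        elif key in FLOAT_COLUMNS:
            out[key] = float(value)
        elif key in LIST_COLUMNS:
            out[key] = ast.literal_eval(value)   # e.g. "[3, 4]" -> [3, 4]
        else:
            out[key] = value
    return out

# Fabricated one-row sample standing in for tracks.csv.
sample = io.StringIO(
    "oid,timestamp,x,y,body_roll,body_pitch,body_yaw,"
    "head_roll,head_pitch,head_yaw,other_oid,other_class,other_x,other_y\n"
    '7,0.1,12.5,-3.2,None,None,90.0,None,None,85.0,'
    '"[3, 4]","[0, 2]","[10.0, 1.5]","[-2.0, 0.3]"\n'
)
rows = [parse_row(r) for r in csv.DictReader(sample)]
```

Replacing `io.StringIO(...)` with `open("tracks.csv")` should then yield one typed dict per time-step, with `None` preserved for missing angle attributes.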
The file semantic_map.png contains a map of the static environment, where semantic labels are color-encoded according to the mapping available in semantic_map_label_info.csv. Information needed to transform between image and world coordinates is stored in the file map2world_info.txt.
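Since the tracks are in world coordinates while semantic_map.png uses pixel coordinates, working with both requires the transform stored in map2world_info.txt. The format of that file is not documented here, so the sketch below only illustrates the common case of a uniform scale (metres per pixel) plus a world-frame origin offset; the parameter names are hypothetical and the actual file may encode a different convention (e.g. a flipped y-axis).

```python
# Hypothetical image <-> world conversion, assuming map2world_info.txt
# provides a uniform metres-per-pixel scale and a world-frame origin offset.
# The real convention must be checked against the dataset's file.

def image_to_world(px, py, scale, origin_x, origin_y):
    """Map pixel coordinates (px, py) to world coordinates in metres."""
    return origin_x + px * scale, origin_y + py * scale

def world_to_image(wx, wy, scale, origin_x, origin_y):
    """Map world coordinates in metres back to (fractional) pixel coordinates."""
    return (wx - origin_x) / scale, (wy - origin_y) / scale
```

Under this assumption the two functions are exact inverses, so a pedestrian's (x, y) attributes can be looked up in the semantic map by rounding the output of `world_to_image` to the nearest pixel.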

Relevant Papers:

[1] Blaiotta, Claudia. 'Learning generative socially-aware models of pedestrian motion.' IEEE Robotics and Automation Letters, 2019.

Citation Request:

You may use this data for scientific, non-commercial purposes, as long as you give credit to the owners when publishing any work based on this data. Please cite [1].
