
Detection and Interpretation of the Driving Environment

Germany
STRIA Roadmaps: Connected and automated transport (CAT)
Transport mode: Road
Transport policies: Societal/Economic issues


Background & Policy context

Driver assistance systems can support the driver intelligently only if the system has access to adequate information about the vehicle and the road, and particularly about objects and obstacles in the vehicle's environment. The use of several independent sensors, and the fusion of their information, is essential for comprehensive and reliable detection of the driving environment. Overlapping regimes of sensor response provide a highly desirable physical redundancy. This redundancy, together with the resulting comprehensive model of the driving environment, yields a considerable improvement in overall data quality compared with the information from individual sensors.

This high quality of driving-environment detection is a prerequisite for the complex applications to be developed in the course of the INVENT research initiative, such as the congestion assistant, lateral control assistance, intersection support, and pedestrian protection. Such systems will be introduced to the market in stages, and each new improvement will enhance the others. This mutual enhancement is known from experience with dynamic driving control systems such as anti-lock braking, traction control, electronic stability control, and brake assist.
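As a purely illustrative sketch (not taken from the project), the data-quality gain from redundancy can be shown with inverse-variance weighting of two independent distance estimates of the same object, e.g. one from radar and one from a camera. The sensor names and variance values below are assumptions for the example.

```python
def fuse_estimates(x1: float, var1: float, x2: float, var2: float):
    """Combine two independent measurements with known variances.

    The fused estimate is the inverse-variance weighted mean; its
    variance is smaller than either input variance, illustrating how
    sensor redundancy improves overall data quality.
    """
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * x1 + w2 * x2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var

# Hypothetical readings: radar 25.0 m (variance 0.04),
# camera 25.6 m (variance 0.16).
dist, var = fuse_estimates(25.0, 0.04, 25.6, 0.16)
# The fused variance (0.032) is below both input variances.
```

The fused value leans toward the more precise sensor, which is exactly the behaviour one wants when combining a precise ranging sensor with a less precise one.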


The overall objective is to develop solutions and algorithms for improved detection and interpretation of the driving environment. As a basis for this endeavour, two goals in particular have been identified:


  • On the one hand, the specifications of the available sensors, such as camera, radar, and lidar, need to be expressed in a unified and standardised manner.
  • On the other hand, the proposed performance requirements of the assistance systems for detection of the driving environment need to be thoroughly described by means of appropriate scenarios in a way that allows validation.
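A unified, standardised sensor specification could, for instance, take the form of a small record type. This is a hypothetical sketch: the field names and the example figures below are assumptions, not the project's actual specification format.

```python
from dataclasses import dataclass

@dataclass
class SensorSpec:
    """Uniform description of one environment sensor (illustrative)."""
    name: str             # e.g. "long-range radar"
    modality: str         # "camera", "radar", "lidar", "ultrasound", ...
    max_range_m: float    # maximum detection range in metres
    fov_deg: float        # horizontal field of view in degrees
    update_rate_hz: float # measurement update rate

# Example entries with made-up but plausible values.
specs = [
    SensorSpec("long-range radar", "radar", 150.0, 16.0, 20.0),
    SensorSpec("stereo camera", "camera", 60.0, 45.0, 25.0),
]
# Comparing ranges and fields of view across such records shows where
# sensor coverage overlaps, i.e. where physical redundancy exists.
```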


An object catalogue will be created that classifies the elements of the driving environment, such as stationary and moving vehicles, pedestrians, the roadway, road signs, roadside structures, visibility, and road status, into distinct categories.
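A minimal sketch of such a catalogue is an enumeration of categories. The names below simply paraphrase the elements listed in the text; they are not the project's actual taxonomy.

```python
from enum import Enum, auto

class EnvironmentCategory(Enum):
    """Illustrative categories for elements of the driving environment."""
    STATIONARY_VEHICLE = auto()
    MOVING_VEHICLE = auto()
    PEDESTRIAN = auto()
    ROADWAY = auto()
    ROAD_SIGN = auto()
    ROADSIDE_STRUCTURE = auto()
    VISIBILITY = auto()
    ROAD_STATUS = auto()
```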


In order to provide an appropriate description of the vehicle's surroundings, a variety of data needs to be collected and evaluated.

This includes information on the position, distance, speed, and size of objects: the driver's own vehicle, all other vehicles, pedestrians, and obstacles, as well as the entire infrastructure (i.e. the course of the road and its markings, road signs, and roadside structures).
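The attributes listed above can be sketched as one record per detected object. The field names and the example values are assumptions made for illustration only.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    """One object in the environment model (illustrative fields)."""
    category: str                   # e.g. "pedestrian", "vehicle"
    position_m: tuple[float, float] # (x, y) relative to the ego vehicle
    speed_mps: float                # speed over ground in m/s
    size_m: tuple[float, float]     # (length, width) in metres

    @property
    def distance_m(self) -> float:
        """Euclidean distance from the ego vehicle."""
        x, y = self.position_m
        return (x * x + y * y) ** 0.5

# A pedestrian 3 m ahead and 4 m to the side (made-up values).
obj = DetectedObject("pedestrian", (3.0, 4.0), 1.4, (0.5, 0.5))
# obj.distance_m → 5.0
```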

The technologies to be implemented include mono and stereo camera systems, infrared and heat-sensitive cameras, short- and long-range radar systems, multibeam and scanning lidar systems, ultrasound sensors, as well as visibility sensors, roadway condition detection systems, GPS together with digital maps, and finally all the data available on the vehicle CAN bus.

The project activities regarding data fusion and interpretation can be explained in terms of a three-stage perceptual pyramid ordered by processing level, i.e. from sensor data through object detection to situation analysis.

The system architecture approach used in existing assistance systems such as adaptive cruise control (ACC) has been to assign each function its own dedicated sensor. This approach is no longer feasible given the increasing complexity of the Congestion Assistant and of the other new functions planned. The new approach is instead to integrate multiple individual sensors, so that their fused data can serve several assistance functions at once.
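The three-stage perceptual pyramid described above can be sketched as a simple pipeline. All function bodies here are placeholders under assumed data shapes (dictionaries with a `detections` list); the project's actual algorithms are not given in the text.

```python
def collect_sensor_data(sensors):
    """Stage 1: gather raw measurements from every sensor.

    Assumes each sensor object exposes a read() method returning a
    frame dictionary (an assumption for this sketch).
    """
    return [s.read() for s in sensors]

def detect_objects(raw_frames):
    """Stage 2: fuse raw frames into a combined list of detected objects."""
    objects = []
    for frame in raw_frames:
        objects.extend(frame.get("detections", []))
    return objects

def analyse_situation(objects):
    """Stage 3: interpret the object list, e.g. for a congestion scene."""
    moving = [o for o in objects if o.get("speed_mps", 0.0) > 0.5]
    return {"object_count": len(objects), "moving_count": len(moving)}
```

The point of the pyramid structure is that each assistance function consumes the shared situation analysis rather than talking to its own dedicated sensor, which is the architectural shift the paragraph describes.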


Other Programme
Research initiative INVENT



