Combined 3D-Vision and Adaptive Front-Lighting System for Safe Autonomous Driving

01.09.2017 - 31.05.2020
Research funding project

Driven by the trend towards highly automated and even autonomous driving, the automotive industry is currently undergoing dramatic change. Driver assistance systems, including adaptive front-lighting technologies, are evolving rapidly and pave the way for this transition. A fundamental topic in this context is the further improvement of the sensor systems that supply the information used to regulate and control these assistance systems; in particular, problems in perceiving the environment have to be solved. Although current development efforts concentrate on sensor data fusion, which merges information from several different sensor types (cameras, long- and short-range radar, lidar) into one overall representation, optical image capture in the visible spectral range is still the dominant method for detecting objects and building a model of the driving environment.

At night, the conditions for this optical sensing task are difficult and require measures at the overall system level to significantly improve the effective performance of driver assistance systems. Consider, for example, a collision prevention system that brakes momentarily because the front-facing camera did not deliver data of sufficient quality; since there is no real danger of collision and thus no need to brake, the driver would be severely irritated by this unmotivated behaviour. A second example, which is the incentive for this project, is the improvement of the video sensing for a high beam assistant or a glare-free high beam system. Here, geometric information (e.g. distance measurements) may be evaluated incorrectly, leading to inappropriate use of high beam illumination that distracts the driver or exposes oncoming traffic to discomfort glare. To optimize the optical survey of the environment, the goal of the project is to realize stereoscopic, three-dimensional image capture using cameras installed directly in a pair of headlamps. Compared with the state of the art, a single central camera mounted behind the windscreen, such a camera is not only located in the immediate vicinity of the headlamp but interacts directly with the headlamp control unit.
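Since stereo depth accuracy grows with the camera baseline, a headlamp-mounted pair (separated by roughly the vehicle width) is a promising basis for the distance measurements mentioned above. As a minimal sketch of the underlying geometry, assuming rectified pinhole stereo images and OpenCV's semi-global matcher, the following Python example converts a disparity map into metric depth; all file names and calibration values are placeholder assumptions, not project data.

    import cv2
    import numpy as np

    # Rectified grayscale frames from the left and right headlamp cameras
    # (placeholder file names).
    left = cv2.imread("left_headlamp.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right_headlamp.png", cv2.IMREAD_GRAYSCALE)

    # Semi-global block matching; parameters are illustrative defaults.
    stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # pixels

    # Pinhole stereo: depth Z = f * B / d, with focal length f in pixels,
    # baseline B in metres and disparity d in pixels. A wider baseline B
    # makes the same disparity error correspond to a smaller depth error,
    # which is the geometric advantage of headlamp-mounted cameras over a
    # single stereo unit behind the windscreen.
    f_px = 1200.0      # assumed focal length [px]
    baseline_m = 1.4   # assumed headlamp-to-headlamp baseline [m]
    valid = disparity > 0
    depth_m = np.zeros_like(disparity)
    depth_m[valid] = f_px * baseline_m / disparity[valid]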

The resulting integrated, intelligent vision system to be developed is expected to deliver outcomes and findings on the following automation steps:

- To strongly improve image capture and thereby environment perception, the headlamps provide tailored, adaptive scene illumination for the cameras assigned to them.

- The stereoscopic, high-resolution camera system delivers significantly improved data for glare-free high beam control, such as object classification, position (horizontal, vertical), distance, direction of movement and lane (see the sketch after this list).

- The stereoscopic camera system enables greatly improved processing of data relevant for autonomous driving, such as detected and labelled lanes, road topology, free space, obstacles, traffic participants and pedestrians.
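To make the control loop from detection data to lighting concrete, the sketch below maps detected road users to the on/off state of the LED columns of a hypothetical matrix high beam: every column whose angular range overlaps a detection (plus a safety margin) is dimmed. The segment count, field of view, margin and the Detection type are illustrative assumptions, not the project's actual controller interface.

    from dataclasses import dataclass

    @dataclass
    class Detection:
        """One classified road user from the stereo pipeline (illustrative)."""
        azimuth_deg: float   # horizontal angle relative to the optical axis
        distance_m: float

    def glare_free_mask(detections, n_segments=24, fov_deg=40.0, margin_deg=1.0):
        """Return per-column on/off flags for a matrix high beam.

        Each of the n_segments LED columns covers fov_deg / n_segments of
        the horizontal field; columns overlapping a detection, widened by
        margin_deg on both sides, are switched off.
        """
        seg_width = fov_deg / n_segments
        mask = [True] * n_segments
        for det in detections:
            lo, hi = det.azimuth_deg - margin_deg, det.azimuth_deg + margin_deg
            for i in range(n_segments):
                left = -fov_deg / 2 + i * seg_width
                if left < hi and left + seg_width > lo:
                    mask[i] = False  # dim this column to avoid glare
        return mask

    # Example: an oncoming car slightly left of centre dims the columns around it.
    print(glare_free_mask([Detection(azimuth_deg=-3.0, distance_m=80.0)]))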

People

Project leader

Subproject managers

Project personnel

Institute

Grant funds

  • FFG - Österr. Forschungsförderungsgesellschaft mbH / Austrian Research Promotion Agency (national)
    Programme: IKT der Zukunft
    Call identifier: IKT der Zukunft - 5. Ausschreibung (2016)
    Specific program: IKT der Zukunft

Research focus

  • Media Informatics and Visual Computing: 100%

Keywords

  German                      English
  Computer Vision             computer vision
  Maschinelles Lernen         machine learning
  Umfeldwahrnehmung           environment perception
  Ground Truth                ground truth
  Ground Truth Annotierung    ground truth annotation
  Autonomes Fahren            autonomous driving

External partner

  • ZKW Lichtsysteme GesmbH
  • Emotion3D GesmbH

Publications