A world in which autonomous vehicles share our roads is not far off, and many international companies, from both the commercial and the high-tech sectors, are investing hundreds of millions to bring them into the real world. Experts envision four key stages in the evolution of autonomous vehicles: Advanced Assisted Driving Systems (2015-2018), Hands-off Self-driving (2019-2021), Automated Driving (2022-2025), and Fully Autonomous Cars (2025 onwards). Although we are still in the AADS stage and preparing to enter the next one, both industry and academia face major challenges in developing autonomous or semi-autonomous vehicles. An important research topic for the near future is the development of pilot systems and robust driving assistants that work in real time while remaining both cost-effective and computationally efficient.
ROBIN-CAR
OBJECTIVES
The main goal of the ROBIN-CAR project is to develop computer vision methods for a wide and sophisticated array of assisted driving tasks, to build intelligent modules for "Hands-off driving" and "Automated driving", and to deliver a prototype system that will be tested on an electric semi-autonomous car made available by PRIME Motors Industry for the duration of the project. The system will observe, recognize and monitor the 3D scene, including the road and the objects and persons in the environment, as well as the driver's facial expressions, and will provide the necessary information in a non-intrusive manner (vocal interaction through simple commands), enhancing the driver's awareness and decision-making ability.
The specific objectives of the project are:
- Developing a module for semantic understanding of the vehicle's environment, detecting and tracking moving objects (other cars, pedestrians) and recognizing stationary objects (obstacles, lanes, traffic signs) by fusing data from 2D cameras with 2D and 3D laser scanners, based on novel computer vision algorithms (see the camera-laser projection sketch after this list).
- Developing an efficient module for managing the 3D geometry of the scene, in order to estimate road regions with potholes, bumps, etc. (see the ground-plane fitting sketch after this list).
- Developing a module for analyzing the driver's facial expression and gaze direction, which assists the driver or issues warnings while driving, based on the driver's degree of tiredness or attentiveness (see the eye-aspect-ratio sketch after this list).
- Developing a driver warning module based on context-sensitive services offered by the P3 – ROBIN-Context sub-project.
- Developing spoken natural-language interaction and the translation of visual information into natural language (in Romanian), enabling driver-car interaction.
- Improving and accelerating convolutional neural networks using reconfigurable systems (FPGAs) (see the quantized-convolution sketch after this list).
- Implementing a prototype that integrates all of the capabilities described above, installing it on a semi-autonomous electric vehicle and testing it thoroughly.
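A step common to camera-laser fusion pipelines is projecting laser returns into the image so that they can be paired with 2D detections. The sketch below is only a minimal Python/NumPy illustration of that step; the calibration parameters K, R, t and the function name are assumptions made for the example, not deliverables or APIs of the project.

```python
import numpy as np

def project_laser_to_image(points_3d, K, R, t):
    """Project 3D laser points (N x 3, laser frame) into camera pixel coordinates.

    K is the 3x3 camera intrinsic matrix; R (3x3) and t (3,) are the extrinsics
    taking laser coordinates into the camera frame (assumed already calibrated).
    """
    cam = points_3d @ R.T + t          # laser frame -> camera frame
    in_front = cam[:, 2] > 0.0         # keep only points in front of the camera
    pix = cam[in_front] @ K.T          # pinhole projection (homogeneous pixels)
    pix = pix[:, :2] / pix[:, 2:3]     # perspective divide -> (u, v)
    return pix, in_front
```

Once every laser return has a pixel location, a range can be attached to each image detection (for example, a pedestrian bounding box), which is the kind of 2D/3D data fusion this objective targets.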
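For the 3D scene geometry module, a common way to find potholes and bumps is to fit a ground plane to the laser point cloud and flag points that deviate strongly from it. The RANSAC-style sketch below illustrates that idea under this assumption; the tolerances and function names are illustrative placeholders, not values specified by the project.

```python
import numpy as np

def fit_ground_plane(points, iterations=200, inlier_tol=0.05, rng=None):
    """RANSAC fit of a plane z = a*x + b*y + c to an (N, 3) point cloud."""
    rng = np.random.default_rng() if rng is None else rng
    best_count, best_params = -1, None
    for _ in range(iterations):
        sample = points[rng.choice(len(points), size=3, replace=False)]
        A = np.c_[sample[:, :2], np.ones(3)]      # rows [x, y, 1]
        try:
            params = np.linalg.solve(A, sample[:, 2])
        except np.linalg.LinAlgError:
            continue                              # degenerate (collinear) sample
        residual = points[:, 2] - (points[:, :2] @ params[:2] + params[2])
        count = int(np.sum(np.abs(residual) < inlier_tol))
        if count > best_count:
            best_count, best_params = count, params
    return best_params

def flag_road_defects(points, params, dip=-0.08, rise=0.08):
    """Boolean masks for pothole-like dips and bump-like rises (thresholds in metres)."""
    residual = points[:, 2] - (points[:, :2] @ params[:2] + params[2])
    return residual < dip, residual > rise
```

On real data the inlier tolerance and the dip/rise thresholds would have to be tuned to the laser's noise level and mounting height.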
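One widely used, low-cost indicator of driver tiredness is the eye aspect ratio (EAR) computed from facial landmarks, which drops when the eyes close. The sketch below assumes a landmark detector already provides six points around each eye per frame; the threshold and frame count are illustrative, not values fixed by the project.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """EAR from 6 eye landmarks ordered p1..p6 around the eye contour (shape (6, 2))."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def is_drowsy(ear_history, threshold=0.21, min_frames=15):
    """Flag drowsiness when the EAR stays below the threshold for the last N frames."""
    recent = ear_history[-min_frames:]
    return len(recent) == min_frames and all(e < threshold for e in recent)
```

A real module would combine such a cue with gaze direction and head pose before deciding whether to assist or warn the driver, as the objective states.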
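CNN acceleration on FPGAs typically relies on fixed-point arithmetic with wide integer accumulators. The NumPy sketch below only illustrates that principle (symmetric int8 quantization followed by an all-integer 2D convolution); it is not the project's FPGA design, and the function names are invented for the example.

```python
import numpy as np

def quantize_symmetric(w, bits=8):
    """Map float values to signed fixed-point integers with a single per-tensor scale."""
    qmax = 2 ** (bits - 1) - 1
    scale = (float(np.max(np.abs(w))) or 1.0) / qmax   # guard against an all-zero tensor
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def conv2d_int(x_q, w_q, x_scale, w_scale):
    """'Valid' 2D cross-correlation (as in CNN layers) done in integers, rescaled once."""
    kh, kw = w_q.shape
    oh, ow = x_q.shape[0] - kh + 1, x_q.shape[1] - kw + 1
    acc = np.zeros((oh, ow), dtype=np.int32)           # wide accumulator, as in FPGA MAC units
    for i in range(kh):
        for j in range(kw):
            acc += x_q[i:i + oh, j:j + ow].astype(np.int32) * int(w_q[i, j])
    return acc.astype(np.float32) * (x_scale * w_scale)
```

The integer result approximates the float convolution up to quantization error, while every multiply-accumulate stays in integer arithmetic, which is what makes the operation cheap to map onto an FPGA.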
Originality and innovation:
- Developing computer vision algorithms that perform optimally and robustly in any conditions, in real time.
- Estimating the 3D scene and the road surface, and detecting vertical or porous surfaces, in order to perform obstacle avoidance safely.
- Reducing the computational effort of object recognition by exploiting contextual relations.
- Developing novel neural network architectures.
- Spoken interaction in Romanian between the driver and the car.
- A prototype system deployed on an electric semi-autonomous vehicle.