VUAD - Video Understanding for Autonomous Driving
Overview
Background & policy context:
From connected vehicles to self-driving cars, driverless vehicles are set to become a game changer in the coming years. Traditional automobile manufacturers are racing to develop fully autonomous vehicles to meet future demand, and advances in autonomous vision algorithms are paving the way for this transformation. The EU-funded VUAD project will develop a deep learning method for multi-object tracking on graph-structured data. The project will also extend this method to joint video object detection and tracking by exploiting temporal cues, improving both detection and tracking performance. In addition, VUAD will learn a background motion model for the static parts of the scene in an unsupervised manner. The goal is to combine the proposed algorithms into a unified video-understanding module.
Objectives:
Autonomous vision aims to solve the computer vision problems that arise in autonomous driving. Autonomous vision algorithms achieve impressive results on single images for tasks such as object detection and semantic segmentation; however, this success has not yet been fully extended to video sequences. In computer vision, it is commonly acknowledged that video understanding lags years behind single-image understanding, mainly for two reasons: the processing power required for reasoning across multiple frames, and the difficulty of obtaining ground truth for every frame in a sequence, especially for pixel-level tasks such as motion estimation. These observations suggest two promising directions for boosting performance on video-understanding tasks in autonomous vision: unsupervised learning and object-level rather than pixel-level reasoning.

Following these directions, we propose to tackle three relevant problems in video understanding. First, we propose a deep learning method for multi-object tracking on graph-structured data. Second, we extend it to joint video object detection and tracking by exploiting temporal cues, improving both detection and tracking performance. Third, we propose to learn a background motion model for the static parts of the scene in an unsupervised manner. Our long-term goal is to learn detection and tracking themselves without supervision.

Once these stepping stones are in place, we plan to combine the proposed algorithms into a unified video-understanding module and compare its performance against both its static (single-image) counterparts and state-of-the-art video-understanding algorithms.
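To make the object-level, graph-structured view of tracking concrete, the sketch below frames frame-to-frame multi-object tracking as a bipartite graph: existing tracks and new detections form the two node sets, edge costs come from bounding-box overlap, and association is solved as a minimum-cost matching. This is a minimal classical baseline in Python, assuming SciPy's linear_sum_assignment and a hand-crafted IoU cost; it illustrates the graph structure the project builds on, not the learned method VUAD proposes.

# Minimal sketch of frame-to-frame data association for multi-object
# tracking, posed as a bipartite graph: tracks on one side, detections
# on the other, edges weighted by bounding-box IoU. Illustrative
# classical baseline only, not the VUAD method; the project proposes a
# learned (deep) model over such graph-structured data.
import numpy as np
from scipy.optimize import linear_sum_assignment


def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)


def associate(tracks, detections, iou_threshold=0.3):
    """Match tracks to detections via minimum-cost bipartite matching.

    Returns (matched (track, detection) pairs, unmatched track indices,
    unmatched detection indices). Unmatched detections would spawn new
    tracks; unmatched tracks would age out after a few frames.
    """
    cost = np.array([[1.0 - iou(t, d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    matches = [(r, c) for r, c in zip(rows, cols)
               if cost[r, c] <= 1.0 - iou_threshold]
    matched_t = {r for r, _ in matches}
    matched_d = {c for _, c in matches}
    unmatched_tracks = [i for i in range(len(tracks)) if i not in matched_t]
    unmatched_dets = [j for j in range(len(detections)) if j not in matched_d]
    return matches, unmatched_tracks, unmatched_dets


if __name__ == "__main__":
    tracks = [(10, 10, 50, 50), (100, 100, 150, 160)]
    detections = [(105, 102, 152, 158), (12, 11, 49, 52), (300, 300, 340, 350)]
    print(associate(tracks, detections))

In a learned variant along the lines the project describes, the hand-crafted IoU cost would be replaced by edge features predicted by a deep network over the same graph structure, and temporal cues would link detections across more than two frames.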