TRIMIS

Omni-Supervised Learning for Dynamic Scene Understanding

Funding
European Union
Duration
-
Geo-spatial type
Network corridors
Total project cost
€1 500 000
EU Contribution
€1 500 000
Project Acronym
DynAI
STRIA Roadmaps
Connected and automated transport (CAT)
Transport mode
Road
Transport policies
Digitalisation
Transport sectors
Passenger transport,
Freight transport

Overview

Call for proposal
ERC-2021-STG
Link to CORDIS
Background & Policy context

Self-driving cars seem to be within reach, partly thanks to the success of the computer vision algorithms developed to serve as the "eyes" of such autonomous vehicles. To navigate the world, autonomous vehicles need to understand the dynamic objects in the scene, namely to detect, segment and track multiple moving objects. Computer vision can now tackle this problem successfully, thanks mostly to advances in deep learning. Most methods rely on convolutional neural networks trained on large-scale datasets in a supervised way, but is this paradigm enough to represent the complexity of our streets? The ERC-funded DynAI project will go beyond supervised learning: its researchers will design innovative machine learning models that learn directly from unlabelled video streams.

Objectives

Computer vision has become a powerful technology, able to bring applications such as autonomous vehicles and social robots closer to reality. In order for autonomous vehicles to safely navigate a scene, they need to understand the dynamic objects around them. In other words, we need computer vision algorithms to perform dynamic scene understanding (DSU), i.e., detection, segmentation, and tracking of multiple moving objects in a scene. This is an essential capability for higher-level tasks such as action recognition or decision making for autonomous vehicles. Much of the success of computer vision models for DSU has been driven by the rise of deep learning, in particular convolutional neural networks trained on large-scale datasets in a supervised way. But the closed world created by our datasets is not an accurate representation of the real world. If our methods only work on annotated object classes, what happens if a new object appears in front of an autonomous vehicle? We propose to rethink the deep learning models we use, the way we obtain data annotations, and the generalisation of our models to previously unseen object classes.

Methodology

To bring all the power of computer vision algorithms for DSU to the open world, we will focus on three lines of research:

1. Models. We will design novel machine learning models to address the shortcomings of convolutional neural networks. A hierarchical (from pixels to objects) image-dependent representation will allow us to capture spatio-temporal dependencies at all levels of the hierarchy.
2. Data. To train our models, we will create a new large-scale DSU synthetic dataset, and propose novel methods to mitigate the annotation costs for video data.
3. Open-World. To bring DSU to the open world, we will design methods that learn directly from unlabeled video streams.

Our models will be able to detect, segment, retrieve, and track dynamic objects from classes never observed during the training of our models.
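To make the detect-then-track structure of DSU concrete, the sketch below implements a deliberately minimal greedy tracker that links per-frame bounding-box detections by intersection-over-union (IoU). This is purely illustrative and not the project's method; the box format, threshold, and function names are assumptions for the example.

```python
# Illustrative sketch only: greedy IoU association of per-frame detections
# into tracks. DynAI targets far richer hierarchical models; this merely
# shows the "detect, then track" skeleton of dynamic scene understanding.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def track(frames, iou_thresh=0.3):
    """Assign a persistent track id to every detection, frame by frame.

    frames: list (over time) of lists of boxes.
    Returns a parallel list of lists of integer track ids.
    """
    next_id, prev = 0, []          # prev holds (track_id, box) from last frame
    all_ids = []
    for boxes in frames:
        ids, used = [], set()
        for box in boxes:
            # Greedily match this box to the best not-yet-used previous box.
            best = max(
                ((tid, iou(box, pbox)) for tid, pbox in prev if tid not in used),
                key=lambda t: t[1], default=(None, 0.0))
            if best[1] >= iou_thresh:
                tid = best[0]
                used.add(tid)
            else:                  # no good match: start a new track
                tid = next_id
                next_id += 1
            ids.append(tid)
        prev = list(zip(ids, boxes))
        all_ids.append(ids)
    return all_ids
```

A box that overlaps its predecessor keeps its id; a box appearing far from all previous detections opens a new track, which is exactly where the open-world question arises: the tracker links it, but a closed-world detector may never have produced that detection in the first place.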

Funding

Specific funding programme
HORIZON.1.1 - European Research Council (ERC)
Other Programme
ERC-2021-STG ERC STARTING GRANTS

Partners

Lead Organisation
EU Contribution
€1 500 000

Technologies

Technology Theme
Connected and automated vehicles
Technology
Manoeuvring control algorithms for cooperative automation
Development phase
Research/Invention
