
Cloud Large Scale Video Analysis

Funding: European (European Union)
Duration: -
Status: Complete
Geo-spatial type: Other
Total project cost: €4 604 431
EU Contribution: €4 604 431
Project website:
Project Acronym: Cloud LSVA
STRIA Roadmaps: Connected and automated transport (CAT)
Transport mode: Road
Transport policies: Digitalisation, Deployment planning/Financing/Market roll-out
Transport sectors: Passenger transport

Overview

Background & Policy context

The automotive industry needs tools that can manage extremely large volumes of data (Big Data), in particular to support the annotation task (for ADAS and the cartography market). One of the main bottlenecks to progress in several application domains is the lack of labelled, realistic video datasets of sufficient size, complexity and coverage (comprehensiveness).

The performance of computer vision or video analysis systems is inherently restricted by the quality of the available training data. Manually collating and annotating such datasets is:

  • infeasible
  • impractical
  • slow
  • inconsistent
  • excessively costly

Cloud LSVA is the solution.

Cloud-LSVA will create Big Data Technologies to address the open problem of a lack of software tools and hardware platforms for annotating petabyte-scale video datasets. The problem is of particular importance to the automotive industry. CMOS image sensors for vehicles are the primary area of innovation for camera manufacturers at present: they are the sensor that offers the most functionality for the price in a cost-sensitive industry. By 2020 the typical mid-range car will have 10 cameras, be connected, and generate 10 TB of data per day, without considering other sensors.

Customer demand is for Advanced Driver Assistance Systems (ADAS), which are a step on the path to autonomous vehicles. The European automotive industry is the world leader and dominant in the market for ADAS. These technologies depend upon the analysis of video and other vehicle sensor data. Annotations of road traffic objects, events and scenes are critical for training and testing the computer vision techniques at the heart of modern ADAS and navigation systems. Building ADAS algorithms using machine learning techniques therefore requires annotated datasets. Human annotation is an expensive and error-prone task that has only been tackled on a small scale to date, and no commercial tool currently exists that addresses the need for semi-automated annotation or that leverages the elasticity of Cloud computing to reduce the cost of the task. Providing this capability will establish a sustainable basis to drive forward automotive Big Data Technologies.

Furthermore, the on-board computer is set to become the central hub of the connected car, which provides the opportunity to investigate how these Big Data Technologies can be scaled to perform lightweight analysis on board, with results sent back to a Cloud crowdsourcing platform, further reducing the complexity of the challenge faced by the industry. Car manufacturers can then cyclically update the ADAS and mapping software on the vehicle, benefiting the consumer.
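
To make the on-board-to-cloud exchange described above more concrete, the sketch below shows one possible shape for an annotation record that an on-board analyser could produce and serialise for upload to a cloud annotation service. It is a minimal illustration under assumed names: the record fields and the serialise_batch helper are hypothetical and are not part of the Cloud-LSVA specification.

```python
# Minimal sketch (hypothetical schema, not the Cloud-LSVA format): an annotation
# record produced on board and serialised for transfer to a cloud annotation service.
import json
from dataclasses import dataclass, asdict
from typing import List, Tuple

@dataclass
class Annotation:
    video_id: str                      # identifier of the source recording
    frame: int                         # frame index within the recording
    timestamp_ms: int                  # capture time in milliseconds
    label: str                         # e.g. "pedestrian", "traffic_sign"
    bbox: Tuple[int, int, int, int]    # (x, y, width, height) in pixels
    confidence: float                  # detector confidence in [0, 1]
    source: str                        # "automatic" or "human_verified"

def serialise_batch(annotations: List[Annotation]) -> str:
    """Pack a batch of annotations as JSON for upload to the cloud platform."""
    return json.dumps([asdict(a) for a in annotations])

if __name__ == "__main__":
    batch = [Annotation("drive_0042", 1200, 48000, "pedestrian",
                        (412, 230, 64, 128), 0.87, "automatic")]
    print(serialise_batch(batch))  # payload a vehicle could send for human review
```

In such a scheme, automatically generated records would later be corrected or verified by human annotators on the cloud crowdsourcing platform, in line with the semi-automated annotation workflow described above.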

Objectives

The aim of this project is to develop a software platform for efficient and collaborative semi-automatic labelling and exploitation of large-scale video data that addresses the existing needs of the ADAS and digital cartography industries. This platform will need to deal with diverse structured and unstructured data sourced from different sensors. The main objectives of the project are to design tools deployed on a Cloud platform that can:

  • Effectively handle and exploit large amounts of data to fulfil the ultimate goals of building and validating ADAS systems and creating scene descriptions for system validation and cartography.
  • Provide a framework for sharing and combining scene analysis results, including for benchmarking applications, and update capabilities for in-vehicle ADAS systems.
  • Fuse video data analysis with data from other sources such that video annotations can integrate with and reference across the entire data corpus.
  • Support annotation tools capable of learning from human generated relevance feedback, in the form of corrections, verifications and specializations.
  • Automate as far as possible the video annotation process to minimise human workload and improve system scalability and feasibility.
  • Apply video analysis as online, efficient, recursive filters with incremental updates that store only the last estimated models (and not entire data subsets); a minimal sketch of such an incremental update follows this list.
  • Balance the computational and network load of the automatic labelling algorithms so that part of the processing or annotation can be done at the remote data sources (i.e. on-board vehicle computers).
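
As an illustration of the online, recursive filtering objective above, the sketch below maintains a per-pixel running mean and variance (updated with Welford's algorithm) so that each incoming frame updates the stored model and can then be discarded rather than retained. This is an assumed, minimal example of the incremental-update idea, not an algorithm specified by the project; the class name and the use of NumPy are illustrative choices.

```python
# Illustrative only: an online (recursive) estimator that keeps just the current
# model state (count, mean, m2) and discards each observation after the update.
import numpy as np

class RunningStats:
    """Welford's online algorithm: incremental mean/variance without storing data."""
    def __init__(self, shape):
        self.count = 0
        self.mean = np.zeros(shape)
        self.m2 = np.zeros(shape)

    def update(self, frame: np.ndarray) -> None:
        # One recursive update per incoming frame; the frame itself is not kept.
        self.count += 1
        delta = frame - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (frame - self.mean)

    def variance(self) -> np.ndarray:
        return self.m2 / max(self.count - 1, 1)

if __name__ == "__main__":
    model = RunningStats(shape=(480, 640))      # e.g. a per-pixel grey-level model
    for _ in range(100):                        # stand-in for a video stream
        model.update(np.random.rand(480, 640))
    print(model.mean.mean(), model.variance().mean())
```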

Methodology

The work plan is structured as a cyclical approach: each cycle starts with a short definition stage, followed by the core RTD activities, and ends with a short period of integration and testing that produces an evaluated prototype. The results obtained can be fed back into the definition stage of the next cycle. The project will push to have a first (Alpha) prototype for quick testing and concept validation; functionalities and services will then gradually be built into this Alpha prototype in the second (Beta) and final (Gamma) cycles. The methodology is designed to reach the market quickly and support exploitation of the solution.

Funding

Parent Programmes: Horizon 2020
Funding Source: Cloud LSVA is co-funded by the European Union’s Horizon 2020 research and innovation programme under grant No. 688099.

Partners

Lead Organisation EU Contribution: €935 125
Partner Organisations EU Contribution: €708 500

Technologies

Technology Theme: Advanced driver assistance systems
Technology: Cloud computing for ADAS
