ATCO2 will deliver a platform to collect, store, process and share voice communications from real-world air-traffic control data, exploiting deep learning methods. The planned machine learning solutions are enabling technologies for air-traffic control. To achieve robust, high speech recognition performance, a large amount of data will be collected. The project aims to access data from certified ADS-B datalinks aligned with a surveillance technology, and directly from air-traffic controllers supplied by air navigation service providers.
Centred on a robust platform, the project will build on an existing and extensively used solution of the ‘OpenSky Network’ partner, ensuring its long-term sustainability. The current platform collects and stores periodically broadcast aircraft information through a network of ADS-B receivers. It will be extended to allow the collection, storage and pre-processing of voice communications, time- and position-aligned with other aircraft information. The project targets both spoken commands issued by air-traffic controllers and readback confirmations provided by pilots. In addition to broadcast data, ATCO2 will have access to voice recordings from air navigation service providers (e.g. Austrocontrol). Besides automatic segmentation (e.g. by speaker, accent or specific command), robust automatic speech recognition will be implemented and integrated to automatically transcribe voice communications. It will use active learning scenarios capable of iterative improvement, in addition to manual post-editing.
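As a rough illustration of the time alignment mentioned above, the sketch below matches a voice segment's timestamp to the nearest ADS-B state report for an aircraft. The data structure, field names and 10-second tolerance are hypothetical assumptions for this sketch, not part of the ATCO2 platform design.

```python
from bisect import bisect_left
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class StateReport:
    # Hypothetical ADS-B state vector: aircraft ID plus time/position fields.
    icao24: str
    timestamp: float   # seconds since epoch
    latitude: float
    longitude: float

def nearest_report(reports: List[StateReport], voice_ts: float,
                   max_gap: float = 10.0) -> Optional[StateReport]:
    """Return the report closest in time to a voice segment's timestamp,
    or None if the gap exceeds max_gap seconds. Assumes reports are
    sorted by timestamp."""
    times = [r.timestamp for r in reports]
    i = bisect_left(times, voice_ts)
    # Only the neighbours around the insertion point can be closest.
    candidates = reports[i - 1:i] + reports[i:i + 1]
    best = min(candidates, key=lambda r: abs(r.timestamp - voice_ts),
               default=None)
    if best is None or abs(best.timestamp - voice_ts) > max_gap:
        return None
    return best

reports = [
    StateReport("4b1617", 1000.0, 47.45, 8.56),
    StateReport("4b1617", 1005.0, 47.46, 8.57),
    StateReport("4b1617", 1010.0, 47.47, 8.58),
]
match = nearest_report(reports, 1006.2)   # closest report is at t=1005.0
```

In practice, such a lookup would run per aircraft (keyed by the ICAO 24-bit address) so that each transmission can be associated with the position and identity information broadcast around the same moment.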
To comply with the CleanSky2 Programme, the project will also contribute significantly to community building, consolidating the existing ‘OpenSky Network’ community. Project incentives will motivate users to upload and potentially pre-transcribe data in order to gain access to other resources and automatic transcripts. The project will strongly account for legal and ethical issues regarding privacy, personal data, data security and other related aspects.