Air traffic controllers use flight strips to manage information about an aircraft. If an aircraft is given a clearance, this must be logged on the flight strip. Paper flight strips are easy to maintain, but the information they contain is not available in digital form to the overall system. Electronic flight strips offer a remedy; however, they increase the controller's workload and, depending on the implementation, also the time for which the controller has to look away from the radar screen (head-down time).
Voice recognition based on artificial intelligence (AI) offers a solution here. The projects AcListant® and AcListant®-Strips have shown that both high recognition rates and low recognition-error rates can be achieved with assistance-based speech recognition, i.e. by coupling a controller assistance system with a speech recogniser. Together, these two factors enable the assistance system to recognise the controller's intentions more reliably and thus to support the controller more effectively.
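The idea behind this coupling can be illustrated with a minimal sketch. It assumes (hypothetically; the projects' actual architecture is not detailed here) that the assistance system predicts, from radar context, which commands are currently plausible, and that recogniser hypotheses matching a prediction receive a score boost:

```python
# Hypothetical sketch of assistance-based speech recognition:
# the assistance system supplies a set of contextually plausible
# commands, and matching recogniser hypotheses are boosted.

def rescore(hypotheses, context_commands, bonus=0.2):
    """hypotheses: list of (transcript, score) pairs, higher score is better.
    context_commands: commands the assistance system deems plausible."""
    rescored = []
    for text, score in hypotheses:
        if text in context_commands:
            score += bonus  # context boost from the assistance system
        rescored.append((text, score))
    return max(rescored, key=lambda pair: pair[1])

# Acoustically, FL110 sounds slightly more likely; the radar-derived
# context says only a descent to FL120 makes sense, so it wins.
hyps = [("DLH81 descend FL120", 0.55), ("DLH81 descend FL110", 0.60)]
ctx = {"DLH81 descend FL120"}
best = rescore(hyps, ctx)  # ("DLH81 descend FL120", 0.75)
```

The callsigns, score values and the additive bonus are illustrative only; real systems integrate context into the recogniser's language model rather than rescoring a finished list.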
The follow-up project MALORCA showed that, through machine learning, such assistance-based speech recognisers can be adapted automatically, and therefore inexpensively, to different airports. The prerequisite is that sufficient speech data and radar data are available to train the machine-learning algorithms.
The current project HAAWAII builds on the work of AcListant® and MALORCA. For the first time, it also covers the recognition of pilot radio traffic, and it will use significantly more voice data to train the AI algorithms: MALORCA used only 25 hours of voice data for learning, whereas HAAWAII will use more than 1,000 hours. As example environments, HAAWAII has selected the complex en-route airspace of Iceland and the terminal area (TMA) of London. Particular challenges, in addition to diverse accents and significantly poorer speech signals, are aspects of data protection.
The work in HAAWAII will both improve air traffic safety and reduce the workload of air traffic controllers. One of the main application areas of the HAAWAII research is recognising whether the pilot has understood exactly what the controller has said, which helps to avoid misunderstandings in radio communication. To achieve this, the reliability of the speech-recognition models must be improved significantly.
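Checking the pilot's readback can be pictured as comparing two structured commands. The sketch below is an illustration, not the project's method: it assumes both utterances have already been converted into fields such as callsign, command type and value, and flags any field where the readback deviates:

```python
# Hypothetical sketch of readback-error detection: compare the
# controller's command with the pilot's readback field by field.

def readback_errors(controller_cmd, pilot_readback):
    """Return the set of fields where the readback differs from the command."""
    return {field for field in controller_cmd
            if pilot_readback.get(field) != controller_cmd[field]}

cmd = {"callsign": "BAW123", "type": "DESCEND", "value": "FL120"}
rb  = {"callsign": "BAW123", "type": "DESCEND", "value": "FL110"}

errors = readback_errors(cmd, rb)  # {"value"}: pilot read back the wrong level
ok = readback_errors(cmd, cmd)     # empty set: readback matches
```

A non-empty result would trigger an alert so the controller can correct the pilot. The hard part in practice is the step assumed away here: reliably extracting those fields from noisy, accented radio speech.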
The digitisation of spoken messages from air traffic controllers and pilots enables a multitude of safety- and efficiency-enhancing applications, e.g. pre-filling entries in electronic flight strips with little effort or transmitting controller commands directly to the aircraft's on-board computer via data link (Controller-Pilot Data Link Communications, CPDLC). A further application is the objective estimation of air traffic controllers' workload from digitised voice recordings of the complex London terminal control area.
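Pre-filling a flight strip from a recognised transcript amounts to parsing the utterance into structured fields. A minimal sketch, assuming a single toy command shape (real ATC phraseology parsing is far richer, and the field names are invented for illustration):

```python
import re

# Hypothetical sketch: turn a recognised transcript into a pre-filled
# electronic flight-strip entry. Covers only one toy command pattern:
# "<callsign> CLIMB|DESCEND FL<level>".
PATTERN = re.compile(
    r"^(?P<callsign>[A-Z]{3}\d+) (?P<type>CLIMB|DESCEND) FL(?P<level>\d{2,3})$"
)

def to_strip_entry(transcript):
    m = PATTERN.match(transcript)
    if m is None:
        return None  # no match: leave the strip for manual entry
    return {"callsign": m["callsign"],
            "command": m["type"],
            "cleared_level": f'FL{m["level"]}'}

entry = to_strip_entry("DLH81 CLIMB FL240")
# {"callsign": "DLH81", "command": "CLIMB", "cleared_level": "FL240"}
```

Because the controller only has to confirm or correct a pre-filled entry instead of typing it, head-down time shrinks; the same structured command could, in principle, be forwarded over CPDLC.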