Moscow Aviation Institute postgraduate student automates human-machine interface testing
Sergey Dyachenko, a postgraduate student of Department 703 "System Design of Aerial Systems" at Institute No. 7 "Robotic and Intelligent Systems", is developing a complex for automated testing of graphic information and sound warnings as part of the verification of indication and alarm systems of civil aircraft. Beyond its primary purpose, the tool could be used to test any technical object with a human-machine interaction system: military and military transport aircraft, helicopters, spacecraft, cars, ships and others.
Birth of the idea
The work is based on popular and widely used IT technologies: image and sound recognition. The idea for the project came from production needs. Sergey is an expert in the indication and alarm systems department at the Integration Center branch of Irkut Corporation, which develops avionics for the MC-21 aircraft. During the development of the next software version of these systems, it became necessary to test the issuance of text messages generated by the crew alerting system and displayed on the cockpit indicators during various phases of flight. Given the large number of messages (over 600), the department estimated that manual testing would take a two-person team about a month.
In addition, the testing scope was not limited to this task, and the deadlines were tight. The team therefore decided to develop a tool to automate the process.
Working alongside Sergey as programmers and testers are Dmitry Ilyashenko, Egor Mamkin, Artyom Krytsin, Vladislav Zub and Ivan Kordonsky, graduates and students of Department 703 of the Moscow Aviation Institute and employees of the Integration Center branch of PJSC "Irkut Corporation". The head of Department 703, Evgeny Neretin, serves as the project's scientific supervisor.
Implementation process
The idea of the project is that the tester sets the parameter values that trigger the creation of text messages, which then appear on the cockpit display. A camera installed opposite the display captures their appearance, takes a screenshot, and passes it to the text recognition software. Recognition produces a file with a list of recognized messages, which is then compared against the expected results. Based on this comparison, a pass or fail verdict is issued for the test.
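The comparison step described above can be sketched in a few lines of Python. This is a minimal illustration only, not the team's actual tool; the function name, message strings, and the set-based comparison are assumptions:

```python
def compare_messages(recognized, expected):
    """Compare the list of recognized messages against the expected results.

    Returns (missing, unexpected): messages the OCR did not find,
    and messages that appeared but were not expected.
    """
    recognized_set = {m.strip() for m in recognized}
    expected_set = {m.strip() for m in expected}
    missing = sorted(expected_set - recognized_set)
    unexpected = sorted(recognized_set - expected_set)
    return missing, unexpected


# A test passes when nothing is missing and nothing unexpected appears.
missing, unexpected = compare_messages(
    ["ENG 1 FIRE", "GEAR NOT DOWN"],              # recognized on the display
    ["ENG 1 FIRE", "GEAR NOT DOWN", "FUEL LOW"],  # expected for this flight phase
)
print(missing)     # → ['FUEL LOW']
print(unexpected)  # → []
```

In a real tool the expected list would be loaded per test case (e.g. per flight phase), and the verdict would be "passed" only when both result lists are empty.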
– We analyzed the problem and found that it could be solved quickly enough using neural network technologies. We took Google's Tesseract OCR engine, aimed specifically at text recognition, trained it on the fonts used in the MC-21 cockpit, and first tested it on static pictures with text messages. The result was quite good: OCR accuracy was about 97%. We then tested our system "in combat conditions" at the test bench, where it also proved effective. As a result, we reduced the time for solving the aforementioned task to one week, and the number of testers to one person. After successfully implementing text recognition, we started thinking about automating the testing of arbitrary graphic information and sound messages. So the goals of our project expanded, – says Sergey.
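A character-level accuracy figure like the ~97% quoted above can be estimated by comparing recognized text against a ground-truth string; the sketch below uses Python's standard difflib for this. It is a hedged illustration of the idea only — the team's actual evaluation metric is not described in the article:

```python
import difflib


def ocr_accuracy(expected: str, recognized: str) -> float:
    """Estimate character-level OCR accuracy as the similarity ratio
    (0.0 .. 1.0) between the ground-truth text and the recognized text."""
    return difflib.SequenceMatcher(None, expected, recognized).ratio()


# One misread character ("I" recognized as "l") in a 10-character message.
acc = ocr_accuracy("ENG 1 FIRE", "ENG 1 FlRE")
print(round(acc, 2))  # → 0.9
```

Averaging such ratios over a corpus of test screenshots gives an overall accuracy score that can be tracked as the recognizer is retrained on new fonts.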
No competitors
It is worth mentioning that most on-board systems do not involve human-machine interaction during aircraft operation. That is, these systems are implemented as blocks that receive information at the input, compute the necessary data from it, and produce results at the output. Testing of such systems is by now almost completely automated: many solutions exist for setting and reading the parameters transmitted over coded communication lines.
However, things are not so simple with human-machine interaction systems, which produce organoleptic information (for example, images, sound, or tactile signals). Because complexes for automating the testing of such information are difficult to implement, there are practically no such solutions on the market. Meanwhile, manual testing of images and sound takes longer and is also prone to human error.
– This makes our development stand out from the competition, – says Sergey.
Final stage
According to the developer, the main functionality of the complex has been implemented and introduced into the operations of the Integration Center branch of Irkut Corporation. Work to improve it and expand its functionality will continue; for example, efforts to increase the accuracy of recognizing arbitrary graphic information are now underway.
Going forward, Sergey and his team plan to complete the project, implementing all planned functions at the appropriate quality, and to initiate the qualification process for the developed software tool. This is an important step toward bringing the complex to market and applying it widely in industry.
– As noted, the principles underlying the project are universal, so the developed complex can be used to test any human-machine interaction system, – says Sergey. – Adapting it to a particular object depends on the specific design solutions for presenting visual and sound information, as well as on the requirements of the relevant regulatory documentation.