Examining Decision-Making in Air Traffic Control: Enhancing Transparency and Decision Support Through Machine Learning, Explanation, and Visualization: A Case Study
Abstract
Artificial Intelligence (AI) has recently made significant advancements and is now pervasive across various application domains. This holds true for Air Transportation as well, where AI is increasingly involved in decision-making processes. While these algorithms are designed to assist users in their daily tasks, they still face challenges related to acceptance and trustworthiness. Users often harbor doubts about the decisions proposed by AI and, in some cases, may even oppose them. This is primarily because AI-generated decisions are often opaque, non-intuitive, and incompatible with human reasoning. Moreover, when AI is deployed in safety-critical contexts like Air Traffic Management (ATM), the individual decisions generated by AI models must be highly reliable for human operators. Understanding the behavior of the model and providing explanations for its results are essential requirements in every life-critical domain. Within this scope, this project aimed to enhance transparency and explainability of AI algorithms in the Air Traffic Management domain. This article presents the results of the project's validation, conducted for a Conflict Detection and Resolution task involving 21 air traffic controllers (10 experts and 11 students) in an En-Route position (i.e., high-altitude flight management). Through a controlled study incorporating three levels of explanation, we offer initial insights into the impact of providing additional explanations alongside a conflict resolution algorithm to improve decision-making. At a high level, our findings indicate that providing explanations is not always necessary, and our project sheds light on potential research directions for education and training purposes.