Enhancing Trust in Human-AI Interaction through Explainable Decision Support Systems for Mission Planning of UAS-Swarms

Open Access
Article
Conference Proceedings
Authors: Batuhan Özcan, Max Friedrich, Jan-Paul Huttner, Kevin Dwinger

Abstract: The use of Artificial Intelligence (AI) in decision-making systems often raises concerns about transparency and interpretability due to the "black box" nature of many AI models. This lack of explainability can hinder trust and limit the effective integration of AI into human-machine systems. Fuzzy logic offers a compelling solution to this challenge by providing inherently interpretable decision-making frameworks. Unlike traditional AI approaches, which often obscure their reasoning processes, fuzzy logic operates through intuitive linguistic rules and degrees of truth, making it possible to design systems that are both explainable and adaptable. By combining fuzzy logic with advanced AI techniques, such as machine learning, it becomes feasible to build systems that leverage the power of AI without sacrificing transparency or user trust. Fuzzy logic plays an essential role in advancing these goals by offering a framework for handling uncertainty and modeling human-like reasoning. Unlike classical logic, which relies on binary true/false values, fuzzy logic operates on degrees of truth. This characteristic makes it uniquely suited for real-world applications where data may be incomplete or imprecise. By using linguistic variables and intuitive rules, fuzzy logic enables decision-making systems to align more closely with human perception. For example, instead of rigid thresholds like “temperature > 30°C,” fuzzy logic employs terms such as “moderately warm” or “very hot,” which are easier for humans to understand. This interpretability is particularly valuable in Human-Machine Interface design, where trust and collaboration between users and machines are key factors. This study proposes a structured approach to integrating fuzzy logic into mission planning systems for safety-critical environments such as autonomous disaster management with drone swarms. The first step involves defining a set of linguistic variables and rules tailored to specific operational contexts, for instance assessing flight risks or prioritizing tasks during disaster response scenarios. These rules will be designed to align with human reasoning patterns while remaining computationally efficient. Next, fuzzy inference systems will be developed to process uncertain inputs, such as environmental conditions or sensor data, and generate interpretable outputs that guide decision-making. To enhance system adaptability, reinforcement learning algorithms will be integrated into the framework, allowing the system to optimize its performance over time based on feedback from real-world operations or simulations. The study seeks to demonstrate how combining fuzzy logic with machine learning and XAI principles can create robust, explainable systems that improve trust, collaboration, and safety in human-autonomous teaming environments.
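
To make the described pipeline concrete, the minimal sketch below illustrates the kind of fuzzy inference step the abstract outlines: crisp sensor readings are fuzzified into linguistic terms, a few human-readable rules are evaluated, and the result is defuzzified into a single flight-risk score. The variable names, membership breakpoints, and rules are illustrative assumptions for this example, not the authors' actual mission-planning model.

```python
# Minimal fuzzy flight-risk sketch: linguistic variables ("calm", "breezy",
# "strong"; "poor", "fair", "good") with triangular membership functions,
# a few readable rules, and weighted-average defuzzification.
# All breakpoints and rules here are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def flight_risk(wind_speed_ms, visibility_km):
    # Fuzzify crisp sensor inputs into degrees of truth in [0, 1].
    wind = {
        "calm":   tri(wind_speed_ms, -1, 0, 6),
        "breezy": tri(wind_speed_ms, 4, 8, 12),
        "strong": tri(wind_speed_ms, 10, 15, 25),
    }
    vis = {
        "poor": tri(visibility_km, -1, 0, 2),
        "fair": tri(visibility_km, 1, 3, 6),
        "good": tri(visibility_km, 4, 10, 20),
    }

    # Linguistic rules, e.g. "IF wind is strong OR visibility is poor
    # THEN risk is high", with min for AND and max for OR.
    rule_strength = {
        "low":    min(wind["calm"], vis["good"]),
        "medium": max(wind["breezy"], vis["fair"]),
        "high":   max(wind["strong"], vis["poor"]),
    }

    # Defuzzify as a weighted average of representative risk levels.
    centers = {"low": 0.2, "medium": 0.5, "high": 0.9}
    total = sum(rule_strength.values())
    if total == 0:
        return 0.5  # no rule fired; fall back to a neutral estimate
    return sum(rule_strength[k] * centers[k] for k in centers) / total

if __name__ == "__main__":
    print(f"risk = {flight_risk(wind_speed_ms=12.0, visibility_km=1.5):.2f}")
```

A reinforcement-learning component, as outlined in the abstract, could then adjust quantities such as the membership breakpoints or the rule output centers based on feedback from simulations or field operations, while the rules themselves remain readable to the human operator.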

Keywords: Unmanned Aircraft System, Artificial Intelligence, Mission Planning, Fuzzy-Logic

DOI: 10.54941/ahfe1006377
