Decision Transparency for enhanced human-machine collaboration for autonomous ships
Open Access
Article
Conference Proceedings
Authors: Andreas Madsen, Andreas Brandsæter, Magne V. Aarset
Abstract: Maritime Autonomous Surface Ships (MASS) are quickly emerging as a game-changing technology in various parts of the world. They can be used for a wide range of applications, including cargo transportation, oceanographic research, and military operations. One of the main challenges associated with MASS is the need to build trust and confidence in the systems among end-users. While the use of AI and algorithms can lead to more efficient and effective decision-making, humans are often reluctant to rely on systems they do not fully understand. This lack of transparency and interpretability makes it very difficult for the human operator to know when an intervention is appropriate. It is therefore crucial that the decision-making process of MASS is transparent and easily interpretable for human operators and supervisors. In the emerging field of eXplainable AI (XAI), various techniques have been developed and designed to help explain the predictions and decisions made by AI systems. How useful these techniques are in real-world MASS operations is, however, currently an open question. This calls for research with a holistic approach that takes into account not only the technical aspects of MASS but also the human factors involved in their operation. To address this challenge, this study employs a simulator-based approach where navigators test a mock-up system in a full-mission navigation simulator. Enhanced decision support was presented on an Electronic Chart Display and Information System (ECDIS), together with information about approaching ships shown as AIS (Automatic Identification System) symbols. The decision support provided by the system was a suggested sailing route with waypoints, either to manoeuvre to avoid collision or to maintain course and speed, in accordance with the Convention on the International Regulations for Preventing Collisions at Sea (COLREG). After completing the scenarios, the navigators were asked about the system's trustworthiness and interpretability. Further, we explored their needs for transparency and explainability. In addition, the navigators gave suggestions on how to improve the decision support with respect to these traits. The findings from the assessment can be used to develop a strategic plan for AI decision transparency. Such a plan would help build trust in MASS systems and improve human-machine collaboration in the maritime industry.
Keywords: Decision Transparency, MASS, XAI, Human-machine collaboration, Decision Support
DOI: 10.54941/ahfe1003750