Formal Verification for Human-Centred Trust in AI: A Critical Examination of Current Paradigms

Open Access Article (Conference Proceedings)
Author: Asieh Salehi Fathabadi

Abstract: As artificial intelligence systems increasingly permeate critical societal infrastructures, the gap between technical verification and human-centred trust has become a fundamental challenge. This position paper argues that current formal verification approaches for AI systems are inadequate to foster genuine public trust, particularly in settings involving human interaction and socio-technical complexity. We advance three critical arguments: (1) the Trust Verification Paradox: static verification approaches fail to capture the dynamic and adaptive nature of trust; (2) the Public-Technical Trust Divide: technical correctness without human understanding risks "certification theater"; and (3) the Distributed Responsibility Crisis: existing verification paradigms struggle to account for collective outcomes and accountability. We propose a shift toward Participatory Verification, in which formal methods are extended to embed stakeholder values, support verification of trust evolution, and enable responsibility attribution. Through a formal and illustrative autonomous vehicle coordination case study, we demonstrate the expressive power of Participatory Verification and outline how trust evolution, stakeholder values, and responsibility attribution can be embedded into verification frameworks. This vision paper calls for a research agenda that bridges formal methods, human-AI interaction, and social science to support AI systems that are not only technically correct, but genuinely trustworthy.

Keywords: Formal Methods, Responsible AI, Trust, Human-Centred, Participatory Design

DOI: 10.54941/ahfe1007160

