A Multi-Perspective AI Framework for Mitigating Disinformation Through Contextual Analysis and Socratic Dialogue

Open Access | Conference Proceedings Article
Authors: Manuel Delaflor, Carlos Toxtli

Abstract: The proliferation of digital information channels has created an unprecedented challenge in discerning credible information from sophisticated disinformation campaigns. Traditional fact-checking methods, often relying on binary true/false classifications, struggle to address the complexity, context-dependency, and nuanced nature of many claims circulating online. This limitation underscores the urgent need for advanced tools that empower individuals to critically evaluate information from multiple angles. Our AI-driven framework combines persistent contextual memory with Socratic dialogue and a three-lens analytical pipeline to foster deeper understanding and resilience against manipulation.

As users interact, each input is segmented into atomic claims and stored, alongside the evolving dialogue history, in a contextual memory to ensure consistency. Each claim is then evaluated in parallel by three specialized LLM arbiters: the Empirical Arbiter, which verifies data against curated repositories and assesses observational consistency; the Logical Arbiter, which uncovers hidden fallacies and assesses argument coherence; and the Pragmatic Arbiter, which weighs potential outcomes, utility, and situational fit. An Analysis Integrator synthesizes these into interpretable metrics: Verifact Score (evidence strength), Model Diversity Quotient (inter-arbiter agreement), Contextual Sensitivity Index (scenario appropriateness), and Reflective Index (exposed assumptions). Additionally, a Perspective Generator crafts counter-arguments and alternative viewpoints, encouraging users to consider different interpretations and promoting epistemic humility.

We hypothesize (H₁) that our arbiters' feedback will reduce user endorsement of unsupported claims more effectively than conventional fact-checking while mitigating backfire effects through Socratic dialogue. Our research questions ask how Empirical, Logical, and Pragmatic scores influence confidence revision (RQ₁); whether MDQ reliably signals claim controversy and predicts evidence volatility (RQ₂); how users perceive transparency, fairness, and cognitive load when receiving multi-perspective feedback versus a simple true/false label (RQ₃); and to what extent the persistent contextual memory system improves belief updating by maintaining coherent reasoning chains across extended dialogues (RQ₄).

By providing a multi-faceted presentation that moves beyond simple verification, the system is designed to encourage engagement in higher-order critical thinking. The proposed framework represents a significant advancement over traditional fact-checking by integrating empirical validation, logical scrutiny, and pragmatic assessment through an AI-driven system. The full paper will detail the system architecture, formal metric definitions, experimental protocol, and proposed evaluation methodology to assess its efficacy in educational settings, media literacy programs, and as a personal tool for navigating the complexities of the modern information ecosystem.
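As a concrete illustration of the pipeline described in the abstract, the minimal Python sketch below shows how a single user input might be stored in contextual memory, scored in turn by three arbiter functions, and aggregated by an integrator into example metrics. All function names (e.g., empirical_arbiter, integrate), the placeholder scores, and the metric formulas are illustrative assumptions made for this sketch; the formal metric definitions are given in the full paper.

```python
# Illustrative sketch only (not the authors' implementation): a claim is scored by
# three stand-in "arbiter" functions and an integrator aggregates the results.
# Scores and metric formulas below are placeholder assumptions.

from dataclasses import dataclass, field
from statistics import pstdev
from typing import Callable, Dict, List


@dataclass
class ContextualMemory:
    """Persistent store of atomic claims and dialogue turns used for consistency checks."""
    claims: List[str] = field(default_factory=list)
    dialogue: List[str] = field(default_factory=list)

    def add_turn(self, user_input: str, atomic_claims: List[str]) -> None:
        self.dialogue.append(user_input)
        self.claims.extend(atomic_claims)


def empirical_arbiter(claim: str, memory: ContextualMemory) -> float:
    """Stand-in for an LLM arbiter that checks the claim against curated evidence (0..1)."""
    return 0.42  # placeholder confidence


def logical_arbiter(claim: str, memory: ContextualMemory) -> float:
    """Stand-in for an LLM arbiter that scores argument coherence and flags fallacies (0..1)."""
    return 0.65  # placeholder coherence score


def pragmatic_arbiter(claim: str, memory: ContextualMemory) -> float:
    """Stand-in for an LLM arbiter that weighs utility and situational fit (0..1)."""
    return 0.58  # placeholder fit score


ARBITERS: Dict[str, Callable[[str, ContextualMemory], float]] = {
    "empirical": empirical_arbiter,
    "logical": logical_arbiter,
    "pragmatic": pragmatic_arbiter,
}


def integrate(claim: str, memory: ContextualMemory) -> Dict[str, float]:
    """Analysis Integrator sketch: run all arbiters and derive example metrics."""
    scores = {name: fn(claim, memory) for name, fn in ARBITERS.items()}
    values = list(scores.values())
    return {
        **scores,
        # Verifact Score ~ strength of empirical support (assumed mapping).
        "verifact_score": scores["empirical"],
        # Model Diversity Quotient ~ spread (disagreement) among arbiters (assumed formula).
        "model_diversity_quotient": pstdev(values),
    }


if __name__ == "__main__":
    memory = ContextualMemory()
    memory.add_turn("Example user input containing one claim.", ["Example atomic claim"])
    print(integrate("Example atomic claim", memory))
```

In a full system, each arbiter call would be an LLM invocation conditioned on the claim and the contextual memory, and additional metrics (Contextual Sensitivity Index, Reflective Index) and the Perspective Generator would consume the same integrated output.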

Keywords: Disinformation, Misinformation, Critical Thinking, Fact-Checking, Content Analysis

DOI: 10.54941/ahfe1006743
