The Simplicity Paradox: Designing Transparency in the Age of AI

Authors: Clara Rocha Kyrillos, Marina Moreira Marinho, Eduardo Oliveira

Abstract: In recent years, design has come to treat simplicity almost as a moral value. Yet, in the context of Artificial Intelligence (AI), this ideal grows more difficult to define. The following paper reflects on the delicate tension between making AI systems approachable and the risk of reducing them to something opaque or misleading. Maeda’s Laws of Simplicity (2006) outline key principles, among them Reduce, Organize, Learn, and Trust, that help structure human–technology interaction. However, when simplicity becomes a goal pursued without critical thought, it can mask the very mechanisms users should understand. Rams saw simplicity as the outcome of careful refinement; Munari described it as clarity achieved after wrestling with complexity; Norman linked it to cognitive empathy; and Simon reminded us that what seems simple depends on what the observer already knows. These views converge in suggesting that simplicity and ethics in AI design cannot be separated.

To simplify does not mean to erase complexity, but rather to interpret it. Google Design (2024) suggests that simplicity in AI involves giving people agency over the “magic” of automation, creating what they call cognitive transparency. Interfaces that achieve this reveal just enough to build comprehension without overloading the user. Popular tools such as ChatGPT, Google Translate, and Spotify demonstrate this principle in practice, each turning intricate algorithms into something fluid and familiar. Studies by Karran et al. (2022) emphasize the role of visual clarity and feedback in building trust, while NNGroup (2025) identifies “perceptive simplicity” as a strategy to reduce cognitive effort. Likewise, Brdnik (2023) notes that clarity of hierarchy and modularity are central to making data-heavy AI dashboards usable.

This study offers a comparative look at these three applications (ChatGPT, Google Translate, and Spotify), examining how Maeda’s laws appear in their visual and interaction design. Each platform presents a particular interpretation of simplicity, balancing usability, transparency, and control in different ways. Through this lens, it becomes possible to see how design choices shape not only engagement but also the user’s trust in AI-driven decisions.

Simplicity, then, is never neutral. ChatGPT’s conversational design invites learning but hides its sources. Google Translate reduces linguistic barriers while glossing over cultural nuance. Spotify curates personal playlists yet conceals its algorithmic logic. All three show how simplicity can either illuminate meaning or quietly manipulate it.

Recent thinking around Generative AI expands this debate. New design principles, such as “exploration and control,” “acceptance of imperfection,” and “model comprehensibility,” suggest that uncertainty should be shown, not erased. The notion of Seamful Design reinforces this: revealing flaws can make systems more honest. As Liao et al. (2023) argue, designers must first understand the machinery behind AI to make it truly simple. Without that insight, what appears simple may only be beautifully opaque.

In essence, simplicity in AI is a moral negotiation between clarity, capability, and honesty. To design simply is not to hide complexity, but to guide others through it with care.

Keywords: AI transparency, XAI (Explainable AI), Simplicity, Seamful design, Trust, Design ethics

DOI: 10.54941/ahfe1007184

