Formatting Usability Guidance For Agentic AI: How Structure Shapes First‑Pass Quality
Open Access Article, Conference Proceedings
Authors: Michael Jenkins, Caroline Kingsley, Craig Johnson, Laura Mieses
Abstract: Agentic AI (AAI) systems increasingly act, not just answer. When teams use AAI to scaffold software creation, the format of the usability standards and design guidelines they feed into these systems may determine whether the first pass is usable or needs heavy rework. Yet most guidance focuses on what to design, not how to format guidance for AI consumption.

Objective. Evaluate how different formats of usability standards and design guidelines affect AAI first-pass outputs in application-development contexts.

Method. We authored four distinct guidance styles (varying in structure, constraint explicitness, and exemplar density) and applied each across two AAI/GenAI application-development services executing the same feature-level build tasks. Outputs were blind-rated by an expert panel of three Human Factors/UX specialists against anchored criteria for task fit, constraint adherence, and revision effort. Ratings were aggregated and compared across styles and platforms; inter-rater agreement and cross-platform consistency were examined. Normative heuristics (e.g., Nielsen's 10) informed rubric construction and our emphasis on compact, chunked, example-rich inputs.

Results & Contributions. The study isolates formatting as a lever for first-pass adequacy in AAI development. We report which style(s) yielded higher expert ratings across platforms, summarize inter-rater agreement, and distill formatting patterns (ordering, constraint statements, examples, acceptance checks) that traveled well across tools. We provide a reusable "meta-spec" for authoring AI-ready usability guidance that aligns HCD, HAI guidelines, and risk controls while preserving designer intent.

Implications. For product teams adopting AAI, small changes in how standards are formatted can reduce rework, speed iteration, and connect design practice to measurable outcomes long associated with high-maturity design organizations.

Poster takeaways (practitioner-ready):
1. Treat usability guidance as input UX: structure, explicit constraints, and worked examples are first-class variables; measure them.
2. Author guidance to an AI-ready "meta-spec" (sections, acceptance checks, examples, and known failure modes) to improve first-pass adequacy across tools (a hypothetical sketch follows these takeaways).
3. Build rubrics from established HCD/HAI sources; use expert review for quality and agreement checks before scaling (an agreement-statistic sketch also follows the takeaways).
4. Close the loop: track rework hours saved to connect formatting choices to lean delivery and design ROI.
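The poster does not reproduce the meta-spec itself, so the following is a minimal hypothetical sketch, in Python, of how such a spec might be structured: one record per guideline carrying the formatting elements the abstract names as levers (explicit constraint statements, worked examples, acceptance checks, known failure modes), plus a trivial readiness gate. All identifiers (GuidelineSpec, first_pass_ready, the field names) are illustrative assumptions, not the authors' published schema.

```python
# Hypothetical sketch of an AI-ready "meta-spec" record for one guideline.
# Field names are illustrative assumptions, not the paper's published schema.
from dataclasses import dataclass, field

@dataclass
class GuidelineSpec:
    """One usability guideline formatted for agentic-AI consumption."""
    name: str                    # short, stable identifier
    constraint: str              # explicit, testable constraint statement
    rationale: str               # designer intent behind the constraint
    examples: list[str] = field(default_factory=list)           # worked examples
    acceptance_checks: list[str] = field(default_factory=list)  # pass/fail checks
    failure_modes: list[str] = field(default_factory=list)      # known failure modes

REQUIRED_FIELDS = ("constraint", "examples", "acceptance_checks")

def first_pass_ready(spec: GuidelineSpec) -> bool:
    """Illustrative gate, not the paper's rubric: reject guidance that lacks
    the elements the abstract associates with first-pass adequacy."""
    return all(getattr(spec, f) for f in REQUIRED_FIELDS)

if __name__ == "__main__":
    spec = GuidelineSpec(
        name="error-prevention",
        constraint="Destructive actions require an explicit confirmation step.",
        rationale="Nielsen heuristic #5: error prevention.",
        examples=["Delete-account flow shows a typed-confirmation dialog."],
        acceptance_checks=["No destructive action fires on a single click."],
        failure_modes=["AI adds confirmation to non-destructive actions too."],
    )
    print(first_pass_ready(spec))  # True
```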
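The abstract reports that inter-rater agreement across the three-specialist panel was examined but does not name the statistic used. As a point of reference only, the sketch below computes Fleiss' kappa, a common chance-corrected agreement measure for a fixed panel of raters; the ratings and scale here are invented for illustration.

```python
# Fleiss' kappa for a fixed panel of raters -- one standard agreement
# statistic; the paper does not state which measure its panel analysis used.
def fleiss_kappa(counts: list[list[int]]) -> float:
    """counts[i][j] = number of raters assigning item i to category j.
    Every row must sum to the same number of raters."""
    n_items = len(counts)
    n_raters = sum(counts[0])
    # Per-item agreement: proportion of rater pairs that agree, averaged.
    p_bar = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ) / n_items
    # Chance agreement from the marginal category frequencies.
    p_e = sum(
        (sum(row[j] for row in counts) / (n_items * n_raters)) ** 2
        for j in range(len(counts[0]))
    )
    return (p_bar - p_e) / (1 - p_e)

# Invented example: 3 raters scoring 4 outputs on a 3-point adequacy scale.
ratings = [
    [3, 0, 0],  # all three raters chose category 0
    [0, 2, 1],
    [0, 3, 0],
    [1, 1, 1],  # full disagreement
]
print(round(fleiss_kappa(ratings), 3))  # ~0.318
```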
Keywords: Generative AI, Agentic AI, Usability, UX Design, Human-AI Teaming, Prompt Engineering
DOI: 10.54941/ahfe1006834