AI Tool Compliance Reporting: A Heuristic Analysis of Survey Data Using Natural Language Processing

Open Access | Article | Conference Proceedings
Authors: Aimee Roundtree

Abstract: This study examined how well New York City's public AI tools reported good design practices for users. It analyzed 76 reports about algorithmic tools using a mix of natural language processing, human review, and Nielsen's ten usability heuristics, such as showing system status, giving users control, and providing help. The tools often followed some of these heuristics, especially those supporting transparency, user control, and clear design; others, such as helping users prevent mistakes or reducing memory load, were rarely addressed. Agencies may be focusing more on making tools technically sound than on making them easy and fair to use. An analysis of the language in the reports also found differences by heuristic: some used more formal or technical wording, while others were simpler and more user-friendly. The findings confirm earlier work showing that public trust in AI depends on transparency and fairness. More work is needed to include all users, especially for high-risk tools such as those used in healthcare or law enforcement. Future studies should involve users and designers directly and examine tools across more sectors to improve design and fairness in public AI.
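The abstract describes tagging report text against Nielsen's usability heuristics. A minimal sketch of one way such tagging could work is keyword matching, shown below; the heuristic names follow Nielsen's list, but the keyword sets and the `tag_sentences` helper are illustrative assumptions, not the author's actual method.

```python
# Illustrative sketch only: counting mentions of Nielsen heuristics in
# report sentences via keyword matching. Keyword lists are assumptions
# chosen for demonstration, not taken from the study.
from collections import Counter

HEURISTIC_KEYWORDS = {
    "visibility of system status": ["status", "feedback", "progress"],
    "user control and freedom": ["control", "undo", "cancel"],
    "error prevention": ["prevent", "validation", "confirm"],
    "help and documentation": ["help", "documentation", "guide"],
}

def tag_sentences(sentences):
    """Count how many sentences mention each heuristic's keywords."""
    counts = Counter()
    for sentence in sentences:
        lowered = sentence.lower()
        for heuristic, keywords in HEURISTIC_KEYWORDS.items():
            if any(k in lowered for k in keywords):
                counts[heuristic] += 1
    return counts

report = [
    "The tool displays a status indicator showing progress.",
    "Users can cancel or undo an action at any time.",
    "No input validation is performed before submission.",
]
print(tag_sentences(report))
```

In practice a study like this would combine such automated passes with human review, as the abstract notes, since keyword matching alone misses paraphrase and context.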

Keywords: AI, Public Services, User Experience

DOI: 10.54941/ahfe1006053

