Large language models in programming: a meta-analysis of tools, users, and human-computer interaction themes

Open Access
Article
Conference Proceedings
Authors: Daniel Olivares, Charles Bennington, Abigail Skillestad

Abstract: Since 2021, the rapid integration of large language models (LLMs), such as OpenAI’s Codex and ChatGPT, into programming has reshaped how software is written, learned, and maintained. Tools such as GitHub Copilot, Amazon CodeWhisperer, Tabnine, and Sourcegraph Cody have evolved from experimental aids to core elements of modern workflows, while academic prototypes continue to explore new interfaces and teaching applications. This meta-analysis synthesizes empirical research, user evaluations, and product-level comparisons to provide a comprehensive view of the opportunities and challenges posed by LLM-based programming assistants. The analysis considers novice programmers, professional developers, researchers, and educators, highlighting recurring human-computer interaction (HCI) themes of trust calibration, cognitive load management, interface modalities, and the balance between automation and user control.

The methodology followed a systematic review of studies published between 2021 and early 2025 in ACM, IEEE, arXiv, and other recognized repositories. Industry reports and tool documentation were included to capture emerging developments. A qualitative thematic synthesis integrated findings across varied research contexts, including user studies, classroom evaluations, and professional development workflows, revealing consistent patterns in tool use, learning outcomes, and professional practice, while also identifying gaps in current understanding.

Novice programmers benefit from immediate feedback, reduced syntax errors, and increased confidence. Yet these advantages can foster over-reliance if tools are used as answer generators. Structured support, such as hint-based prompting and code validation, helps students engage more deeply with core concepts. Professional developers report productivity gains in routine tasks and code navigation but remain cautious about correctness, security, and workflow disruptions. Vulnerability checks, auto-generated tests, and explanation features are especially valued. Researchers and educators employ LLM-based programming tools to streamline analysis, generate assessments, and create interactive teaching methods, though concerns persist about equity, academic integrity, and responsible classroom use.

Across all groups, four HCI themes stand out. Trust calibration is essential to help users understand both strengths and limitations. Cognitive load management improves when tools integrate seamlessly into workflows and provide context-aware assistance. Interface modalities matter, with value in combining inline completions and conversational explanations to support varied scenarios. Finally, balancing automation with user control ensures accountability and promotes critical engagement, meaning that users remain actively involved in evaluating, verifying, and refining AI-generated outputs rather than passively accepting them.

These findings show that LLM-based programming tools are not inherently harmful; outcomes depend on how they are used and designed. For learners, risks arise when practice is bypassed, limiting skill growth. For professionals, challenges involve accuracy, security, and workflow integration. Effective use treats LLMs as collaborators that support reflection and experimentation rather than as replacements for human reasoning. Students benefit when tools provide hints and guidance instead of complete solutions, encouraging deeper understanding.

In conclusion, LLM-based programming tools present strong potential for advancing productivity, education, and research. Benefits include faster coding, improved learning, and streamlined teaching. Persistent challenges remain related to correctness, cognitive load, and trust. Future research should emphasize longitudinal studies of skill development and design strategies that improve transparency, context, and pedagogy.
Ethical and legal considerations, including attribution, privacy, and access, must also be addressed. By positioning these tools as collaborative partners, the computing community can maximize their benefits while reducing risks for developers, educators, and researchers.

Keywords: large language models, programming tools, developer productivity, human-computer interaction, software education, user experience, meta-analysis, AI

DOI: 10.54941/ahfe1006934
