Expanding the Horizon of Learning Applications: A Study on the Versatility of Prompting Architectures in Large Language Models
Open Access
Article
Conference Proceedings
Authors: Cecilia Delgado Solorzano, Carlos Toxtli
Abstract: This paper explores the versatility of prompting architectures in large language models (LLMs), expanding the horizons of their application in learning and language interfaces. By leveraging the expansive capabilities of LLMs, this research investigates the potential for structured prompts that simultaneously support multiple use cases: paraphrasing, grammatical syntax guidance for introductory sentences, and experiential conversation in a foreign language. We examine the specifics of enabling such technology, including the design of the prompting architecture, the deployment process, and the intricacies of applying the same structure across diverse applications. An extensive field experiment with LLM-powered interfaces built on this structured prompt was conducted to evaluate its efficiency in real-world scenarios. The results highlight the promising adaptability of these prompting architectures, showing strong efficiency across the use cases explored. Furthermore, this research uncovers a new dimension of flexibility in the design and deployment of learning applications using LLMs, potentially transforming language learning interfaces by establishing a one-size-fits-all solution. This paper aims to stimulate further research into refining and expanding the potential of LLMs, and to encourage exploration of how artificial intelligence can best benefit language learning and related applications.
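To make the idea of a single structured prompt serving several learning use cases concrete, the sketch below shows one possible shape such a template could take. The paper does not publish its prompt architecture, so every field name, mode label, and instruction string here is an assumption introduced purely for illustration, not the authors' actual design.

```python
# Illustrative sketch only: field names, modes, and wording are assumptions,
# not the prompt architecture used in the study.
from dataclasses import dataclass


@dataclass
class StructuredPrompt:
    """One prompt scaffold reused across several language-learning use cases."""
    role: str              # persona given to the LLM, e.g. "language tutor"
    task: str              # "paraphrase", "grammar_intro", or "conversation"
    target_language: str   # the learner's target language
    learner_input: str     # the text the learner provided

    def render(self) -> str:
        # The surrounding scaffold stays fixed; only the task-specific
        # instruction changes per use case.
        task_instructions = {
            "paraphrase": "Paraphrase the learner's sentence, preserving its meaning.",
            "grammar_intro": "Explain the grammatical structure of this introductory sentence.",
            "conversation": f"Continue a natural, experiential conversation in {self.target_language}.",
        }
        return (
            f"You are a {self.role}.\n"
            f"Task: {task_instructions[self.task]}\n"
            f"Learner input: {self.learner_input}\n"
            "Respond at a level appropriate for a language learner."
        )


if __name__ == "__main__":
    prompt = StructuredPrompt(
        role="supportive language tutor",
        task="paraphrase",
        target_language="Spanish",
        learner_input="Yo tengo veinte años y estudio ingeniería.",
    )
    # The rendered text would then be sent to an LLM-powered interface.
    print(prompt.render())
```

Under this assumed structure, switching between use cases only changes the `task` field, which is the kind of reuse across applications the abstract describes.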
Keywords: Large Language Models, Artificial Intelligence
DOI: 10.54941/ahfe1004606