Large Language Models as Retail Cart Assistants: A Prompt-Based Evaluation

Open Access
Article
Conference Proceedings
Authors: Ratomir Karlović, Ivan Lorencin

Abstract: Large language models (LLMs) offer promising capabilities for interpreting natural-language user input and translating it into structured formats for downstream processing. This study investigates the use of LLMs as shopping-cart assistants, limited to the task of parsing natural-language commands into a predefined JSON schema containing three fields: action, product, and quantity. The objective is to evaluate the models’ ability to perform accurate semantic parsing under consistent conditions. To examine the impact of prompt design, three distinct prompting strategies were developed: a minimal instruction specifying the target fields; an extended prompt including synonym guidance and formatting rules; and a few-shot learning approach incorporating multiple examples with strict output requirements. Each prompt variant was applied identically across all selected LLMs to ensure comparability. The evaluation was conducted using a dataset of 1,000 synthetic shopping-cart commands generated via a large generative AI model. Each command was paired with a known ground truth structured into the same target schema. Model-generated outputs were transformed into CSV format and compared against these references to assess parsing performance. By systematically varying prompt complexity and controlling for model input, this study provides a controlled comparison framework for assessing prompt effectiveness in narrow, structured tasks. The results contribute to a deeper understanding of prompt design as a determinant of LLM utility in applied, goal-oriented scenarios.
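The target schema and exact-match comparison described in the abstract can be sketched as follows. Only the three field names (action, product, quantity) come from the paper; the sample command values, the helper name `parse_and_score`, and the scoring details are hypothetical illustrations, not the authors' actual evaluation code.

```python
import json

# A stand-in for an LLM's raw output for one synthetic command (hypothetical values).
model_output = '{"action": "add", "product": "milk", "quantity": 2}'

# The corresponding ground-truth reference in the same target schema (hypothetical).
ground_truth = {"action": "add", "product": "milk", "quantity": 2}

def parse_and_score(raw: str, reference: dict) -> dict:
    """Parse the model's JSON output and score each schema field by exact match."""
    try:
        parsed = json.loads(raw)
    except json.JSONDecodeError:
        # Malformed JSON counts as a miss on every field.
        return {field: False for field in reference}
    return {field: parsed.get(field) == value for field, value in reference.items()}

scores = parse_and_score(model_output, ground_truth)
print(scores)  # per-field booleans; all three fields match in this example
```

Per-field booleans like these could then be aggregated over the 1,000 commands (e.g., written row by row to CSV) to obtain the kind of parsing-accuracy comparison the study reports.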

Keywords: LLM, retail smart assistant, prompt engineering

DOI: 10.54941/ahfe1006787
