I don’t understand you – error-handling as a key aspect of conversational design for chatbots
Authors: Stephan Raimer, Marleen Vanhauer
Abstract: Misunderstandings can occur in any dialogue. This applies to communication between people just as it does to communication between humans and chatbots. Resolving these misunderstandings and continuing the dialogue effectively is called error handling. Good error handling makes conversations with chatbots more helpful, convincing and successful, and it has a great impact on user experience. The aim of this paper is to contribute to successful chatbot implementation projects. To this end, the paper focuses on the process of planning and implementing chatbot projects, with a particular emphasis on human-centred design (i.e. ISO 9241). A special role is played by conversational design, with particular consideration of what error handling should look like.
Generalised, it can be said that the analysis at the start of a chatbot project should run (at least) through the following phases: stakeholders and value proposition; current solutions and channels; context and conversational tasks; personality of the chatbot; background tasks; error handling and fallbacks.
In the field of public services, chatbot systems are becoming more and more common in public administration contexts. Compared to business use cases, where conversion rates are the ultimate KPI, the benefits here lie more in 24/7 support services, availability and scalability. For the conversational design, users should be able to enjoy talking to the virtual assistant. Hence the dialogue design must be adapted to a (defined) target group, depending on the context and area of application. The chatbot is a (virtual) representative of a company or administrative body, and it therefore represents it in conversations with users or customers.
Consequently, the language of the chatbot should be adapted to the corporate identity, values and language.
The challenges and opportunities of chatbot design, especially with regard to user experience, offer on the one hand great potential for engaging with users at scale. On the other hand, there are (potential) usability problems: users need to read a lot, cannot easily scan and skip to the content relevant to them, and chatbot systems have a probably limited ability to understand natural language.
So what is the cause of possible misunderstandings between humans and chatbots? There can be various reasons, apart from a lack of the aforementioned understanding. Natural language processing in terms of AI comprises two important aspects: the detection of a user's intent and of entities. In that context, an error or misunderstanding occurs when a chatbot does not understand a request or assigns the wrong concern to it. Other reasons might be that the chatbot knows the user's concern but has not understood the wording correctly, or that the chatbot has not (yet) learned a concern that is actually relevant. Otherwise, the concern might not belong to the chatbot's area of expertise.
So what are the options for dealing with unknown questions and misunderstandings? An often-used approach is that the chatbot describes the misunderstanding and asks the user to formulate their question differently. This is called a fallback answer. It often looks like: "I'm sorry, I didn't understand you. Can you describe your request again in other words?". Recommended alternatives to this standard behaviour might be that the chatbot says it may not yet have learned anything about this issue and offers the user a selection of topics it has already mastered, or that the chatbot refers to additional information (e.g. websites) where information on a recognised keyword can be found.
Even more convenient is an intelligent search as a background function that provides entries relevant to the question directly in the chat. A last resort can be an offer to forward the user to a human employee (e.g. a callback service).
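The escalating fallback strategy outlined above can be sketched in code. The following is a minimal illustrative sketch, not an implementation from the paper: the function and class names, the confidence threshold, the topic list and the keyword-to-URL mapping are all hypothetical assumptions chosen for the example.

```python
# Illustrative sketch of escalating fallback handling for a chatbot turn.
# All names, thresholds and data below are hypothetical, not from the paper.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Understanding:
    intent: Optional[str]           # detected intent, or None if nothing matched
    confidence: float               # NLU confidence score in [0, 1]
    keyword: Optional[str] = None   # a recognised keyword, if any

# Topics the bot has "already mastered" (placeholder examples)
KNOWN_TOPICS = ["residence permits", "waste collection", "parking permits"]
# Additional information sources for recognised keywords (placeholder URL)
KEYWORD_LINKS = {"passport": "https://example.org/passport-info"}

def handle_turn(u: Understanding, failed_attempts: int) -> str:
    """Choose a response, escalating through the fallback options."""
    if u.intent and u.confidence >= 0.7:
        # Normal path: the intent was recognised with sufficient confidence.
        return f"ANSWER:{u.intent}"
    if failed_attempts == 0:
        # Standard fallback answer: ask the user to rephrase.
        return ("I'm sorry, I didn't understand you. "
                "Can you describe your request again in other words?")
    if u.keyword and u.keyword in KEYWORD_LINKS:
        # Refer to additional information on a recognised keyword.
        return (f"You can find information on '{u.keyword}' here: "
                f"{KEYWORD_LINKS[u.keyword]}")
    if failed_attempts == 1:
        # Offer a selection of topics the bot has already mastered.
        return ("I may not have learned that yet. "
                f"I can help with: {', '.join(KNOWN_TOPICS)}.")
    # Last resort: forward to a human employee (e.g. callback service).
    return "Let me forward you to a colleague. Would you like a callback?"
```

The design point of the sketch is that each failed attempt escalates to a more helpful recovery step, rather than repeating the same generic fallback answer; an intelligent background search would slot in as an additional branch before the human handover.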
Keywords: Chatbots, AI, Natural Language Processing, Human-Centred Design