A new conversational interaction concept for document creation and editing on mobile devices for visually impaired users
Authors: Alireza Darvishy, Zeno Heeb, Edin Beljulji, Hans-Peter Hutter
Abstract: This paper describes the ongoing development of a conversational interaction concept that allows visually impaired users to easily create and edit text documents on mobile devices using mainly voice input. To verify the concept, a prototype app was developed and tested for both iOS and Android, based on the natural-language understanding (NLU) platform Google Dialogflow. The app and interaction concept were repeatedly tested by users with and without visual impairments. Based on their feedback, the concept was continuously refined, adapted and improved on both mobile platforms. In an iterative user-centred design approach, the following research question was investigated: Can a visually impaired user rely mainly on speech commands to efficiently create and edit a document on mobile devices? User testing found that an interaction concept based on conversational speech commands was easy and intuitive for visually impaired users. However, it was also found that relying on speech commands alone created its own obstacles, and that a combination of gestures and voice interaction would be more robust. Future research and more extensive usability tests should be carried out among visually impaired users in order to optimize the interaction concept.
Keywords: Visual impairment, mobile devices, non-visual interaction, NLP, speech input, speech output, document creation