VOXReality: Immersive XR experiences combining language and vision AI models

Open Access Article, Conference Proceedings
Authors: Apostolos Maniatis, Stavroula Bourou, Zacharias Anastasakis, Kostantinos Psychogios

Abstract: In recent years, Artificial Intelligence (AI) technology has seen significant growth due to advancements in machine learning (ML) and data processing, as well as the availability of large amounts of data. Integrating AI with eXtended Reality (XR) technologies such as Virtual Reality (VR) and Augmented Reality (AR) can create innovative solutions, providing intuitive interactions and immersive experiences across various sectors, including education, entertainment, and healthcare. This paper describes the Voice-driven interaction in XR spaces (VOXReality)* initiative, funded by the European Commission, which integrates language- and vision-based AI through unidirectional or bidirectional exchanges to drive AR and VR, enabling natural human interaction with XR systems and creating multimodal XR experiences. It aligns the parallel progress of Natural Language Processing (NLP) and Computer Vision (CV) to design novel models and techniques that integrate language and visual understanding with XR, providing a holistic understanding of goals, environment, and context. VOXReality plans to validate its approach through three use cases: an XR personal assistant, real-time verbal communication in virtual conferences, and an immersive experience for theatre audiences.

* Funded by the European Union (Grant agreement ID: 101070521)
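To make the language-vision integration described above concrete, the following is a minimal, purely illustrative Python sketch of how a spoken command might be grounded against a CV-analysed scene to produce an XR action. Every class name, field, and the keyword-overlap matching heuristic is a hypothetical assumption for illustration; none of it represents VOXReality's actual models or API.

# Illustrative sketch of a multimodal (language + vision) XR interaction step.
# All names below are hypothetical, not VOXReality's actual interfaces.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class SpokenIntent:
    """Hypothetical output of an NLP model parsing a user's utterance."""
    action: str          # e.g. "highlight"
    target_phrase: str   # e.g. "the red exhibit on my left"

@dataclass
class DetectedObject:
    """Hypothetical output of a CV model analysing the XR scene."""
    label: str                              # e.g. "exhibit"
    attributes: tuple[str, ...]             # e.g. ("red", "left")
    position: tuple[float, float, float]    # scene coordinates

def ground_intent(intent: SpokenIntent, scene: list[DetectedObject]) -> DetectedObject | None:
    """Grounding step: resolve the spoken target phrase against the
    visually detected objects, so language and vision share one context."""
    words = set(intent.target_phrase.lower().split())
    best, best_score = None, 0
    for obj in scene:
        # Naive heuristic: count how many object terms appear in the utterance.
        score = sum(1 for term in (obj.label, *obj.attributes) if term in words)
        if score > best_score:
            best, best_score = obj, score
    return best

if __name__ == "__main__":
    intent = SpokenIntent(action="highlight", target_phrase="the red exhibit on my left")
    scene = [
        DetectedObject("exhibit", ("red", "left"), (-1.2, 0.0, 2.5)),
        DetectedObject("door", ("blue",), (3.0, 0.0, 1.0)),
    ]
    target = ground_intent(intent, scene)
    if target is not None:
        # An XR runtime would consume this command to update the scene.
        print(f"XR command: {intent.action} '{target.label}' at {target.position}")

In a real system the keyword-overlap heuristic would be replaced by learned multimodal models, but the overall flow (parse speech, detect objects, ground one against the other, emit an XR command) mirrors the kind of language-vision exchange the abstract describes.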

Keywords: Artificial Intelligence, Multimodal Artificial Intelligence, Extended Reality, Human-Artificial Intelligence Interaction

DOI: 10.54941/ahfe1002938
