Visual cues improve spatial orientation in telepresence as in VR

Open Access Article (Conference Proceedings)
Authors: Jennifer Brade, Tobias Hoppe, Sven Winkler, Philipp Klimant, Georg Jahn

Abstract: When moving in reality, successful spatial orientation is enabled through continuous updating of egocentric spatial relations to the surrounding environment. In Virtual Reality (VR) or telepresence, however, cues about one's own movement are rarely provided, which typically impairs spatial orientation. Telepresence robots are mostly operated via PC-based controls requiring only minimal real movements of the user; the resulting lack of real translations and rotations can disrupt spatial orientation. Studies in virtual environments show that a certain degree of spatial updating is possible without body-based cues to self-motion (vestibular, proprioceptive, motor efference), solely through continuous visual information about the change in orientation or through additional visual landmarks. While a large number of studies have investigated spatial orientation in virtual environments, spatial updating in telepresence remains largely unexplored. VR and telepresence environments share the feature that the user is not physically located in the mediated environment and thus interacts in an environment that does not correspond to the body-based cues generated by posture and self-motion in the real environment. Despite this similarity, virtual and telepresence environments also differ significantly in how the environment is presented: common, commercially available telepresence systems usually display the environment only on a 2D monitor. The 2D monitor impairs the operator's depth perception compared with 3D presentation in VR, for instance in a head-mounted display (HMD), and interacting by means of mouse movements on a 2D plane is indirect compared with moving VR controllers and the HMD in 3D space. Thus, it cannot be assumed without verification that spatial orientation in 2D telepresence systems is comparable to that in VR systems. Therefore, we employed a standard spatial orientation task with a telepresence robot to evaluate whether results concerning the number of visual cues resemble findings from VR studies.

To address the research question, a triangle completion task (TCT) was carried out using the telepresence robot Double 3. The participants (n = 30) controlled the telepresence robot remotely using a computer and a mouse: first, they moved the robot to a specified point; then they turned the robot to face a second specified point, moved there, and were asked to return the robot to its starting point. To evaluate the influence of the number of visual cues on TCT performance, three conditions that varied in the amount of visual information provided for navigating the third leg were presented in a within-subjects design. Similar to studies showing that visual cues support spatial orientation in the TCT in VR, the number of visual cues available while navigating the third leg supported triangle completion with a telepresence robot. This was confirmed by a trend of reduced error with more visual cues and a reliable difference between the conditions with sparse and many visual cues. Connecting results obtained in VR with telepresence and teleoperation scenarios is valuable for informing the design of telepresence and teleoperation interfaces. We demonstrated that a standard task for studying spatial orientation performance is applicable with telepresence robots.
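Since the TCT is at heart a plane-geometry problem, the ideal homing response (the length of the third leg and the turn required at the second corner) can be computed directly from the first two legs and the turn between them. Below is a minimal sketch in Python; the coordinate conventions, function names, example leg lengths, and the endpoint error measure are illustrative assumptions, not details taken from the study.

```python
import math

def third_leg(leg1: float, leg2: float, turn_deg: float):
    """Given the first two legs of a triangle completion trial
    (lengths in metres) and the turn angle between them (degrees,
    positive = left), return the ideal homing distance and the turn
    needed at the second corner to head back to the start.

    Illustrative geometry only; not the study's actual trial layout.
    """
    # Place the start at the origin, first leg along +x (heading 0).
    x1, y1 = leg1, 0.0
    heading = math.radians(turn_deg)        # heading after the first turn
    x2 = x1 + leg2 * math.cos(heading)      # position of the second corner
    y2 = y1 + leg2 * math.sin(heading)
    home_dist = math.hypot(x2, y2)          # length of the ideal third leg
    # Bearing from the second corner back to the start, relative to
    # the current heading, normalised to (-180, 180].
    turn_home = math.degrees(math.atan2(-y2, -x2) - heading)
    turn_home = (turn_home + 180.0) % 360.0 - 180.0
    return home_dist, turn_home

def endpoint_error(stop_xy) -> float:
    """Distance between the actual stop position and the true start
    point (origin): one common way TCT trials are scored."""
    return math.hypot(*stop_xy)

# Example: two 3 m legs with a 120 degree turn form an equilateral
# triangle, so the ideal response is another 120 degree turn and 3 m.
dist, turn = third_leg(3.0, 3.0, 120.0)
print(f"ideal third leg: {dist:.2f} m after a {turn:.0f} degree turn")
```

With the remembered start as the origin, the same frame also yields a scalar performance measure per trial (the endpoint error above); the study's exact error measure may differ.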

Keywords: Telepresence, User Studies, Triangle Completion Task, Spatial Orientation, Teleoperation

DOI: 10.54941/ahfe1002862
