Multimodal Extended Reality for Laparoscopic Surgery Training
Open Access
Article
Conference Proceedings
Authors: Yang Cai
Abstract: In this study, we explore a multimodal extended reality system for laparoscopic surgery training. The system combines multimodal feedback with holographic overlays. Haptic organ models are integrated into the simulator to fill the gap between mixed reality interfaces and realistic training needs. The holographic overlay synchronizes with the simulated or actual tissue: 3D objects reconstructed from CT data are overlaid onto the live simulated tissues in the cavity, and the 3D object registration can be controlled by hand gestures. Machine vision algorithms enable the dynamic overlay process on live laparoscopic surgery video; for example, we overlay a symbolic Calot’s Triangle based on visible-light or near-infrared video data. Machine vision also supports virtualized reality, which captures, models, and renders 3D objects from 2D or 3D image and video sources, including the live 3D scope camera, CT, MRI, NIR images, and tissue scanning. In contrast to the synthetic imagery used in virtual reality, photorealistic virtualized reality emphasizes that the image data look natural. The 3D reconstruction results show that a 2D laparoscopic camera can achieve reasonable accuracy in measured distances. Our experiments indicate that multimodal extended reality can increase the fidelity of laparoscopic surgery simulation and potentially improve training efficiency.
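The abstract's claim about measuring distances from scope imagery rests on standard stereo geometry: with a calibrated camera pair, depth is recovered from pixel disparity. A minimal sketch of that relation follows; the focal length, baseline, and disparity values are illustrative assumptions, not figures from the paper.

```python
def depth_from_disparity(focal_px: float, baseline_mm: float, disparity_px: float) -> float:
    """Triangulated depth Z = f * B / d for a rectified stereo pair.

    focal_px     -- focal length in pixels (from camera calibration)
    baseline_mm  -- distance between the two camera centers, in mm
    disparity_px -- horizontal pixel offset of the same point in both views
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    return focal_px * baseline_mm / disparity_px

# Hypothetical values: a 700 px focal length, 4 mm stereo-scope baseline,
# and a 28 px disparity give a tissue depth of 100 mm.
z_mm = depth_from_disparity(700.0, 4.0, 28.0)
print(z_mm)  # 100.0
```

Narrower baselines (as in a slim stereo laparoscope) shrink the disparity for a given depth, so sub-pixel disparity estimation matters for the "reasonable accuracy" the abstract reports.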
Keywords: Stereo, 3D, Field of View, Augmented Reality, Virtual Reality, Extended Reality, Usability, Interaction, Human-Computer Interaction (HCI)
DOI: 10.54941/ahfe1005402