Virtual human centered design: an affordable and accurate tool for motion capture in mixed reality

Open Access
Article
Conference Proceedings
Authors: Saverio Serino, Carlotta Fontana, Rosaria Califano, Nicola Cappetti, Alessandro Naddeo

Abstract: The introduction of Digital Human Modeling and Virtual Production in the industrial field has made it possible to place the user at the center of the design process, guaranteeing workers' safety and well-being in the performance of any activity. Traditional motion capture methods cannot represent the user's interaction with the environment: the user runs a simulation without realistic objects, so their behavior and movements are inaccurate due to the lack of real interaction. Mixed reality, by combining real objects with a virtual environment, enhances human-object interaction and improves the accuracy of the simulation. A real-time motion capture system offers considerable advantages: the action performed in the simulation can be modified in real time, the user's posture can be adjusted with immediate feedback, and results are available without first post-processing the recorded animation. These developments have brought Motion Capture (MoCap) technology into industrial applications, where it is used to assess occupational safety risks, maintenance procedures, and assembly steps. However, real-time motion capture techniques are very expensive because of the required equipment. The aim of this work, therefore, is to create an inexpensive MoCap tool that maintains high acquisition accuracy. The potential of the Unreal Engine software for ergonomic simulations was first analyzed. Subsequently, a case study was carried out inside a vehicle's passenger compartment, simulating an infotainment reachability test and acquiring the law of motion. This procedure was performed with two low-cost MoCap techniques: an optical system based on ArUco markers, and a markerless optical system using the Microsoft Kinect® as a depth sensor.
A comparison of the results showed an average difference between the two methodologies of about 2.5 degrees in the calculated angles. Thanks to this small error, the developed method enables a mixed reality simulation with the user's presence and offers an accurate analysis of the performed movements.
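The two pipelines are compared in terms of calculated joint angles. As an illustration only (the paper does not specify its angle computation; the function name and coordinates below are hypothetical), a joint angle can be derived from three tracked 3D points, e.g. shoulder-elbow-wrist positions returned by either the ArUco or the Kinect pipeline:

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b, in degrees, formed by points a-b-c
    (e.g. shoulder-elbow-wrist). Points are (x, y, z) tuples
    in any consistent unit."""
    u = tuple(ai - bi for ai, bi in zip(a, b))  # vector b -> a
    v = tuple(ci - bi for ci, bi in zip(c, b))  # vector b -> c
    dot = sum(ui * vi for ui, vi in zip(u, v))
    nu = math.sqrt(sum(ui * ui for ui in u))
    nv = math.sqrt(sum(vi * vi for vi in v))
    # Clamp to [-1, 1] to guard against floating-point drift.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (nu * nv)))))

# Hypothetical elbow positions from the marker-based and Kinect pipelines:
marker_angle = joint_angle((0.0, 0.4, 0.0), (0.0, 0.1, 0.0), (0.3, 0.1, 0.1))
kinect_angle = joint_angle((0.0, 0.4, 0.01), (0.0, 0.1, 0.0), (0.3, 0.12, 0.1))
diff = abs(marker_angle - kinect_angle)  # per-frame angular difference
```

Averaging such per-frame differences over the acquisition is one plausible way to obtain the roughly 2.5-degree figure reported above.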

Keywords: Human centred design, Mixed reality, DHM, Motion capture

DOI: 10.54941/ahfe1002080
