Time-Series Segmentation Based on Video Images of Cutting Operations with a Lathe in Virtual Reality Space

Open Access Conference Proceedings Article
Authors: Shohei Tawata, Keiichi Watanuki, Kazunori Kaede

Abstract: Due to the labor shortage in Japan, skill transfer and training education are becoming increasingly important in the manufacturing industry. In recent years, virtual reality (VR) technology has attracted attention for work-related training because it enables simplified training; however, a lack of educators and an immature training system mean that human and time resources cannot be sufficiently allocated to training. In this study, we developed a method to automatically recognize tasks and actions in order to improve the efficiency of education and training. For this purpose, we adopted a deep learning model that recognizes actions from video in a time-series manner, and we pre-trained the model on a large open-source dataset. We evaluated the model on unlearned (unseen) procedures and participants by preparing a dataset covering three different procedures and 10 participants. The overall validation metrics all exceeded 90%. Specifically, results of more than 90% were achieved for unlearned participants, but a drop of more than 5% was observed for all unlearned procedures, suggesting that issues remain to be addressed before application to task training.
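The abstract does not detail the model or the evaluation protocol. The sketch below is a minimal illustration, not the authors' code, of one plausible setup: leave-one-out splits over participants (unlearned people) and over procedures (unlearned procedures), scored with frame-wise accuracy, a common metric for temporal action segmentation. The dataset layout, label counts, and the predict() stub are hypothetical placeholders.

```python
# Minimal sketch (assumed setup, not the authors' implementation):
# leave-one-participant-out and leave-one-procedure-out evaluation
# of per-frame action labels from lathe-operation videos.

import numpy as np

def frame_accuracy(pred: np.ndarray, gt: np.ndarray) -> float:
    """Fraction of frames whose predicted action label matches ground truth."""
    return float(np.mean(pred == gt))

def leave_one_out_splits(items, key):
    """Yield (held_out_value, train, test) splits over the values of `key`."""
    for held_out in sorted({it[key] for it in items}):
        train = [it for it in items if it[key] != held_out]
        test = [it for it in items if it[key] == held_out]
        yield held_out, train, test

# Hypothetical recordings: 10 participants x 3 procedures,
# each with per-frame action labels (random placeholders here).
rng = np.random.default_rng(0)
recordings = [
    {"participant": p, "procedure": q,
     "labels": rng.integers(0, 5, size=300)}  # 300 frames, 5 action classes
    for p in range(10) for q in range(3)
]

def predict(recording) -> np.ndarray:
    """Stand-in for the trained segmentation model's per-frame predictions."""
    return recording["labels"]  # placeholder: returns ground truth

for key in ("participant", "procedure"):  # unseen-person vs. unseen-procedure
    accs = []
    for held_out, train, test in leave_one_out_splits(recordings, key):
        # (re)train the model on `train` here; omitted in this sketch
        accs.append(np.mean([frame_accuracy(predict(r), r["labels"]) for r in test]))
    print(f"leave-one-{key}-out frame accuracy: {np.mean(accs):.3f}")
```

Under such a protocol, the gap the abstract reports between unlearned participants and unlearned procedures would correspond to the difference between the two loop iterations above.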

Keywords: Motion analysis, human action segmentation, work training, virtual reality, lathe

DOI: 10.54941/ahfe1001789

