Comparing semiautomatic Rapid Upper Limb Assessments (RULA): Azure Kinect versus RGB-based machine vision algorithm
Open Access
Article
Conference Proceedings
Authors: Antonio Maria Coruzzolo, Francesco Lolli, Nazareno Amicosante, Hrishikesh Kumar, Pramod Thupaki, Saurav Agarwal
Abstract: Correct use of rapid upper limb assessment (RULA) for working postures is crucial to avoid musculoskeletal disorders. Although motion capture technologies, and in particular depth cameras, are widely used, they cannot be deployed in large-scale industrial environments because of their high cost and because their performance is greatly affected by the surrounding environment. We therefore compared the effectiveness of a commercial machine vision algorithm based on an RGB camera (named ErgoEdge) against an application developed here for RULA evaluation based on the Microsoft Azure Kinect depth camera (AzKRULA). We conducted an experiment in which fifteen static postures were evaluated with both AzKRULA and ErgoEdge, and the results were also compared with those of an ergonomics expert. The experiment showed substantial agreement both between the semi-automatic RULA evaluations and the ergonomics expert, and between AzKRULA and ErgoEdge. At the same time, it showed that the RGB camera must be placed at the worker's side, because the machine vision algorithm has difficulty reconstructing important joint angles (e.g., those needed to evaluate the neck and trunk) from a frontal view in 2D space, which can invalidate the RULA evaluation provided by ErgoEdge. Moreover, the RULA evaluations with AzKRULA and ErgoEdge highlighted the need for an in-depth study of the thresholds of the secondary factors (i.e., all factors in the RULA evaluation that are not computed from joint-angle thresholds), as the largest differences between the two evaluations and the ergonomist's arise there.
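For illustration only (this is not the paper's implementation, and the function names are hypothetical): the core of any camera-based RULA pipeline is computing the angle at a joint from tracked keypoints and mapping it onto the RULA score thresholds. A minimal 2D Python sketch:

```python
import math

def joint_angle_2d(a, b, c):
    """Angle in degrees at joint b, formed by 2D keypoints a-b-c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    # Clamp to guard against floating-point drift outside [-1, 1].
    cos_t = max(-1.0, min(1.0, dot / norm))
    return math.degrees(math.acos(cos_t))

def rula_upper_arm_score(flexion_deg):
    """Map an upper-arm flexion angle onto the RULA upper-arm score
    (standard thresholds: <20 deg -> 1, 20-45 -> 2, 45-90 -> 3, >90 -> 4)."""
    if flexion_deg < 20:
        return 1
    if flexion_deg < 45:
        return 2
    if flexion_deg < 90:
        return 3
    return 4

# Example: shoulder-elbow-wrist keypoints forming a right angle at the elbow.
angle = joint_angle_2d((0.0, 1.0), (0.0, 0.0), (1.0, 0.0))  # 90 degrees
score = rula_upper_arm_score(angle)  # 3
```

The frontal-view problem the abstract describes follows directly from this sketch: when motion occurs in the depth axis, the projected 2D keypoints barely move, so the computed angle underestimates the true flexion, whereas a depth camera such as the Azure Kinect can measure the angle in 3D.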
Keywords: ergonomics, machine vision, automatic risk assessment
DOI: 10.54941/ahfe1002596