Visual dictionary of human action in vehicular environment using computer vision

Open Access
Conference Proceedings
Authors: Abhijit Sarkar

Abstract: Human behaviors and actions can be divided into small sub-actions and attributes, some of which carry visually semantic meaning to humans. We call the ensemble of these elements a visual dictionary. The visual dictionary helps compose an action, much as words from a dictionary help compose sentences. In this work, we demonstrate the effectiveness of the visual dictionary by analyzing driver behavior inside the vehicle. We consider the primary driving behavior, which includes keeping two hands on the wheel, and 56 secondary behaviors, including talking on a handheld phone, eating a sandwich, drinking from a bottle, smoking, reaching for objects, and dancing. Finally, we demonstrate how each of these dictionary elements can be automatically extracted from videos using computer vision.
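The dictionary-to-action analogy in the abstract can be sketched in code. The following is a minimal illustration, not the authors' implementation: all attribute labels, behavior definitions, and the overlap-based matching rule are assumptions chosen to show how an action could be composed from, and recognized by, a set of visual dictionary elements.

```python
# Illustrative sketch (hypothetical labels, not from the paper):
# an action is a set of visual-dictionary elements, the way a
# sentence is built from dictionary words.

VISUAL_DICTIONARY = {
    "hand_on_wheel", "hand_near_face", "phone_in_hand",
    "object_in_hand", "mouth_open", "head_turned",
}

# Each behavior is described by the dictionary elements it contains.
BEHAVIORS = {
    "primary_driving": {"hand_on_wheel"},
    "talking_on_handheld_phone": {"phone_in_hand", "hand_near_face"},
    "eating": {"object_in_hand", "hand_near_face", "mouth_open"},
}

def classify(detected: set) -> str:
    """Return the behavior whose dictionary elements best overlap the
    attributes detected in a video frame (Jaccard similarity)."""
    def jaccard(a, b):
        return len(a & b) / len(a | b) if (a | b) else 0.0
    return max(BEHAVIORS, key=lambda name: jaccard(BEHAVIORS[name], detected))

print(classify({"phone_in_hand", "hand_near_face"}))
# → talking_on_handheld_phone
```

In a real pipeline, the detected attribute set would come from computer-vision detectors running on each video frame; here it is passed in directly for clarity.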

Keywords: secondary behavior, human action, computer vision, behavior modeling, visual dictionary

DOI: 10.54941/ahfe1002446
