Using an Artificial Neural Network Pre-trained for a Different, yet Comparable Task to Evaluate Extreme-Affect Vocalizations that Are Indistinguishable by Humans

Authors: Hermann Prossinger, Violetta Victoria Prossinger Beck, Silvia Boschetti, Jakub Binter

Abstract: Humans categorize vocal displays of highly intense affective states with very low accuracy. However, many applications necessitate the correct perception of alarm calls. We set out to classify two negative (pain and fear) and two positive (laughter and pleasure) affective states and compared these with a neutral state. We used a unique dataset in which every display had been vocalized by every expresser. We used an ANN designed for a different, yet comparable, task: one that classifies human and animal sounds as well as mundane events (such as pouring water from a jug). The outputs were then statistically analyzed using Bayesian methods. Our analysis showed that the outputs can successfully separate neutral from non-neutral affective states, but they were unable to distinguish the intense affective states from one another (with the sole exception of laughter). Given the insights we acquired, we infer that classifying intense affective states will remain an insurmountable barrier for any future ANN. Our result also implies that the cost, time, and effort of attempting to design a dedicated ANN would be prohibitive.
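The abstract does not name the pretrained network, but its description (a general-purpose classifier of human and animal sounds and mundane events) matches audio-event models such as YAMNet, trained on AudioSet. The sketch below is an illustration only, not the authors' pipeline: it shows how such a pretrained model yields one output vector per audio clip, which could then feed a downstream statistical analysis. The choice of YAMNet, its model URL, the 16 kHz mono input format, and the placeholder waveform are all assumptions.

```python
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

# Hypothetical stand-in for the paper's pretrained ANN: YAMNet, a
# general-purpose audio-event classifier trained on AudioSet (it covers
# human and animal sounds as well as mundane events such as pouring water).
model = hub.load("https://tfhub.dev/google/yamnet/1")

# YAMNet expects mono audio sampled at 16 kHz as float32 values in [-1, 1].
# Placeholder: one second of silence stands in for a recorded vocalization.
waveform = np.zeros(16000, dtype=np.float32)

# scores has shape (frames, 521): per-frame probabilities over the 521
# AudioSet event classes; frame embeddings and the log-mel spectrogram
# are returned alongside them.
scores, embeddings, spectrogram = model(waveform)

# Averaging the frame scores gives one output vector per vocalization;
# vectors like this could then be compared across affective states
# (e.g. with the Bayesian methods the abstract mentions).
clip_scores = tf.reduce_mean(scores, axis=0).numpy()
print(clip_scores.shape)  # (521,)
```

Repurposing an off-the-shelf event classifier in this way avoids training a dedicated network, which is precisely the cost/effort trade-off the abstract argues for.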

Keywords: Affect Vocalization, Artificial Neural Networks, Affect Valence Identification, Vocal Cues, Bayesian Methods

DOI: 10.54941/ahfe1005475
