
Affective state recognition using audio cues

Publication Type : Journal Article

Publisher : IOS Press

Source : Journal of Intelligent and Fuzzy Systems, IOS Press, Volume 36, Number 3, pp. 2147-2154 (2019)

Url : https://www.scopus.com/inward/record.uri?eid=2-s2.0-85063530205&doi=10.3233%2fJIFS-169926&partnerID=40&md5=b71afda83f870fd7cd158325cc6c34d8

Keywords : Affective state, Emotional speech, Feature vectors, Intelligent systems, K-nearest neighbor (KNN), Nearest neighbor search, Soft computing, Spectral feature, Speech, Speech analysis, Speech communication, Speech corpora, Speech recognition, Speech signals, State estimation, Voiced speech

Campus : Bengaluru

School : School of Engineering

Department : Computer Science, Electronics and Communication

Year : 2019

Abstract : This paper presents a technique to detect the six affective states of an individual using audio cues. Bi-spectral features extracted from the entire speech signal and from the voiced part of speech are used to create feature vectors. For classification, K-Nearest Neighbor (KNN) and Simple Logistic (SL) classifiers are used. The eNTERFACE audio-visual emotional speech corpus, which consists of six archetypal affective states (Fear, Anger, Disgust, Sad, Happy, and Surprise), is considered. The performance of the system is analyzed for features obtained from the voiced part of speech and for features obtained from the entire speech signal. The proposed work is the first of its kind in affect computation in which a compact 13-dimensional bi-spectral feature vector extracted from the voiced speech segments yields promising performance. The proposed methodology achieves a considerable improvement of 8.46% to 27.6% in recognition rate over existing approaches that use emotion samples from the same speech corpus, adding novelty to the work. © 2019 - IOS Press and the authors
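
The abstract describes the pipeline only at a high level, so the sketch below is purely illustrative: it assumes a simple short-time-energy proxy for voicing detection and summarizes bispectrum magnitudes |B(f1, f2)| = |E[X(f1) X(f2) X*(f1 + f2)]| over 13 coarse bands to obtain a 13-dimensional feature vector, with a KNN classifier standing in for the paper's KNN/SL setup. The function names, frame sizes, band summary, and k = 5 are assumptions for illustration, not the authors' actual settings.

# Illustrative sketch only; the paper's exact bi-spectral feature
# definition and voicing detector are not specified in the abstract.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def voiced_frames(x, frame=512, hop=256):
    """Keep frames whose short-time energy exceeds the median
    (a crude stand-in for a proper voiced/unvoiced decision)."""
    frames = np.array([x[i:i + frame] for i in range(0, len(x) - frame, hop)])
    energy = (frames ** 2).sum(axis=1)
    return frames[energy > np.median(energy)]

def bispectral_features(frames, n_feats=13):
    """Estimate B(f1, f2) = E[X(f1) X(f2) X*(f1 + f2)] across frames,
    then average |B| over n_feats bands of the non-redundant region."""
    X = np.fft.rfft(frames * np.hanning(frames.shape[1]), axis=1)
    n = X.shape[1] // 2                      # keep f1 + f2 within the spectrum
    B = np.zeros((n, n), dtype=complex)
    for f1 in range(n):
        for f2 in range(f1, n):              # symmetry: B(f1, f2) = B(f2, f1)
            B[f1, f2] = np.mean(X[:, f1] * X[:, f2] * np.conj(X[:, f1 + f2]))
    mag = np.abs(B[np.triu_indices(n)])
    return np.array([band.mean() for band in np.array_split(mag, n_feats)])

# Usage (hypothetical data): train_x is a list of 1-D speech signals,
# train_y their affective-state labels.
# feats = np.array([bispectral_features(voiced_frames(x)) for x in train_x])
# clf = KNeighborsClassifier(n_neighbors=5).fit(feats, train_y)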

Cite this Research Publication : M. P. Krishna, P. R. Reddy, V. Narayanan, S. Lalitha, and D. Gupta, "Affective state recognition using audio cues", Journal of Intelligent and Fuzzy Systems, vol. 36, no. 3, pp. 2147-2154, 2019.
