
Emotion recognition using multi-parameter speech feature classification

Publication Type : Conference Paper

Publisher : IEEE

Source : Proceedings - 2015 International Conference on Computers, Communications and Systems, ICCCS 2015, p.217-222 (2015)

Url : https://www.scopus.com/inward/record.uri?eid=2-s2.0-84988936230&partnerID=40&md5=146da45530fa5eef160c9498ecfcdefe

Campus : Amritapuri

School : School of Engineering

Department : Electronics and Communication

Year : 2015

Abstract : Speech emotion recognition is the extraction and identification of emotion from a speech signal. Speech data corresponding to the emotions happiness, sadness, and anger was recorded from 30 subjects. A local database called Amritaemo was created with 300 samples of speech waveforms for each emotion. Based on the prosodic features (energy contour and pitch contour) and spectral features (cepstral coefficients, quefrency coefficients, and formant frequencies), the speech data was classified into the respective emotions. Supervised learning was used for training and testing, with two classification algorithms: hybrid rule-based K-means clustering and multiclass Support Vector Machine (SVM). The results of the study showed that, for an optimized set of features, hybrid rule-based K-means clustering gave better performance than the multiclass SVM. © 2015 IEEE.
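The sketch below illustrates the general pipeline the abstract describes: per-utterance prosodic and cepstral feature extraction followed by multiclass SVM classification. It is not the authors' implementation; the paper's hybrid rule-based K-means algorithm is not reproduced, and the library calls (librosa, scikit-learn), parameter values, and the placeholder `files`/`labels` inputs are assumptions for illustration only.

```python
# Illustrative sketch only: feature choices and parameters are assumptions,
# not the authors' method; the hybrid rule-based K-means classifier from the
# paper is not reproduced here.
import numpy as np
import librosa
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def extract_features(path):
    """Summarise one utterance with prosodic and spectral statistics."""
    y, sr = librosa.load(path, sr=None)
    energy = librosa.feature.rms(y=y)[0]                 # energy contour
    pitch = librosa.yin(y, fmin=80, fmax=400, sr=sr)     # pitch contour
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # cepstral coefficients
    # Collapse the frame-level contours into fixed-length statistics.
    return np.concatenate([
        [energy.mean(), energy.std()],
        [pitch.mean(), pitch.std()],
        mfcc.mean(axis=1), mfcc.std(axis=1),
    ])

def evaluate(files, labels):
    """files: list of wav paths; labels: emotion tags such as 'happy', 'sad', 'angry'."""
    X = np.array([extract_features(f) for f in files])
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, labels, test_size=0.3, stratify=labels, random_state=0)
    clf = SVC(kernel="rbf")   # SVC handles the multiclass case (one-vs-one)
    clf.fit(X_tr, y_tr)
    return accuracy_score(y_te, clf.predict(X_te))
```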

Cite this Research Publication : Poorna, S. S., Jeevitha, C. Y., Nair, S. J., Santhosh, S., and Nair, G. J., “Emotion recognition using multi-parameter speech feature classification”, in Proceedings - 2015 International Conference on Computers, Communications and Systems, ICCCS 2015, 2015, pp. 217-222.
