Emotion recognition (ER) systems find applications in many fields such as call centres, humanoid robots and robotic pets, telecommunication, psychiatry, behavioral science, and educational software. In this work, speech and facial features extracted from video data are explored to recognize emotions. Since these two feature sets are complementary, combining them yields higher recognition performance. Geometric and appearance-based features are used for emotion recognition from the video frames, while prosodic and spectral features are employed for the speech signal. A Support Vector Machine (SVM) classifier is used to capture the emotion-specific information. The basic aim of this work is to explore the capability of speech and facial features to provide emotion-specific information.
Conference: 2016 IEEE International Conference on Circuit, Power and Computing Technologies (ICCPCT 2016), 18–19 March 2016.
S. Thushara and S. Veni, “A multimodal emotion recognition system from video”, in 2016 International Conference on Circuit, Power and Computing Technologies (ICCPCT), 2016.
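The feature-level fusion described in the abstract (concatenating facial and speech feature vectors before an SVM) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature values, dimensions, and class count below are synthetic placeholders standing in for real geometric/appearance and prosodic/spectral descriptors.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_samples, n_classes = 300, 4  # hypothetical clip count and emotion classes
# Stand-ins for real descriptors: facial (geometric + appearance) and
# speech (prosodic + spectral) feature vectors, one row per video clip.
facial_feats = rng.normal(size=(n_samples, 20))
speech_feats = rng.normal(size=(n_samples, 12))
labels = rng.integers(0, n_classes, size=n_samples)

# Feature-level fusion: concatenate the two modalities per clip.
fused = np.hstack([facial_feats, speech_feats])

X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.25, random_state=0)

# Standardize, then fit an RBF-kernel SVM on the fused features.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
preds = clf.predict(X_test)
print(preds.shape)  # one predicted emotion label per test clip
```

On real data, the random arrays would be replaced by extracted descriptors; the complementary nature of the two modalities is what motivates concatenating them rather than classifying each alone.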