Emotions are important for understanding human behavior. Emotion can be recognized through several modalities: text, speech, facial expressions, and gestures. Emotion recognition from facial expressions in video plays a vital role in human-computer interaction, where the facial feature movements that convey an expressed emotion must be recognized quickly. In this work, we propose a novel geometry-based method for recognizing six basic emotions in 4D video sequences of the BU-4DFE database. We select key facial points from among the 83 feature points provided in the BU-4DFE database. A video expressing an emotion contains frames covering the neutral, onset, apex, and offset phases of that emotion. We identify the apex frame of a video sequence automatically. The Euclidean distances between the feature points are determined in the neutral and apex frames, and the differences between corresponding distances in the two frames form the feature vector. The feature vectors thus formed for all emotions and subjects are given to Random Forest and Support Vector Machine (SVM) classifiers, and the accuracies obtained by the two classifiers are compared. The proposed method is simple, uses only two frames, and yields good accuracy on the BU-4DFE database. Using the computed distance vectors, we determine the optimum number of key facial points that provides a better recognition rate. The proposed method gives better results than those reported in the literature, and in future work it can be applied to real-time implementation using the SVM classifier and to kinesics.
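The geometric feature extraction described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the intra-frame pairwise distances and their neutral-vs-apex differences follow the abstract, while the apex-frame selection rule (the frame whose landmarks have moved the most from the neutral frame) is an assumed heuristic; the toy landmark coordinates are hypothetical, whereas BU-4DFE provides 83 3D points per frame.

```python
import math
from itertools import combinations

def pairwise_distances(points):
    """Euclidean distances between every pair of landmarks within one frame."""
    return [math.dist(p, q) for p, q in combinations(points, 2)]

def feature_vector(neutral, apex):
    """Differences of corresponding intra-frame distances between the apex
    and neutral frames, as described in the abstract."""
    return [a - n for n, a in zip(pairwise_distances(neutral),
                                  pairwise_distances(apex))]

def apex_index(neutral, frames):
    """Pick the apex frame as the one with the largest total landmark
    displacement from the neutral frame (assumed heuristic; the paper's
    own apex-detection procedure may differ)."""
    return max(range(len(frames)),
               key=lambda i: sum(math.dist(n, p)
                                 for n, p in zip(neutral, frames[i])))

# Toy example with two hypothetical landmarks per frame.
neutral = [(0.0, 0.0, 0.0), (3.0, 4.0, 0.0)]
frames = [neutral,
          [(0.0, 0.0, 0.0), (6.0, 8.0, 0.0)],   # largest displacement
          [(0.0, 0.0, 0.0), (4.0, 5.0, 0.0)]]
apex = frames[apex_index(neutral, frames)]
print(feature_vector(neutral, apex))  # → [5.0]
```

The feature vectors produced this way would then be passed to an off-the-shelf Random Forest or SVM classifier for training and prediction.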
V. P. Kalyan Kumar, P. Suja, and S. Tripathi, "Emotion Recognition from Facial Expressions for 4D Videos Using Geometric Approach", in Advances in Signal Processing and Intelligent Recognition Systems: Proceedings of the Second International Symposium on Signal Processing and Intelligent Recognition Systems (SIRS-2015), December 16-19, 2015, Trivandrum, India, S. M. Thampi, S. Bandyopadhyay, S. Krishnan, K.-C. Li, S. Mosin, and M. Ma, Eds. Cham: Springer International Publishing, 2016, pp. 3–14.