Publication Type : Conference Proceedings
Publisher : Springer Singapore
Source : Smart Innovation, Systems and Technologies
Url : https://doi.org/10.1007/978-981-16-3675-2_41
Campus : Kochi
School : School of Computing
Year : 2021
Abstract : Over the last few years, facial detection and emotion recognition have been widely studied. Many human–computer interaction systems rely on facial communication cues such as facial expressions, eye movement and gestures, which are commonly used because they convey an individual's emotional states and sentiments. The ability to recognize facial expressions automatically and in real time enables applications in human–computer interaction and other areas such as security and customer-satisfaction analysis. This paper provides a quick comparison to determine whether a CNN architecture performs better when trained only on the raw pixels of the image data, or when the CNN is given additional information such as facial landmarks and HOG features. The faces are first detected using the LBP classifier; we then extract the facial landmarks using dlib. We also extract the HOG features and finally train the CNN model on the raw pixel data together with this additional information. For this work, we used the FER-2013 dataset collected from Kaggle. We obtained accuracies of 75.1% and 59.1% when training the model with and without feature extraction, respectively. The results show that the additional information helps the CNN perform better.
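The HOG features mentioned in the abstract are histograms of gradient orientations computed over small cells of the image. As a rough illustration only (this is not the paper's code, and it omits the block normalization of full HOG), the sketch below computes a simplified per-cell orientation histogram for a 48×48 grayscale image, the size used by FER-2013; the cell size and bin count are conventional defaults, not values taken from the paper:

```python
import numpy as np

def hog_features(img, cell=8, bins=9):
    """Simplified HOG: per-cell histograms of unsigned gradient
    orientations, magnitude-weighted (no block normalization)."""
    img = img.astype(np.float32)
    # Central-difference gradients in x and y
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    h, w = img.shape
    ch, cw = h // cell, w // cell
    feats = np.zeros((ch, cw, bins), dtype=np.float32)
    bin_width = 180.0 / bins
    for i in range(ch):
        for j in range(cw):
            m = mag[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            a = ang[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            idx = np.minimum((a // bin_width).astype(int), bins - 1)
            for b in range(bins):
                feats[i, j, b] = m[idx == b].sum()
    return feats.ravel()

# A 48x48 FER-2013-sized image yields 6x6 cells x 9 bins = 324 features,
# which could then be concatenated with the raw pixels as CNN input.
x = np.random.default_rng(0).random((48, 48))
print(hog_features(x).shape)
```

In the paper's pipeline these features would be concatenated with the raw pixel data (and dlib landmarks) before training the CNN; the exact fusion strategy is described in the full text.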
Cite this Research Publication : V. S. Amal, Sanjay Suresh, G. Deepa, Real-Time Emotion Recognition from Facial Expressions Using Convolutional Neural Network with Fer2013 Dataset, Smart Innovation, Systems and Technologies, Springer Singapore, 2021, https://doi.org/10.1007/978-981-16-3675-2_41