Qualification: Ph.D., M.Tech.
p_suja@blr.amrita.edu

Dr. Suja P. currently serves as Assistant Professor in the Department of Computer Science, Amrita School of Engineering. She successfully defended her Ph.D. thesis, "Robust Emotion Recognition Techniques from Facial Expressions Using Images & Videos", under the guidance of Dr. Shikha Tripathi. Her areas of research include Image Processing, Computer Graphics, and Emotion Recognition.

Education

  • Ph.D. in Computer Science - 2017
    From: Amrita Vishwa Vidyapeetham
  • M.Tech. in Computer Vision and Image Processing - 2005
    From: Amrita Vishwa Vidyapeetham
  • B.E. in Computer Science and Engineering - 1998
    From: Bharathiyar University

Professional Appointments

Year | Affiliation
Since 2000 | Amrita Vishwa Vidyapeetham
1999 – 2000 | Kumaraguru College of Technology

Research & Management Experience

  • Research focus on Emotion Recognition from facial expressions using images and videos since 2011.
  • Served as Academic Coordinator from December 2008 to June 2014.

Major Research Interests

  • Emotion Recognition, HRI and Meta Learning

Membership in Professional Bodies

  • IEEE Senior Member and member of IEEE RAS

Certificates, Awards & Recognitions

  • Received Best Paper Awards for three papers at two international conferences: SIRS 2014 (one paper) and CSITSS 2019 (two papers)
  • Academic merit student during M.Tech. programme

Publications

Publication Type: Book Chapter


2021

L. Nair, Gupta, R., Teja, N. V. S., and Dr. Suja P., “Meta-Learning: A New Way to Learn and Comparison of Machine Learning Versus Meta-Learning”, in Advances in Intelligent Systems and Computing, Vol. 1325, V. Sivakumar Reddy et al. (Eds): Soft Computing and Signal Processing, 2021.

Publication Type: Conference Proceedings


2020

A. Kumar and Dr. Suja P., “Steering Angle Estimation for Self-driving Car Using Deep Learning”, Machine Learning and Metaheuristics Algorithms, and Applications. Springer Singapore, Singapore, pp. 196-207, 2020.


The contemporary age has seen a tremendous increase in the number of road accidents. Traffic accidents are commonly caused by driver error, mobile phone usage, in-car audio and video entertainment systems, and heavy traffic. Road accidents in India cause one death every four minutes. Imagine if everyone could get around easily and safely, with no driver who is tired, drunk or distracted. Self-driving means of transport are those in which drivers are never required to drive the vehicle. In a self-driving car, time spent on travel may well be time spent doing what one needs, because all driving is handled by the car. Also referred to as autonomous or “driverless” cars, they mix sensors and code to manage, navigate, and drive the vehicle. Self-driving cars have huge potential to alter the means of transportation. We have proposed an end-to-end method based on deep learning for estimating the steering angle, and the accuracy obtained is 98.6%.


2020

K. Soumya and Dr. Suja P., “Emotion Recognition from Partially Occluded Facial Images using Prototypical Networks”, 2020 2nd International Conference on Innovative Mechanisms for Industry Applications (ICIMIA). 2020.


Facial expression recognition is a challenging task due to the variations and spontaneity of expressions in real-world scenarios. These variations include different head poses, occlusion of face regions, illumination changes and other noise introduced during capture or transmission. This work aims at classifying five basic human emotions from partially occluded face images. The lack of baseline datasets with occluded images is a hurdle to building a model that generalizes well. Meta-learning algorithms offer a solution to this problem. We have carried out research on leveraging meta-learning concepts with prototypical networks to classify emotions in the few-shot regime. We have used the CMU Multi-PIE database, the AffectNet database and images collected from the Internet for training and testing purposes. The proposed method is named MERO (meta-learning for emotion recognition under occlusion).


2019

R. Yadhunath, Srikanth, S., Sudheer, A., and Dr. Suja P., “Identification of Criminal Activity Hotspots using Machine Learning to Aid in Effective Utilization of Police Patrolling in Cities with High Crime Rates”, 2019 4th International Conference on Computational Systems and Information Technology for Sustainable Solution (CSITSS). 2019.

2019

Dr. Suja P., “A Robust Pose Illumination Invariant Emotion Recognition from Facial Images using Deep Learning for Human-Machine Interface”, 2019 4th International Conference on Computational Systems and Information Technology for Sustainable Solution (CSITSS). 2019.

2019

K. N. V. Sriram and Dr. Suja P., “Mobile Robot Assistance for Disabled and Senior Citizens Using Hand Gestures”, 2019 International Conference on Power Electronics Applications and Technology in Present Energy Scenario (PETPES). 2019.

2019

T. Keshari and Dr. Suja P., “Emotion Recognition Using Feature-level Fusion of Facial Expressions and Body Gestures”, 2019 International Conference on Communication and Electronics Systems (ICCES). 2019.

2018

P. Sreedhar and Dr. Suja P., “Robotic Grasper based on an End-to-End Neural Architecture using Raspberry Pi”, 24th Annual International Conference on Advanced Computing and Communications (ADCOM 2018). IIITB, Advanced Computing and Communications Society, IISc, Bangalore, 2018.


Reinforcement learning coupled with neural networks has been demonstrated to perform well in robotic manipulation tasks, yet such approaches require large volumes of sample data and huge amounts of computing resources for training. We propose an End-to-End Neural Network architecture based robotic system that can be deployed on embedded platforms such as a Raspberry Pi to perform robotic grasping. The proposed method potentially solves the exploration-exploitation dilemma even under undecidable scenarios.


2018

R. M. Karthik and Dr. Suja P., “Resource Unit (RU) based OFDMA Scheduling in IEEE 802.11ax system”, International Conference on Advances in Computing, Communications and Informatics (ICACCI). PES, Bengaluru, 2018.


IEEE 802.11ax is a revolutionary effort to provide an improvement over the current generation of 802.11 and has been approved to deliver the next-generation Wireless Local Area Network (WLAN) technologies. The medium access control protocol is the critical component to enable efficient sharing of the wireless medium whilst at the same time providing Quality of Service (QoS) to diverse applications. Orthogonal Frequency Division Multiple Access (OFDMA) is the access method adopted by IEEE 802.11ax, where the subcarriers are divided into Resource Units (RUs) for scheduling. We propose a contention-based RU allocation in OFDMA-based IEEE 802.11ax using an adaptation of IEEE 802.11e Enhanced Distributed Channel Access (EDCA). We extend EDCA to the OFDMA system, where the access point (AP) transmits Physical Protocol Data Units (PPDUs) by determining the RUs to be allocated to its associated stations (STAs). We also determine the RU limit by normalizing the RU requirements using different methods, which is provided as input to the scheduling algorithm, and propose a feedback mechanism to throttle the input arrival process.


2018

Y. Sainath, Sai, K. Pruthvi, Rajesh, A., and Dr. Suja P., “Sleep Pattern Monitoring and Analysis to Improve the Health and Quality of Life of People”, International Conference on Advances in Computing, Communications and Informatics (ICACCI). PES, Bengaluru, 2018.

2018

A. Purushothaman and Dr. Suja P., “Pose and Illumination Invariant Face Recognition for Automation of Door Lock System”, 2nd International Conference on Inventive Communication and Computational Technologies (ICICCT 2018). Hotel Arcadia, Coimbatore, 2018.

2017

S. K.M and Dr. Suja P., “A Geometric Approach for Recognizing Emotions From 3D Images with Pose Variations”, International Conference On Smart Technologies For Smart Nation (SmartTechCon2017). Reva University, Bengaluru, 2017.

2017

Dr. Suja P., “Emotion Recognition from 3D Videos using Optical Flow Method”, International Conference On Smart Technologies For Smart Nation (SmartTechCon2017). Reva University, Bengaluru, 2017.

2016

Dr. Suja P., Prathyusha, Tripathi, S., and Louis, R., “Emotion Recognition from Facial Expressions of 4D Videos Using Curves and Surface Normals”, International Conference on Human Computer Interaction, LNCS, Springer. 2016.

2016

D. KrishnaSri, Dr. Suja P., and Tripathi, S., “Emotion Recognition from 3D Images with Non-Frontal View Using Geometric Approach”, Advances in Signal Processing and Intelligent Recognition Systems. Springer, pp. 63–73, 2016.


Over the last decade emotion recognition has gained prominence for its applications in the field of Human Robot Interaction (HRI), intelligent vehicles, patient health monitoring, etc. The challenges in emotion recognition from non-frontal images motivate researchers to explore further. In this paper, we have proposed a method based on geometric features, considering 4 yaw angles (0°, +15°, +30°, +45°) from the BU-3DFE database. The novelty in our proposed work lies in identifying the most appropriate set of feature points and the formation of the feature vector using two different approaches. A neural network is used for classification. Among the 6 basic emotions, four emotions, i.e., anger, happy, sad and surprise, are considered. The results are encouraging. The proposed method may be implemented for combinations of pitch and yaw angles in future. © Springer International Publishing Switzerland 2016.


2015

Dr. Suja P. and Dr. Shikha Tripathi, “Dynamic facial emotion recognition from 4D video sequences”, Contemporary Computing (IC3), 2015 Eighth International Conference on. IEEE, 2015.

2015

Dr. Suja P., Krishnasri, D., and Dr. Shikha Tripathi, “Pose invariant method for emotion recognition from 3D images”, 12th IEEE International Conference Electronics, Energy, Environment, Communication, Computer, Control: (E3-C3), INDICON 2015. Institute of Electrical and Electronics Engineers Inc., 2015.


Information about the emotional state of a person can be inferred from facial expressions. Emotion recognition has become an active research area in recent years in various fields such as Human Robot Interaction (HRI), medicine, intelligent vehicles, etc. The challenges in emotion recognition from images with pose variations motivate researchers to explore further. In this paper, we have proposed a method based on geometric features, considering images of 7 yaw angles (-45°,-30°,-15°,0°,+15°,+30°,+45°) from the BU3DFE database. Most of the work that has been reported considered only positive yaw angles. In this work, we have included both positive and negative yaw angles. In the proposed method, feature extraction is carried out by concatenating distance and angle vectors between the feature points, and classification is performed using a neural network. The results obtained for images with pose variations are encouraging and comparable with literature where work has been performed on pitch and yaw angles. Using our proposed method, non-frontal views achieve similar accuracy when compared to the frontal view, thus making it pose invariant. The proposed method may be implemented for pitch and yaw angles in future.


2014

Dr. Suja P., Tripathi, S., and Deepthy, J., “Emotion Recognition From Facial Expressions Using Frequency Domain Techniques”, First International Symposium on Signal Processing and Intelligent Recognition Systems - SIRS 2014. Springer, IIITMK-Technopark, Trivandrum, India, 2014.


An emotion recognition system from facial expression is used for recognizing expressions from facial images and classifying them into one of the six basic emotions. Feature extraction and classification are the two main steps in an emotion recognition system. In this paper, two approaches, viz., cropped face and whole face methods for feature extraction, are implemented separately on images taken from the Cohn-Kanade (CK) and JAFFE databases. Transform techniques such as Dual-Tree Complex Wavelet Transform (DT-CWT) and Gabor Wavelet Transform are considered for the formation of feature vectors, along with Neural Network (NN) and K-Nearest Neighbor (KNN) as the classifiers. These methods are combined in different possible combinations with the two aforesaid approaches and the databases to explore their efficiency. The overall average accuracy is 93% and 80% for NN and KNN respectively. The results are compared with those existing in literature and prove to be more efficient. The results suggest that the cropped face approach gives better results compared to the whole face approach. DT-CWT outperforms the Gabor wavelet technique for both classifiers.


2014

Dr. Suja P., Thomas, S. Mariam, Tripathi, S., and Madan, V. K., “Emotion Recognition from Images Under Varying Illumination Conditions”, Proc. 6th Int'l. Workshop Soft Computing Applications, Timisoara, Romania. Proc. in Advances in Intelligent Systems and Computing. Springer, pp. 913–921, 2014.


Facial expressions are one of the most powerful and immediate means for human beings to communicate their emotions. Recognizing human emotion has a varied range of applications in humanoid robots, the animation industry, psychology, forensic analysis, medical aid, the automotive industry, etc. This work focuses on emotion recognition under various illumination conditions using images from the CMU-MultiPIE database. The database provides five basic expressions, namely neutral, happiness, anger, disgust and surprise, with varying pose and illumination. The experiment has been conducted on images with varying illuminations, initially without pre-processing and also by applying a proposed ratio-based pre-processing method, followed by feature extraction and classification. Dual-Tree Complex Wavelet Transform (DT-CWT) was applied for the formation of feature vectors along with K-Nearest Neighbour (KNN) as the classifier. The result shows that pre-processed images give better results than original images. It is thus concluded that varying illumination has an effect on emotion recognition and the pre-processing algorithm demonstrates improvement in recognition accuracy. Future work may include a broader perspective of using body language and speech data for emotion recognition. © Springer International Publishing Switzerland 2016.


2013

Dr. Shikha Tripathi, Keerthana, D. N., and Dr. Suja P., “Emotion Recognition using DWT, KL Transform and Neural Network”, International Conference on Advances in Signal Processing and Communications (SPC 2013). ACEEE, ‘The Piccadily’, Lucknow, India, 2013.


Human face communicates important information about a person’s emotional condition. In this paper an approach for facial expression recognition using wavelet transform for feature extraction and neural network classifier for five basic emotions is proposed. The strength of the algorithm is the reduction in feature size and use of less number of images for training the network, compared to existing similar approaches. Static images of the Cohn-Kanade Face Expression image database have been used for experimentation. The facial expression information that are mostly concentrated on mouth, eye and eyebrow regions are segmented from the face. Then the low-dimension features are acquired using 2-level Discrete Wavelet Transform and Karhunen–Loeve transform. A neural network classifier is constructed to categorize the emotions. The neural network based classifier yielded an average accuracy of 96.4%. The expressions that are recognized are happiness, sadness, anger, surprise and disgust.


Publication Type: Journal Article


2020

Dr. Suja P. and Purushothaman, A., “Development of smart home using gesture recognition for disabled and elderly”, Journal of Computational and Theoretical Nanoscience, vol. 17, pp. 177-181, 2020.

2018

Dr. Suja P. and Shikha Tripathi, “Emotion Recognition from Facial Expressions using Images with Pose, Illumination and Age Variations for Human-Computer/Robot Interaction”, Journal of ICT Research and Applications, vol. 12, no. 2, pp. 14-34, 2018.


A technique for emotion recognition from facial expressions in images with simultaneous pose, illumination and age variation in real time is proposed in this paper. The basic emotions considered are anger, disgust, happy, surprise, and neutral. Feature vectors formed from images from the CMU-MultiPIE database for pose and illumination were used for training the classifier. For real-time implementation, a Raspberry Pi II was used, which can be placed on a robot to recognize emotions in interactive real-time applications. The proposed method includes face detection using the Viola-Jones Haar cascade, Active Shape Model (ASM) for feature extraction, and AdaBoost for classification in real time. Performance of the proposed method was validated in real time by testing with subjects from different age groups expressing basic emotions with varying pose and illumination. 96% recognition accuracy at an average time of 120 ms was obtained. The results are encouraging, as the proposed method gives better accuracy with higher speed compared to existing methods from the literature. The major contribution and strength of the proposed method lie in marking suitable feature points on the face, its speed, and its invariance to pose, illumination and age in real time.


2017

Dr. Suja P. and Dr. Shikha Tripathi, “Geometrical approach for emotion recognition from facial expressions using 4D videos and analysis on feature-classifier combination”, International Journal of Intelligent Engineering and Systems, vol. 10, pp. 30-39, 2017.


Emotion recognition from facial expressions using videos is important in human computer communication where the continuous changes in face movements need to be recognized efficiently. In this paper, a method using the geometrical based approach for feature extraction and recognition of six basic emotions has been proposed which is named as GAFCI (Geometrical Approach for Feature Classifier Identification). Various classifiers, Support Vector Machine (SVM), Random Forest, Naïve Bayes and Neural Networks are used for classification, and the performances of all the chosen classifiers are compared. Out of the 83 feature points provided in the BU4DFE database, optimum feature points are identified by experimenting with several sets of feature points. Suitable "feature-classifier" combination has been obtained by varying the number of feature points, classifier parameters, and training and test samples. A detailed analysis on the feature points and classifiers has been performed to learn the relationship between distance parameters and classification of emotions. The results are compared with literature and found to be encouraging.


2017

Sai Prathusha, Dr. Suja P., Dr. Shikha Tripathi, and Louis, R. C., “Emotion recognition from facial expressions of 4D videos using curves and surface normals”, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 10127 LNCS, pp. 51-64, 2017.


In this paper, we propose and compare three methods for recognizing emotions from facial expressions using 4D videos. In the first two methods, the 3D faces are re-sampled by using curves to extract the feature information. Two different methods are presented to resample the faces in an intelligent way using parallel curves and radial curves. The movement of the face is measured through these curves using two frames: neutral and peak frame. The deformation matrix is formed by computing the distance point to point on the corresponding curves of the neutral frame and peak frame. This matrix is used to create the feature vector that will be used for classification using Support Vector Machine (SVM). The third method proposed is to extract the feature information from the face by using surface normals. At every point on the frame, surface normals are extracted. The deformation matrix is formed by computing the Euclidean distances between the corresponding normals at a point on neutral and peak frames. This matrix is used to create the feature vector that will be used for classification of emotions using SVM. The proposed methods are analyzed and they showed improvement over existing literature. © Springer International Publishing AG 2017.


2015

Dr. Suja P. and Dr. Shikha Tripathi, “Analysis of emotion recognition from facial expressions using spatial and transform domain methods”, International Journal of Advanced Intelligence Paradigms, vol. 7, pp. 57-73, 2015.


Facial expressions are non-verbal signs that play an important role in interpersonal communications. There are six basic universally accepted emotions viz., happiness, surprise, anger, sadness, fear and disgust. An emotion recognition system is used for recognising different expressions from the facial images/videos and classifying them into one of the six basic emotions. Spatial domain methods are more popularly used in literature in comparison to transform domain methods. In this paper, two approaches viz., cropped face and whole face methods for feature extraction are implemented separately on the images taken from Cohn-Kanade (CK) and JAFFE databases. Classification is performed using K-nearest neighbour and neural network. The results are compared and analysed. The results suggest that transform domain techniques yield better accuracy than spatial domain techniques and cropped face approach outperforms whole face approach for both the databases for few feature extraction methods. Such systems find application in human computer interaction, entertainment industry and could be used for clinical diagnosis. Copyright © 2015 Inderscience Enterprises Ltd.


Keynote Addresses/Invited Talks/Panel Memberships

  • Session Chair for conference ICIC 2018
  • Reviewer for the journals IEEE Access and Concurrency and Computation: Practice and Experience
  • TPC member of several international conferences

Courses Taught
  • Programming Languages
  • Object Modelling and Design
  • Information Technology Essentials
  • Operating Systems
  • Operating System Internals
  • Real-Time Computing Systems
  • Neural Networks and Deep Learning
  • Digital Signal and Image Processing
  • Behavioural Robotics
  • Computer Vision
  • Computational Intelligence
  • Introduction to Soft Computing
  • Multimedia Systems
  • Computer Graphics and Visualization

Student Guidance

Undergraduate Students

Sl. No. | Name of the Student(s) | Topic | Status | Year of Completion
1 | NVS Teja, Lakshmi Nair, Addepalli Harshith | Emotion Recognition using Meta Learning approach | Completed | 2019-20
2 | K.R. Sai Shivani, D. Naga Jyothi, B. Amrutha Sahithi | Music Recommender System | Completed | 2018-19
3 | Pruthvi, Sainath and Rajesh | Sleep Pattern Monitoring and Analysis to improve the quality of life for patients and adults | Completed | 2017-18
4 | Pranav Girish | Algorithm performance evaluation: empirical comparison of Scikit-learn and WEKA | Completed | 2016-17
5 | Y. Sneha and S.H. Praharsha | Emotion recognition from facial expressions using images of pose variations with appearance based approach | Completed | 2015-16
6 | Sreenu M. Panicker | Paint System Using Colour Based Tracking | Completed | 2013-14
7 | Yeshwanth Kaushal, Yeshwanth Reddy, Pradeep Sai U. | Facial Emotion Recognition System | Completed | 2012-13
8 | Shyam Sundar | “NYMBLE”: Blocking Misbehaving Users in Anonymizing Networks | Completed | 2011-12

Postgraduate Students

Sl. No. | Name of the Student(s) | Topic | Status | Year of Completion
1 | Siddharth K. | Emotion Recognition for children with special needs using meta learning | Completed | 2019-20
2 | Soumya K. | Emotion Recognition from Facial Images with Simultaneous Occlusion, Pose and Illumination Variations Using Meta-Learning | Completed | 2019-20
3 | Tanya Keshari | Bi-Modal Emotion Recognition | Completed | 2018-19
4 | K.N.V. Sriram | Mobile Robot Assistance for Disabled and Senior Citizens Using Hand Gestures | Completed | 2018-19
5 | Amritha Purushothaman | Development of Smart Home using gesture recognition for disabled and elderly | Completed | 2017-18
6 | Pranav B Sreedhar | Robotic grasper based on an end-to-end neural architecture using Raspberry Pi | Completed | 2017-18
7 | Shwetha K.M. | Emotion recognition using 3D images with simultaneous rotation in y and x axes | Completed | 2016-17
8 | Gowri S Patil | Emotion recognition from facial expressions in 4D videos using optical flow approach | Completed | 2016-17
9 | Sai Prathyusha | Emotion recognition from facial expressions in 4D videos using curve based approach | Completed | 2015-16
10 | Suchitra | Emotion recognition from facial expressions in real time using images | Completed | 2014-15
11 | Krishnasri D. | Emotion recognition from facial expressions using 3D images | Completed | 2014-15
12 | V.P. Kalyan Kumar | Emotion recognition from facial expressions using 4D videos | Completed | 2014-15
13 | Pradeep L.N. | Emotion recognition from facial images with pose variations | Completed | 2013-14
14 | Sherin Mariam Thomas | Emotion recognition from facial images under different illumination conditions | Completed | 2013-14
15 | Samanjasaa J Krishnan | Emotion recognition from facial images using spatial domain methods | Completed | 2012-13
16 | Deepthi J. | Emotion recognition from facial images using frequency domain methods | Completed | 2012-13

Research Scholars

Sl. No. | Name of the Student(s) | Topic | Status | Year of Completion
1 | Pranav B Sreedhar | Robotics | Ongoing | –
2 | AR Manjupriya | Emotion recognition | Ongoing | –
3 | Ashwini R Doke | Meta learning | Ongoing | –