Qualification: 
M.E., B.E.
poornass@am.amrita.edu

Poorna S. S. currently serves as an Assistant Professor (Sr. Gr.) in the Department of Electronics and Communication Engineering at Amrita School of Engineering, Amritapuri. She holds an M.E. in VLSI Design.

Publications

Publication Type: Journal Article


2019

Poorna S. S. and Nair, G. J., “Multistage Classification Scheme to Enhance Speech Emotion Recognition”, International Journal of Speech Technology, vol. 22, pp. 327–340, 2019.


During the past decades, emotion recognition from speech has become one of the most explored areas in affective computing. These systems lack universality due to multilingualism, and research in this direction is restrained by the unavailability of emotional speech databases in many spoken languages. Arabic is one such language that faces this inadequacy. The proposed work aims at developing a speech emotion recognition system for the Arabic-speaking community. A speech database with elicited emotions (anger, happiness, sadness, disgust, surprise and neutrality) was recorded from 14 subjects who are non-native but proficient speakers of the language. The prosodic, spectral and cepstral features are extracted after pre-processing. Subsequently, the features are subjected to single-stage classification using the supervised learning methods support vector machine and extreme learning machine. The performance of the implemented speech emotion recognition systems is compared in terms of accuracy, specificity, precision and recall. Further analysis is carried out by adopting three multistage classification schemes. The first scheme follows a two-stage classification, initially identifying gender and then the emotion. The second uses a divide-and-conquer approach utilizing cascaded binary classifiers, and the third a parallel approach, classifying with individual features followed by a decision logic. The results of the study show that these multistage classification schemes can improve the performance of a speech emotion recognition system compared to single-stage classification. Comparable results were obtained for the same experiments carried out using the Emo-DB database.
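As a rough illustration of the two-stage scheme described above (gender first, then emotion), here is a minimal sketch using a toy nearest-centroid classifier on synthetic two-dimensional features. The feature values, labels and classifier choice are illustrative assumptions only, not taken from the paper, which used SVM and ELM on prosodic, spectral and cepstral features.

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Map each label to the mean feature vector of its class."""
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def nearest_centroid_predict(centroids, x):
    """Assign x to the label whose centroid is nearest in Euclidean distance."""
    return min(centroids, key=lambda label: np.linalg.norm(x - centroids[label]))

# Synthetic 2-D features per utterance: (normalised pitch, energy). The values
# are made up purely so the two stages have something separable to work with.
rng = np.random.default_rng(0)
male_angry   = rng.normal([1.2, 0.8], [0.05, 0.05], size=(10, 2))
male_sad     = rng.normal([1.2, 0.3], [0.05, 0.05], size=(10, 2))
female_angry = rng.normal([2.2, 0.8], [0.05, 0.05], size=(10, 2))
female_sad   = rng.normal([2.2, 0.3], [0.05, 0.05], size=(10, 2))
X = np.vstack([male_angry, male_sad, female_angry, female_sad])
gender = np.array(["male"] * 20 + ["female"] * 20)
emotion = np.array((["anger"] * 10 + ["sadness"] * 10) * 2)

# Stage 1: a gender classifier trained on all samples.
gender_model = nearest_centroid_fit(X, gender)
# Stage 2: a separate emotion classifier per gender, trained on that gender only.
emotion_models = {g: nearest_centroid_fit(X[gender == g], emotion[gender == g])
                  for g in ("male", "female")}

def two_stage_predict(x):
    """First identify the gender, then the emotion using that gender's model."""
    g = nearest_centroid_predict(gender_model, x)
    return g, nearest_centroid_predict(emotion_models[g], x)
```

The point of the cascade is that each stage-2 model only ever sees one gender's feature distribution, which is what lets the multistage scheme outperform a single flat classifier.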


2018

Poorna S. S., Arsha, V. V., Aparna, P. T. A., Gopal, P., and Nair, G. J., “Drowsiness Detection for Safe Driving Using PCA EEG Signals”, Progress in Computing, Analytics and Networking, pp. 419-428, 2018.


Forewarning the onset of drowsiness in drivers and pilots by analyzing the state of the brain can reduce the number of road and aviation accidents to a large extent. For this, EEG signals were acquired using a 14-channel wireless neuro-headset while subjects were in a virtual driving environment. Principal component analysis (PCA) of the EEG data is used to extract the dominant ocular pulses. Two sets of feature vectors are obtained from the analysis: one characterizing eye blinks only, and another from which eye blinks are excluded. The temporal characteristics of the ocular pulses are obtained from the first set; the second yields features from the spectral bands delta, theta, alpha, beta and gamma. Classification using K-nearest neighbor (KNN) and artificial neural network (ANN) gives accuracies of 80% and 85% and sensitivities of 33.35% and 58.21%, respectively, for these features. The targets used for classification are the alert (awake), drowsy and sleep states.
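The PCA step above can be sketched in a few lines: on multi-channel data dominated by a shared ocular pulse, the leading principal component of the channel covariance recovers that pulse. This is a toy reconstruction with simulated channels, not the paper's headset data; the mixing weights and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000                                              # samples per channel
t = np.arange(n)

# Toy 4-channel "EEG": a shared ocular pulse (large, slow bump) plus channel noise.
pulse = np.exp(-0.5 * ((t - 500) / 30.0) ** 2)        # Gaussian-shaped eye blink
mixing = np.array([1.0, 0.8, 0.6, 0.4])               # how the blink projects onto channels
X = np.outer(mixing, 5.0 * pulse) + 0.1 * rng.standard_normal((4, n))

# PCA via eigendecomposition of the 4x4 channel covariance matrix.
Xc = X - X.mean(axis=1, keepdims=True)                # remove per-channel mean
C = Xc @ Xc.T / (n - 1)
eigvals, eigvecs = np.linalg.eigh(C)                  # eigenvalues in ascending order
w = eigvecs[:, -1]                                    # direction of maximum variance
ocular = w @ Xc                                       # dominant (ocular) component

# The recovered component should peak where the blink occurred (sample 500).
peak = int(np.abs(ocular).argmax())
```

Because the blink is by far the largest source of variance, the top eigenvalue dominates the rest, which is what justifies reading the first principal component as the "ocular" channel.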


Publication Type: Conference Proceedings


2018

Poorna S. S., Raghav, R., Nandan, A., and Nair, G. J., “EEG Based Control - A Study Using Wavelet Features”, 2018 International Conference on Advances in Computing, Communications and Informatics (ICACCI). Bangalore, India, pp. 550-553, 2018.


EEG-based brain control techniques serve as a strong aid for severely disabled people, as they give a direct measure of the cortical activity of the brain. The work aims at analyzing and classifying eye blinks obtained from EEG signals for control applications. A wireless headset consisting of 14 terminals was used for acquiring EEG data from 10 healthy subjects. In order to improve the signal quality of the raw EEG, pre-processing techniques for removing noise and baseline variations were applied. Further, the discrete wavelet transform (DWT) was used for extracting the required features. Features in the wavelet domain (wavelet entropy, wavelet cepstrum and statistical parameters from the approximation coefficients) were used for supervised learning and classification. The analysis was carried out for three levels of decomposition using the Daubechies 6 wavelet (db6). The system performance was evaluated using K-nearest neighbor and artificial neural networks with the measures accuracy, sensitivity and specificity.
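To make the wavelet-feature idea concrete, the sketch below computes wavelet entropy and approximation-band statistics over three decomposition levels. It uses the Haar wavelet (implemented by hand) rather than the db6 wavelet of the paper, purely to stay self-contained; the signals are synthetic stand-ins for a blink and for background noise.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: return (approximation, detail) coefficients."""
    x = x[:len(x) - len(x) % 2]                       # ensure even length
    s = 1.0 / np.sqrt(2.0)
    return s * (x[0::2] + x[1::2]), s * (x[0::2] - x[1::2])

def wavelet_features(x, levels=3):
    """Wavelet entropy plus simple statistics of the final approximation band."""
    a = np.asarray(x, dtype=float)
    details = []
    for _ in range(levels):
        a, d = haar_dwt(a)
        details.append(d)
    energies = np.array([np.sum(d ** 2) for d in details] + [np.sum(a ** 2)])
    p = energies / energies.sum()                     # relative energy per band
    entropy = -np.sum(p * np.log(p + 1e-12))          # wavelet (Shannon) entropy
    return {"entropy": entropy, "approx_mean": a.mean(), "approx_std": a.std()}

# A slow blink-like bump concentrates energy in the approximation band,
# giving a lower wavelet entropy than broadband noise does.
t = np.linspace(0, 1, 512)
blink = np.exp(-0.5 * ((t - 0.5) / 0.05) ** 2)
noise = np.random.default_rng(2).standard_normal(512)
```

The discriminative value of wavelet entropy comes from exactly this contrast: blinks pack their energy into few sub-bands, noise spreads it across all of them.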


2018

Poorna S. S., Anuraj K., and Saikumar C., “Ultrasonic Signal Modelling and Parameter Estimation: A Comparative Study Using Optimization Algorithms”, in Zelinka I., Senkerik R., Panda G., Lekshmi Kanthan P. (eds), Soft Computing Systems. ICSCS 2018. Communications in Computer and Information Science, vol. 837. Springer, Singapore, 2018.


The estimation of parameters from ultrasonic reverberations is used in applications such as non-destructive evaluation, characterization and defect detection of materials. The parameters of a backscattered Gaussian ultrasonic echo altered by noise (arrival time, amplitude, phase, bandwidth and centre frequency) are to be estimated. Since the noise is assumed to be additive white Gaussian, the estimation can be approximated by a least-squares method, and different least-squares curve-fitting optimization algorithms can therefore be used to estimate the parameters. The optimization techniques Levenberg-Marquardt (LM), trust-region-reflective, quasi-Newton, active set and sequential quadratic programming are used to estimate the parameters of the noisy echo. Wavelet denoising with principal component analysis is also applied to check whether it improves the estimation. The goodness of fit for the noisy and denoised estimated signals is compared in terms of mean square error (MSE). The results of the study show that the LM algorithm gives the minimum MSE for estimating the echo parameters from both the noisy and the denoised signal, with the minimum number of iterations.
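The Gaussian echo model and its least-squares fit can be sketched as below. To stay dependency-free, a coarse grid search over arrival time (with the amplitude solved in closed form, other parameters held at their true values) stands in for the Levenberg-Marquardt solver used in the paper; all numeric values here are invented for illustration.

```python
import numpy as np

def gaussian_echo(t, tau, beta, fc, alpha, phi):
    """Gaussian echo: beta * exp(-alpha*(t-tau)^2) * cos(2*pi*fc*(t-tau) + phi)."""
    return beta * np.exp(-alpha * (t - tau) ** 2) * np.cos(2 * np.pi * fc * (t - tau) + phi)

rng = np.random.default_rng(3)
t = np.linspace(0, 10e-6, 1000)                       # 10-microsecond window
true = dict(tau=4e-6, beta=1.0, fc=5e6, alpha=2e12, phi=0.0)
noisy = gaussian_echo(t, **true) + 0.05 * rng.standard_normal(t.size)  # AWGN

# Grid search over arrival time; for each candidate tau, the best amplitude in
# the least-squares sense is beta = <basis, noisy> / <basis, basis>.
best_tau, best_beta, best_mse = None, None, np.inf
for tau in np.linspace(3e-6, 5e-6, 201):
    basis = gaussian_echo(t, tau, 1.0, true["fc"], true["alpha"], true["phi"])
    beta = (basis @ noisy) / (basis @ basis)
    mse = np.mean((noisy - beta * basis) ** 2)
    if mse < best_mse:
        best_tau, best_beta, best_mse = tau, beta, mse
```

The residual MSE of the winning fit sits near the noise floor, which is the same goodness-of-fit criterion the paper uses to rank the optimization algorithms.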


2018

J. S. Anjana and Poorna S. S., “Language Identification From Speech Features Using SVM and LDA”, 2018 International Conference on Wireless Communications, Signal Processing and Networking (WiSPNET). Chennai, pp. 1-4, 2018.


Speech-based language identification systems have a wide range of applications in telephone services, multilingual translation services, and government intelligence and monitoring. Identifying the right speech features for classification is an important problem in language identification research. In this work, we compare the performance measures of a language identification system using two different supervised learning algorithms. Mel-frequency cepstral coefficients and formant feature vectors are extracted for classification. The system, developed using a database of seven different Indian languages, is capable of identifying languages, with LDA giving a maximum classification accuracy of 93.88% compared to 84% for SVM.
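The LDA side of this comparison can be sketched with Fisher's linear discriminant in plain numpy. This toy version is two-class (the paper handles seven languages) and uses synthetic "MFCC-like" feature vectors, since the actual database and feature values are not available here.

```python
import numpy as np

def lda_fit(X0, X1):
    """Fisher's linear discriminant: w = S_w^-1 (mu1 - mu0), midpoint threshold."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    S = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)  # pooled within-class scatter
    w = np.linalg.solve(S, mu1 - mu0)                        # discriminant direction
    b = -0.5 * w @ (mu0 + mu1)                               # threshold at class midpoint
    return w, b

def lda_predict(w, b, x):
    """0 = language A, 1 = language B."""
    return int(w @ x + b > 0)

# Synthetic 4-D MFCC-like features for two hypothetical languages.
rng = np.random.default_rng(4)
lang_a = rng.normal([0.0, 1.0, -0.5, 0.2], 0.3, size=(50, 4))
lang_b = rng.normal([0.8, 0.2, -0.1, 0.6], 0.3, size=(50, 4))
w, b = lda_fit(lang_a, lang_b)
```

A multiclass extension (as needed for seven languages) would fit one discriminant per class pair, or project onto the top generalized eigenvectors of the between/within scatter matrices.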


2018

Poorna S. S., Anuraj K., and Nair, G. J., “A Weight Based Approach for Emotion Recognition from Speech: An Analysis Using South Indian Languages”, in Zelinka I., Senkerik R., Panda G., Lekshmi Kanthan P. (eds), Soft Computing Systems. ICSCS 2018. Communications in Computer and Information Science, vol. 837. Springer, Singapore, 2018.


A weight-based emotion recognition system is presented to classify emotions using audio signals recorded in three south Indian languages. An audio database containing five emotional states (anger, surprise, disgust, happiness and sadness) is created. For subjective validation, the database is subjected to a human listening test. Relevant features for recognizing emotions from speech are extracted after suitably pre-processing the samples. The classification methods K-nearest neighbor, support vector machine and neural networks are used for detecting the respective emotions. For classification, the features are weighted so as to maximize the inter-cluster separation in feature space. A performance comparison of the above classification methods using normal and weighted features, as well as feature combinations, is presented.


2018

G. Gayathri, Udupa, G., Nair, G. J., and Poorna S. S., “EEG-Controlled Prosthetic Arm for Micromechanical Tasks”, in Proceedings of the Second International Conference on Computational Intelligence and Informatics. Advances in Intelligent Systems and Computing, vol. 712. Springer, Singapore, 2018.


Brain-controlled prosthetics has become one of the significant areas in brain-computer interface (BCI) research. A novel approach is introduced in this paper to extract eye-blink signals from EEG to control a prosthetic arm. The coded eye blinks are extracted and used as major task commands for controlling the prosthetic arm movement. The prosthetic arm is built using 3D printing technology, and the major task is converted into micromechanical tasks by the microcontroller. In order to classify the commands, features are extracted in the time and spectral domains of the EEG signals using machine learning methods. The two classification techniques used are linear discriminant analysis (LDA) and K-nearest neighbor (KNN). EEG data was obtained from 10 healthy subjects, and the performance of the system was evaluated using accuracy, precision and recall. LDA gave an accuracy, precision and recall of 97.7%, 96% and 95.3%, and KNN of 70.7%, 67.3% and 68%, respectively.


2018

Poorna S. S., Anuraj K., Renjith, S., Vipul, P., and Nair, G. J., “EEG Based Control using Spectral Features”, 2nd International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud). Palladam, India, pp. 788-794, 2018.


At present, brain-computer interfaces (BCI) are used in applications pertaining to diagnostics and prosthetics for neurological disorders, navigation of unmanned aerial vehicles, and gaming. A detailed analysis of spectral features and classifiers for eye-blink control from the electroencephalogram (EEG) is presented in this paper. In this study, the signals were acquired using an EEG headset, with the ocular pulses dominating the data. Principal component analysis was used to extract the ocular components. From the resultant signal, the features sum of spectral peaks, bandwidth, power spectral entropy, and cepstral coefficients of the blinks were extracted for supervised learning. The classification methods multiclass support vector machine (SVM), quadratic discriminant analysis (QDA) and artificial neural networks (ANN) were evaluated using these features independently as well as together. The results showed that, among the features, spectral peaks and bandwidth gave higher classification accuracy. When the features were taken together, QDA gave superior classification results in terms of accuracy, sensitivity and specificity compared to multiclass SVM and ANN.


Publication Type: Conference Paper


2018

Poorna S. S., Anjana, S., Varma, P., Sajeev, A., Arya, K. C., Renjith, S., and Nair, G. J., “Facial Emotion Recognition using DWT based Similarity and Difference Features”, in 2018 2nd International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC), 2018.


Recognizing emotions from facial images has become one of the major fields in affective computing, since it has widespread applications in robotics, medicine, surveillance, defense, e-learning, gaming, customer services, etc. The study used the Ekman model with seven basic emotions (anger, happiness, disgust, sadness, fear, surprise and neutral) acquired from subjects of Indian ethnicity. The acquired database, Amritaemo, consisted of 700 still images of Indian male and female subjects in the seven emotions. The images were cropped manually to obtain the region of analysis, i.e. the face, and converted to grayscale for further processing. After resizing, the pre-processing techniques histogram equalization and median filtering were applied, followed by the discrete wavelet transform (DWT). The 2D Haar wavelet coefficients were used to obtain the feature parameters. The maximum 2D correlation of the mean value of one specific emotion versus all others was taken as the similarity feature, and the squared difference between the emotional and neutral images in the transformed domain as the difference feature. The supervised learning methods K-nearest neighbor (KNN) and artificial neural networks (ANN) were used to classify these features separately as well as together. The performance was evaluated using the measures accuracy, sensitivity and specificity.
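The similarity feature above (2D correlation in the wavelet approximation domain) can be sketched as follows. This toy version uses a single level of an unnormalized 2D Haar-style averaging transform and tiny random 8x8 "faces"; the image sizes, patterns and thresholds are invented, not the Amritaemo data.

```python
import numpy as np

def haar2d_approx(img):
    """One level of a 2D Haar-style transform: the approximation (LL) sub-band,
    computed as 2x2 local averages (unnormalized variant for simplicity)."""
    img = img[:img.shape[0] - img.shape[0] % 2, :img.shape[1] - img.shape[1] % 2]
    rows = (img[0::2, :] + img[1::2, :]) / 2.0        # average adjacent rows
    return (rows[:, 0::2] + rows[:, 1::2]) / 2.0      # then adjacent columns

def corr2d(a, b):
    """Normalized 2D correlation coefficient between two equal-size arrays."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

# Toy 8x8 patterns: an emotion-mean template vs. a matching and a non-matching probe.
rng = np.random.default_rng(5)
template = rng.random((8, 8))
probe_same = template + 0.05 * rng.standard_normal((8, 8))   # near-identical pattern
probe_diff = rng.random((8, 8))                              # unrelated pattern

sim_same = corr2d(haar2d_approx(template), haar2d_approx(probe_same))
sim_diff = corr2d(haar2d_approx(template), haar2d_approx(probe_diff))
```

The classifier's similarity feature is then simply the emotion whose template yields the highest such correlation for a probe image.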


2017

Poorna S. S., Baba, P. M. V. D. Sai, G. Ramya, L., Poreddy, P., Aashritha, L. S., G.J. Nair, and Renjith, S., “Classification of EEG based control using ANN and KNN-A comparison”, in 2016 IEEE International Conference on Computational Intelligence and Computing Research, ICCIC 2016, 2017.


EEG-based controls are extensively used in applications such as autonomous navigation of remote vehicles and wheelchairs, as prosthetic control for limb movements in health care, in robotics and in gaming. The work aimed at implementing and classifying the intended controls for autonomous navigation by analyzing the recorded EEG signals. Here, eye closures extracted from the EEG signals were pulse-coded to generate the control signals for navigation. The EEG data was acquired from ten healthy subjects using a wireless Emotiv EPOC EEG headset with 14 electrodes. Pre-processing techniques were applied to enhance the signal by removing noise and baseline variations. The blink features considered were the heights of the ocular pulses and their respective widths, from four channels. A K-nearest neighbor classifier and an artificial neural network classifier were applied to classify the number of blinks. The results of the study showed that, for the data set under consideration, the ANN classifier gave 98.58% accuracy and 94% sensitivity, compared to the KNN classifier with 96.06% accuracy and 87.42% sensitivity, for classifying the blinks in the control application.
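Extracting the height/width blink features mentioned above amounts to simple pulse measurement on the cleaned trace. Below is a minimal single-channel sketch on a synthetic signal; the threshold, pulse shapes and width definition (samples above threshold) are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def blink_features(signal, threshold):
    """Return (height, width) for each pulse exceeding the threshold.

    Width is measured in samples between the threshold crossings (toy definition)."""
    above = signal > threshold
    edges = np.diff(above.astype(int))                # +1 at rising, -1 at falling edge
    starts = np.flatnonzero(edges == 1) + 1
    ends = np.flatnonzero(edges == -1) + 1
    feats = []
    for s, e in zip(starts, ends):
        seg = signal[s:e]
        feats.append((float(seg.max()), int(e - s)))  # (pulse height, pulse width)
    return feats

# Toy single-channel trace: two blink-like pulses on a flat baseline.
t = np.arange(1000)
trace = (100.0 * np.exp(-0.5 * ((t - 300) / 15.0) ** 2)
         + 80.0 * np.exp(-0.5 * ((t - 700) / 25.0) ** 2))
feats = blink_features(trace, threshold=20.0)
```

The resulting (height, width) pairs per channel are exactly the kind of feature vectors a KNN or ANN classifier would then be trained on.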


2015

Poorna S. S., Jeevitha, C. Y., Nair, S. J., Santhosh, S., and G.J. Nair, “Emotion recognition using multi-parameter speech feature classification”, in Proceedings - 2015 International Conference on Computers, Communications and Systems, ICCCS 2015, 2015, pp. 217-222.


Speech emotion recognition is basically the extraction and identification of emotion from a speech signal. Speech data corresponding to the emotions happiness, sadness and anger was recorded from 30 subjects. A local database called Amritaemo was created with 300 samples of speech waveforms corresponding to each emotion. Based on the prosodic features (energy contour and pitch contour) and spectral features (cepstral coefficients, quefrency coefficients and formant frequencies), the speech data was classified into the respective emotions. Supervised learning was used for training and testing, and the two algorithms used were hybrid rule-based K-means clustering and the multiclass support vector machine (SVM). The results of the study showed that, for an optimized set of features, hybrid rule-based K-means clustering gave better performance than the multiclass SVM.
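Of the prosodic features listed above, the pitch contour is typically obtained frame by frame; a standard way to estimate the pitch of one frame is the autocorrelation method, sketched below on a synthetic voiced frame. The sampling rate, search band and test tone are illustrative assumptions, not details from the paper.

```python
import numpy as np

def pitch_autocorr(x, fs, fmin=80.0, fmax=400.0):
    """Estimate pitch (Hz) of a frame as the autocorrelation peak in [fmin, fmax]."""
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[len(x) - 1:]   # autocorrelation, lags >= 0
    lo, hi = int(fs / fmax), int(fs / fmin)            # candidate lag range
    lag = lo + int(np.argmax(r[lo:hi + 1]))            # lag of strongest periodicity
    return fs / lag

# A 50 ms synthetic voiced frame: 150 Hz fundamental plus a weaker harmonic.
fs = 8000.0
t = np.arange(int(0.05 * fs)) / fs
voiced = np.sin(2 * np.pi * 150.0 * t) + 0.3 * np.sin(2 * np.pi * 300.0 * t)
```

Running this estimator over successive frames yields the pitch contour whose statistics (mean, range, slope) feed the emotion classifiers.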
Faculty Research Interest: