Qualification: M.E., B.E.
Email: poornass@am.amrita.edu

Poorna S. S. currently serves as an Assistant Professor (Senior Grade) in the Department of Electronics and Communication Engineering, Amrita School of Engineering, Amritapuri. She holds an M.E. in VLSI Design.

Publications

Publication Type: Conference Paper

2017, Conference Paper
Poorna, S. S., Sai Baba, P. M. V. D., Ramya, G. L., Poreddy, P., Aashritha, L. S., Nair, G. J., and Renjith, S., "Classification of EEG based control using ANN and KNN - A comparison", in 2016 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC 2016), 2017.

Abstract:


EEG based controls are extensively used in applications such as autonomous navigation of remote vehicles and wheelchairs, as prosthetic control for limb movements in health care, in robotics, and in gaming. The work aimed at implementing and classifying the intended controls for autonomous navigation by analyzing the recorded EEG signals. Here, eye closures extracted from the EEG signals were pulse coded to generate the control signals for navigation. The EEG data was acquired from ten healthy subjects using a wireless Emotiv EPOC EEG headset with 14 electrodes. Preprocessing techniques were applied to enhance the signal by removing noise and baseline variations. The features considered were the heights of the ocular pulses from blinks and their respective widths, taken from four channels. A K-Nearest Neighbor (KNN) classifier and an Artificial Neural Network (ANN) classifier were applied to classify the number of blinks. The results of the study showed that, for the data set under consideration, the ANN classifier gave 98.58% accuracy and 94% sensitivity, compared to the KNN classifier, which gave 96.06% accuracy and 87.42% sensitivity, in classifying the blinks for the control application.
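As a rough illustration of the KNN-versus-ANN comparison described in the abstract, the sketch below trains scikit-learn's KNeighborsClassifier and MLPClassifier (a simple feed-forward ANN) on synthetic stand-in data shaped like the blink features (pulse height and width per channel). The feature dimensions, class encoding, and data are assumptions for demonstration, not the paper's dataset or code.

```python
# Minimal sketch: KNN vs. ANN (MLP) on blink-like features.
# Synthetic data only; feature layout (8 features = height + width
# for each of 4 channels) and 3 blink-count classes are assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(0)

n_per_class, n_classes, n_features = 200, 3, 8
# Each class gets a shifted Gaussian cloud so the problem is learnable.
X = np.vstack([
    rng.normal(loc=c, scale=0.8, size=(n_per_class, n_features))
    for c in range(n_classes)
])
y = np.repeat(np.arange(n_classes), n_per_class)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Standardize features; both classifiers are scale-sensitive.
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

for name, clf in [
    ("KNN", KNeighborsClassifier(n_neighbors=5)),
    ("ANN", MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                          random_state=0)),
]:
    clf.fit(X_train, y_train)
    pred = clf.predict(X_test)
    print(f"{name}: accuracy={accuracy_score(y_test, pred):.4f}, "
          f"sensitivity={recall_score(y_test, pred, average='macro'):.4f}")
```

With separable synthetic clouds both classifiers score highly; the paper's reported gap between the ANN and KNN would only emerge on the real blink data.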


2015, Conference Paper
Poorna, S. S., Jeevitha, C. Y., Nair, S. J., Santhosh, S., and Nair, G. J., "Emotion recognition using multi-parameter speech feature classification", in Proceedings of the 2015 International Conference on Computers, Communications and Systems (ICCCS 2015), 2015, pp. 217-222.

Abstract:


Speech emotion recognition is essentially the extraction and identification of emotion from a speech signal. Speech data corresponding to the emotions happiness, sadness, and anger was recorded from 30 subjects. A local database called Amritaemo was created with 300 samples of speech waveforms corresponding to each emotion. Based on prosodic features (energy contour and pitch contour) and spectral features (cepstral coefficients, quefrency coefficients, and formant frequencies), the speech data was classified into the respective emotions. Supervised learning was used for training and testing, and the two algorithms compared were hybrid rule-based K-means clustering and the multiclass Support Vector Machine (SVM). The results of the study showed that, for an optimized set of features, hybrid rule-based K-means clustering gave better performance than the multiclass SVM. © 2015 IEEE.
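The sketch below illustrates only the multiclass SVM side of the comparison, on synthetic stand-in feature vectors. The Amritaemo corpus, the exact prosodic/spectral feature extraction, and the hybrid rule-based K-means classifier are not reproduced here, so the feature dimension and data are assumptions.

```python
# Minimal sketch: multiclass SVM over utterance-level feature vectors
# (stand-ins for energy, pitch, cepstral, and formant features).
# Synthetic data; a 12-dim feature vector per utterance is an assumption.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)

emotions = ["happiness", "sadness", "anger"]
n_per_class, n_features = 100, 12
X = np.vstack([
    rng.normal(loc=3 * i, scale=2.0, size=(n_per_class, n_features))
    for i in range(len(emotions))
])
y = np.repeat(np.arange(len(emotions)), n_per_class)

# SVC handles the multiclass case via one-vs-one by default; features
# are standardized first since the RBF kernel is scale-sensitive.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(model, X, y, cv=5)
print(f"5-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```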