Qualification: PG

Aswathi T. is an Assistant Professor in the Department of Computer Science and Engineering at Amrita School of Engineering, Amrita Vishwa Vidyapeetham University – Coimbatore. She completed her post-graduation in Computer Science and Communication Engineering at VIT University, Vellore, and her under-graduation in Computer Science and Engineering at the University of Calicut. She is pursuing her Ph.D. at Amrita Vishwa Vidyapeetham University – Coimbatore. Her research interests include Machine Learning, Deep Learning, and Medical Image Processing.

Publications

Publication Type: Conference Proceedings


2021

Aswathi T, Swapna, T. R., and Padmavathi, S., “Transfer Learning approach for grading of Diabetic Retinopathy”, Journal of Physics: Conference Series, 2021.


There has been wide interest in applying Deep Learning (DL) algorithms for automated binary and multi-class classification of colour fundus images affected by Diabetic Retinopathy (DR). These algorithms have shown high sensitivity and specificity for detecting DR in non-clinical settings. Transfer learning has been successfully tested in many medical imaging applications such as skin cancer detection, pulmonary nodule detection, and Alzheimer's disease. This paper experiments with different DL architectures such as VGG19, InceptionV3, ResNet50, MobileNet and NASNet for automated DR classification (binary and multi-class) on the Messidor dataset. The dataset is publicly available and comprises 1200 retinal fundus images. The images belong to four different classes of DR, namely normal (class 0), mild (class 1), moderate (class 2) and severe (class 3), graded based on the severity level of DR. In our experiment, we enhanced the quality of the input images by applying CLAHE (Contrast Limited Adaptive Histogram Equalisation) and power-law transformation as pre-processing techniques, which operate on small image patches and provide contrast limiting and image sharpening. Hyperparameter tuning of the pretrained InceptionV3 architecture further enhanced the accuracy of the model. Both binary and multi-class results were analysed considering inter-class (one class against another) accuracies. We achieved an accuracy of 78% between class 0 and class 1; the accuracy between class 0 and class 2 reduced to 69%, while class 1 and class 2 showed an accuracy of 61%. Moreover, the inter-class accuracy between class 1 and class 3 was 62%, and between class 2 and class 3 it further reduced to 49%. The accuracy diminished further, to 32%, between class 0 and class 3. These experiments suggest that the pretrained models performed better at classifying normal versus mild cases, but were less effective for the moderate-severe and normal-severe binary classifications.
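As a rough illustration of the pipeline described in this abstract, a minimal sketch of CLAHE plus power-law pre-processing followed by transfer learning on a pretrained InceptionV3 is given below. The specific parameter values (clip limit, tile size, gamma, learning rate) and the classification head are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch: CLAHE + power-law pre-processing and InceptionV3 transfer learning.
# Hyperparameters and the classification head are assumptions for illustration only.
import cv2
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras import layers, models

def preprocess(image_bgr, gamma=1.2):
    """Apply CLAHE on the luminance channel, then a power-law (gamma) transform."""
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))  # contrast limiting on small patches
    l = clahe.apply(l)
    enhanced = cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)
    enhanced = np.power(enhanced / 255.0, gamma)                 # power-law transformation
    return cv2.resize(enhanced, (299, 299)).astype("float32")

# Pretrained InceptionV3 backbone with a new head for the four DR grades.
base = InceptionV3(weights="imagenet", include_top=False, pooling="avg",
                   input_shape=(299, 299, 3))
base.trainable = False                                           # transfer learning: freeze the backbone
model = models.Sequential([
    base,
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(4, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, validation_split=0.2, epochs=20)
```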


Publication Type: Book Chapter


2020

Aswathi T, K., R., Devika, K., Sreevidya, P., Sowmya, V., and K. P. Soman, “Performance Analysis of NASNet on Unconstrained Ear Recognition”, in Nature Inspired Computing for Data Science, pp. 57-82, 2020.


Recent times are witnessing a greater influence of Artificial Intelligence (AI) on the identification of subjects based on biometrics. Traditional biometric recognition algorithms, which were constrained by their data acquisition methods, are now giving way to data collected in an unconstrained manner. Practically, the data can be exposed to factors such as varying environmental conditions, image quality, pose, image clutter and background changes. Our research focuses on biometric recognition through identification of the subject from the ear, with the images collected in an unconstrained manner. The advancements in deep neural networks can be cited as the main reason for such a quantum leap. The primary challenge of the present work is the selection of an appropriate deep learning architecture for unconstrained ear recognition. Therefore, a performance analysis of various pretrained networks such as VGGNet, Inception Net, ResNet, MobileNet and NASNet is attempted here. A further challenge we addressed is to optimize the computational resources by reducing the number of learnable parameters as well as the number of operations. Optimization of selected cells, as in the NASNet architecture, is a paradigm shift in this regard.
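One aspect of the comparison mentioned above, the learnable-parameter budget of each pretrained backbone, can be sketched as follows. The choice of NASNetMobile as the NASNet variant and the 224x224 input size are assumptions made only for this illustration.

```python
# Hypothetical sketch: compare learnable-parameter counts of the pretrained architectures
# discussed above. Model variants and input size are illustrative assumptions.
from tensorflow.keras.applications import VGG19, InceptionV3, ResNet50, MobileNet, NASNetMobile

for name, ctor in [("VGG19", VGG19), ("InceptionV3", InceptionV3),
                   ("ResNet50", ResNet50), ("MobileNet", MobileNet),
                   ("NASNetMobile", NASNetMobile)]:
    model = ctor(weights=None, include_top=False, input_shape=(224, 224, 3))
    print(f"{name}: {model.count_params():,} parameters")
```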


Publication Type: Journal Article


2018

Aswathi T and M, S., “Preventing distributed denial of service attacks and data security”, International Journal of Advance Research and Development, vol. 3, no. 4, 2018.

2016

Aswathi T, A. Joylin, B., Suma, P., and Victor, P. Nancy, “Sentiment Analysis on “Ebola” outbreak using Twitter data”, International Journal of Pharmacy and Technology, 2016.

2016

B. Joylin, Aswathi T, and Victor, N., “Sentiment analysis based on word-emoticon clusters”, International Journal of Pharmacy and Technology, vol. 8, pp. 25288-25296, 2016.


Sentiment analysis is the process of identifying and extracting subjective information in source materials by performing text analysis and Natural Language Processing. It aims to determine the attitude of a speaker or a writer with respect to some topic, or the overall contextual polarity of a document. Conventionally, a machine learning algorithm is applied to classify the polarity of a given text into positive, negative or neutral, and this classification is done based on emotional states such as 'angry', 'sad', 'happy' etc. A better classification can be achieved by considering emoticons along with emotional states. In many past studies, emoticons played an important role in building sentiment lexicons and in training machine learning classifiers, and they are also considered to be reliable indicators of sentiment. However, the real meaning of all emoticons is not known to many social media users. Clustering of words and emoticons in the context of social media gives good insight into the meaning conveyed by the emoticons. Emoticons are labelled as positive, negative or neutral based on the cluster of words they fall under. Emoticons are identified from the text, and sentiment analysis is performed using the emotional states in the text together with the emoticons. Such an analysis results in better classification. This paper focuses on clustering words and emoticons to learn the meaning conveyed by the emoticons, and compares the results of sentiment analysis before and after the emoticons are removed from the text.
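The word-emoticon clustering idea described in this abstract can be sketched roughly as follows: words and emoticons are embedded in one vector space from the same corpus, clustered, and each emoticon inherits the polarity of the words in its cluster. The toy tweets, the small polarity lexicon, and all parameters below are assumptions for illustration, not the paper's data or settings.

```python
# Hypothetical sketch: label emoticons by the polarity of the word cluster they fall into.
# Corpus, lexicon and parameters are illustrative assumptions.
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

tweets = [
    ["great", "day", ":)", "happy"],
    ["terrible", "service", ":(", "sad"],
    ["ok", "fine", ":|"],
]
positive_words = {"great", "happy"}
negative_words = {"terrible", "sad"}
emoticons = {":)", ":(", ":|"}

# Embed words and emoticons jointly so they share one vector space.
w2v = Word2Vec(sentences=tweets, vector_size=50, window=3, min_count=1, seed=42)
vocab = list(w2v.wv.index_to_key)
vectors = w2v.wv[vocab]

# Cluster the joint vocabulary, then assign each emoticon the dominant polarity of its cluster.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42).fit(vectors)
for emo in emoticons:
    cluster = kmeans.labels_[vocab.index(emo)]
    members = [w for w, c in zip(vocab, kmeans.labels_) if c == cluster]
    pos = sum(w in positive_words for w in members)
    neg = sum(w in negative_words for w in members)
    label = "positive" if pos > neg else "negative" if neg > pos else "neutral"
    print(emo, "->", label)
```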
