Qualification: Ph.D., M.Tech
j_amudha@blr.amrita.edu

Dr. Amudha J. currently serves as Associate Professor in the Department of Computer Science, Amrita School of Engineering, Bengaluru. Her areas of research include Image Processing, Computer Vision, Soft Computing, Computational Visual Attention Systems, Cognitive Vision, Pattern Recognition, and attention allocation in fields such as marketing and robotics.

A dynamic professional with 20+ years of experience, with a passion for research that serves society and for kindling the interest of the student community by providing a better education platform. She leads a thrust area group named Computational Intelligence and Computer Vision, an interdisciplinary group for researchers interested in Computer Graphics, Computer Vision, Eye Tracking, Image Processing, Pattern Recognition, Machine Learning and Computational Algorithms. She established an eye tracking and computer vision research lab with a team of 10+ doctoral and PG scholars, and has published 70+ research papers in journals and conferences and filed a patent.

Education

  • February 2012: Ph.D., thesis titled “Performance Evaluation of Bottom-up and Top-down Approaches in Computational Visual Attention System”
    From: Amrita Vishwa Vidyapeetham, Coimbatore, India.
  • 2002: M.E. in Computer Science and Engineering
    From: College of Engineering Guindy, Anna University, Chennai, India.
  • 1998: B.E. in Electrical and Electronics Engineering
    From: Amrita Institute of Technology & Science, Coimbatore, India.

Professional Appointments

  • 2012: Associate Professor, Department of CSE, Amrita School of Engineering, Bengaluru
  • 2006: Assistant Professor (Selection Grade), Department of Information Technology, Amrita School of Engineering, Coimbatore
  • 2002: Assistant Professor (Senior Grade), Department of Information Technology, Amrita School of Engineering, Coimbatore
  • 1998: Assistant Professor, Department of Electrical and Electronics Engineering, Amrita School of Engineering, Coimbatore

Research & Management Experience

  • Served as Chairperson of the Computer Science Department (June 2017 – July 2020), which runs undergraduate, postgraduate and doctoral programmes with 700+ students and 30+ faculty.
  • Mentors faculty colleagues and collaborates with other University administrative officers; interprets policy and advocates from the perspective of the best overall interests of the University; leads faculty in important processes that shape the curriculum and have an impact on student learning; and effectively articulates department and college missions to internal and external constituencies. Also drives industrial collaborative research, MoUs between academia and other collaborators, and research projects.
  • Significant role in the Board of Studies: Chair for M.Tech Data Science; Member for M.Tech CSE, B.Tech CSE and B.Tech CSE (AI).
  • Handled many administrative roles, including School-level Committee for UG/PG programmes, Disciplinary Committee Member, Women Grievance Cell Member, UG/PG/PhD Interview Panel, UG/PG/PhD Guidance, Course Committee Chairman, Class Committee Chairman, and NAAC, NBA and IQAC administrative roles.
  • Proposed and initiated a full time M. Tech Programme in Data Science
  • Established labs for the new B.Tech CSE (AI) undergraduate programme.
  • Signed MoUs with GE and Honeywell to support the SLAC Hackathon events in 2019 and 2020.
  • Signed an NDA with Honeywell to support research engagement in AI/ML (Computer Vision, NLP, Time Series, etc.), 5G and Blockchain.
  • Executive Member, BCIC, Bengaluru
  • Member in Committee to deal with complaints on Sexual Harassment at Workplace, CDAC, Bengaluru
  • Mentor for professional societies such as ACM and clubs such as ACROM.
  • Computational Intelligence and Computer Vision Research Group: Leads a thrust area group named Computational Intelligence and Computer Vision, an interdisciplinary group for researchers interested in Computer Graphics, Computer Vision, Eye Tracking, Image Processing, Pattern Recognition, Machine Learning and Computational Algorithms. Guided by the theme “WE ARE WHAT WE SEE”, the team works on understanding how humans visualize and on mimicking this in automation, across sub-areas such as eye gaze tracking for healthcare and e-commerce applications, medical image processing for cancer detection, video analytics for video surveillance, visual sensor networks for smart cities, and data analytics on images and video with deep learning models for various use cases. Ongoing research projects include operator fatigue detection, stress analysis, glaucoma detection, healthcare projects on Parkinson's patients, an eye gaze recommendation system for students/software professionals, a human–computer interface for easy navigation using eye movements, medical image fusion, and deep learning models. The group's expertise spans Artificial Intelligence, Computational Intelligence, Computer Vision, Human Computer Interaction, Eye Tracking and Machine Learning, with 70+ research papers published in journals and conferences and a patent filed.
  • Research Lab Setup for Eye Tracking and Computer Vision: Established an eye tracking lab equipped with an SMI REDn Professional eye tracker and Eye Tribe eye trackers. The lab uses SMI eye tracking data analytics software and open-source platforms, and carries out joint research with healthcare institutions such as NIMHANS, HCG and Narayana Nethralaya.
  • Signed Memoranda of Understanding and working on collaborative research in the healthcare domain with HCG and Narayana Nethralaya, Bangalore, and AIMS, Kochi. ABB Global Industries and Services Private Limited, Bangalore, India selected a research candidate associated with the lab to work on “Eye tracking to understand developer Behavior” under the ABB India Student Intern Program. The research lab also works with industry partners such as Honeywell and Paralaxiom Technologies Private Limited, and has taken up consultancy work in Computer Vision and Natural Language Processing, namely “Optical Character Recognition in the Wild” and “Auto Diagnosis and Auto Description Tool”.
  • Patent: “System and Method for Detection of Features in an Image Using Knowledge of Expert Eye Gaze Pattern”, Application No. 201641037789 A, Publication Date: 2018.

Major Research Interests

  • Artificial Intelligence, Computational Intelligence, Computer Vision, Human Computer Interaction, Eye tracking, Deep Learning, Machine Learning.

Membership in Professional Bodies

  • Computer Society of India
  • ISTE

Certificates, Awards & Recognitions

  • Chartered Management Institute Level 5 Certification in Management and Leadership – Dec 2020
  • Participated as an invitee in the Living Lab Delegation to the Netherlands (18–22 March 2018), visiting academic institutes and Dutch companies to develop project proposals; funded under the Schipol-to-Schipol Programme
  • State-funded project titled “Diagnosis of Reading Disability in Dyslexic Children using Eye Tracking”, KSCST Student Project, Karnataka

Publications

Publication Type: Conference Proceedings

Year of Publication | Title

2021

P. Laxmi, Gupta, D., Radhakrishnan, G., Amudha, J., and Sharma, K., “Automatic Multi-disease Diagnosis and Prescription System Using Bayesian Network Approach for Clinical Decision Making”, Advances in Artificial Intelligence and Data Engineering, vol. 1133. Springer, pp. 393–409, 2021.[Abstract]


A clinical decision support system (CDSS) is used as an aid in decision-making processes of health care providers in their day-to-day activities. This research attempts diagnosis of multiple diseases based on symptoms provided by patients. The work also recommends laboratory tests related to the predicted diseases and medications based on their results. The methodology adopted for implementation of CDSS is the Bayesian network approach. The modeling of the Bayesian network structure was undertaken in consultation with experts from medical domain. Clinical data has been used for estimation of network parameters such as conditional probability tables thereby bringing in machine learning into Bayesian methodology. The model developed is a learning model wherein the system input is saved for future training of the model. The results indicate that Bayesian approach is suitable for implementing a CDSS for multiple disease diagnosis. The proposed work will be useful towards increasing physicians throughput.
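
To make the Bayesian reasoning concrete, the toy sketch below computes P(disease | symptoms) by enumeration; the diseases, symptoms and probability values are invented placeholders and not the clinical model described in the paper.

```python
# Toy Bayesian diagnosis sketch: P(disease | observed symptoms) by enumeration.
# All names and numbers are hypothetical, not taken from the paper's clinical model.

PRIOR = {"flu": 0.10, "dengue": 0.05, "healthy": 0.85}

# P(symptom present | disease), assumed conditionally independent given the
# disease (a naive simplification of a full Bayesian network).
LIKELIHOOD = {
    "flu":     {"fever": 0.90, "rash": 0.05, "fatigue": 0.80},
    "dengue":  {"fever": 0.95, "rash": 0.60, "fatigue": 0.85},
    "healthy": {"fever": 0.02, "rash": 0.01, "fatigue": 0.10},
}

def posterior(observed: dict) -> dict:
    """observed maps symptom name -> True/False; returns P(disease | evidence)."""
    joint = {}
    for disease, prior in PRIOR.items():
        p = prior
        for symptom, present in observed.items():
            p_sym = LIKELIHOOD[disease][symptom]
            p *= p_sym if present else (1.0 - p_sym)
        joint[disease] = p
    norm = sum(joint.values())
    return {d: p / norm for d, p in joint.items()}

if __name__ == "__main__":
    evidence = {"fever": True, "rash": True, "fatigue": False}
    for disease, prob in sorted(posterior(evidence).items(), key=lambda kv: -kv[1]):
        print(f"P({disease} | evidence) = {prob:.3f}")
```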


2021

S. Bhaskaran, Geetika Paul, Dr. Deepa Gupta, and Amudha, J., “Indian Language Identification for Short Text”, Advances in Intelligent Systems and Computing. Springer, pp. 47-58, 2021.[Abstract]


Language Identification is used to categorize the language of a given document. Language Identification categorizes the contents and can have a better search results for a multilingual document. In this work, we classify each line of text to a particular language and focused on short phrases of length 2 to 6 words for 15 Indian languages. It detects that a given document is in multilingual and identifies the appropriate Indian languages. The approach used is the combination of n-gram technique and a list of short distinctive words. The n-gram model applied is language independent whereas short word method uses less computation. The results show the effectiveness of our approach over the synthetic data.
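
As a rough illustration of the character n-gram idea (not the paper's exact ranking scheme, corpus or set of 15 Indian languages), the sketch below builds trigram profiles per language from placeholder training strings and scores a short phrase by trigram overlap.

```python
# Character n-gram language identification sketch.
# Training strings and language labels are toy placeholders, not the paper's corpus.
from collections import Counter

def ngrams(text: str, n: int = 3) -> Counter:
    text = f"  {text.lower()}  "                  # pad so word boundaries form n-grams
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

TRAIN = {
    "english": "this is a short english sentence for the toy profile",
    "hindi_translit": "yah ek chhota sa vakya hai bhasha pehchan ke liye",
}
PROFILES = {lang: ngrams(text) for lang, text in TRAIN.items()}

def identify(phrase: str) -> str:
    query = ngrams(phrase)
    scores = {}
    for lang, profile in PROFILES.items():
        # Overlap score: sum of min counts of shared trigrams (rank-order or
        # cosine distances are common alternatives).
        scores[lang] = sum(min(c, profile[g]) for g, c in query.items())
    return max(scores, key=scores.get)

if __name__ == "__main__":
    print(identify("ek chhota vakya"))      # expected: hindi_translit
    print(identify("a short sentence"))     # expected: english
```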


2020

K. A and Amudha, J., “Machine Learning and Metaheuristics Algorithms, and Applications”, Metaheuristics Algorithms, and Applications. SoMMA 2019. Communications in Computer and Information Science, vol. 1203. 2020.

2020

A. Shrivastava, Amudha, J., Gupta, D., and Sharma, K., “Deep Learning Model for Text Recognition in Images”, 2019 10th International Conference on Computing, Communication and Networking Technologies (ICCCNT). IEEE, Kanpur, India, 2020.[Abstract]


Computer Vision and its applications are the core of industry digitization which is known as industry 4.0. For automating a process, texts embedded in images are considered as good source of information about that object. Reading text from natural images is still a challenging problem because of complicated background, size and space variations, irregular arrangements of texts. Detection and Recognition are the main stages of reading texts in the wild. In last few years, many researchers have provided many methods for recognizing texts in images. These methods have fine results on horizontal texts only but not on irregular arrangements of texts. This paper mainly focuses on deep learning model for text recognition in images (DL-TRI). The model addresses various cases of curved and perspective fonts and hard to recognize due to complex background.


2020

S. Tamuly, Jyotsna C, and Amudha, J., “Tracking Eye Movements To Predict The Valence of A Scene”, 2019 10th International Conference on Computing, Communication and Networking Technologies (ICCCNT). IEEE, Kanpur, India, 2020.[Abstract]


Studying human bio signals such as eye movements and tracking them can help in identifying and classifying the emotional essence of a scene. The existing methods employed to evaluate the reaction of the eyes based on exposure to a scene or image often use a classifier to extract features from eye movements. These extracted features are then evaluated to determine the valence of a scene. On the contrary, as much as eye movement has proved to be a reliable source in scene or image detection, factors such as how each feature affects the outcome of the prediction have not been explored. For the determination of the emotional category of images using eye movements, images are categorized into pleasant, neutral and unpleasant images and then these images are shown to the test subjects to record their response. Features of eye movement like fixation count, fixation frequency, saccade count, and saccade frequency among others, along with a machine learning approach was used for scene classification.
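
A minimal sketch of the feature-plus-classifier pipeline mentioned above, using scikit-learn on synthetic placeholder features (fixation count/frequency, saccade count/frequency); the study itself used features extracted from recorded eye movements and its own labels.

```python
# Sketch: classify scene valence from eye-movement features with scikit-learn.
# The feature matrix is synthetic; the study used features computed from real
# eye-tracking recordings (fixation count/frequency, saccade count/frequency, ...).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 300
X = np.column_stack([
    rng.poisson(20, n),        # fixation count
    rng.uniform(1, 4, n),      # fixation frequency (per second)
    rng.poisson(15, n),        # saccade count
    rng.uniform(1, 3, n),      # saccade frequency (per second)
])
y = rng.integers(0, 3, n)      # 0 = unpleasant, 1 = neutral, 2 = pleasant (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("accuracy on synthetic data:", accuracy_score(y_te, clf.predict(X_te)))
```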


2020

G. Karthik, Amudha, J., and Jyotsna C, “A Custom Implementation of the Velocity Threshold Algorithm for Fixation Identification”, 2019 International Conference on Smart Systems and Inventive Technology (ICSSIT). IEEE, Tirunelveli, India, pp. 488-492, 2020.[Abstract]


Identifying fixations and saccades is an essential component of eye movement data analysis. There exist many algorithms that are employed to find fixations and saccades. In this paper we study the Velocity-Threshold Identification algorithm and implement it as an open source code for a specific dataset. A modified version of the algorithm was implemented and the number of fixations were compared with those reported by running a commercial software application. It was observed that the results are similar with an average difference of only 6 fixations per stimulus. The open source implementation also has a module to identify the centroids of fixation groups. This research aims to provide the open source community the ability to analyse an eye-tracking dataset without the use of any commercial software application.
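
The velocity-threshold idea summarized above can be sketched as follows; the sampling rate, pixel-based velocity units and threshold are illustrative choices, not the settings used for the paper's dataset (which worked in the tracker's own units).

```python
# Velocity-threshold (I-VT) fixation identification sketch.
# Gaze samples are (x, y) screen coordinates at a fixed sampling rate; real
# implementations usually threshold in degrees of visual angle per second.
import math

def ivt(samples, sampling_rate_hz=60.0, velocity_threshold=1000.0):
    """Label each sample 'fixation' or 'saccade' and return fixation centroids."""
    dt = 1.0 / sampling_rate_hz
    labels = []
    for (x0, y0), (x1, y1) in zip(samples, samples[1:]):
        velocity = math.hypot(x1 - x0, y1 - y0) / dt      # px/s in this sketch
        labels.append("fixation" if velocity < velocity_threshold else "saccade")
    labels.append(labels[-1] if labels else "fixation")   # carry label to last sample

    # Group consecutive fixation samples and compute their centroids.
    centroids, group = [], []
    for point, label in zip(samples, labels):
        if label == "fixation":
            group.append(point)
        elif group:
            centroids.append((sum(p[0] for p in group) / len(group),
                              sum(p[1] for p in group) / len(group)))
            group = []
    if group:
        centroids.append((sum(p[0] for p in group) / len(group),
                          sum(p[1] for p in group) / len(group)))
    return labels, centroids

if __name__ == "__main__":
    gaze = [(100, 100), (101, 102), (99, 101), (400, 300), (401, 298), (402, 301)]
    labels, centroids = ivt(gaze)
    print(labels)
    print(centroids)
```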


2020

T. Shravani, Sai, R., M. Shree, V., Amudha, J., and Jyotsna C, “Assistive Communication Application for Amyotrophic Lateral Sclerosis Patients”, Advances in Intelligent Systems and Computing book series (AISC), vol. 1118. Springer International Publishing, Cham, pp. 1397-1408, 2020.[Abstract]


Eye tracking is one of the advanced techniques to enable the people with Amyotrophic lateral Sclerosis and other locked-in diseases to communicate with normal people and make their life easier. The software application is an assistive communication tool designed for the ALS Patients, especially people whose communication is limited only to eye movements. Those patients will have difficulty in communicating their basic needs. The design of this system's Graphical User Interface is made in such a way that it can be used by physically impaired people to convey their basic needs through pictures using their eye movements. The prototype is user friendly, reliable and performs simple tasks in order the paralyzed or physically impaired person can have an easy access to it. The application is tested and the results are evaluated accordingly.


2020

S. Tandra, Gupta, D., Amudha, J., and Sharma, K., “A Fuzzy-Neuro-Based Clinical Decision Support System For Disease Diagnosis Using Symptom Severity”, Advances in Intelligent Systems and Computing, vol. 1118. Springer, pp. 81-98, 2020.[Abstract]


Faster and accurate disease diagnosis is the need of the day. Various diagnostic tools are available to assist medical practitioners in the form of clinical decision support system (CDSS) and many more. This paper proposes to develop a CDSS that can assist medical practitioners with diagnostic decisions in general internal medicine for common diseases like malaria, typhoid, dengue which when ignored can cause epidemics. The proposed system aims at multi-disease diagnosis. Symptoms along with their severity are the input to the system. Most probable disease along with medication is the output of the system. The proposed system is modeled on neuro-fuzzy technique called adaptive neuro-fuzzy inference system (ANFIS) for disease diagnosis. Gaussian membership function is used as the fuzzifier, and custom defuzzifier is used to defuzzify the output. A rule-based system is used for medication and laboratory test recommendations. The proposed medical decision support system can aid medical practitioners in making better, effective, and faster diagnostic decisions, thereby helping in increasing the in-patient count and quality of medical care.
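
To illustrate the Gaussian fuzzification stage mentioned in the abstract, the sketch below assigns fuzzy memberships to a symptom-severity score and fires one toy rule; the membership centres, widths and the rule are invented for illustration, not the paper's tuned ANFIS model.

```python
# Gaussian membership sketch for symptom severity, illustrating the fuzzification
# stage of a fuzzy/neuro-fuzzy CDSS. Centres, widths and the rule are placeholders.
import math

def gaussian_mf(x: float, centre: float, sigma: float) -> float:
    return math.exp(-((x - centre) ** 2) / (2.0 * sigma ** 2))

# Severity is rated on a 0-10 scale; three fuzzy sets per symptom.
SETS = {"mild": (2.0, 1.5), "moderate": (5.0, 1.5), "severe": (8.0, 1.5)}

def fuzzify(severity: float) -> dict:
    return {name: gaussian_mf(severity, c, s) for name, (c, s) in SETS.items()}

if __name__ == "__main__":
    fever, headache = fuzzify(7.5), fuzzify(3.0)
    # One toy rule: IF fever is severe AND headache is mild THEN suspicion of dengue,
    # using min() as the AND operator (product is another common choice).
    firing_strength = min(fever["severe"], headache["mild"])
    print({"fever": fever, "headache": headache})
    print("rule firing strength (toy 'dengue' rule):", round(firing_strength, 3))
```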


2020

K. Anusree and Amudha, J., “Eye Movement Event Detection with Deep Neural Networks”, Advances in Intelligent Systems and Computing, vol. 1108. Springer, pp. 921-930, 2020.[Abstract]


This paper presents a comparison of the event detection task in eye movements with the exact events recorded from an eye tracking device. The primary goal of this research work is to build a general approach for eye-movement-based event detection which will work with eye tracking data collected using different eye tracking devices. It utilizes an end-to-end method based on deep learning, which can efficiently use raw eye tracking data that is further grouped into saccades, post-saccadic oscillations and fixations. The drawback of the deep learning method is that it requires a lot of preprocessed data. We therefore first develop a strategy to augment hand-coded data, so that the dataset used for training can be enlarged while limiting the time a human spends on coding. Using this extended hand-coded data, we train a neural network model to produce eye-movement fixation groupings from eye-movement data without any previously defined extraction or post-processing steps.


2020

C. V. Amrutha, Jyotsna C, and Amudha, J., “Deep learning approach for suspicious activity detection from surveillance video”, 2020 2nd International Conference on Innovative Mechanisms for Industry Applications (ICIMIA). IEEE, Bangalore, India, 2020.[Abstract]


Video Surveillance plays a pivotal role in today's world. The technologies have been advanced too much when artificial intelligence, machine learning and deep learning pitched into the system. Using above combinations, different systems are in place which helps to differentiate various suspicious behaviors from the live tracking of footages. The most unpredictable one is human behaviour and it is very difficult to find whether it is suspicious or normal. Deep learning approach is used to detect suspicious or normal activity in an academic environment, and which sends an alert message to the corresponding authority, in case of predicting a suspicious activity. Monitoring is often performed through consecutive frames which are extracted from the video. The entire framework is divided into two parts. In the first part, the features are computed from video frames and in second part, based on the obtained features classifier predict the class as suspicious or normal.


2020

S. S and Amudha, J., “Visual Question Answering Models Evaluation”, International Conference for Emerging Technology, INCET 2020. 2020.

2020

N. Gouda and Amudha, J., “Skin Cancer Classification using ResNet”, 2020 IEEE 5th International Conference on Computing Communication and Automation (ICCCA). IEEE, Greater Noida, India, 2020.[Abstract]


Since skin disease is one of the most well-known human ailments, intelligent systems for classification of skin maladies have become another line of research in deep learning, which is of great importance for dermatologists. The exact recognition of the infection is very challenging due to the complexity of skin texture and the visual closeness of the disease. Skin images are filtered to discard undesirable noise and further processed for improvement of the picture. We have used 25,331 clinical skin-disease images for training, covering lesions of eight categories at different anatomic sites, and 8,238 images for testing. The classifier was utilized for categorization of skin lesions such as Vascular lesion, Melanoma, Basal cell carcinoma, Melanocytic nevus, Actinic keratosis, Benign keratosis, Dermatofibroma and Squamous cell carcinoma. A Residual Neural Network (ResNet), a type of deep learning neural network, is utilized for classification of the images and provides the diagnosis report as a confidence score with high accuracy. ResNet is used to make the training process faster by skipping identical layers, giving an effective improvement in the training process at every successive layer. Analysis of this investigation can help specialists in advance diagnosis, to know the kind of infection and begin with any treatment if required.
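
A minimal PyTorch/torchvision sketch of a ResNet-based classifier head for eight lesion classes, in the spirit of the abstract; the backbone choice, preprocessing and training step below are illustrative, not the exact configuration reported in the paper.

```python
# Sketch: ResNet-based skin-lesion classifier head for 8 categories (illustrative
# setup; not the exact architecture or training recipe reported in the paper).
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_CLASSES = 8  # e.g. melanoma, basal cell carcinoma, ... (placeholder mapping)

# weights=None gives random initialisation; in practice pretrained ImageNet weights
# (e.g. models.ResNet50_Weights.DEFAULT, or pretrained=True on older torchvision)
# would normally be loaded before fine-tuning.
model = models.resnet50(weights=None)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Preprocessing that would be applied to real PIL images before inference/training.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimisation step on a batch of already-preprocessed images."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    dummy_images = torch.randn(4, 3, 224, 224)      # stand-in for a real batch
    dummy_labels = torch.randint(0, NUM_CLASSES, (4,))
    print("loss on dummy batch:", train_step(dummy_images, dummy_labels))
```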


2020

R. Aarthi and Amudha, J., “Study on Computational Visual Attention System and Its Contribution to Robotic Cognition System”, 2020 4th International Conference on Trends in Electronics and Informatics (ICOEI). pp. 278-283, 2020.

2020

Jyotsna C, SaiMounica, M., Manvita, M., and Amudha, J., “Low Cost Eye Gaze Tracker Using Web Camera”, 3rd International Conference on Computing Methodologies and Communication [ICCMC 2019]. IEEE, Surya Engineering College, Erode , pp. 79-85., 2020.[Abstract]


Traditional gaze tracking systems are invasive, expensive, or rely on non-standard hardware. To address this problem, research has focused on gaze tracking systems built from simple hardware. We propose a system which uses a web camera and the free open-source computer vision library OpenCV. Gaze estimation plays a major role in predicting human attention and understanding human activities. It is also used in market analysis, gaze-driven interactive displays, medical research, usability research, packaging research, gaming research, psychology research, and other human–machine interfaces. This paper describes how to develop a low-cost eye gaze system using a simple web camera. The system captures real-time video of the person to detect the eyes in the initial frames and extract the features of the eyes. Once the features are extracted, the eyes are tracked in the subsequent frames and the gaze direction is estimated using computational intelligence techniques.
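
A minimal OpenCV sketch along the lines described above: Haar-cascade eye detection from a webcam feed. The gaze-estimation step itself is only indicated by a comment, since the paper's calibration and computational-intelligence mapping are not reproduced here; the default webcam index 0 is an assumption.

```python
# Minimal webcam eye-detection loop with OpenCV Haar cascades.
# This only locates the eyes; mapping pupil position to an on-screen gaze point
# (the estimation step of the paper) is indicated by the comment below.
import cv2

eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
capture = cv2.VideoCapture(0)                     # default webcam (assumed index)

while True:
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in eyes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        # A real tracker would segment the pupil inside this region and map its
        # offset to screen coordinates after a calibration phase.
    cv2.imshow("eyes", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

capture.release()
cv2.destroyAllWindows()
```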


2019

Y. Navya, SriDevi, S., Akhila, P., Amudha, J., and Jyotsna C, “Third Eye: Assistance for Reading Disability”, International Conference on Soft Computing & Signal Processing. Springer Singapore, Singapore, 2019.[Abstract]


Around 5–20% of students across the world have learning difficulties which occur due to several reasons. Dyslexia is the most common form of learning difficulty associated with difficulties in reading, writing, spelling and organization. Researchers show that the eye movements can reflect the difficulties faced by an individual while reading. Third eye uses eye gaze information from the eye tracker and analyzes it to extract features which can identify the reading skills of the subject and further broadly categorizes them based on their reading skills. An automated report is generated which depicts the areas of difficulty faced by the reader through various visualization techniques. A correlation study done between normal readers and dyslexic readers showcases significant differences in their eye movements in terms of fixation duration, fixation count and regressions. This platform can be used to identify the reading skills of the student which improves their self-confidence and motivates them toward further augmentation.


2019

S. Tamuly, Jyotsna C, and Amudha, J., “Deep Learning Model for Image Classification”, Advances in Intelligent Systems and Computing, vol. 1108. Springer International Publishing, Cham, 2019.[Abstract]


Starting from images captured on mobile phones, advertisements popping up on our internet browser to e-retail sites displaying flashy dresses for sale, every day we are dealing with a large quantity of visual information and sometimes, finding a particular image or visual content might become a very tedious task to deal with. By classifying the images into different logical categories, our quest to find an appropriate image becomes much easier. Image classification is generally done with the help of computer vision, eye tracking and ways as such. What we intend to implement in classifying images is the use of deep learning for classifying images into pleasant and unpleasant categories. We proposed the use of deep learning in image classification because deep learning can give us a deeper understanding as to how a subject reacts to a certain visual stimuli when exposed to it.


2019

K. Hena, Amudha, J., and Aarthi, R., “A Dynamic Object Detection In Real-World Scenarios”, Proceedings of International Conference on Computational Intelligence and Data Engineering (Lecture Notes on Data Engineering and Communications Technologies), vol. 28. Springer Singapore, Singapore, pp. 231-240., 2019.[Abstract]


Object recognition is one of the most challenging tasks in computer vision, especially in the case of real-time robotic object recognition scenes where it is difficult to predefine an object and its location. To address this challenge, we propose an object detection method that can adaptively learn objects independent of the environment, by enhancing the relevant features of the object and suppressing other irrelevant features. The proposed method has been modeled to learn the association of features from the given training dataset. A dynamic evolution of neuro-fuzzy inference system (DENFIS) model has been used to generate rules from the clusters formed from the dataset. The validation of the model has been carried out on various datasets created from real-world scenarios. The system is capable of locating the target regardless of scale, illumination variance, and background.


2019

Sowmyasri, Ravalika, R., Jyotsna C, and Amudha, J., “An Online Platform for Diagnosis of Children with Reading Disability”, 3rd International Conference on Computational Vision and Bio Inspired Computing, (ICCVBIC 2019) (Advances in Intelligent Systems and Computing), vol. 1108. Springer, RVS Technical Campus, Coimbatore, pp. 645-655, 2019.[Abstract]


Reading disability is a condition in which a person has difficulty in reading. There are different types of reading disorders, such as dyslexia and alexia. Recent research has identified that one of the causes of this disability could be hereditary factors. Timely detection and proper treatment help in improving the reading ability of a dyslexic child. The proposed method provides a user-friendly approach for analyzing a child's dyslexic disorder level using an eye tracker. The proposed module uses a set of input stimuli to analyze all the parameters that could identify the stage of disability in a child. At the end of the experiment, a dashboard shows the results, providing detailed information for the doctor to decide the treatment that could help in reducing the child's reading disability. The proposed system increases the understandability of a child's dyslexic disorder levels as it displays all the parameters that are measured to test the level of disorder in the form of a dashboard through a web page.


2019

L. M. Pavani, Prakash, A. V. Bhanu, Koushik, M. S. Shwetha, Amudha, J., and Jyotsna C, “Navigation through eye-tracking for human–computer interface”, (3rd International Conference on Information and Communication Technology for Intelligent Systems ) Smart Innovation, Systems and Technologies , vol. 107. Springer Science and Business Media Deutschland GmbH, pp. 575-586, 2019.[Abstract]


Vision Buy is a productive tool in enhancing the user experience of e-commerce Web sites. The consumer has the ability to purchase anything at anytime from anywhere through their visual attention and eye movements. The process of analysis typically involves examining the characteristics and patterns of visual attention during the online shopping process. The selection of the product based on the consumer’s eye movements is done by adopting the principle of attention distribution, which refers to the percentage of time visually allocated to each category of product available on the webpage. This data will be analyzed based on gaze points, and based on these selections, it enables users to navigate through the webpages. Thus, it is an ensuing product for any e-commerce web application. © Springer Nature Singapore Pte Ltd. 2019.


2019

K. Kavikuil and Amudha, J., “Leveraging deep learning for anomaly detection in video surveillance”, Advances in Intelligent Systems and Computing, vol. 815. Springer Verlag, pp. 239-247, 2019.[Abstract]


Anomaly detection in video surveillance data is very challenging due to large environmental changes and human movement. Additionally, high dimensionality of video data and video feature representation adds to these challenges. Many machine learning algorithms failed to show accurate results and it is time consuming in many cases. The semi supervised nature of deep learning algorithms aids in learning representations from the video data instead of hand crafting the features for specific scenes. Deep learning is applied to handle complicated anomalies to improve the accuracy of anomaly detection due to its efficiency in feature learning. In this paper, we propose an efficient model to predict anomaly in video surveillance data and the model is optimized by tuning the hyperparameters. © Springer Nature Singapore Pte Ltd. 2019.


2019

B. Gottimukkala, Praveen, M. P., P. Amruta, L., and Amudha, J., “Semi-automatic annotation of images using eye gaze data (SAIGA)”, Advances in Intelligent Systems and Computing, vol. 815. Springer Verlag, pp. 175-185, 2019.[Abstract]


Eye gaze tracking is based on pupil movement and is an effective medium for human–computer interaction. This field is utilized in several ways and is gaining popularity due to its increased ease of use and improved accuracy. The main objective of this paper is to present a framework that would assuage the burden of image annotations and make it more interactive. Images are annotated by physically describing their metadata. The current system gives the user 100% freedom to label the images at his/her discretion but is very tedious and time consuming. Here, we propose semi-automatic annotation of images using eye gaze data (SAIGA), an approach that would assist in using the eye gaze data to annotate images. SAIGA—the proposed framework shows how time and physical efforts spent on manual annotation can be bettered by a large value. © Springer Nature Singapore Pte Ltd. 2019.


2018

B. Murugaraj and Amudha, J., “Performance Assessment Framework for Computational Models of Visual Attention”, Intelligent Systems Technologies and Applications (Advances in Intelligent Systems and Computing), vol. 683. Springer International Publishing, Cham, pp. 345-355, 2018.[Abstract]


This paper presents performance framework for computational model of visual attention, a software package, written using python scripting language, developed for the real-time comparison of computational model with human fixations. The performance framework was developed for real-time processing of eye trackers recorded data, analyzing them to generate fixation map, and comparing the fixation map to a saliency model got by running a configured computational model either in bottom-up or top-down mode. The framework is designed such that added modules can be extended for various experiment processing as required by the researcher. The framework encompasses the main connection to eye tracker to collect the raw data that will have observers eye coordinates and duration, it has analysis model to analyze the model and providing methods of visualization like fixation, heatmap and scanpath, it also has a computational model that predicts the fixation on the given image stimulus, finally the platform compares the fixation and saliency map to assess the accuracy of the prediction. All the functions of the framework can be controlled by using the graphical user interfaces.


2018

Rahul Ramanathan, Bhaskaran, S., Amudha, J., and Dr. Deepa Gupta, “Multilingual Text Detection and Identification from Indian Signage Boards”, International Conference on Advances in Computing, Communications and Informatics (ICACCI). PES, Bengaluru, 2018.

2018

Jyotsna C and Amudha, J., “Eye Gaze as an Indicator for Stress Level Analysis in Students”, International Conference on Advances in Computing, Communications and Informatics (ICACCI). PES, Bengaluru, 2018.[Abstract]


Stress is the most common cause of ill health in our society. Intense psychological pressure can affect the health of a person. Higher levels of anxiety, depression, and stress-related problems are observed in students than in the past. Stress in studies is one of the reasons for committing suicide. Timely detection and proper counseling can help them to manage their stress. Intense use of digital devices is a common cause of eye fatigue. Students' eye gaze parameters are recorded using an eye tracker. The proposed method provides a user-friendly approach for analyzing students' stress level using an eye tracker. It uses a set of stimuli to analyze whether the stress level increases with respect to the increase in cognitive load. At the end of the experiment, an eye fatigue detection test is conducted for the detection of eye fatigue. The comfort zone of the student is verified by filling in a questionnaire at the end.


2018

S. Uday, Jyotsna C, and Amudha, J., “Detection of Stress using Wearable Sensors in IoT Platform”, 2nd International Conference on Inventive Communication and Computational Technologies(ICICCT 2018). Hotel Arcadia, Coimbatore, 2018.

2018

A. Malathi, Amudha, J., and Narayana, P., “A prototype to detect anomalies using machine learning algorithms and deep neural network”, Lecture Notes in Computational Vision and Biomechanics, vol. 28. Springer Netherlands, pp. 1084-1094, 2018.[Abstract]


Artificial Intelligence is making a huge impact nowadays in almost all applications. It is all about instructing a machine to perceive an object like a human. Making the machine excel at this perception requires training it by feeding a large number of examples. In this way, machine learning algorithms find many applications for real-time problems. In the last five years, Deep Learning techniques have transfigured the fields of machine learning, data mining and big data analytics. This research investigates the presence of anomalies in given data. The model can be used as a prototype and applied in domains like finding abnormalities in medical tests, segregating fraudulent applications in banking and insurance records, monitoring IT infrastructure, observing energy consumption, and vehicle tracking. The concepts of supervised and unsupervised learning are used with the help of machine learning algorithms and deep neural networks. Even though machine learning algorithms are effective for classification problems, the data in question determines the efficiency of these algorithms. It has been shown in this paper that traditional machine learning algorithms fail when the data is highly imbalanced, necessitating the use of deep neural networks. One such neural network, a deep autoencoder, is used to detect anomalies present in a large, heavily biased data set. The results derived from this study proved to be very accurate.
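
A compact Keras sketch of reconstruction-error anomaly scoring with an autoencoder, in the spirit of the abstract; the layer sizes, threshold choice and synthetic data are placeholders rather than the paper's configuration.

```python
# Autoencoder anomaly-detection sketch: train on (mostly) normal data, then flag
# points whose reconstruction error exceeds a threshold. Sizes/data are placeholders.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(1000, 20)).astype("float32")
outliers = rng.normal(6.0, 1.0, size=(20, 20)).astype("float32")

autoencoder = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    layers.Dense(8, activation="relu"),     # encoder bottleneck
    layers.Dense(20),                       # decoder back to input dimension
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(normal, normal, epochs=20, batch_size=32, verbose=0)

def reconstruction_error(x: np.ndarray) -> np.ndarray:
    recon = autoencoder.predict(x, verbose=0)
    return np.mean((x - recon) ** 2, axis=1)

# Simple cutoff choice: 99th percentile of the training reconstruction error.
threshold = np.percentile(reconstruction_error(normal), 99)
print("flagged outliers:", int(np.sum(reconstruction_error(outliers) > threshold)),
      "of", len(outliers))
```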


2018

K. R. Chandrika, Amudha, J., and Sudarsan, S. D., “Recognizing eye tracking traits for source code review”, IEEE International Conference on Emerging Technologies and Factory Automation, ETFA, vol. Part F134116. Institute of Electrical and Electronics Engineers Inc., pp. 1-8, 2018.[Abstract]


Source code review is a core activity in software engineering where a reviewer examines the code with the intention of finding bugs in the program. A lot of research has been carried out in understanding how software engineers perform code comprehension; however, the contribution of eye tracking traits seems not to have been addressed. This paper outlines a study conducted in an industrial environment of software engineers. It focused on understanding the visual attention of subjects with and without programming skills and recognizing the eye tracking traits required for source code review. The results indicate a significant difference in the gaze behaviors of these groups: subjects with programming skills exhibit eye tracking traits such as better code coverage and attention on error lines and comments during source code review. ©2017 IEEE.


2017

, Radha D., and Amudha, J., “Effectual Training For Object Detection Using Eye Tracking Data Set”, 2nd International conference on Inventive Computation Technologies(ICICT-2017). Coimbatore, 2017.

2017

J. Amudha and Radha D., “Optimization of Rules in Neuro-Fuzzy Inference Systems”, International Conference on Computational Vision and Bio-inspired Computing (ICCVBIC 2017). Inventive Research Organization and RVS Technical Campus, Coimbatore, 2017.

2017

P. Salunkhe, Bhaskaran, S., Amudha, J., and Dr. Deepa Gupta, “Recognition of Multilingual Text from Signage Boards”, 6th International Conference on Advances in Computing, Communications & Informatics (ICACCI’17), . Manipal University, Karnataka , 2017.

2017

D. P. Pragna, Dandu, S., Meenakzshi, M., Jyotsna C, and Amudha, J., “Health alert system to detect oral cancer”, Proceedings of the International Conference on Inventive Communication and Computational Technologies, ICICCT 2017. Institute of Electrical and Electronics Engineers Inc., pp. 258-262, 2017.[Abstract]


Cancer is a deadly disease in which the body's cells start dividing enormously and are able to spread into other tissues. Oral cancer is a kind of cancer where abnormal lesions or patches appear in the oral cavity. Since it is difficult to identify in the initial stages, it has one of the worst survival rates. The proposed health alert system can help patients in identifying the disease in the initial stage itself. It accepts the Computerized Tomography (CT) scanned images of the cancer-affected region and can detect the presence of malignancy. The obtained CT image is preprocessed using an Adaptive Median Filter, and features such as Texture, Shape, Water Content, Linear Binary Pattern (LBP), Histogram of Oriented Gradients (HOG) and Gray Level Co-occurrence Matrix (GLCM) are extracted from the preprocessed images. The redundant features are omitted using a feature selection process, and the Support Vector Machine (SVM) classification algorithm is used to classify the lesion as benign or malignant. The proposed Health Alert system has an accuracy of 97%.


2017

S. Lakshmi Sadasivam and Amudha, J., “System Design for Tackling Blind Curves”, Proceedings of International Conference on Computer Vision and Image Processing: CVIP 2016, Volume 1. Springer Singapore, Singapore, pp. 69–77, 2017.[Abstract]


Driving through blind curves, especially in mountainous regions or through roads that have blocked visibility due to the presence of natural vegetation or buildings or other structures is a challenge because of limited visibility. This paper aims to address this problem by the use of surveillance mechanism to capture the images from the blind side of the road through stationary cameras installed on the road and provide necessary information to drivers approaching the blind curve on the other side of the road, thus cautioning about possible collision. This paper proposes a method to tackle blind curves and has been studied for various cases.


2016

D. Venugopal, Amudha, J., and Jyotsna C, “Developing an application using eye tracker”, 2016 IEEE International Conference on Recent Trends in Electronics, Information Communication Technology (RTEICT). IEEE, 2016.[Abstract]


Eye tracking measures where the eye is focused or the movement of eye with respect to the head. The eye tracker will track the eye positions and eye movement for the visual stimulus presented on the computer system. Various features like gaze point, pupil size and mouse position can be extracted and it can be represented using visualization techniques such as fixation, saccade, scanpath and heat map. The features obtained from eye tracker can be extended to real life applications. Using this technology companies could be able to analyze thousands of customer's eye patterns in real-time, and make decisions on marketing based on the data. The technology can analyze the stress level of patients, employees in IT, BPO, accounting, banking, front office etc. Here we are illustrating the advantages and applications of eye tracking, its usability and how to develop an application using a commercial eye tracker.


2016

J. Amudha, S. Reddy, R., and Y. Reddy, S., “Blink Analysis using Eye gaze tracker”, Intelligent Systems Technologies and Applications 2016. Springer International Publishing, Cham, pp. 237–244, 2016.[Abstract]


An involuntary action of opening and closing the eye is called blinking. In the proposed work, blink analysis has been performed on different persons performing various tasks. The experimental suite for this analysis is based on the eye gaze coordinate data obtained from a commercial eye gaze tracker. The raw data is processed through an FSM (Finite State Machine) modeled to detect the opening and closing states of an eye. The blink rate of a person varies while performing tasks like talking, resting and reading. The results indicate that a person tends to blink more while talking when compared to reading and resting. An important observation from the analysis is that a person tends to blink more if he/she is stressed.
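
The finite-state-machine idea can be sketched as a two-state open/closed machine over per-frame eye-detection flags; the validity signal and the minimum-closed-frames parameter are hypothetical stand-ins for the commercial tracker's raw gaze coordinate stream used in the paper.

```python
# Two-state FSM blink counter over per-sample eye "openness" flags, as a rough
# stand-in for the paper's FSM over eye-tracker coordinate data.

def count_blinks(valid_samples, min_closed_frames=2):
    """valid_samples[i] is True when the eye is detected (open) in frame i.
    A blink is counted when the eye stays 'closed' for at least
    min_closed_frames consecutive frames and then reopens."""
    state = "open"
    closed_run = 0
    blinks = 0
    for is_open in valid_samples:
        if state == "open":
            if not is_open:
                state, closed_run = "closed", 1
        else:  # state == "closed"
            if is_open:
                if closed_run >= min_closed_frames:
                    blinks += 1
                state = "open"
            else:
                closed_run += 1
    return blinks

if __name__ == "__main__":
    # 1 = open, 0 = closed (toy sequence containing two blinks and one short dropout)
    stream = [1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1]
    print("blinks:", count_blinks([bool(v) for v in stream]))
```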


2016

J. Amudha and Chandrika, K. R., “Suitability of Genetic Algorithm and Particle Swarm Optimization for Eye Tracking System”, 2016 IEEE 6th International Conference on Advanced Computing (IACC). IEEE, pp. 256-261, 2016.[Abstract]


Evolutionary algorithms provide solutions to optimization problems, and their suitability for eye tracking is explored in this paper. We compare the evolutionary methods Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) using deformable template matching for eye tracking. Here we address various eye tracking challenges like head movements, eye movements, eye blinking and zooming that affect the efficiency of the system. GA- and PSO-based eye tracking systems are presented for real-time video sequences. Eye detection is done by Haar-like features. For eye tracking, GAET and PSOET use deformable template matching to find the best solution. The experimental results show that PSOET achieves a tracking accuracy of 98% in less time. The GAET-predicted eye has a high correlation to the actual eye, but the tracking accuracy is only 91%.


2015

J. Tressa Jose, Amudha, J., and Sanjay, G., “A Survey on Spiking Neural Networks in Image Processing”, Advances in Intelligent Informatics. Springer International Publishing, Cham, pp. 107–115, 2015.[Abstract]


Spiking Neural Networks are the third generation of Artificial Neural Networks and is fast gaining interest among researchers in image processing applications. The paper attempts to provide a state-of-the-art of SNNs in image processing. Several existing works have been surveyed and the probable research gap has been exposed.


2015

J. Amudha, Radha D., and Deepa, A. S., “Comparative Study of Visual Attention Models with Human Eye Gaze in Remote Sensing Images”, Proceedings of the Third International Symposium on Women in Computing and Informatics. ACM, New York, NY, USA, 2015.

2015

R. Aarthi, Amudha, J., and Priya, U., “A Generic Bio-inspired Framework for Detecting Humans Based on Saliency Detection”, Artificial Intelligence and Evolutionary Algorithms in Engineering Systems: Proceedings of ICAEES 2014, Volume 2. Springer India, New Delhi, pp. 655–663, 2015.[Abstract]


Even with all its advancement in technology, a computer vision system cannot compete with nature's gift, the brain, which organizes objects quickly and extracts the necessary information from huge data. A bio-inspired feature selection method is proposed for detecting humans using saliency detection. It is performed by tuning prominent features such as color, orientation, and intensity in a bottom-up approach to locate the probable candidate regions of humans in an image. Further, the results are improved in the detection phase, which makes use of weights learned from training samples to ignore non-human regions among the candidate regions. The overall system has an accuracy rate of 90% for detecting the human region.


2015

J. Amudha and Arpita, P., “Multi-Camera Activation Scheme for Target Tracking with Dynamic Active Camera Group and Virtual Grid-Based Target Recovery”, Procedia Computer Science, vol. 58. Elsevier, pp. 241–248, 2015.[Abstract]


Camera sensor activation schemes are essential for optimizing the usage of resources in wireless visual sensor networks. In this regard, an efficient camera sensor activation scheme which accounts for fast moving targets is proposed. This is achieved by adapting the number of cameras involved in tracking the target, based on the target's speed. To reduce the target miss rate, a virtual grid-based target recovery scheme is proposed, which attempts to re-locate the target in the case of a target miss. Simulations show that the proposed activation scheme gives a considerable reduction in target miss rate compared to an existing scheme which is based on observation correlation coefficient between camera sensors.


2015

R. Aarthi and Amudha, J., “Saliency based Modified Chamfers Matching Method for Sketch based Image Retrieval”, Innovations in Information, Embedded and Communication Systems (ICIIECS), 2015 International Conference on. IEEE, Karpagam College of Engineering, Coimbatore, India, 2015.

2015

J. Amudha, Nandakumar, H., Madhura, S., M Reddy, P., and Kavitha, N., “An android-based mobile eye gaze point estimation system for studying the visual perception in children with autism”, Computational Intelligence in Data Mining-Volume 2. Springer India, pp. 49–58, 2015.

2014

H. Nandakumar and Amudha, J., “A comparative analysis of a neural-based remote eye gaze tracker”, 2014 International Conference on Embedded Systems (ICES). 2014.[Abstract]


A remote eye gaze tracker is a nonintrusive system which can give the coordinates of the position where a person is looking on the screen. This paper gives an extensive analysis of a neural-based eye gaze tracker. The eye gaze detection system based on a neural network considers the variation of the system behavior with the different feature extraction techniques adopted, like eye-template-based features and features based on pupil detection. The performance comparison between these various models has been presented in this paper. The system has also been tested under different lighting conditions and distances from the webcam for different subjects. The eye gaze tracker based on features computed from the eyes was found to have a better performance of 95.8% compared to the template-based features.


2012

J. Amudha, Radha D., and NareshKumar, P., “Video Shot Detection using Saliency Measure”, International Journal of Computer Applications, vol. 45. pp. 17–24, 2012.[Abstract]


Video shot boundary detection is an early stage of content-based video analysis and is fundamental to any kind of video application. The increased availability and usage of online digital video has created a need for automated video content analysis techniques. A major bottleneck that limits wider use of digital video is the ability to quickly find desired information in a huge database. Manual indexing and annotating of video material are both computationally expensive and time consuming. In this paper we design a novel approach for shot boundary detection using a visual attention model by comparing saliency measures. The approach is robust to a wide range of digital effects with low computational complexity.


2012

J. Amudha, Chadalawada, R. Kiran, Subashini, V., and B. Kumar, B., “Optimised Computational Visual Attention Model for Robotic Cognition”, Intelligent Informatics: Proceedings of the International Symposium on Intelligent Informatics ISI'12 Held at August 4-5 2012, , vol. 182. Springer , Chennai, India, pp. 249–260, 2012.[Abstract]


The goal of research in computer vision is to impart and improve visual intelligence in a machine, i.e. to facilitate a machine to see, perceive, and respond in human-like fashion (though with reduced complexity) using multitudinal sensors and actuators. The major challenge in dealing with these kinds of machines is in making them perceive and learn from the huge amount of visual information received through their sensors. Mimicking human-like visual perception is an area of research that grabs the attention of many researchers. To achieve this complex task of visual perception and learning, a Visual Attention model is developed. A visual attention model enables the robot to selectively (and autonomously) choose a “behaviourally relevant” segment of visual information for further processing while relatively excluding others (Visual Attention for Robotic Cognition: A Survey, March 2011). The aim of this paper is to suggest an improvised visual attention model with reduced complexity while determining the potential region of interest in a scenario.


2010

R. Aarthi, Dr. Padmavathi S., and Amudha, J., “Vehicle detection in static images using color and corner map”, ITC 2010 - 2010 International Conference on Recent Trends in Information, Telecommunication, and Computing. Kochi, Kerala, pp. 244-246, 2010.[Abstract]


This paper presents an approach to identify the vehicle in the static images using color and corner map. The detection of vehicles in a traffic scene can address wide range of traffic problems. Here an attempt has been made to reduce the search time to find the possible vehicle candidates thereby reducing the computation time without a full search. A color transformation is used to project all the colors of input pixels to a new feature space such that vehicle pixels can be easily distinguished from non-vehicle ones. Bayesian classifier is adopted for verifying the vehicle pixels from the background. Corner map is used for removing the false detections and to verify the vehicle candidates. © 2010 IEEE.


2009

C. V. Hari, Jojish, J. V., Gopi, S., Felix, V. P., and Amudha, J., “Mid-Point Hough Transform: A Fast Line Detection Method”, 2009 Annual IEEE India Conference. IEEE, 2009.[Abstract]


This paper proposes a new method for detection of lines in images. The new algorithm is a modification of the standard Hough transform by considering a pair of pixels simultaneously and mapping them to the parameter space. The proposed algorithm is compared with line detection algorithms like standard Hough transform, randomized Hough transform and its variants and has advantages of smaller storage and higher speed.
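
The pair-of-pixels voting idea can be illustrated as below: each sampled pixel pair defines one line and votes for a single (theta, rho) cell. This is a generic sketch of pairwise Hough voting on synthetic edge points, not the exact Mid-Point Hough Transform formulation of the paper.

```python
# Pair-of-pixels Hough voting sketch: each random pair of edge pixels defines one
# line, which votes for a single (theta, rho) accumulator cell. This illustrates
# pairwise voting in general, not the paper's exact mid-point formulation.
import math
import random
from collections import Counter

def pairwise_hough(edge_points, num_pairs=2000, theta_bins=180, rho_step=2.0):
    votes = Counter()
    for _ in range(num_pairs):
        (x1, y1), (x2, y2) = random.sample(edge_points, 2)
        if (x1, y1) == (x2, y2):
            continue
        theta = (math.atan2(y2 - y1, x2 - x1) + math.pi / 2.0) % math.pi  # line normal
        rho = x1 * math.cos(theta) + y1 * math.sin(theta)
        t_bin = int(theta / math.pi * theta_bins) % theta_bins
        r_bin = int(round(rho / rho_step))
        votes[(t_bin, r_bin)] += 1
    return votes

if __name__ == "__main__":
    random.seed(0)
    # Synthetic edge set: points near the line y = x plus uniform noise points.
    line = [(i, i + random.uniform(-0.5, 0.5)) for i in range(100)]
    noise = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(100)]
    votes = pairwise_hough(line + noise)
    (t_bin, r_bin), count = votes.most_common(1)[0]
    print(f"strongest line: theta bin {t_bin} deg, rho {r_bin * 2.0:.1f}, votes {count}")
```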


Publication Type: Book Chapter

Year of Publication | Title

2020

K. R. Chandrika, Amudha, J., and Sudarsan, S. D., “Identification and Classification of Expertise Using Eye Gaze – Industrial Use Case Study with Software Engineers”, in Lecture Notes in Networks and Systems, vol. 120, J. Chand Bansal, Gupta, M. Kumar, Sharma, H., and Agarwal, B., Eds. Singapore: Springer Singapore, 2020, pp. 391-405.[Abstract]


Identifying and classifying based on expertise in an objective manner is a challenge, as it is difficult to distinguish between the ability to solve with and without stress. The ability to complete the task in a comfortable manner results in a more productive and healthier workforce by eliminating competency-related stress at work. We use cognitive load as an indicator of stress while understanding the skill and comfort level of software testers in an industrial setting as a case study. We conducted our study using eye tracking techniques. Our findings are reported, and they were corroborated by interacting with the participants of the study pre- and post-eye-tracking experiments. The results are encouraging enough to extend such exercises to additional use cases, e.g., trainer effectiveness evaluation.


2018

J. Amudha and Radha, D., “Optimization of rules in neuro-fuzzy”, in Lecture Notes in Computational Vision and Biomechanics, vol. 23, 2018, pp. 803-818.

Publication Type: Journal Article

Year of Publication | Title

2020

A. Sahay and Amudha, J., “Integration of Prophet Model and Convolution Neural Network on Wikipedia Trend Data”, Journal of Computational and Theoretical Nanoscience, vol. 17, no. 1, pp. 260–266, 2020.[Abstract]


Forecasting a time series is an ever-growing area in which various machine learning techniques have been used to predict and analyze the future based on data gathered from the past. The "Prophet" forecasting model is a recent development in time series forecasting, developed by Facebook. Prophet is much faster and simpler to implement than previous forecasting models such as ARIMA. Classification of the forecasting output can be done by applying a convolution neural network (CNN) to the outcomes of the Prophet model. To get higher accuracy with lower loss, the method runs the CNN with the best possible deep layers. The yearly, weekly and daily seasonality and trends can be realized by the Prophet model. The paper shows classification of these outputs based on the varying types of seasonality and trends. The labeled output can then be used to train and test all the trend results and find the accuracy and loss incurred in a CNN model. Applying different depths and parameters of the CNN as a combined unit at each layer, it can achieve more than 96% accuracy with less than 4% loss. The integration of Prophet and CNN shows that the training and testing of a neural network can validate the prediction made by the Prophet forecasting model, with the seasonality and trend parameters in coherence with one another.
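
A minimal Prophet usage sketch for the forecasting stage described above, on synthetic daily data; the subsequent CNN classification of trend and seasonality components is not reproduced here, and the `prophet` import path (formerly `fbprophet`) is an assumption about the installed package.

```python
# Prophet forecasting sketch on synthetic daily data, illustrating the first stage
# of the pipeline described above; the CNN classification stage is not shown.
import numpy as np
import pandas as pd
from prophet import Prophet   # older installs expose this package as `fbprophet`

dates = pd.date_range("2022-01-01", periods=365, freq="D")
values = (100
          + 0.1 * np.arange(365)                              # slow upward trend
          + 10 * np.sin(2 * np.pi * np.arange(365) / 7)       # weekly seasonality
          + np.random.default_rng(0).normal(0, 2, 365))       # noise
df = pd.DataFrame({"ds": dates, "y": values})

model = Prophet(weekly_seasonality=True, yearly_seasonality=False)
model.fit(df)

future = model.make_future_dataframe(periods=30)              # forecast 30 days ahead
forecast = model.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())
```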


2020

Jyotsna C, Amudha, J., Rao, R., and Nayar, R., “Intelligent gaze tracking approach for trail making test”, International Symposium on Intelligent Systems Technologies and Applications, vol. 38, pp. 1-12, 2020.[Abstract]


The trail making test is a cognitive impairment test used for understanding visual attention during a visual search task. The classical paper-pencil method measures the completion time of the participant, and there was no mechanism for comparison across participants with similar features. The psychologist has to observe the reactions of the participants during the trial process, and there is no mechanism to capture them. This study made an attempt to resolve the above problem and tried to infer additional parameters which can support the psychologist in understanding participant performance in the trail making test. The insight provided by the approach is to extract various features which help a psychologist by providing individual profiling and group profiling of a person, so as to understand the group of people who show similar cognitive impairment while performing the trail making test. The proposed Intelligent Gaze Tracking approach could classify participants into three different groups, with low, medium and high cognitive impairment, based on the extracted gaze features. The proposed approach has been compared against the existing literature to show the advantage of the system in terms of identifying people with similar characteristics of cognitive impairment.


2020

J. Amudha, K.V, D., and Aarthi, R., “A fuzzy based system for target search using top-down visual attention”, Journal of Intelligent and Fuzzy Systems, vol. 38, no. 5, pp. 6311-6323, 2020.[Abstract]


Top–down influences play a major role in the primate’s visual attention mechanism. Design of top-down influences for target search problems is the recommended approach to develop better computational models. Existing top down computational visual attention models mainly exploit three factors namely the context information, target information and task demands. Here in this paper we propose a Fuzzy based System for Target Search (FSTS) which makes use of target information as the top-down factor. The system uses Fuzzy logic to predict the salient locations in an image based on the prior information about a target object to be detected in a scene or frame. The performance of the system was analysed using multiple evaluation parameters and is found to have a better average hit number, number of first hits and elapsed CPU time than the existing system. The saliency map comparison is performed with human eye fixation map and is found to predict the human fixations with better accuracy than existing systems.


2018

K. R. Chandrika and Amudha, J., “A fuzzy inference system to recommend skills for source code review using eye movement data”, Journal of Intelligent & Fuzzy Systems, vol. 34, no. 3, pp. 1743 - 1754, 2018.[Abstract]


The quality of software development depends on the skills of the developer. This research focuses on recommending the skills of individuals based on eye movement data. The paper describes a study conducted on students, who are future developers. A fuzzy based recommendation system was implemented to recommend two skills that are primary in source code review: code coverage and debugging. The code coverage inference system rates an individual's code coverage as maximum, average or minimum, and the debugging fuzzy inference system rates debugging skill as unskilled, skilled or expert.


2018

N. Kulkarni and Amudha, J., “Eye gaze– based optic disc detection system”, Journal of Intelligent & Fuzzy Systems, vol. 34, no. 3, pp. 1713 - 1722, 2018.[Abstract]


Optic disc (OD) detection is an important step in a number of algorithms developed for automatic extraction of anatomical structures and retinal lesions. In this article, a novel eye gaze-based OD detection system is presented for detecting the OD in fundus retinal images using knowledge derived from experts' eye gaze data. The eye gaze data are collected from expert optometrists and a non-expert group while they view fundus retinal images, with the task of spotting the OD. Eye gaze fixations are used to identify target and distractor regions, and image-based features are extracted from the identified regions. Top-down (TD) knowledge is developed using feature ranking and a fuzzy system and is then used to build the TD map. The success rates for various standard datasets are: DRIVE, 100%; DRIONS-DB, 98.2%; INSPIRE, 97.5%; High Resolution Fundus Images, 100%; DIRECTDB0, 96.9%; ONHSD, 91.9%; and STARE, 81.4%.
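
For illustration, a simplified sketch (not the published system) of combining feature maps into a top-down map using weights obtained from feature ranking; the feature maps and weights below are placeholders:

```python
# Simplified illustration: a top-down (TD) map as a weighted combination of
# feature maps, where the weights reflect how well each feature separated
# gaze-identified target regions from distractor regions.
import numpy as np

h, w = 128, 128
brightness   = np.random.rand(h, w)    # stand-ins for real feature maps
redness      = np.random.rand(h, w)
edge_density = np.random.rand(h, w)

# Hypothetical weights from feature ranking (higher = more discriminative)
weights = {'brightness': 0.6, 'redness': 0.3, 'edge_density': 0.1}

td_map = (weights['brightness'] * brightness
          + weights['redness'] * redness
          + weights['edge_density'] * edge_density)

# The strongest location in the TD map is the optic-disc candidate
y, x = np.unravel_index(np.argmax(td_map), td_map.shape)
print(f"Optic-disc candidate at row={y}, col={x}")
```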


2018

J. Amudha and Nandakumar, H., “A fuzzy based eye gaze point estimation approach to study the task behavior in autism spectrum disorder”, Journal of Intelligent and Fuzzy Systems, vol. 35, pp. 1459-1469, 2018.[Abstract]


The general characteristics observed in autism are a decrease in communication and interaction skills along with behavioural changes. The reasons for these can be studied by understanding visual sensory processing. The research work presented here uses image stimuli to study behaviour in children by understanding when and where they look. A Fuzzy based Eye Gaze Point estimation (FEGP) approach is proposed which observes the gaze coordinates of the child and analyses the eye gaze parameters to assess the difference in visual perception of an autistic child compared with a typically developing child. The approach helps to identify differences in visual behaviour in autistic children through a performance-level indicator, visualization and inferences that can be used to tune their learning programmes in an attempt to bring them on par with their peers.


2017

N. Kulkarni and Amudha, J., “Comparison of experts and non-experts gaze pattern for the target as optic disc in the fundus retinal images”, International Journal of Applied Engineering Research, vol. 12, no. 14, pp. 4106–4112, 2017.

2017

T. Sawhney, S Reddy, P., Amudha, J., and Jyotsna C, “Helping hands–an eye tracking enabled user interface framework for amyotrophic lateral sclerosis patients”, International Journal of Control Theory and Applications, vol. 10, no. 15, 2017.[Abstract]


Amyotrophic Lateral Sclerosis (ALS) weakens the nervous system because the neurons responsible for the body's muscular activities die, leaving the patient able to move and communicate only through eye gaze and eye movements. This inactivity slowly leads to a completely paralysed state. In the vast majority of ALS cases the cause is still unknown, although some studies suggest that multiple genes and environmental factors are involved; only a minority of cases are familial (inherited), while the remainder are sporadic. Other suggested contributors include cigarette smoking, viral infections and ingestion of non-protein amino acids. The initial symptoms of ALS are muscle weakness and/or muscle atrophy; other presenting symptoms include trouble swallowing or breathing, cramping, or stiffness of the affected muscles. These symptoms are often too subtle to be detected and are frequently overlooked. This condition makes it impossible for the patient to communicate basic needs to others. Eye tracking is an advanced technique that enables people with ALS and other locked-in conditions to communicate with others and make their lives easier. The framework developed here proves to be user-friendly and customizable to the needs of the patient.
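
A toy sketch of dwell-time selection, the common interaction pattern behind such gaze-driven interfaces; the buttons, threshold and sampling rate are assumptions:

```python
# Toy dwell-time selection: a button is "clicked" once the gaze has rested
# on it for a fixed duration (values below are arbitrary).
DWELL_THRESHOLD = 1.0   # seconds the gaze must stay on a button

def select_by_dwell(gaze_samples, buttons, sample_dt=0.02):
    """gaze_samples: list of (x, y); buttons: {name: (x0, y0, x1, y1)}."""
    dwell = {name: 0.0 for name in buttons}
    for x, y in gaze_samples:
        for name, (x0, y0, x1, y1) in buttons.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                dwell[name] += sample_dt
                if dwell[name] >= DWELL_THRESHOLD:
                    return name          # trigger this button
            else:
                dwell[name] = 0.0        # gaze left the button: reset timer
    return None

buttons = {"WATER": (0, 0, 100, 50), "HELP": (120, 0, 220, 50)}
gaze = [(30, 20)] * 60                   # ~1.2 s of gaze on the WATER button
print(select_by_dwell(gaze, buttons))    # -> "WATER"
```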


2016

R. Aarthi, Amudha, J., K., B., and Varrier, A., “Detection of Moving Objects in Surveillance Video by Integrating Bottom-up Approach with Knowledge Base”, Procedia Computer Science, vol. 78, pp. 160 - 164, 2016.[Abstract]


In the modern age, where every prominent and populous area of a city is continuously monitored, a large amount of video data has to be analysed. There is a need for an algorithm that helps demarcate abnormal activities to ensure better security. To decrease the perceptual overload of CCTV monitoring, automatically focusing attention on significant events in crowded public scenes is also necessary. The major challenge lies in differentiating salient motion from background motion. This paper discusses a saliency detection method that aims to discover and localize moving regions in indoor and outdoor surveillance videos. The method does not require any prior knowledge of a scene, and this has been verified on snippets of surveillance footage.
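
For illustration, a minimal sketch of isolating moving regions from a static background with a standard OpenCV background subtractor; this is a stand-in for the paper's saliency-based pipeline, and the video path and parameters are hypothetical:

```python
# Minimal sketch: highlight moving (potentially salient) regions in a
# surveillance clip using background subtraction.
import cv2

cap = cv2.VideoCapture("surveillance.mp4")      # hypothetical input video
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=32)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    motion_mask = subtractor.apply(frame)        # white where motion occurs
    motion_mask = cv2.medianBlur(motion_mask, 5) # suppress isolated noise
    cv2.imshow("salient motion", motion_mask)
    if cv2.waitKey(1) == 27:                     # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```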

2016

R. Aarthi, Anjana, K. P., and Amudha, J., “Sketch based Image Retrieval using Information Content of Orientation”, Indian Journal of Science and Technology, vol. 9, 2016.[Abstract]


Background/Objectives: This paper presents an image retrieval system that uses hand-drawn sketches of images. A sketch is a convenient way to represent the abstract shape of an object. The main objective is to retrieve images using edge content by prioritizing blocks based on their information content. Methods/Statistical Analysis: An entropy-based Histogram of Gradients (HOG) method is proposed to prioritize the blocks; it picks candidate blocks dynamically for comparison with database images. Findings: The performance of the method has been evaluated on a benchmark Sketch Based Image Retrieval (SBIR) dataset against other methods such as Indexable Oriented Chamfer Matching (IOCM), Context Aware Saliency (CAS-IOCM) and Histogram of Gradients (HOG); compared with these methods, the number of relevant images retrieved is higher for our approach. Application/Improvement: The knowledge-based block selection method improves the performance of the existing method.
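
A small illustrative sketch (not the published method) of ranking image blocks by the entropy of their HOG descriptors so that only informative blocks are compared; the block size, prioritization criterion and synthetic sketch image are assumptions:

```python
# Rank blocks of a sketch by the entropy of their gradient-orientation
# histograms; featureless blocks are given a score of zero.
import numpy as np
from scipy.stats import entropy
from skimage.feature import hog
from skimage import draw

def block_entropies(image, block=32):
    """Return (row, col, score) per block, most informative first."""
    scores = []
    h, w = image.shape
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            descriptor = hog(image[r:r + block, c:c + block],
                             pixels_per_cell=(8, 8), cells_per_block=(1, 1))
            if descriptor.sum() < 1e-6:
                score = 0.0                      # no edges: no information
            else:
                score = entropy(descriptor / descriptor.sum())
            scores.append((r, c, score))
    return sorted(scores, key=lambda t: t[2], reverse=True)

# Synthetic "sketch": a blank canvas with a circle outline drawn on it
canvas = np.zeros((128, 128))
rr, cc = draw.circle_perimeter(64, 64, 30)
canvas[rr, cc] = 1.0
print(block_entropies(canvas)[:3])   # the most informative blocks
```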


2015

K. R. Chandrika and Amudha, J., “Comparing several selection strategies in GA for software testing”, International Journal of Applied Engineering Research, vol. 10, no. 20, pp. 18892-18896., 2015.

2015

J. Amudha and V., J., “A cost effective low computational approach on eye event recognition using non invasive method”, International Journal of Applied Engineering Research, vol. 10, no. 20, pp. 18980-18984., 2015.[Abstract]


Eye tracking refers to measuring the point of gaze or the relative motion of the eye with respect to the head. The primary motivation is to study and implement a technique for mapping human eye gaze onto the computer so that it serves as an interface between the human and the system. Among the various approaches to eye tracking, systems based on light reflection use non-imaging sensors such as low-complexity phototransistors or photodiodes, have modest computation requirements and provide moderately accurate estimation of the gesture performed with the eye. The cost of such a system is considerably lower than that of commercially available systems. The acquired data are fed to low-computation algorithms to recognize the gesture performed. The system classifies the patterns for right, left and double clicks based on the corresponding eye winks with high accuracy.


2015

Radha D. and Amudha, J., “Design of an Economic Voice Enabled Assistive System for the Visually Impaired”, International Journal of Applied Engineering Research, vol. 10, no. 12, pp. 32711–32721, 2015.[Abstract]


Assistive technologies are meant to improve the quality of life of the visually impaired population. However, factors such as the income level of the visually impaired, hand-held devices that are not easy to use, and lack of awareness of existing assistive technologies limit the reach of these systems to those who need them. This paper proposes an economic, simple Voice Enabled Assistive System (VEAS) that helps a visually impaired user identify and locate objects in his or her nearby environment. VEAS uses attention theories to locate objects against their background with low computational time. The user interface has been made flexible and comfortable for an unfamiliar user and is assisted by a speech processor that guides the user.


2015

G. Sanjay, Amudha, J., and Jose, J. Tressa, “Moving Human Detection in Video Using Dynamic Visual Attention Model”, Advances in Intelligent Systems and Computing, vol. 320, pp. 117-124, 2015.[Abstract]


Visual Attention algorithms have been extensively used for object detection in images. However, the use of these algorithms for video analysis has been less explored. Many of the techniques proposed, though accurate and robust, still require a huge amount of time for processing large sized video data. Thus this paper introduces a fast and computationally inexpensive technique for detecting regions corresponding to moving humans in surveillance videos. It is based on the dynamic saliency model and is robust to noise and illumination variation. Results indicate successful extraction of moving human regions with minimum noise, and faster performance in comparison to other models. The model works best in sparsely crowded scenarios.

2015

J. Amudha, Radha D., and S., S., “Analysis of fuzzy rule optimization models”, International Journal of Engineering and Technology, vol. 7, no. 5, pp. 1564-1570, 2015.[Abstract]


Optimization without losing the accuracy and interpretability of rules is a major concern in rule based systems. A fuzzy inference system, characterized by tolerance of uncertainty, is a good way to represent a knowledge based system. Optimization of rule based systems starts by incorporating self-learning ability into a fuzzy inference system; this can be achieved with neural networks, thereby developing a neuro-fuzzy inference system. This paper analyses different neuro-fuzzy inference systems. The analysis has been performed on different types of datasets in terms of dimensionality and noise. The results conclude that the neuro-fuzzy model DENFIS (Dynamically Evolving Neuro Fuzzy Inference System) shows improved performance when handling high-dimensional data, while simulation results on low-dimensional data show similar performance for ANFIS (Adaptive Neuro Fuzzy Inference System) and DENFIS.


2014

Radha D., Amudha, J., and Jyotsna C, “Study of Measuring Dissimilarity between Nodes to Optimize the Saliency Map”, Int.J.Computer Technology & Applications, vol. 5, no. 3, pp. 993-1000, 2014.[Abstract]


An analytical conclusion based on eye tracking data sets has shown that Graph Based Visual Saliency (GBVS) measures saliency well. GBVS promotes higher saliency at the centre of the image plane and strongly highlights salient regions even at locations far from object borders, predicting human fixations more consistently than standard algorithms. Every pixel in an image is mapped to an individual graph node in the activation map, which increases computational time. Hence the objective of this paper is to analyse the performance of the saliency measure in GBVS by modelling different grouping strategies to represent a node. We concentrate on finding the dissimilarity between nodes by grouping pixels into a node, with overlapping or non-overlapping pixels in the surrounding nodes, so as to bring the saliency closer to the eye tracker's saliency. The different grouping strategies are analysed across several performance measures, namely Normalized Scanpath Saliency, the Linear Correlation Coefficient, Area Under Curve, Similarity and Kullback-Leibler Divergence, to demonstrate their efficiency. Key terms: Visual Attention Models, Saliency maps, Eye-Tracking, Grouping pixels.
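
For reference, a toy computation of two of the measures mentioned above, Normalized Scanpath Saliency (NSS) and a simple pixel-wise AUC, on synthetic data:

```python
# Toy saliency-evaluation metrics: NSS and a pixel-wise AUC comparing a
# saliency map against binary human fixation locations (synthetic data).
import numpy as np
from sklearn.metrics import roc_auc_score

def nss(saliency, fixation_mask):
    """Mean z-scored saliency at fixated pixels."""
    s = (saliency - saliency.mean()) / (saliency.std() + 1e-9)
    return s[fixation_mask.astype(bool)].mean()

def auc(saliency, fixation_mask):
    """Treat every pixel as a sample: fixated pixels are the positive class."""
    return roc_auc_score(fixation_mask.ravel(), saliency.ravel())

rng = np.random.default_rng(0)
saliency = rng.random((64, 64))
fixations = np.zeros((64, 64), dtype=int)
fixations[rng.integers(0, 64, 20), rng.integers(0, 64, 20)] = 1

print("NSS:", nss(saliency, fixations))
print("AUC:", auc(saliency, fixations))
```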


2013

D. Radha, Amudha, J., Ramyasree, P., Ravindran, R., and Shalini, S., “Detection of Unauthorized Human Entity in Surveillance Video”, International Journal of Engineering and Technology (IJET), vol. 5, no. 3, pp. 3101-3108, 2013.[Abstract]


With the ever growing need for video surveillance in various fields, it has become very important to automate the entire process in order to save time and cost and to achieve accuracy. In this paper we propose a novel and rapid approach to detect unauthorized human entities for video surveillance systems. The approach is based on a bottom-up visual attention model using an extended Itti-Koch saliency model. Our approach includes three modules: a key frame extraction module, a visual attention model module and a human detection module. The approach permits detection and separation of the unauthorized human entity with higher accuracy than the existing Itti-Koch saliency model.
Keywords: Video surveillance, Histogram, Key frame extraction, Visual Attention Model, Saliency map, Connected component, Aspect ratio.
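
For illustration, a minimal histogram-difference key-frame extractor of the kind such a first module might use; the threshold, histogram size and video path are assumptions, not the paper's settings:

```python
# Sketch: mark a frame as a key frame when its grayscale histogram differs
# strongly from the previous frame's histogram.
import cv2

def key_frames(path, threshold=0.4):
    cap = cv2.VideoCapture(path)
    keys, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        hist = cv2.calcHist([gray], [0], None, [64], [0, 256])
        hist = cv2.normalize(hist, hist).flatten()
        if prev_hist is None or cv2.compareHist(
                prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA) > threshold:
            keys.append(idx)             # large histogram change: key frame
        prev_hist = hist
        idx += 1
    cap.release()
    return keys

print(key_frames("corridor_cam.avi"))    # hypothetical surveillance clip
```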



2012

J. Amudha, “Performance evaluation of bottom-up and top-down approaches in computational visual attention system”, 2012.[Abstract]


The world around us contains an abundance of visual information, and it is a demanding job for the brain to process this continuous flow of information bombarding the retina and to extract the small portions that are important for further action. Visual attention systems provide the brain with a mechanism for focusing computational resources on one object at a time, driven either by low-level properties (bottom-up attention) or by a specific task (top-down attention). Moving the focus of attention from location to location enables sequential recognition of the objects at those locations. What appears to be a straightforward sequence of processes (first focus attention on a location, then process the object information there) is in fact an intricate system of interactions between visual attention and object recognition. How, for instance, is the focus of attention shifted from one object to the next? Can the information already used for computing attention also be used for the next object recognition task, and if so, how? Can existing knowledge about a target object be used in the recognition system to bias attention from the top down? This thesis addresses these questions by examining how computational models can be adopted for artificial visual attention systems and by studying the bottom-up and top-down approaches empirically for various applications. The basis of this research is the popular model by Koch and Ullman [60], which builds on the psychological work by Treisman [113] termed the feature integration theory. The model uses saliency maps in combination with a winner-take-all selection mechanism; once a region has been selected for processing, it is inhibited so that other regions can compete for the available resources.
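
A toy illustration of the winner-take-all selection with inhibition of return that such saliency-map models use; the map size and inhibition radius are arbitrary:

```python
# Toy attention loop: repeatedly attend to the most salient location
# (winner-take-all) and then suppress its neighbourhood (inhibition of
# return) so other regions can win.
import numpy as np

def attend(saliency, n_fixations=3, inhibit_radius=10):
    s = saliency.copy()
    fixations = []
    yy, xx = np.mgrid[0:s.shape[0], 0:s.shape[1]]
    for _ in range(n_fixations):
        y, x = np.unravel_index(np.argmax(s), s.shape)   # winner-take-all
        fixations.append((y, x))
        s[(yy - y) ** 2 + (xx - x) ** 2 <= inhibit_radius ** 2] = 0  # IOR
    return fixations

rng = np.random.default_rng(1)
saliency_map = rng.random((64, 64))
print(attend(saliency_map))   # first three attended locations
```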

2011

K. P. Soman and Amudha, J., “Feature Selection in top-down visual attention model”, International Journal of Computer application, vol. 24, pp. 38–43, 2011.

2011

J. Amudha, Dr. Soman K. P., and S Reddy, P., “A Knowledge Driven Computational Visual Attention Model”, International Journal of Computer Science Issues, vol. 8, no. 3, 2011.[Abstract]


Computational visual systems face complex processing problems: there is a large amount of information to be processed, and it is difficult to achieve efficiency on par with the human visual system. In order to reduce the complexity involved in determining the salient region, the image is decomposed into several parts based on specific locations, and the decomposed part is passed on for higher-level computations that determine the salient region, with priority assigned to a specific colour in the RGB model depending on the application. These properties are interpreted from the user using Natural Language Processing and then interfaced with vision using a Language Perceptional Translator (LPT). The model is designed for a robot to search for a specific object in a real-time environment without compromising the computational speed of determining the most salient region.

2011

J. Amudha, Dr. Soman K. P., and Kiran, Y., “Feature Selection in Top-Down Visual Attention Model using WEKA.”, International Journal of Computer Applications, vol. 24, no. 4, pp. 38-43, 2011.

2009

J. Amudha and Dr. Soman K. P., “Saliency based visual tracking of vehicle”, International Journal of Recent Trends in Engineering, vol. 2, no. 2, p. 114, 2009.[Abstract]


In this paper, a cognitive approach to car tracking is proposed. A biologically motivated attention system detects regions of interest in images based on concepts of the human visual system. A top-down guided visual search module highlights features of a previously learned target object. Here, the attention system identifies the appearance of the vehicle and builds a top-down, target-related saliency map. This enables the system to focus on the most relevant features of the vehicle and helps in tracking it in subsequent frames. The system operates in real time and can cope with the requirements of real-world tasks such as illumination variations.


2009

J. Amudha and Dr. Soman K. P., “Selective tuning visual attention model”, International Journal of Recent Trends in Engineering, vol. 2, pp. 117–119, 2009.

2005

J. Amudha, Raghesh, K. K., and P, N., “Generation of IFS code for unstructured object Categorization”, 2005.

Publication Type: Conference Paper

Year of Publication Title

2018

S. Vishwakarma, Radha D., and Amudha, J., “Effectual Training for Object Detection Using Eye Tracking Data Set”, in 2018 International Conference on Inventive Research in Computing Applications (ICIRCA), Coimbatore, India, 2018.[Abstract]


Eye tracking devices play a major role in research due to their ability to capture the eye movements of a human. An eye tracker yields the observer's visual attention patterns, which are analysed in terms of fixations and saccades. The proposed work uses an eye tracker to train a system for automatic detection of no-entry sign boards. The process is divided into three stages: in the first stage, fixation points are collected from the eye tracker; in the second stage, objects are tagged by applying a k-means clustering algorithm to the fixation points; in the third stage, the system is trained using a cascade training algorithm to detect the objects automatically. No-entry sign boards are the target in the proposed work, and detection is based on fixation points rather than analysing the features of all pixels in the images.


2017

S. Bhaskaran, Geetika Paul, Dr. Deepa Gupta, and Amudha, J., “Langtool: Identification of Indian Language for short Text”, in 9th International Conference on Advanced Computing (ICoAC 2017), MIT, Chennai , 2017.

2017

J. Amudha and Kulkarni, N., “Top-down knowledge generation from regions in the fundus retinal images”, in Grace Hopper Celebration India (GHCI) 2017, 2017.

2017

J. Amudha and Jyotsna C, “Eye Tracking Enabled User Interface for Amyotrophic Lateral Sclerosis Patients”, in Grace Hopper Celebration India 2017 ,(GHCI-2017), 2017.

2014

A. Joseph and Amudha, J., “Gradual Transition Detection Based on Fuzzy Logic Using Visual Attention Model”, in Recent Advances in Intelligent Informatics, Mysores, 2014.[Abstract]


Shot boundary detection (SBD) is the process of automatically detecting the boundaries between shots in a video. It has attracted much attention since video became available in digital form, as it is an essential pre-processing step for almost all video analysis, indexing, summarization, search and other content based operations. Existing SBD algorithms are sensitive to video object motion, and there are no reliable solutions for detecting gradual transitions (GT). GT are difficult to detect for two reasons. First, GT include various special editing effects, such as dissolve, wipe and fade out/in, and each effect produces a distinct temporal pattern in the continuity signal curve. Second, GT exhibit varying temporal duration, and their temporal patterns are similar to those caused by object or camera movement, since both are essentially processes of gradual visual content variation. The proposed approach uses a fuzzy rule based system to detect gradual transitions from features derived from a visual attention model, and it detects gradual transitions better than existing approaches.


2014

R. Aarthi, Amudha, J., and P, U., “A generic bio inspired framework for detecting Humans Based on saliency detection”, in International Conference on Artificial Intelligence and Evolutionary Algorithms in Engineering Systems-2014 (ICAEES-2014), Kumaracoil, Kanyakumari, Tamilnadu, 2014.

2014

J. Amudha, Nandakumar, H., Madhura, S., M. Reddy, P., and Kavitha, N., “An Android-Based Mobile Eye Gaze Point Estimation System for Studying the Visual Perception in Children with Autism”, in Computational Intelligence in Data Mining - Volume 2, New Delhi, 2014.[Abstract]


Autism is a neurodevelopmental disorder characterized by poor social interaction, communication impairments and repetitive behaviour. The reason for this difference in behaviour can be understood by studying differences in sensory processing. This paper proposes a mobile application that uses visual tasks to study visual perception in children with autism, which can help explain why they see and perceive things differently from typically developing children. The application records eye movements and estimates the region of gaze of the child to understand where the child's attention is focused during the visual tasks. The work provides experimental evidence that children with autism are superior to typically developing children in some visual tasks, suggesting higher IQ levels than their peers in those tasks.


2012

J. Amudha and Parul Mathur, “Optimal Key Frame Identification Using Visual Attention Model”, in International Conference on ‘Recent Trends in Computer Science and Engineering (ICRTCSE -2012)’, Department of Computer Science and Engineering, Apollo Engineering College at Chennai , 2012.

2008

J. Amudha, Dr. Soman K. P., and Vasanth, K., “Video Annotation using Saliency.”, in IPCV, 2008.

2003

D. Loganathan, Amudha, J., and Mehata, K. M., “Classification and feature vector techniques to improve fractal image coding”, in IEEE Region 10 Annual International Conference, Proceedings/TENCON, Bangalore, 2003, vol. 4, pp. 1503-1507.[Abstract]


Fractal image compression receives much attention because of its desirable properties such as resolution independence, fast decoding and very competitive rate-distortion curves. Despite the advances made in fractal image compression, the long computing time of the encoding phase remains the main drawback of the technique, as the encoding step is computationally expensive: a large number of sequential searches through portions of the image are carried out to identify the best matches for other image portions. Several methods have been proposed to speed up fractal image coding. Here an attempt is made to analyse speed-up techniques such as classification and feature vectors, which allow searching through larger portions of the domain pool without increasing computation time; in this way both image quality and compression ratio are improved at reduced computation time. Experimental results and analysis show that the proposed method can speed up the fractal image encoding process compared with conventional methods.

Research Grants Received

Year

Funding Agency

Title of the Project

Investigators

Status

2018

Consultancy Project, PARALAXIOM PVT LTD, Bengaluru

WILD OCR

Dr. Amudha J, Dr. Deepa Gupta

Completed

2018

Consultancy Project, PARALAXIOM PVT LTD, Bengaluru

ADAPT

Dr. Amudha J, Dr. Deepa Gupta

Completed

Collaborative Research

Year

Title & other details

2019

“Comparison of oculomotor abnormalities in patients with Parkinson’s disease with and without psychosis and the impact of deep brain stimulation: an observational eye tracking study”

Principal Investigator: Dr. Pramod Kumar Pal, Professor and Head, Department of Neurology, National Institute of Mental Health and Neurosciences (NIMHANS) Bangalore. 

Co-Principal Investigator: Dr. Nitish Kamble, Assistant Professor, Department of Neurology, National Institute of Mental Health and Neurosciences (NIMHANS), Bangalore.

Dr. Amudha J, Associate Professor, Department of Computer Science & Engineering, Amrita School of Engineering, Bengaluru-560035.

Co-Investigators: Dr. Ravi Yadav, Additional Professor, Department of Neurology, NIMHANS, Bangalore; Dr. Dwarakanath Srinivas, Professor, Department of Neurosurgery, NIMHANS, Bangalore; Mr. Akshay S, Research Scholar (MY.AS.D*CSA16002), Department of Computer Science, Amrita School of Arts and Sciences, Mysuru Campus; Mr. Amitabh Bhattacharya, PhD Scholar, Department of Neurology, NIMHANS, Bangalore.

2018

Research topic “Eye Movement Analysis of Glaucoma Patients using Eye Tracker Device”, Dr Sushma Tejawani, Consultant and Head of Glaucoma Services, Narayana Nethralaya, Bengaluru 

2016

The research work titled “Eye tracking to understand developer behavior” focused on understanding the skills required by software developers and testers in their day-to-day tasks. The activities performed included setting up experiments in the lab, selecting participants, conducting the experiments, collecting eye gaze data, analysing the data, interpreting the results and publishing the outcomes of the study. The research scholar was offered a paid research internship at the Usability Lab of the India Corporate Research Center, ABB GISPL, Bangalore for a period of one year, from July 2016 to June 2017, under the mentorship of Dr. Sithu D Sudarsan, Research Manager, Industrial Software Systems, Corporate Research Center, ABB GISPL.

2016

Eye Tracking as a Biomarker for Stress Level Analysis in Patients - HCG Enterprises Limited, Bangalore, Dr. Ravi Nayar

Keynote Addresses/Invited Talks/Panel Memberships

  • Invited Talk on Smart City Security, “Deep Learning Architecture for Video Analytics”, at the International Virtual Symposium on Smart City Security & Resilience, Feb 19-20, 2021
  • Panelist in Wharton-QS Reimagine Education Awards & Virtual Conference (2-11 December 2020)
  • Invited Talk “Trail from Amrita to Industry 4.0”, Talk at ASE, Bengaluru, 2020
  • Invited Talk “Creating a vibrant and conducive teaching learning environment”, One Week International Faculty Development Programme on “Challenges in Restructuring the Innovative Teaching Learning Techniques”, 2nd -8th June 2020
  • Invited Talk, “Explore Insights into Data”, Webinar on Eye Gaze Data Analytics, Sensing and Perception of the Data, 18th May 2020
  • Panelist on Robotics & AI at The Times of India’s Mission Admission Programme, 7th August 2020
  • Organized UK Delegate Visit - Dr. Mathew Forshew, Newcastle University - Jan 31st, 2019
  • Organized a 3-day AI & Python Programming workshop for 100 CBSE teachers in Bangalore, including a session on Artificial Intelligence – Neural Networks, May 2019
  • Organized the ICIC Conference with BCIC support on the Industry-Academia Gap, Oct 2018: an initiative on bridging the “Industry-Academia Gap”, a well-attended conference at the Amrita School of Engineering, Bangalore on 26th Oct to deliberate on what is causing the Industry-Academia gap in Computer Science, Information Science and MCA courses, its various dimensions and challenges, best practices and possible solutions. Industry was represented by Infosys, TCS, Wipro, IBM, Oracle, Akamai, Automation Anywhere, Danske Bank and ANZ; academia was represented by professors from Amrita College, BMS, Amity, Manipal, ISME and others.
  • Hosted Top Coders Regional Competition July 21, 2018 and hosted a workshop July 16-17, 2018
  • Organized workshop in Automation of Image processing using cloud infrastructure in Amrita for students and faculty
  • Talk on Eye Gaze Data Analytics - A machine learning framework to map what we see with what we are, Oct 2018, ASE, Bengaluru in Pre-Conference Tutorial ICIC2018
  • Organized ICIC 2018, International Conference, Oct 25-27, 2018 (Steering Committee)
  • Organizing Member of UBICNET 2019 2nd EAI International Conference on Ubiquitous Communications and Network Computing, Feb 8-10, 2019
  • Organized Eye Tracking and Computer Vision - ETCVision 2016 August 19-21, 2016 – Talk and Tutorial Session Speaker
  • International Conference on VLSI Systems, Architecture, Technology and Applications (VLSI - SATA 2015), Jan 8-10, 2015, Tutorial Chair
  • Computer Architecture and High-Performance Computing 15th -16th March 2013
  • Organized Workshop Computer Vision for Robotics CVR 2012
  • Talk in Recent Trends in Image Analysis, 25th -26th May 2009
  • Participated in one-day conference of ICT Academy Bridge, 26th September 2018
  • Participated in Infosys Campus summit (Feb14-Feb16), 2019
  • Participated in workshop in Automation of Image processing using cloud infrastructure in Kochi Campus
  • Participated in Computer Networks, 27th Oct-9th Nov-2003
  • Participated in Introduction to Bioinformatic Algorithms and their parallel implementation Nov 10-11th 2003
  • Participated in Mentor Graphics EDA Tools for VLSI Design and Embedded RTOS, 31st Jan-1st Feb 2004
  • Participated in Tutorial & Conference of Indian Conference on Computer Vision, Graphics and Image Processing, Dec 15-18th 2004
  • Participated in Research Frontiers in Machine Translation in Indian Languages, 4th Jan 2006
  • Participated in Mobile and Network Security and Cryptographic Protocols, 23rd -24th March 2007
  • Participated in CyberSecurity 10th -20th July 2014
  • Participated in Intel India Innovation Conclave 9th-10th Dec 2014
  • Participated in Grace Hopper Celebration India 2nd-4th Dec-2015
  • Participated in Annual Seminar of SIGMA Projects -2016 2nd-3rd February 2016
  • Participated in Python Programming Fundamentals, 4th -6th Jan 2016

Courses Taught

  • Handled courses such as Reinforcement Learning, Machine Learning, Computational Intelligence, Image Processing, Eye Tracking, Computer Vision, Data Visualization and Deep Learning for the UG, PG and PhD programmes

Student Guidance

Undergraduate Students

Sl. No.

Name of the Student(s)

Topic

Status – Ongoing/Completed

Year of Completion

1

Arjun S Kedlaya

Auto Model Generation

Completed

2020

2

Srikar Tondapu

Building a smart thermostat using Deep Learning

Completed

2020

3

Akhil Katam

CDP On Boarding of Management packs

Completed

2019

4

Ashwin Preeth KS

Real Time Management of Enterprise CSR Activities

Completed

2019

5

Manmohan Singh

Supplier Collaboration Tool – Internship

Completed

2019

6

Venkatraman Srinivasan

Oracle Service Bus Business Process Execution Language

Completed

2018

7

Navya, Y., SriDevi, S., Akhila, P

Assistive System for Reading Disability

Completed

2018

8

Gottimukkala, B., Praveen, M.P., Lalita Amruta

Eye Gaze Data Analysis and Visualization using Machine learning techniques

Completed

2017

9

Tushar Sawhney

S. Pravin Reddy

Market Research by Learning Interest Operators from Human Eye Movements

Completed

2017

10

Tejashri Srikumar

Yamini C Shekar

Large Scale Community Detection on Graph Parallel Frameworks (Student Exchange Program, University of Trento, Italy)

Completed

2017

11

Advait Ganesh Athreya 

PREDICATE ENCRYPTION (Student Exchange Program)

Completed

2017

12

Lakshmi Pavani, M., Bhanu Prakash, A.V., Shwetha Koushik, M.S.

Navigation through eye-tracking for human–computer interface

Completed

2016

13

Roja Reddy, S., Supraja Reddy, Y.

Blink analysis using eye gaze tracker

Completed

2015

14

Sai Mounica, M., Manvita, M.,

Low cost eye gaze tracker using web camera

Completed

2015

15

Madhura, S., Reddy, M.P., Kavitha, N.

Assessing the perceptual behaviour of the autistic child using low cost mobile application

Completed

2014

16

Shravani, T., Sai, R., Vani Shree, M.,

Assistive communication application for Amyotrophic lateral sclerosis patients

Completed

2014

17

Chadalawada, R.K., Subashini, V., Barath Kumar, B.

Improvised architecture of visual attention model

Completed

2012

18

Mrigan Rohan

Near real time optic flow computation

Completed

2011

 

Postgraduate students

Sl. No.

Name of the Student(s)

Topic

Status – Ongoing/Completed

Year of Completion

1

R. Deepalakshmi

Deep learning-based Eye Gaze Estimation

Ongoing

2021

2

Harsha Vittala R.B.

Handwriting digit recognition of Tigalari Script

Ongoing

2021

3

Rashmi

Visual Question Answering System

Completed

2020

5

Anupriya Srivasta

Text Detection and Recognition

Completed

2019

6

Anupriya

Human Computer Interaction to Strategize Reading Disability

Completed

2019

10

Sharat S

Visual Question Answering System

Completed

2019

11

Kavikuil

Smart Surveillance for university augmented with deep learning

Completed

2018

12

Hena Kausar

Dynamic object detection using top-down model

Completed

2018

13

Malathi A

Application of Deep auto encoder in the detection of anomalies

Completed

2017

14

Bharathi S

Top-down computational attention system for object search

Completed

2017

15

Sowdarya Lakshmi

System design for blind curve detection

Completed

2015

16

Chandrika K R

Eye gaze tracking using PSO and GA

Completed

2015

17

Julia Teresa Jose

Visual attention model based on Spiking neural network

Completed

2014

18

Sanjay G

Anomaly detection in video using knowledge driven computational visual attention model

Completed

2014

19

Janani V

A non-invasive eye tracking system for robotic wheel control

Completed

2015

20

Arpita

Camera Sensor activation scheme for target tracking in wireless visual sensor network

Completed

2015

21

Asha Priya

Study and implementation of top-down influences in visual attention model

Completed

2013

22

Divya K.V.

Study and implementation of top-down influences in visual attention model

Completed

2013

23

Padmakar S

Video shot detection using saliency measure

Completed

2011

24

Kiran Y

A computational attention model for traffic sign recognition system

Completed

2011

25

Hitha Nandakumar

Computational visual attention model to study the behaviour of Autism

Completed

2014

26

Naresh Kumar P

Gradual Transition Detection Based on fuzzy logic using Visual Attention System

Completed

2012

27

Parul Mathur

Anomaly detection in surveillance video using visual attention model

Completed

2012

Research scholars

Sl. No.

Name of the Student(s)

Topic

Status – Ongoing/Completed

Year of Completion

1

Aiswariya Milan K

Federated Learning

Ongoing

2025

2

Varsha S Lalapura

Efficient Recurrent Neural Network Modeling and Implementation for Edge Intelligence

Ongoing

2023

3

Sajitha Krishnan

Eye Gaze Analysis in Glaucoma Detection

Ongoing

2022

4

Akshay S

A Machine Learning framework to identify the effect of Parkinson’s disease on visual cognition abilities 

Ongoing

2022

5

Jyotsna C

An Intelligent System to estimate mental health – Eye Tracking Study

Ongoing

2022

6

Chandrika K. R.

Eye Gaze Based Recommendation System

Ongoing

2021

7

Veerender Reddy Bhimavarapu

Investigation of Multimodal feature models for Audio Visual Speech Recognition

Ongoing

2021

8

Gowri P

Deep Learning

Ongoing

2023

9

Aarthi R

Top-down Visual Attention Model

Ongoing

2021

10

Nilima Kulkarni

Eye Gaze Based Optic Disc Detection

Completed

2018