Qualification: Ph.D., M.Tech.
Email: j_amudha@blr.amrita.edu

Dr. Amudha J. currently serves as Chairperson of the Department of Computer Science, Amrita School of Engineering, Bengaluru.

Publications

Publication Type: Book Chapter

2017, Book Chapter

S. Lakshmi Sadasivam and Amudha, J., “System Design for Tackling Blind Curves”, in Proceedings of International Conference on Computer Vision and Image Processing: CVIP 2016, Volume 1, B. Raman, Kumar, S., Roy, P. Pratim, and Sen, D., Eds. Singapore: Springer Singapore, 2017, pp. 69–77.

Driving through blind curves, especially in mountainous regions or on roads where natural vegetation, buildings, or other structures block the view, is a challenge because of limited visibility. This paper addresses the problem with a surveillance mechanism: stationary cameras installed on the road capture images from the blind side of the curve and provide the necessary information to drivers approaching from the other side, cautioning them about a possible collision. The proposed method for tackling blind curves has been studied for various cases.

2016, Book Chapter

J. Amudha, S. Reddy, R., and Y. Reddy, S., “Blink Analysis using Eye Gaze Tracker”, in Intelligent Systems Technologies and Applications 2016, J. Manuel Cor Rodriguez, Mitra, S., Thampi, S. M., and El-Alfy, E.-S., Eds. Cham: Springer International Publishing, 2016, pp. 237–244.

Blinking is the involuntary action of opening and closing the eye. In the proposed work, blink analysis has been performed on different persons performing various tasks. The experimental suite for this analysis is based on eye gaze coordinate data obtained from a commercial eye gaze tracker. The raw data is processed through an FSM (Finite State Machine) modeled to detect the opening and closing states of an eye. A person's blink rate varies while performing tasks such as talking, resting, and reading. The results indicate that a person tends to blink more while talking than while reading or resting. An important observation from the analysis is that a person tends to blink more if he/she is stressed.

2015, Book Chapter

J. Amudha, Nandakumar, H., Madhura, S., M Reddy, P., and Kavitha, N., “An android-based mobile eye gaze point estimation system for studying the visual perception in children with autism”, in Computational Intelligence in Data Mining-Volume 2, Springer India, 2015, pp. 49–58.

2015, Book Chapter

R. Aarthi, Amudha, J., and Priya, U., “A Generic Bio-inspired Framework for Detecting Humans Based on Saliency Detection”, in Artificial Intelligence and Evolutionary Algorithms in Engineering Systems: Proceedings of ICAEES 2014, Volume 2, P. L Suresh, Dash, S. Sekhar, and Panigrahi, B. Ketan, Eds. New Delhi: Springer India, 2015, pp. 655–663.

Even with all its technological advancement, a computer vision system cannot compete with nature's gift, the brain, which organizes objects quickly and extracts the necessary information from huge amounts of data. A bio-inspired feature selection method is proposed for detecting humans using saliency detection. It is performed by tuning prominent features such as color, orientation, and intensity in a bottom-up approach to locate probable candidate regions of humans in an image. The results are further improved in a detection phase that uses weights learned from training samples to discard non-human regions among the candidates. The overall system has an accuracy rate of 90% for detecting the human region.

2015, Book Chapter

J. Tressa Jose, Amudha, J., and Sanjay, G., “A Survey on Spiking Neural Networks in Image Processing”, in Advances in Intelligent Informatics, E.-S. M. El-Alfy, Thampi, S. M., Takagi, H., Piramuthu, S., and Hanne, T., Eds. Cham: Springer International Publishing, 2015, pp. 107–115.

Spiking Neural Networks (SNNs) are the third generation of Artificial Neural Networks and are fast gaining interest among researchers in image processing applications. The paper attempts to provide a state-of-the-art survey of SNNs in image processing. Several existing works have been surveyed and probable research gaps have been exposed.

2013, Book Chapter

J. Amudha, Chadalawada, R. Kiran, Subashini, V., and B. Kumar, B., “Optimised Computational Visual Attention Model for Robotic Cognition”, in Intelligent Informatics: Proceedings of the International Symposium on Intelligent Informatics ISI'12 Held at August 4-5 2012, Chennai, India, A. Abraham and Thampi, S. M., Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013, pp. 249–260.

The goal of research in computer vision is to impart and improve visual intelligence in a machine, i.e., to enable a machine to see, perceive, and respond in a human-like fashion (though with reduced complexity) using multitudinal sensors and actuators. The major challenge in dealing with such machines is making them perceive and learn from the huge amount of visual information received through their sensors. Mimicking human-like visual perception is an area of research that grabs the attention of many researchers. To achieve this complex task of visual perception and learning, a visual attention model is developed. A visual attention model enables the robot to selectively (and autonomously) choose a “behaviourally relevant” segment of visual information for further processing while relatively excluding others (Visual Attention for Robotic Cognition: A Survey, March 2011). The aim of this paper is to suggest an improved visual attention model with reduced complexity for determining the potential region of interest in a scenario.

Publication Type: Conference Paper

2017, Conference Paper

P. Salunkhe, Bhaskaran, S., Amudha, J., and Dr. Deepa Gupta, “Recognition of Multilingual Text from Signage Boards”, in 6th International Conference on Advances in Computing, Communications & Informatics (ICACCI'17), Manipal University, Karnataka, 2017.

2017, Conference Paper

J. Amudha and Radha D., “Optimization of Rules in Neuro-Fuzzy Inference Systems”, in International Conference on Computational Vision and Bio-inspired Computing (ICCVBIC 2017), Inventive Research Organization and RVS Technical Campus, Coimbatore, 2017.

2017, Conference Paper

J. Amudha and Jyotsna C, “Eye Tracking Enabled User Interface for Amyotrophic Lateral Sclerosis Patients”, in Grace Hopper Celebration India 2017 (GHCI-2017), 2017.

2017, Conference Paper

J. Amudha and Kulkarni, N., “Top-down knowledge generation from regions in the fundus retinal images”, in Grace Hopper Celebration India (GHCI) 2017, 2017.

2017, Conference Paper

S. Bhaskaran, Paul, G., Dr. Deepa Gupta, and Amudha, J., “Langtool: Identification of Indian Language for Short Text”, in 9th International Conference on Advanced Computing (ICoAC 2017), MIT, Chennai, 2017.

2016, Conference Paper

J. Amudha and Chandrika, K. R., “Suitability of Genetic Algorithm and Particle Swarm Optimization for Eye Tracking System”, in 2016 IEEE 6th International Conference on Advanced Computing (IACC), 2016, pp. 256-261.

Evolutionary algorithms provide solutions to optimization problems, and their suitability for eye tracking is explored in this paper. We compare the evolutionary methods Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) using deformable template matching for eye tracking, and address the eye tracking challenges, such as head movements, eye movements, eye blinking, and zooming, that affect the efficiency of the system. GA- and PSO-based eye tracking systems (GAET and PSOET) are presented for real-time video sequences. Eye detection is done with Haar-like features; for eye tracking, GAET and PSOET use deformable template matching to find the best solution. The experimental results show that PSOET achieves a tracking accuracy of 98% in less time. The eye predicted by GAET has a high correlation with the actual eye, but its tracking accuracy is only 91%.

2016, Conference Paper

D. Venugopal, Amudha, J., and Jyotsna, C., “Developing an application using eye tracker”, in 2016 IEEE International Conference on Recent Trends in Electronics, Information Communication Technology (RTEICT), 2016.

Eye tracking measures where the eye is focused, or the movement of the eye with respect to the head. The eye tracker records eye positions and eye movements for the visual stimulus presented on the computer system. Features such as gaze point, pupil size, and mouse position can be extracted and represented using visualization techniques such as fixations, saccades, scanpaths, and heat maps. The features obtained from an eye tracker can be extended to real-life applications: companies could analyze thousands of customers' eye patterns in real time and make marketing decisions based on the data, and the technology can analyze the stress level of patients and of employees in IT, BPO, accounting, banking, front office, etc. Here we illustrate the advantages and applications of eye tracking, its usability, and how to develop an application using a commercial eye tracker.

2015, Conference Paper

R. Aarthi and Amudha, J., “Saliency based modified chamfers matching method for sketch based image retrieval”, in 2015 International Conference on Innovations in Information, Embedded and Communication Systems (ICIIECS), 2015.

2015, Conference Paper

J. Amudha, Radha, D., and Deepa, A. S., “Comparative Study of Visual Attention Models with Human Eye Gaze in Remote Sensing Images”, in Proceedings of the Third International Symposium on Women in Computing and Informatics, New York, NY, USA, 2015.

2014, Conference Paper

H. Nandakumar and Amudha, J., “A comparative analysis of a neural-based remote eye gaze tracker”, in 2014 International Conference on Embedded Systems (ICES), 2014.

A remote eye gaze tracker is a nonintrusive system which can give the coordinates of the position where a person is looking on the screen. This paper gives an extensive analysis of a neural-based eye gaze tracker. The neural-network-based eye gaze detection system considers the variation of system behavior with the different feature extraction techniques adopted, such as eye-template-based features and features based on pupil detection. A performance comparison between these models is presented, and the system has also been tested under different lighting conditions and distances from the webcam for different subjects. The eye gaze tracker based on features computed from the eyes was found to perform better, at 95.8%, than the one based on template features.

2010, Conference Paper

R. Aarthi, Dr. Padmavathi S., and Amudha, J., “Vehicle detection in static images using color and corner map”, in ITC 2010 - 2010 International Conference on Recent Trends in Information, Telecommunication, and Computing, Kochi, Kerala, 2010, pp. 244-246.

This paper presents an approach to identify vehicles in static images using color and a corner map. The detection of vehicles in a traffic scene can address a wide range of traffic problems. Here an attempt has been made to reduce the search time for possible vehicle candidates, thereby reducing the computation time without a full search. A color transformation is used to project all the colors of the input pixels to a new feature space such that vehicle pixels can be easily distinguished from non-vehicle ones. A Bayesian classifier is adopted for verifying the vehicle pixels against the background, and a corner map is used for removing false detections and verifying the vehicle candidates. © 2010 IEEE.

2009, Conference Paper

C. V. Hari, Jojish, J. V., Gopi, S., Felix, V. P., and Amudha, J., “Mid-Point Hough Transform: A Fast Line Detection Method”, in 2009 Annual IEEE India Conference, 2009.

This paper proposes a new method for the detection of lines in images. The new algorithm is a modification of the standard Hough transform that considers a pair of pixels simultaneously and maps them to the parameter space. The proposed algorithm is compared with line detection algorithms such as the standard Hough transform, the randomized Hough transform, and its variants, and has the advantages of smaller storage and higher speed.

2008, Conference Paper

J. Amudha, Soman, K. P., and Vasanth, K., “Video Annotation using Saliency”, in IPCV, 2008.

2003, Conference Paper

D. Loganathan, Amudha, J., and Mehata, K. M., “Classification and feature vector techniques to improve fractal image coding”, in IEEE Region 10 Annual International Conference, Proceedings/TENCON, Bangalore, 2003, vol. 4, pp. 1503-1507.

Fractal image compression receives much attention because of its desirable properties such as resolution independence, fast decoding, and very competitive rate-distortion curves. Despite the advances made in fractal image compression, the long computing time of the encoding phase remains the main drawback of this technique, as the encoding step is computationally expensive: a large number of sequential searches through portions of the image are carried out to identify the best matches for other image portions. Several methods have been proposed so far to speed up fractal image coding. Here an attempt is made to analyze speed-up techniques such as classification and feature vectors, which allow searching through larger portions of the domain pool without increasing computation time; in this way both image quality and compression ratio are improved at reduced computation time. Experimental results and analysis show that the proposed method can speed up the fractal image encoding process over conventional methods.

Publication Type: Journal Article

2016, Journal Article

R. Aarthi, Anjana, K. P., and Amudha, J., “Sketch based Image Retrieval using Information Content of Orientation”, Indian Journal of Science and Technology, vol. 9, 2016.

Background/Objectives: This paper presents an image retrieval system using hand-drawn sketches of images. A sketch is one of the most convenient ways to represent the abstract shape of an object. The main objective is to perform retrieval of images using edge content by prioritizing blocks based on their information. Methods/Statistical Analysis: An entropy-based Histogram of Gradients (HOG) method is proposed to prioritize the blocks. The method helps to pick candidate blocks dynamically to compare with database images. Findings: The performance of the method has been evaluated on a benchmark Sketch Based Image Retrieval (SBIR) dataset against other methods such as Indexable Oriented Chamfer Matching (IOCM), Context Aware Saliency (CAS-IOCM), and Histogram of Gradients (HOG). Compared to these methods, the number of relevant images retrieved is higher for our approach. Application/Improvement: The knowledge-based block selection method improves the performance of the existing method.

2016, Journal Article

A. R., Amudha, J., K., B., and Varrier, A., “Detection of Moving Objects in Surveillance Video by Integrating Bottom-up Approach with Knowledge Base”, Procedia Computer Science, vol. 78, pp. 160-164, 2016.

In the modern age, where every prominent and populous area of a city is continuously monitored, a lot of data in the form of video has to be analyzed. There is a need for an algorithm that helps in the demarcation of abnormal activities, to ensure better security. To decrease the perceptual overload in CCTV monitoring, automation of focusing attention on significant events happening in overpopulated public scenes is also necessary. The major challenge lies in differentiating salient motion from background motion. This paper discusses a saliency detection method that aims to discover and localize the moving regions in indoor and outdoor surveillance videos. The method does not require any prior knowledge of a scene, and this has been verified with snippets of surveillance footage.

2015, Journal Article

G. Sanjay, Amudha, J., and Jose, J. Tressa, “Moving Human Detection in Video Using Dynamic Visual Attention Model”, Advances in Intelligent Systems and Computing, vol. 320, pp. 117-124, 2015.

Visual attention algorithms have been extensively used for object detection in images; however, their use for video analysis has been less explored. Many of the techniques proposed, though accurate and robust, still require a huge amount of time to process large video data. This paper therefore introduces a fast and computationally inexpensive technique for detecting regions corresponding to moving humans in surveillance videos. It is based on the dynamic saliency model and is robust to noise and illumination variation. Results indicate successful extraction of moving human regions with minimum noise and faster performance in comparison to other models. The model works best in sparsely crowded scenarios.

2015, Journal Article

J. Amudha and Arpita, P., “Multi-Camera Activation Scheme for Target Tracking with Dynamic Active Camera Group and Virtual Grid-Based Target Recovery”, Procedia Computer Science, vol. 58, pp. 241–248, 2015.

Camera sensor activation schemes are essential for optimizing the usage of resources in wireless visual sensor networks. In this regard, an efficient camera sensor activation scheme which accounts for fast moving targets is proposed. This is achieved by adapting the number of cameras involved in tracking the target, based on the target's speed. To reduce the target miss rate, a virtual grid-based target recovery scheme is proposed, which attempts to re-locate the target in the case of a target miss. Simulations show that the proposed activation scheme gives a considerable reduction in target miss rate compared to an existing scheme which is based on the observation correlation coefficient between camera sensors.

2015, Journal Article

D. Radha and Amudha, J., “Design of an Economic Voice Enabled Assistive System for the Visually Impaired”, International Journal of Applied Engineering Research, vol. 10, pp. 32711–32721, 2015.

Assistive technologies are meant to improve the quality of life of the visually impaired population. However, factors such as the income level and economic status of the visually impaired, hand-held devices that are not easy to use, and ignorance of existing assistive technologies make these assistive systems less reachable for those who need them. This paper proposes an economic, simple Voice Enabled Assistive System (VEAS) developed to help the visually impaired identify and locate objects in their nearby environment. VEAS uses attention theories to easily locate objects against the background with less computational time. The user interface has been made flexible, so that the human interface system is comfortable for a novice user, and it is assisted by a speech processor for guidance.

2014, Journal Article

D. Radha, Amudha, J., and C, J., “Study of Measuring Dissimilarity between Nodes to Optimize the Saliency Map”, International Journal of Computer Technology and Applications, vol. 5, pp. 993–1000, 2014.

2013, Journal Article

D. Radha, Amudha, J., Ramyasree, P., Ravindran, R., and Shalini, S., “Detection of unauthorized human entity in surveillance video”, International Journal of Engineering and Technology, vol. 5, 2013.

With the ever-growing need for video surveillance in various fields, it has become very important to automate the entire process in order to save time and cost and to achieve accuracy. In this paper we propose a novel and rapid approach to detect unauthorized human entities in a video surveillance system. The approach is based on a bottom-up visual attention model using an extended Itti-Koch saliency model. It includes three modules: a key frame extraction module, a visual attention model module, and a human detection module. This approach permits detection and separation of the unauthorized human entity with higher accuracy than the existing Itti-Koch saliency model.

2012, Journal Article

J. Amudha, Radha, D., and NareshKumar, P., “Video Shot Detection using Saliency Measure”, International Journal of Computer Applications, vol. 45, pp. 17–24, 2012.

Video shot boundary detection is an early stage of content-based video analysis and is fundamental to any kind of video application. The increased availability and usage of online digital video has created a need for automated video content analysis techniques. A major bottleneck that limits wider use of digital video is the ability to quickly find the desired information in a huge database; manually indexing and annotating the video material is both expensive and time-consuming. In this paper we design a novel approach for shot boundary detection using a visual attention model, by comparing saliency measures. The approach is robust to a wide range of digital effects and has low computational complexity.

2012, Journal Article

J. Amudha, “Performance evaluation of bottom-up and top-down approaches in computational visual attention system”, 2012.

The world around us is abundant in visual information, and it is a demanding job for the brain to process this continuous flow of visual information bombarding the retina and to extract the small portions of information that are important for further actions. Visual attention systems provide the brain with a mechanism for focusing computational resources on one object at a time, either driven by low-level properties (bottom-up attention) or based on a specific task (top-down attention). Moving the focus of attention to locations one by one enables sequential recognition of objects at these locations. What appears to be a straightforward sequence of processes (first focus attention on a location, then process object information there) is in fact an intricate system of interactions between visual attention and object recognition. How, for instance, is the shift of focus of attention from one object to the next performed? Can the information used for processing attention also be used for the next object recognition task, and if so, how? Can existing knowledge about a target object be utilized in the recognition system to bias attention from the top down? This thesis attempts to address these questions with a combination of how computational models can be adopted for artificial visual attention systems and how the bottom-up and top-down approaches can be studied empirically for various applications. The base of this research work is the popular model by Koch and Ullman [60], which is based on the psychological work by Treisman [113] termed the feature integration theory. The model uses saliency maps in combination with a winner-take-all selection mechanism. Once a region has been selected for processing, it is inhibited to enable other regions to compete for the available resources.

2012, Journal Article

K. P. Soman and Amudha, J., “Feature Selection in top-down visual attention model”, International Journal of Computer Applications, vol. 24, pp. 38–43, 2012.

2011, Journal Article

J. Amudha, Dr. Soman K. P., and Kiran, Y., “Feature Selection in Top-Down Visual Attention Model using WEKA”, International Journal of Computer Applications, vol. 24, no. 4, pp. 38-43, 2011.

2011, Journal Article

J. Amudha, Dr. Soman K. P., and S Reddy, P., “A Knowledge Driven Computational Visual Attention Model”, International Journal of Computer Science Issues, vol. 8, no. 3, 2011.

Computational visual systems face complex processing problems, as there is a large amount of information to be processed and it is difficult to achieve efficiency on par with the human visual system. In order to reduce the complexity involved in determining the salient region, the image is decomposed into several parts based on specific locations, and each decomposed part is passed on for higher-level computations that determine the salient region, assigning priority to a specific color in the RGB model depending on the application. These properties are obtained from the user through natural language processing and then interfaced with vision using a Language Perceptional Translator (LPT). The model is designed for a robot to search for a specific object in a real-time environment without compromising the computational speed in determining the most salient region.

2011, Journal Article

J. Amudha, Soman, K. P., and Kiran, Y., “Feature Selection in Top Down Visual Attention Model with WEKA”, International Journal of Computer Applications, vol. 24, pp. 38–43, 2011.

2009, Journal Article

J. Amudha and Soman, K. P., “Selective tuning visual attention model”, International Journal of Recent Trends in Engineering, vol. 2, pp. 117–119, 2009.

2009, Journal Article

J. Amudha and Soman, K. P., “Saliency based visual tracking of vehicles”, International Journal of Recent Trends in Engineering, vol. 2, pp. 114–116, 2009.

2005, Journal Article

J. Amudha, Raghesh, K. K., and P, N., “Generation of IFS code for unstructured object Categorization”, 2005.
