Geetha M. currently serves as Assistant Professor in the Department of Computer Science and Engineering, Amrita School of Engineering, Amritapuri.


Publication Type: Conference Paper
M. Geetha and Pillay, S. S., “Disrupted structural connectivity using diffusion tensor tractography in epilepsy”, in 2015 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT 2015), 2015.

Human thoughts and emotions are communicated between different brain regions through pathways comprising white matter tracts. Diffusion Tensor Imaging (DTI) is a recently developed Magnetic Resonance Imaging (MRI) technique that can locate white matter lesions which cannot be found with other types of clinical MRI. Fiber tracking using streamline tractography approaches has the limitation that it cannot detect crossing or branching fibers. This limitation is overcome by the Fast Marching tractography technique, which detects branching fibers correctly but takes more time than streamline tracking. For tracking fiber pathways in a noninvasive way, we propose an approach that combines the advantages of both tracking techniques, Fiber Assignment by Continuous Tracking and Fast Marching, to track fiber pathways as accurately as Fast Marching while taking less time, as with Fiber Assignment by Continuous Tracking. © 2015 IEEE.

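The streamline half of the hybrid described above can be pictured as repeatedly stepping along the principal eigenvector of the local diffusion tensor. The Python sketch below illustrates only that FACT-style streamline step on a synthetic tensor field; it is an assumption-laden illustration, not the authors' hybrid implementation, and the step size and stopping threshold are arbitrary.

```python
import numpy as np

def fact_streamline(tensors, seed, step=0.5, max_steps=200, stop_thresh=0.1):
    """Follow the principal diffusion direction from a seed point.
    tensors: (X, Y, Z, 3, 3) array of diffusion tensors; seed: (x, y, z) floats."""
    pos = np.asarray(seed, dtype=float)
    path = [pos.copy()]
    prev_dir = None
    for _ in range(max_steps):
        voxel = tuple(np.round(pos).astype(int))
        if any(v < 0 or v >= s for v, s in zip(voxel, tensors.shape[:3])):
            break                                   # left the volume
        evals, evecs = np.linalg.eigh(tensors[voxel])
        if evals[-1] < stop_thresh:                 # negligible principal diffusion: stop
            break
        direction = evecs[:, -1]                    # principal eigenvector
        if prev_dir is not None and np.dot(direction, prev_dir) < 0:
            direction = -direction                  # keep a consistent orientation
        pos = pos + step * direction
        path.append(pos.copy())
        prev_dir = direction
    return np.array(path)

# Toy field: every voxel's tensor is aligned with the x-axis.
field = np.tile(np.diag([1.0, 0.2, 0.2]), (10, 10, 10, 1, 1))
print(fact_streamline(field, seed=(1.0, 5.0, 5.0))[:3])
```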
M. Geetha and Rakendu, R., “An improved method for segmentation of point cloud using Minimum Spanning Tree”, in 2014 International Conference on Communications and Signal Processing (ICCSP), 2014.

With the development of low-cost 3D sensing hardware such as the Kinect, three-dimensional digital images have become popular in medical diagnosis, robotics, etc. One of the difficult tasks in image processing is image segmentation. The problem becomes simpler if we add the depth channel along with height and width. The proposed algorithm uses a Minimum Spanning Tree (MST) for the segmentation of a point cloud. As a pre-processing step, first-level clustering is done, which gives groups of cluttered objects. Each of these cluttered groups is subjected to a finer level of segmentation using an MST based on distance and normals. In our method, we build a weighted planar graph of each clustered cloud and construct the MST of the corresponding graph. By taking advantage of the normals, we can separate the surface from the object. The proposed method is applied to different 3D scenes and the results are discussed.

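A minimal sketch of the MST-based segmentation idea, under assumed details: build a k-nearest-neighbour graph whose edge weights mix point distance and normal disagreement, take its minimum spanning tree, cut the heaviest edges, and read the remaining connected components as segments. The 0.5/0.5 weighting and the cut threshold are illustrative, not the paper's values.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

def mst_segment(points, normals, k=8, cut_threshold=0.3):
    """points, normals: (N, 3) arrays; returns (number_of_segments, per-point labels)."""
    tree = cKDTree(points)
    dists, idx = tree.query(points, k=k + 1)        # first neighbour is the point itself
    rows, cols, weights = [], [], []
    for i in range(len(points)):
        for d, j in zip(dists[i, 1:], idx[i, 1:]):
            normal_gap = 1.0 - abs(np.dot(normals[i], normals[j]))
            rows.append(i); cols.append(j)
            weights.append(0.5 * d + 0.5 * normal_gap)   # assumed mix of distance and normal terms
    graph = coo_matrix((weights, (rows, cols)), shape=(len(points), len(points)))
    mst = minimum_spanning_tree(graph).tocoo()
    keep = mst.data < cut_threshold                  # drop long / incoherent edges
    pruned = coo_matrix((mst.data[keep], (mst.row[keep], mst.col[keep])), shape=mst.shape)
    n_segments, labels = connected_components(pruned, directed=False)
    return n_segments, labels
```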
M. Geetha, Paul, M. P., and Kaimal, M. R., “An improved content based image retrieval in RGBD images using Point Clouds”, in 2014 International Conference on Communications and Signal Processing (ICCSP), 2014.

A content-based image retrieval (CBIR) system helps users retrieve images based on their contents. A reliable CBIR method is therefore required to extract important information from the image, including texture, color, and the shape of the object in the image. For RGBD images, the 3D surface of the object is the most important feature. We propose a new algorithm which recognizes a 3D object using 3D surface shape features, 2D boundary shape features, and color features. We present an efficient method for 3D object shape extraction, using first- and second-order derivatives over the 3D coordinates of point clouds to detect landmark points on the surface of the RGBD object. The proposed algorithm identifies the 3D surface shape features efficiently. For the implementation we use the Point Cloud Library (PCL). Experimental results show that the proposed method is effective and efficient, giving a classification rate of more than 80% for the objects in our test data. It also eliminates false positive results and yields higher retrieval accuracy.

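The retrieval step can be pictured as ranking database objects by a weighted combination of feature distances. The sketch below stands in for the paper's descriptors with a colour histogram and a crude covariance-eigenvalue shape signature; the actual method uses PCL and derivative-based surface landmarks, so both the features and the weights here are assumptions.

```python
import numpy as np

def describe(points_xyz, colors_rgb, bins=8):
    """points_xyz: (N, 3) coordinates; colors_rgb: (N, 3) values in [0, 255]."""
    shape_sig = np.sort(np.linalg.eigvalsh(np.cov(points_xyz.T)))    # coarse 3D shape signature
    shape_sig = shape_sig / (shape_sig.sum() + 1e-12)
    hist, _ = np.histogramdd(colors_rgb, bins=(bins, bins, bins), range=((0, 256),) * 3)
    hist = hist.ravel() / (hist.sum() + 1e-12)                       # normalised colour histogram
    return shape_sig, hist

def rank(query, database, w_shape=0.5, w_color=0.5):
    """database: {name: (shape_sig, colour_hist)}; returns (distance, name) pairs, best first."""
    q_shape, q_hist = query
    scores = []
    for name, (s, h) in database.items():
        d = w_shape * np.linalg.norm(q_shape - s) + w_color * np.linalg.norm(q_hist - h)
        scores.append((d, name))
    return sorted(scores)
```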
M. Geetha, Anandsankar, B., Nair, L. S., Amrutha, T., and Rajeev, A., “An Improved Human Action Recognition System Using RSD Code Generation”, in Proceedings of the 2014 International Conference on Interdisciplinary Advances in Applied Computing (ICONIAAC '14), 2014.

This paper presents a novel method for recognizing human actions from a series of video frames. It uses the idea of RSD (Region Speed Direction) code generation, which is capable of recognizing most common activities in spite of the spatiotemporal variability between subjects. The majority of existing research focuses on the upper body or makes use of hand and leg trajectories, and such trajectory-based approaches give less accurate results because of the variability of action patterns between subjects. In the RSD code, we give importance to three factors, Region, Speed and Direction, to detect the action; together these three factors give a better result for recognizing actions. The proposed method is unaffected by occlusion, positional errors and missing information. The results from our algorithm are comparable to the results of existing human action detection algorithms.
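As a rough illustration of the RSD idea, the sketch below quantises a tracked trajectory into (region, speed, direction) symbols. The 3x3 region grid, the speed threshold and the 8-way direction quantisation are assumptions; the paper's actual coding scheme may differ.

```python
import numpy as np

def rsd_code(points, frame_size=(640, 480), grid=(3, 3), speed_thresh=5.0):
    """points: sequence of (x, y) positions of a tracked body part, one per frame."""
    codes = []
    for prev, cur in zip(points[:-1], points[1:]):
        region = (int(cur[0] * grid[0] / frame_size[0]),
                  int(cur[1] * grid[1] / frame_size[1]))        # which cell of an assumed 3x3 grid
        delta = np.subtract(cur, prev)
        speed = 'F' if np.linalg.norm(delta) > speed_thresh else 'S'   # fast / slow
        direction = int(((np.degrees(np.arctan2(delta[1], delta[0])) + 360) % 360) // 45)
        codes.append((region, speed, direction))
    return codes

# Example: a point moving right across the frame.
print(rsd_code([(100, 240), (140, 240), (180, 240)]))
```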
M. Geetha, Aswathi, P. V., and Kaimal, M. R., “A Stroke Based Representation of Indian Sign Language Signs Incorporating Global and Local Motion Information”, in Second International Conference on Advanced Computing, Networking and Security, 2013.

Sign language is a visual gesture language used by speech-impaired people to convey their thoughts and ideas with the help of hand gestures and facial expressions. This paper presents a stroke-based representation of dynamic gestures of Indian Sign Language signs incorporating both local and global motion information. This compact representation of a gesture is analogous to the phonemic representation of speech signals. To incorporate the local motion of the hand, each stroke also contains features corresponding to the hand shape. The dynamic gesture trajectories are segmented based on Maximum Curvature Points (MCPs), which are selected based on the direction change of the trajectories. The frames corresponding to the MCPs of the trajectory are taken as the key frames, and the hand shapes of the key frames are used as local information features. Existing methods of sign language recognition have scalability problems apart from high complexity and the need for extensive training data. In contrast, our proposed stroke-based representation has a less expensive training phase, since it only requires training the stroke features and stroke sequences of each word. Our algorithms also address the issue of scalability. We have tested our approach in the context of Indian Sign Language recognition and present the results from this study.
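A minimal sketch of the MCP-based segmentation described above: a trajectory point is kept as a Maximum Curvature Point when the direction change between its incoming and outgoing segments exceeds a threshold. The 45-degree threshold here is an illustrative assumption.

```python
import numpy as np

def maximum_curvature_points(trajectory, angle_thresh_deg=45.0):
    """trajectory: sequence of (x, y) hand positions; returns indices of MCP frames."""
    pts = np.asarray(trajectory, dtype=float)
    mcp_indices = [0]                                # the first frame starts a stroke
    for i in range(1, len(pts) - 1):
        v_in, v_out = pts[i] - pts[i - 1], pts[i + 1] - pts[i]
        norm = np.linalg.norm(v_in) * np.linalg.norm(v_out)
        if norm == 0:
            continue                                 # stationary hand, no direction defined
        cos_a = np.clip(np.dot(v_in, v_out) / norm, -1.0, 1.0)
        if np.degrees(np.arccos(cos_a)) > angle_thresh_deg:
            mcp_indices.append(i)                    # sharp direction change: stroke boundary
    mcp_indices.append(len(pts) - 1)
    return mcp_indices

# L-shaped trajectory: one sharp turn at index 2.
print(maximum_curvature_points([(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]))
```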
M. Geetha, C, M., P, U., and R, H., “A vision based dynamic gesture recognition of Indian Sign Language on Kinect based depth images”, in 2013 International Conference on Emerging Trends in Communication, Control, Signal Processing and Computing Applications (C2SPCA), 2013.

Indian Sign Language (ISL) is a visual-spatial language which provides linguistic information using hands, arms, facial expressions, and head/body postures. Our proposed work aims at recognizing 3D dynamic signs corresponding to ISL words. With the advent of 3D sensors like the Microsoft Kinect camera, 3D geometric processing of images has received much attention in recent research. We have captured 3D dynamic gestures of ISL words using a Kinect camera and propose a novel method for feature extraction from dynamic gestures of ISL words. While languages like American Sign Language (ASL) are hugely popular in the field of research and development, Indian Sign Language has been standardized only recently and hence its recognition is less explored. The method extracts features from the signs and converts them to the intended textual form. The proposed method integrates both local and global information of the dynamic sign. A new trajectory-based feature extraction method using the concept of the Axis of Least Inertia (ALI) is proposed for global feature extraction. An eigen-distance based method using seven 3D key points (five corresponding to the finger tips, one to the centre of the palm and another to the lower part of the palm), extracted using the Kinect, is proposed for local feature extraction. Integrating the 3D local feature has improved the performance of the system, as shown in the results. Apart from serving as an aid to disabled people, applications of the system include serving as a sign language tutor or interpreter, as well as use in electronic systems that take gesture input from users.
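The global feature can be illustrated with the Axis of Least Inertia itself: the line through the trajectory centroid along the eigenvector of the covariance matrix with the largest eigenvalue. The sketch below computes that axis and projects the trajectory onto it; it shows only the ALI step, not the paper's full global/local feature pipeline.

```python
import numpy as np

def axis_of_least_inertia(points):
    """points: (N, 2) or (N, 3) trajectory; returns (centroid, axis, projections)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    cov = np.cov((pts - centroid).T)
    evals, evecs = np.linalg.eigh(cov)
    axis = evecs[:, -1]                     # direction of maximum spread = least inertia about it
    projections = (pts - centroid) @ axis   # signed positions of trajectory points along the axis
    return centroid, axis, projections

centroid, axis, proj = axis_of_least_inertia([(0, 0), (1, 1), (2, 2), (3, 3)])
print(axis, proj)
```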
R. J. Raghavan, Prasad, K. A., Muraleedharan, R., and M. Geetha, “Animation system for Indian Sign Language communication using LOTS notation”, in 2013 International Conference on Emerging Trends in Communication, Control, Signal Processing and Computing Applications (C2SPCA), 2013.

This paper presents an application aiding the social integration of the deaf community in India into the mainstream of society. This is achieved by feeding in English text and generating an animated gesture sequence representative of its content. The application consists of three main parts: an interface that allows the user to enter words, a language processing system that converts English text to ISL format, and a virtual avatar that acts as an interpreter conveying the information at the interface. The gestures are dynamically animated based on a novel method devised by us to map the kinematic data for the corresponding word. After translation into ISL, each word is queried in a database that holds the notation format for that word. This notation, called LOTS notation, represents parameters enabling the system to identify features like hand location (L), hand orientation (O) in 3D space, hand trajectory movement (T), hand shapes (S) and non-manual components like facial expression. The animation of an input sentence is thus produced from the sequence of notations, which are queued in order of appearance. We also insert movement epenthesis, the inter-sign transition gesture, in order to avoid jitter in the gesturing. More than a million deaf adults and around half a million deaf children in India use Indian Sign Language (ISL) as a mode of communication. This system would serve as an initiative in propelling sign language communication in the banking domain, where the low dependency on audio supports the cause.
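The word-to-notation lookup can be pictured as a small record per sign plus a queue builder that interleaves epenthesis transitions. The field names and database entries below are purely hypothetical; the paper's actual LOTS notation database is not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class LOTSNotation:
    location: str      # L: hand location in 3D space
    orientation: str   # O: hand orientation
    trajectory: str    # T: hand trajectory movement
    shape: str         # S: hand shape
    non_manual: str    # non-manual component, e.g. facial expression

# Hypothetical database entries, not taken from the paper.
NOTATION_DB = {
    "bank": LOTSNotation("chest", "palm-in", "forward-arc", "flat", "neutral"),
    "money": LOTSNotation("waist", "palm-up", "tap", "cupped", "neutral"),
}

def build_animation_queue(isl_sentence):
    """Queue notations in order of appearance, inserting an epenthesis marker between signs."""
    queue = []
    for i, word in enumerate(isl_sentence):
        if i > 0:
            queue.append("movement-epenthesis")   # smooth inter-sign transition
        queue.append(NOTATION_DB[word])
    return queue

print(build_animation_queue(["money", "bank"]))
```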
M. Geetha and Aswathi, P. V., “Dynamic gesture recognition of Indian sign language considering local motion of hand using spatial location of Key Maximum Curvature Points”, in 2013 IEEE Recent Advances in Intelligent Computational Systems (RAICS), 2013.

Sign language is the most natural way of expression for the deaf community. Indian Sign Language (ISL) is a visual-spatial language which provides linguistic information using hands, arms, facial expressions, and head/body postures. In this paper we propose a new method for vision-based recognition of dynamic signs corresponding to Indian Sign Language words. A new method is proposed for key frame extraction which is more accurate than existing methods: the frames corresponding to the Maximum Curvature Points (MCPs) of the global trajectory are taken as the key frames. The method accommodates the spatio-temporal variability that may occur when different persons perform the same gesture. We also propose a new method, based on the spatial location of the Key Maximum Curvature Points of the boundary, for shape feature extraction from the key frames. Compared with three other existing methods, our method has given better performance. The method considers both local and global trajectory information for recognition, and the feature extraction method has proved to be scale and translation invariant.
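A minimal sketch of a translation- and scale-invariant feature built from the spatial locations of key boundary points, in the spirit of the shape features above: points are expressed relative to the shape centroid and normalised by the maximum radius. How the key points are selected (here assumed to come from an MCP detector such as the one sketched earlier) and their fixed ordering are assumptions.

```python
import numpy as np

def normalised_keypoint_feature(key_points):
    """key_points: (K, 2) boundary key points in a fixed order; returns a flat feature vector."""
    pts = np.asarray(key_points, dtype=float)
    centred = pts - pts.mean(axis=0)                 # translation invariance
    max_radius = np.linalg.norm(centred, axis=1).max()
    if max_radius > 0:
        centred = centred / max_radius               # scale invariance
    return centred.ravel()

print(normalised_keypoint_feature([(10, 10), (30, 10), (30, 40), (10, 40)]))
```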
M. Geetha, Menon, R., Jayan, S., James, R., and Janardhan, G. V. V., “Gesture recognition for American sign language with polygon approximation”, in Proceedings of the IEEE International Conference on Technology for Education (T4E 2011), Chennai, Tamil Nadu, 2011, pp. 241-245.

We propose a novel method to recognize the symbols of the American Sign Language alphabet (A-Z) that have static gestures. Many existing systems require special data acquisition devices like data gloves, which are expensive and difficult to handle, and some methods such as finger tip detection cannot recognize the alphabets that have closed fingers. We propose a method where the boundary of the gesture image is approximated into a polygon with the Douglas-Peucker algorithm. Each edge of the polygon is assigned a difference Freeman chain code direction, and the finger tip count together with the difference chain code sequence is used as the feature vector. Matching first looks for a perfect match; if there is no perfect match, substring matching is done. The method efficiently recognizes both open and closed finger gestures. © 2011 IEEE.
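The chain-code part of the method can be illustrated as follows: each edge of the approximated polygon is quantised into an 8-direction Freeman code, and consecutive codes are differenced modulo 8. The toy square below stands in for a Douglas-Peucker-approximated gesture boundary; the exact coding conventions of the paper may differ.

```python
import numpy as np

def freeman_codes(vertices):
    """Quantise each polygon edge into an 8-connected Freeman direction (0..7)."""
    codes = []
    n = len(vertices)
    for i in range(n):
        dx, dy = np.subtract(vertices[(i + 1) % n], vertices[i])
        angle = (np.degrees(np.arctan2(dy, dx)) + 360) % 360
        codes.append(int(round(angle / 45.0)) % 8)
    return codes

def difference_chain_code(codes):
    """Circular difference between consecutive codes, modulo 8."""
    return [(codes[(i + 1) % len(codes)] - codes[i]) % 8 for i in range(len(codes))]

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(freeman_codes(square))                          # [0, 2, 4, 6]
print(difference_chain_code(freeman_codes(square)))   # [2, 2, 2, 2]
```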
Publication Type: Journal Article
M. Geetha, B., A. Suresh, P., R., P., H. M., and Unni, Gayathri, “A Novel Method on Action Recognition Based on Region and Speed of Skeletal End points”, Advances in Computing, Communications and Informatics (ICACCI), 2015.

This paper proposes a method to recognize human actions from a video sequence. The actions include walking, running, jogging, hand waving, clapping and boxing, and they are categorized after recognition using a decision tree. Unlike other algorithms, our proposed method recognizes single human actions by considering the speed, direction and percentage of end points as a novel approach. In addition to action recognition, this paper also proposes an error correction method for removing bifurcation. The system has been evaluated on various datasets and performed well.
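The decision-tree categorisation can be pictured as a few nested threshold tests: gross speed separates the locomotion actions, and the share of active skeletal end points in the upper body separates waving, clapping and boxing. Every threshold and feature name in the sketch below is an illustrative assumption, not the paper's actual tree.

```python
def classify_action(mean_speed, upper_endpoint_ratio):
    """mean_speed: average centroid speed in pixels/frame (assumed feature);
    upper_endpoint_ratio: fraction of moving end points above shoulder level (assumed feature)."""
    if mean_speed > 2.0:                        # locomotion branch
        if mean_speed > 8.0:
            return "running"
        return "jogging" if mean_speed > 4.0 else "walking"
    # mostly stationary: upper-body branch
    if upper_endpoint_ratio > 0.8:
        return "hand waving"
    return "clapping" if upper_endpoint_ratio > 0.4 else "boxing"

print(classify_action(mean_speed=6.0, upper_endpoint_ratio=0.1))   # jogging
```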
M. Geetha and C, M. U., “A Vision Based Recognition of Indian Sign Language Alphabets and Numerals Using B-Spline Approximation”, International Journal on Computer Science & Engineering, vol. 4, 2012.

Sign language is the most natural way of expression for the deaf community. The urge to support the integration of deaf people into the hearing society has made automatic sign language recognition an area of interest for researchers. Indian Sign Language (ISL) is a visual-spatial language which provides linguistic information using hands, arms, facial expressions, and head/body postures. In this paper we propose a novel vision-based recognition of Indian Sign Language alphabets and numerals using B-Spline approximation. Gestures of ISL alphabets are complex since they involve the gestures of both hands together. Our algorithm approximates the boundary extracted from the region of interest to a B-Spline curve by taking the Maximum Curvature Points (MCPs) as the control points. The B-Spline curve is then subjected to iterative smoothening, resulting in the extraction of the Key Maximum Curvature Points (KMCPs), which are the key contributors to the gesture shape. Hence a translation and scale invariant feature vector is obtained from the spatial locations of the KMCPs in the 8 octant regions of the 2D space, which is given for classification.

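The two main steps can be sketched with standard tools: a closed B-spline fit to the boundary (here via SciPy's splprep/splev, standing in for the paper's MCP-controlled approximation) and an 8-octant histogram of key points around the shape centroid. The smoothing factor and the way key points are chosen here are assumptions.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def smooth_boundary(x, y, smoothing=5.0, samples=100):
    """Fit a closed B-spline to boundary coordinates and resample it."""
    tck, _ = splprep([x, y], s=smoothing, per=True)
    u = np.linspace(0, 1, samples)
    sx, sy = splev(u, tck)
    return np.column_stack([sx, sy])

def octant_feature(key_points):
    """Histogram of key points over the 8 octants around the centroid (scale/translation invariant)."""
    pts = np.asarray(key_points, dtype=float)
    centred = pts - pts.mean(axis=0)
    angles = (np.degrees(np.arctan2(centred[:, 1], centred[:, 0])) + 360) % 360
    octants = (angles // 45).astype(int)     # octant index 0..7
    hist = np.bincount(octants, minlength=8)
    return hist / hist.sum()
```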