Qualification: Ph.D.
Email: geetham@am.amrita.edu

Dr. Geetha M. currently serves as Vice Chairperson and Assistant Professor (Senior Grade) at the Department of Computer Science and Engineering, School of Engineering, Amrita Vishwa Vidyapeetham, Amritapuri.

Publications

Publication Type: Conference Paper


2020

N. Aloysius and M. Geetha, “An Ensembled Scale-Space Model of Deep Convolutional Neural Networks for Sign Language Recognition”, in Advances in Artificial Intelligence and Data Engineering, Singapore, 2020.


A sign language translator is instrumental in facilitating communication between the deaf community and the hearing majority. This paper proffers an innovative specialized convolutional neural network (CNN) model, Sign-Net, to recognize hand gesture signs by incorporating scale-space theory into a deep learning framework. The proposed model is an ensemble of CNNs: a low resolution network (LRN) and a high resolution network (HRN). This architecture allows the ensemble to work at different spatial resolutions and at varying depths of CNN. The Sign-Net model was assessed with static signs of American Sign Language (alphabets and digits). Since no sign dataset suited to deep learning exists, ensemble performance is evaluated on a synthetic dataset collected for this task. Assessment on the synthetic dataset by Sign-Net reported an impressive accuracy of 74.5%, notably superior to the other existing models.
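A minimal sketch of the two-resolution ensemble idea in PyTorch (the layer sizes, the 36-class output, and the equal-weight averaging are illustrative assumptions, not the published Sign-Net configuration):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def make_cnn(in_size, num_classes=36):
        # Toy branch; the actual LRN/HRN are deeper CNNs.
        return nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * (in_size // 4) ** 2, num_classes),
        )

    lrn = make_cnn(32)    # low-resolution branch sees the coarse scale
    hrn = make_cnn(128)   # high-resolution branch sees the fine detail

    def ensemble_predict(img128):          # img128: (B, 3, 128, 128)
        img32 = F.interpolate(img128, size=32, mode="bilinear", align_corners=False)
        probs = (F.softmax(lrn(img32), dim=1) + F.softmax(hrn(img128), dim=1)) / 2
        return probs.argmax(dim=1)         # averaged scale-space prediction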


2020

G. Jayadeep, Vishnupriya, N. V., Venugopal, V., Vishnu, S., and M. Geetha, “Mudra: Convolutional Neural Network based Indian Sign Language Translator for Banks”, in 2020 4th International Conference on Intelligent Computing and Control Systems (ICICCS), Madurai, India, 2020.


Sign language is a medium for expressing thoughts and feelings in the deaf-mute community. It can be extremely challenging for deaf-mute people to communicate efficiently in banks, where they might have to explain their needs, and very few people can understand sign language. The main focus of our proposed method is to design an ISL (Indian Sign Language) hand gesture motion translation tool for banks, helping the deaf-mute community convey their ideas by converting gestures to text. Ample work has been done on ASL (American Sign Language) and other languages; in contrast, our proposed method recognizes human actions for isolated dynamic Indian signs related to banking as a novel approach, and very little research has been carried out on ISL recognition for banks. Over and above that, an insufficient amount of data along with dissimilarity in gesture lengths was a difficulty. We used a self-recorded ISL dataset for training the model to recognize the gestures. Unlike image data, the video domain posed a new challenge: longer video gestures were taken, and actions were recognized from a series of video frames. A CNN (Convolutional Neural Network), Inception V3, was used to extract image features, and an LSTM (Long Short Term Memory), an architecture of RNN (Recurrent Neural Network), classified these gestures, which are then translated into text. Experimental results show that this approach to isolated-word dynamic hand gesture recognition provides an accurate and effective method for interaction between non-signers and signers.
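A hedged sketch of this CNN-feature-plus-LSTM pipeline, with a ResNet-18 backbone standing in for Inception V3 (the feature dimension, hidden size, and word count are assumptions):

    import torch
    import torch.nn as nn
    from torchvision import models

    backbone = models.resnet18(weights="IMAGENET1K_V1")
    backbone.fc = nn.Identity()     # expose the 512-d pooled frame features
    backbone.eval()

    class GestureLSTM(nn.Module):
        def __init__(self, num_words, feat_dim=512, hidden=256):
            super().__init__()
            self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, num_words)

        def forward(self, frames):          # frames: (B, T, 3, H, W)
            b, t = frames.shape[:2]
            with torch.no_grad():           # CNN stays frozen, as a feature extractor
                feats = backbone(frames.flatten(0, 1)).view(b, t, -1)
            _, (h, _) = self.lstm(feats)    # last hidden state summarises the clip
            return self.head(h[-1])         # logits over bank-domain sign words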


2018

D. Raj and M. Geetha, “A Trigraph Based Centrality Approach Towards Text Summarization”, in 2018 International Conference on Communication and Signal Processing (ICCSP), Chennai, India, 2018.


As electronic documents multiply with the information revolution, there is an urgent need for summarizing text documents. From previous works we observed that there is no generalized graph model for text summarization and that low-order n-grams could not preserve contextual meaning. This paper focuses on an extractive, graph-based approach to text summarization built on trigrams and a graph-based centrality measure. A trigraph is generated, and the centrality of the connected trigraph is used to extract the important trigrams. A mapping is done between the original words and the trigrams to regain the link between the words, and after comparing centrality in the graph, the summary is extracted. The ROUGE-SU4 F-measure obtained for the proposed approach is 0.036, which is significantly better than previous approaches.
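An illustrative sketch of the trigram-graph idea with networkx; the paper's exact trigraph construction and centrality measure may differ (degree centrality is used here as a stand-in):

    import networkx as nx

    def summarize(sentences, top_k=3):
        g = nx.Graph()
        sent_tris = []
        for s in sentences:
            words = s.lower().split()
            tris = [tuple(words[i:i + 3]) for i in range(len(words) - 2)]
            sent_tris.append(tris)
            g.add_edges_from(zip(tris, tris[1:]))   # link consecutive trigrams
        cent = nx.degree_centrality(g)              # score each trigram node
        scored = [(sum(cent.get(t, 0.0) for t in tris), s)
                  for s, tris in zip(sentences, sent_tris)]
        # Sentences whose trigrams are most central form the summary.
        return [s for _, s in sorted(scored, key=lambda x: x[0], reverse=True)[:top_k]]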


2018

P. S. Lakshmi, M. Geetha, Menon, N. R., Krishnan, V., and Nedungadi, P., “Automated Screening for Trisomy 21 by measuring Nuchal Translucency and Frontomaxillary Facial Angle”, in International Conference on Advances in Computing, Communications and Informatics (ICACCI), 2018.

2017

M. Geetha, Manoj, M., Sarika, A. S., Mohan, M., and Rao, S. N., “Detection and estimation of the extent of flood from crowd sourced images”, in 2017 International Conference on Communication and Signal Processing (ICCSP), Chennai, India, 2017.


An algorithm that estimates the extent of flood from random, crowd-sourced images is proposed. It uses color-based segmentation with brown color to segment out flood water. The average brown color intensity, the largest brown area, and the water depth, found by comparison with detected human bodies, together contribute to the final estimate of the extent of flood. Since the algorithm deals with ordinary images rather than satellite images or video sequences, it can be used widely to survey flood-affected areas so that adequate help can be supplied. The method can also be used in flood detection systems that support rescue operations for flood victims. Moreover, the fact that existing work in this area deals mainly with videos and satellite images adds to the novelty of this work.
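A small OpenCV sketch of the colour-segmentation step (the HSV bounds are hypothetical, and the depth-from-people comparison is not reproduced):

    import cv2
    import numpy as np

    def flood_fraction(path, lo=(5, 50, 50), hi=(25, 255, 230)):
        img = cv2.imread(path)
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, np.array(lo, np.uint8), np.array(hi, np.uint8))
        # Largest connected brown region approximates the flood-water extent.
        n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
        largest = stats[1:, cv2.CC_STAT_AREA].max() if n > 1 else 0
        return mask.mean() / 255.0, largest / mask.size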


2017

N. Aloysius and M. Geetha, “A review on deep convolutional neural networks”, in 2017 International Conference on Communication and Signal Processing (ICCSP), Chennai, India, 2017.


The success of traditional methods for solving computer vision problems heavily depends on the feature extraction process, but Convolutional Neural Networks (CNN) have provided an alternative that automatically learns domain-specific features. Now every problem in the broader domain of computer vision is re-examined from the perspective of this new methodology, so it is essential to figure out the type of network suited to a problem. In this work, we have done a thorough literature survey of Convolutional Neural Networks, the widely used framework of deep learning. With AlexNet as the base CNN model, we review the variations that emerged over time to suit various applications, with a short discussion of the available frameworks for implementing them. We hope this article will serve as a guide for newcomers to the area.


2017

M. Mahesh, Jayaprakash, A., and M. Geetha, “Sign language translator for mobile platforms”, in 2017 International Conference on Advances in Computing, Communications and Informatics (ICACCI), Udupi, India, 2017.


The communication barrier between the deaf and dumb community and society remains a matter of concern due to the lack of reliable sign language translators, and using mobile phones for communication remains a dream for this community. We propose an Android application that converts sign language to natural language and enables the deaf and dumb community to talk over mobile phones. Developing sign recognition methods for mobile applications poses challenges such as the need for a lightweight method with low CPU and memory utilization. The application captures an image using the device camera, processes it, and determines the corresponding gesture. An initial comparison using histogram matching identifies the gestures close to the test sample, and only those samples are subjected to ORB (Oriented FAST and Rotated BRIEF) based comparison, reducing CPU time. The user of the application can also add new gestures to the dataset. The application allows easy communication between the deaf and dumb and society. Though there are many computer-based applications for sign language recognition, development on the Android platform is comparatively limited.
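A sketch of the two-stage matcher on grayscale images (the histogram threshold and bin count are assumptions):

    import cv2

    orb = cv2.ORB_create()
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def match_gesture(query, dataset, hist_thresh=0.5):
        qh = cv2.calcHist([query], [0], None, [64], [0, 256])
        cv2.normalize(qh, qh)
        _, qd = orb.detectAndCompute(query, None)
        best, best_score = None, -1
        for name, img in dataset.items():
            h = cv2.calcHist([img], [0], None, [64], [0, 256])
            cv2.normalize(h, h)
            # Cheap histogram screen first; ORB runs only on the survivors.
            if cv2.compareHist(qh, h, cv2.HISTCMP_CORREL) < hist_thresh:
                continue
            _, d = orb.detectAndCompute(img, None)
            score = 0 if d is None else len(bf.match(qd, d))
            if score > best_score:
                best, best_score = name, score
        return best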


2016

J. Kavya and M. Geetha, “An FSM based methodology for interleaved and concurrent activity recognition”, in 2016 International Conference on Advances in Computing, Communications and Informatics (ICACCI), Jaipur, India, 2016.


Human activity recognition is one of the most promising research topics and has attracted attention across a number of disciplines and application domains. Successful research has so far focused on recognizing sequential human activities, but in real life people perform actions not only sequentially but also in complex (concurrent or interleaved) ways. Recognizing complex activities remains a challenging and active area of research: due to the high degree of freedom of human activities, it is difficult to build a model that can deal with interleaved and concurrent activities. We propose a method that uses automatically constructed finite state automata, together with stack and queue data structures, for recognizing concurrent and interleaved activities.
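A toy illustration of the idea with hand-written automata (the paper constructs the automata automatically; the activity names and steps here are invented):

    from collections import deque

    FSM = {
        "make_tea":   ["boil_water", "add_leaves", "pour"],
        "make_toast": ["slice_bread", "toast", "butter"],
    }

    def recognize(events):
        progress = {a: 0 for a in FSM}      # one FSM per activity, run in parallel
        active = []                         # stack of started-but-unfinished activities
        done = []
        for e in deque(events):             # queue: events consumed in arrival order
            for act, steps in FSM.items():
                if progress[act] < len(steps) and steps[progress[act]] == e:
                    if act not in active:
                        active.append(act)
                    progress[act] += 1
                    if progress[act] == len(steps):
                        done.append(act)
                        active.remove(act)
                    break
        return done

    # Interleaved trace: tea and toast steps alternate; both are recognized.
    print(recognize(["boil_water", "slice_bread", "add_leaves",
                     "toast", "butter", "pour"]))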


2015

M. Geetha, B., A. Suresh, Rohith, P., Gayathri Unni, and Harsha, M. P., “A Novel Method on Action Recognition Based on Region and Speed of Skeletal End points”, in 2015 International Conference on Advances in Computing, Communications and Informatics (ICACCI), Kochi, India, 2015.


This paper proposes a method to recognize human actions from a video sequence. The actions include walking, running, jogging, hand waving, clapping, and boxing, and are categorized after recognition using a decision tree. Unlike other algorithms, our proposed method recognizes single human actions by considering the speed, direction, and percentage of endpoints, as a novel approach. In addition to action recognition, this paper also proposes an error correction method for removing bifurcation. The system has been evaluated on various datasets and performed well.
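A minimal sketch of the decision-tree categorization step; the feature values below are invented for illustration and are not the paper's measurements:

    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical per-clip features: mean endpoint speed, dominant motion
    # direction (degrees), and percentage of endpoints above the torso line.
    X = [[0.9, 0, 10],    # walking
         [2.5, 0, 10],    # running
         [1.6, 0, 10],    # jogging
         [0.1, 90, 80],   # hand waving
         [0.2, 90, 60],   # clapping
         [1.1, 90, 70]]   # boxing
    y = ["walk", "run", "jog", "wave", "clap", "box"]

    clf = DecisionTreeClassifier().fit(X, y)
    print(clf.predict([[2.2, 0, 12]]))    # fast horizontal motion -> "run"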

2015

M. Geetha and Pillay, S. S., “Disrupted structural connectivity using diffusion tensor tractography in epilepsy”, in 2015 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT 2015), 2015.


Human thoughts and emotions are communicated between different brain regions through pathways comprising white matter tracts. Diffusion Tensor Imaging (DTI) is a recently developed Magnetic Resonance Imaging (MRI) technique for locating white matter lesions that cannot be found with other types of clinical MRI. Fiber tracking using streamline tractography approaches has the limitation that it cannot detect crossing or branching fibers. This limitation is overcome by the Fast Marching technique of tractography, where branching fibers are detected correctly, but it takes more time than streamline tracking. For tracking fiber pathways noninvasively, we propose an approach that combines the advantages of both techniques, Fiber Assignment by Continuous Tracking and Fast Marching, to track fiber pathways as accurately as Fast Marching and in as little time as Fiber Assignment by Continuous Tracking.


2014

M. Geetha, Anandsankar, B., Nair, L. S., Amrutha, T., and Rajeev, A., “An Improved Human Action Recognition System Using RSD Code Generation”, in Proceedings of the 2014 International Conference on Interdisciplinary Advances in Applied Computing (ICONIAAC '14), 2014.


This paper presents a novel method for recognizing human actions from a series of video frames. It uses the idea of RSD (Region Speed Direction) code generation, which is capable of recognizing most common activities despite the spatiotemporal variability between subjects. Most research focuses on the upper body or makes use of hand and leg trajectories; the trajectory-based approach gives less accurate results due to the variability of action patterns between subjects. In the RSD code, we give importance to three factors, Region, Speed, and Direction, to detect the action; together these factors give better results for recognizing actions. The proposed method is robust to occlusion, positional errors, and missing information. The results from our algorithm are comparable to those of existing human action detection algorithms.
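A sketch of how a trajectory might be turned into RSD symbols; the grid size, speed threshold, and 8-way direction quantisation are assumptions, not the paper's specification:

    import numpy as np

    def rsd_code(track, grid=3, frame_size=120.0, slow=1.0):
        # track: (T, 2) endpoint positions; each step -> one (R, S, D) symbol.
        track = np.asarray(track, float)
        code = []
        for p, q in zip(track[:-1], track[1:]):
            cell = (q // (frame_size / grid)).astype(int)
            region = int(cell[1] * grid + cell[0])     # cell of a 3x3 spatial grid
            v = q - p
            speed = "S" if np.linalg.norm(v) < slow else "F"
            direction = int(((np.degrees(np.arctan2(v[1], v[0])) + 360) % 360) // 45)
            code.append((region, speed, direction))
        return code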

2014

M. Geetha, Paul, M. P., and Kaimal, M. R., “An improved content based image retrieval in RGBD images using Point Clouds”, in 2014 International Conference on Communications and Signal Processing (ICCSP), 2014.


A content-based image retrieval (CBIR) system helps users retrieve images based on their contents, so a reliable CBIR method is required to extract important information from the image, including texture, color, and the shape of the object. For RGBD images, the 3D surface of the object is the most important feature. We propose a new algorithm that recognizes 3D objects using 3D surface shape features, 2D boundary shape features, and color features, and present an efficient method for 3D object shape extraction. For that, we use first- and second-order derivatives over the 3D coordinates of point clouds to detect landmark points on the surface of the RGBD object. The proposed algorithm identifies 3D surface shape features efficiently. For the implementation we use the Point Cloud Library (PCL). Experimental results show that the proposed method is effective and efficient, giving a classification rate of more than 80% for the objects in our test data, while eliminating false positives and yielding higher retrieval accuracy.


2014

M. Geetha and Rakendu, R., “An improved method for segmentation of point cloud using Minimum Spanning Tree”, in 2014 International Conference on Communications and Signal Processing (ICCSP), 2014.


With the development of low-cost 3D sensing hardware such as the Kinect, three-dimensional digital images have become popular in medical diagnosis, robotics, and other fields. One of the difficult tasks in image processing is image segmentation; the problem becomes simpler if we add the depth channel along with height and width. The proposed algorithm uses a Minimum Spanning Tree (MST) for the segmentation of point clouds. As a preprocessing step, first-level clustering is done, giving groups of cluttered objects. Each cluttered group is then subjected to a finer level of segmentation using the MST, based on distance and normals. In our method, we build a weighted planar graph of each clustered cloud and construct the MST of the corresponding graph. By taking advantage of normals, we can separate the surface from the object. The proposed method is applied to different 3D scenes and the results are discussed.
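A compact sketch of MST-based segmentation with SciPy; the edge weights here use distance only, whereas the paper additionally weights by normal disagreement, and the cut threshold is an assumption:

    import numpy as np
    from scipy.spatial import cKDTree
    from scipy.sparse import coo_matrix
    from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

    def mst_segment(points, k=8, cut=0.05):
        # points: (N, 3) cloud; k-NN graph -> MST -> cut long edges.
        dist, idx = cKDTree(points).query(points, k=k + 1)  # [:, 0] is the point itself
        rows = np.repeat(np.arange(len(points)), k)
        graph = coo_matrix((dist[:, 1:].ravel(), (rows, idx[:, 1:].ravel())),
                           shape=(len(points),) * 2)
        mst = minimum_spanning_tree(graph).tocoo()
        keep = mst.data < cut                               # cutting splits the segments
        pruned = coo_matrix((mst.data[keep], (mst.row[keep], mst.col[keep])),
                            shape=mst.shape)
        return connected_components(pruned, directed=False)  # (n_segments, labels)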


2013

M. Geetha and Aswathi, P. V., “Dynamic gesture recognition of Indian sign language considering local motion of hand using spatial location of Key Maximum Curvature Points”, in 2013 IEEE Recent Advances in Intelligent Computational Systems (RAICS), 2013.


Sign language is the most natural way of expression for the deaf community. Indian Sign Language (ISL) is a visual-spatial language that provides linguistic information using hands, arms, facial expressions, and head/body postures. In this paper we propose a new method for vision-based recognition of dynamic signs corresponding to Indian Sign Language words, including a new method for key frame extraction that is more accurate than existing methods. The frames corresponding to the Maximum Curvature Points (MCPs) of the global trajectory are taken as the key frames. The method accommodates the spatio-temporal variability that may occur when different persons perform the same gesture. We also propose a new method, based on the spatial location of the Key Maximum Curvature Points of the boundary, for shape feature extraction from the key frames. Compared with three existing methods, our method gives better performance. The method considers both local and global trajectory information for recognition, and the feature extraction has proved to be scale and translation invariant.
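A sketch of MCP-based key frame selection on a 2D hand trajectory (the window size and angle threshold are assumptions):

    import numpy as np

    def key_frames(trajectory, window=5, min_angle=0.3):
        # trajectory: (T, 2) hand positions; frames where the turning angle
        # peaks locally are taken as Maximum Curvature Points (MCPs).
        p = np.asarray(trajectory, float)
        v1, v2 = p[1:-1] - p[:-2], p[2:] - p[1:-1]
        cross = v1[:, 0] * v2[:, 1] - v1[:, 1] * v2[:, 0]
        angle = np.abs(np.arctan2(cross, (v1 * v2).sum(axis=1)))
        return [i + 1 for i in range(len(angle))
                if angle[i] >= angle[max(0, i - window):i + window + 1].max()
                and angle[i] > min_angle]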

2013

R. J. Raghavan, Prasad, K. A., Muraleedharan, R., and M. Geetha, “Animation system for Indian Sign Language communication using LOTS notation”, in 2013 International Conference on Emerging Trends in Communication, Control, Signal Processing and Computing Applications (C2SPCA), 2013.


This paper presents an application aiding the social integration of the deaf community in India into the mainstream of society. This is achieved by feeding in English text and generating an animated gesture sequence representative of its content. The application consists of three main parts: an interface that allows the user to enter words, a language processing system that converts English text to ISL format, and a virtual avatar that acts as an interpreter conveying the information at the interface. The gestures are dynamically animated based on a novel method devised by us to map the kinematic data for the corresponding word. The word, after translation into ISL, is queried in a database that holds the notation format for each word. This notation, called LOTS notation, represents parameters enabling the system to identify features such as hand location (L), hand orientation (O) in 3D space, hand trajectory movement (T), hand shapes (S), and non-manual components like facial expression. The animation of a sentence is thus produced from the sequence of notations queued in order of appearance. We also insert movement epenthesis, the inter-sign transition gesture, in order to avoid jitters in gesturing. More than a million deaf adults and around half a million deaf children in India use Indian Sign Language (ISL) as a mode of communication; this system would serve as an initiative propelling sign language communication in the banking domain, where the low dependency on audio supports the cause.

2013

M. Geetha, C, M., P, U., and R, H., “A vision based dynamic gesture recognition of Indian Sign Language on Kinect based depth images”, in 2013 International Conference on Emerging Trends in Communication, Control, Signal Processing and Computing Applications (C2SPCA), 2013.


Indian Sign Language (ISL) is a visual-spatial language that provides linguistic information using hands, arms, facial expressions, and head/body postures. Our proposed work aims at recognizing 3D dynamic signs corresponding to ISL words. With the advent of 3D sensors like the Microsoft Kinect camera, 3D geometric processing of images has received much attention in recent research. We have captured 3D dynamic gestures of ISL words using a Kinect camera and propose a novel method for feature extraction from dynamic gestures of ISL words. While languages like American Sign Language (ASL) are hugely popular in research and development, Indian Sign Language was standardized only recently, and its recognition is less explored. The method extracts features from the signs and converts them to the intended textual form, integrating both local and global information of the dynamic sign. A new trajectory-based feature extraction method using the concept of the Axis of Least Inertia (ALI) is proposed for global feature extraction. An eigen-distance-based method using seven 3D key points (five corresponding to the fingertips, one to the centre of the palm, and one to the lower part of the palm), extracted using Kinect, is proposed for local feature extraction. Integrating 3D local features improved the performance of the system, as shown in the results. Apart from serving as an aid to disabled people, applications of the system include serving as a sign language tutor or interpreter, and use in electronic systems that take gesture input from users.
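A small sketch of the Axis of Least Inertia as a global trajectory feature; using the projection as the final descriptor is an assumption, not necessarily the paper's construction:

    import numpy as np

    def axis_of_least_inertia(points):
        # points: (N, 3) trajectory samples from the Kinect.
        p = np.asarray(points, float)
        centred = p - p.mean(axis=0)
        vals, vecs = np.linalg.eigh(np.cov(centred.T))
        axis = vecs[:, np.argmax(vals)]   # principal direction of the trajectory
        # Projecting onto this axis yields a 1-D, orientation-normalised profile.
        return axis, centred @ axis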

2013

M. Geetha, Aswathi, P. V., and Kaimal, M. R., “A Stroke Based Representation of Indian Sign Language Signs Incorporating Global and Local Motion Information”, in Second International Conference on Advanced Computing, Networking and Security, 2013.


Sign language is a visual gesture language used by speech-impaired people to convey their thoughts and ideas with the help of hand gestures and facial expressions. This paper presents a stroke-based representation of dynamic gestures of Indian Sign Language signs incorporating both local and global motion information. This compact representation of a gesture is analogous to the phonemic representation of speech signals. To incorporate the local motion of the hand, each stroke also contains the features corresponding to the hand shape. The dynamic gesture trajectories are segmented based on Maximum Curvature Points (MCPs), selected based on the direction changes of the trajectories. The frames corresponding to the MCPs of the trajectory are taken as the key frames, and local information features are taken as the hand shape of the key frames. Existing methods of sign language recognition have scalability problems, high complexity, and a need for extensive training data. In contrast, our stroke-based representation has a less expensive training phase, since it only requires training the stroke features and stroke sequences of each word; our algorithms also address the issue of scalability. We have tested our approach in the context of Indian Sign Language recognition and present the results of this study.

2011

M. Geetha, Menon, R., Jayan, S., James, R., and Janardhan, G. V. V., “Gesture recognition for American sign language with polygon approximation”, in Proceedings - IEEE International Conference on Technology for Education (T4E 2011), Chennai, Tamil Nadu, 2011, pp. 241-245.


We propose a novel method to recognize the static gestures of the American Sign Language alphabet (A-Z). Many existing systems require special data acquisition devices like data gloves, which are expensive and difficult to handle, and methods like fingertip detection fail to recognize alphabets signed with closed fingers. In our method, the boundary of the gesture image is approximated by a polygon using the Douglas-Peucker algorithm, and each edge of the polygon is assigned a difference Freeman chain code direction. We use the fingertip count along with the difference chain code sequence as the feature vector. Matching first looks for a perfect match; if none is found, substring matching is performed. The method efficiently recognizes both open- and closed-finger gestures.
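A sketch of the boundary-to-code step with OpenCV (the epsilon fraction and 8-direction quantisation are assumptions):

    import cv2
    import numpy as np

    def gesture_code(mask, eps_frac=0.01):
        # mask: binary hand silhouette (uint8).
        cnts, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        c = max(cnts, key=cv2.contourArea)
        poly = cv2.approxPolyDP(c, eps_frac * cv2.arcLength(c, True), True)
        pts = poly.reshape(-1, 2)
        dirs = []
        for a, b in zip(pts, np.roll(pts, -1, axis=0)):   # walk the polygon edges
            ang = (np.degrees(np.arctan2(b[1] - a[1], b[0] - a[0])) + 360) % 360
            dirs.append(int(round(ang / 45)) % 8)         # Freeman direction code
        return [(d2 - d1) % 8 for d1, d2 in zip(dirs, dirs[1:])]  # difference code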

Publication Type: Book Chapter


2020

C. Aparna and M. Geetha, “CNN and Stacked LSTM Model for Indian Sign Language Recognition”, in Machine Learning and Metaheuristics Algorithms, and Applications, 2020, pp. 126-134.


In this paper, we propose a deep learning approach to sign language recognition using a convolutional neural network (CNN) and long short-term memory (LSTM). The architecture uses a pretrained CNN for feature extraction, whose output is passed to an LSTM to capture spatio-temporal information; one more LSTM is stacked to increase accuracy. Few deep learning models capture temporal information, and only a small number of papers deal with sign language recognition using deep architectures such as CNN and LSTM. The algorithm was tested on an Indian Sign Language (ISL) dataset, and we present the performance evaluation on it. The literature shows that deep learning models capturing temporal information remain an open research problem.


Publication Type: Journal Article


2020

N. Aloysius and M. Geetha, “A scale space model of weighted average CNN ensemble for ASL fingerspelling recognition”, International Journal of Computational Science and Engineering, vol. 22, 2020.


A sign language recognition system facilitates communication between the deaf community and the hearing majority. This paper proposes a novel specialised convolutional neural network (CNN) model, SignNet, to recognise hand gesture signs by incorporating scale space theory into a deep learning framework. The proposed model is a weighted average ensemble of CNNs: a low resolution network (LRN), an intermediate resolution network (IRN) and a high resolution network (HRN), with augmented versions of VGG-16 used for all three. The ensemble works at different spatial resolutions and at varying depths of CNN. The SignNet model was assessed with static signs of American Sign Language, alphabets and digits. Since no sign dataset suited to deep learning exists, ensemble performance is evaluated on a synthetic dataset collected for this task; SignNet reported an impressive accuracy of over 92%, notably superior to other existing models.


2020

N. Aloysius and M. Geetha, “Understanding vision-based continuous sign language recognition”, Multimedia Tools and Applications, vol. 79, no. 31, pp. 22177-22209, 2020.


Real-time sign language translation systems, which convert continuous sign sequences to text or speech, will facilitate communication between the deaf-mute community and the hearing majority. A translation system can be vision-based or sensor-based, depending on the type of input it receives. To date, most commercial systems for this purpose are sensor-based, which are expensive and not user-friendly. Vision-based sign translation systems are the need of the hour but must overcome many challenges to become full-fledged working systems. Preliminary investigations in this work revealed that traditional approaches to continuous sign language recognition (CSLR) using HMM, CRF and DTW tried to solve the problem of Isolated Sign Language Recognition (ISLR) and extended the solution to CSLR, leading to reduced performance. The main challenge, identifying Movement Epenthesis (ME) segments in continuous utterances, was handled explicitly in these traditional methods. With the advent of technologies like deep learning, more feasible solutions for vision-based CSLR are emerging, which has led to an increase in research on vision-based approaches. In this paper, a detailed review of the work on vision-based CSLR is presented, organized by the methods followed. The challenges posed by continuous sign recognition are also discussed in detail, followed by a brief look at sensor-based systems and benchmark databases. Finally, a performance evaluation of the associated methods is presented, leading to a short discussion of the overall study; the paper concludes by pointing out future research directions in the field.


2018

A. Neena and M. Geetha, “Image classification using an ensemble-based deep CNN”, Advances in Intelligent Systems and Computing, vol. 709, pp. 445-456, 2018.


For customary classification algorithms, performance depends on feature extraction methods, yet extracting such unique features is challenging. With the advancement of Convolutional Neural Networks (CNN), the widely used deep learning framework, there has been a substantial improvement in classification performance combined with an implicit feature extraction process. But training a CNN is an intensive process that often needs high-end computing (GPUs) and may take hours or even days, which can limit its application in some situations. Considering these factors, an ensemble architecture is modelled that is trained on subsets of mutually exclusive classes, grouped by Hierarchical Agglomerative Clustering based on similarity, and a new Probabilistic Ensemble-Based Classifier is designed for classifying an image. This new model is trained in comparatively less time with classification accuracy comparable to the traditional ensemble model; moreover, GPUs are not necessary for training it, even on large datasets.
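A sketch of the class-grouping step (the feature dimension and group count are assumptions; each group would then train its own smaller network in the ensemble):

    import numpy as np
    from sklearn.cluster import AgglomerativeClustering

    def group_classes(class_means, n_groups=4):
        # class_means: (C, D) mean feature vector per class; similar classes
        # end up in the same mutually exclusive group.
        labels = AgglomerativeClustering(n_clusters=n_groups).fit_predict(class_means)
        return {g: np.where(labels == g)[0].tolist() for g in range(n_groups)}

    groups = group_classes(np.random.rand(10, 64))   # e.g. 10 classes, 64-d features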


2017

M. Geetha and Kaimal, M. R., “A 3D stroke based representation of sign language signs using key maximum curvature points and 3D chain codes”, Multimedia Tools and Applications, pp. 1-34, 2017.


Sign language is a visual-spatial language used by the deaf and dumb community to convey their thoughts and ideas with the help of hand gestures and facial expressions. This paper proposes a novel 3D stroke-based representation of dynamic gestures of sign language signs incorporating local as well as global motion information. The dynamic gesture trajectories are segmented into strokes or sub-units based on Key Maximum Curvature Points (KMCPs) of the trajectory; this new representation lets us uniquely represent signs with fewer key frames. We extract 3D global features from global trajectories using a scheme that represents strokes as 3D codes: strokes are divided into smaller units (stroke subsegment vectors, or SSVs), each assigned to one of 22 partitions obtained by a discretisation procedure we call an equivolumetric partition (EVP) of the sphere. The codes representing the strokes are referred to as EVP codes. In addition to global and local hand motion, facial expressions are also considered for non-manual signs, to interpret the meaning of words completely. In contrast to existing methods, our stroke-based representation has a less expensive training phase, since it only requires training the key stroke features and stroke sequences of each word.


2012

M. Geetha and C, M. U., “A Vision Based Recognition of Indian Sign Language Alphabets and Numerals Using B-Spline Approximation”, International Journal on Computer Science & Engineering (IJCSE), vol. 4, p. 3, 2012.


Sign language is the most natural way of expression for the deaf community, and the urge to support the integration of deaf people into the hearing society has made automatic sign language recognition an area of interest for researchers. Indian Sign Language (ISL) is a visual-spatial language that provides linguistic information using hands, arms, facial expressions, and head/body postures. In this paper we propose a novel vision-based recognition of Indian Sign Language alphabets and numerals using B-spline approximation. Gestures of ISL alphabets are complex, since they involve gestures of both hands together. Our algorithm approximates the boundary extracted from the region of interest with a B-spline curve, taking the Maximum Curvature Points (MCPs) as the control points. The B-spline curve is then iteratively smoothened, resulting in the extraction of Key Maximum Curvature Points (KMCPs), which are the key contributors to the gesture shape. A translation- and scale-invariant feature vector is then obtained from the spatial locations of the KMCPs in the eight octant regions of 2D space and given to the classifier.
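A brief SciPy sketch of the B-spline smoothing step (the smoothing factor and sample count are assumptions):

    import numpy as np
    from scipy.interpolate import splprep, splev

    def smooth_boundary(boundary, smooth=5.0, n=200):
        # boundary: (N, 2) hand-contour points; fit a closed B-spline and resample.
        x, y = np.asarray(boundary, float).T
        tck, _ = splprep([x, y], s=smooth, per=True)   # per=True -> closed curve
        xs, ys = splev(np.linspace(0, 1, n), tck)
        # Raising `smooth` over repeated fits prunes minor wiggles, leaving the
        # Key Maximum Curvature Points that define the gesture shape.
        return np.column_stack([xs, ys])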
