Qualification: 
Ph.D.
t_senthilkumar@cb.amrita.edu

Dr. T. Senthil Kumar currently serves as Associate Professor in the Department of Computer Science and Engineering, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Coimbatore. His research interests include Video Analytics, Big Data Analytics, Intrusion Detection Systems, Deep Learning, Behavioural Network Security, and Malware Analysis.

He completed his B.Tech. in Computer Science and Engineering at Sethu Institute of Technology, Madurai, his M.Tech. in Distributed Computing Systems at Pondicherry Engineering College, Pondicherry, and his Ph.D. in Information and Communication Engineering at Anna University, Chennai.

His work on agent-based programming for the banking domain received appreciation in The Indian Express. He has published a book on C++. He guides research scholars at Amrita Vishwa Vidyapeetham in the areas of Video Analytics, Intrusion Detection Systems, and Behavioural Security. He has built competency in programming platforms such as MATLAB, NS2, JiST, .NET, Android, Hadoop, Spark, and OpenCV with Qt. He has been working with Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Coimbatore since 2001 and is in charge of the Smart Spaces research lab.

Teaching

  1. Image Processing
  2. Data Mining
  3. Video Analytics
  4. Computer Networks
  5. Object Oriented Paradigm

Funding Projects

Year | Agency/Organization | Project Title | Status
2015 | IBM Shared University Research | Malware detection using FPGA, Sandboxing and Machine Learning | Completed
2017 | Department of Science and Technology | A Framework for event modeling and detection for Smart Buildings using Vision Systems | Completed
2017 | Ministry of Tribal Affairs | Technology inputs in promoting indigenous food recipes of Irulas and Kurumbas tribes and empowering disadvantaged youth of Masinagudi and Ebbanad village of The Nilgiri District | On-going
2018 | IBM Shared University Research | Detection and Prevention of Advanced Persistent Threat (APT) Activities in Heterogeneous Networks using SIEM and Deep Learning | On-going

Research Projects

  • Video Annotation Tool
  • MetaData: A Tool to Supplement Data Science
  • Fall Detection of Elderly People
  • Behavior Prediction with Facial Expression

Collaborative Research with Amrita Institute of Medical Sciences (AIMS)

  • T. Senthil Kumar (CSE) and Dr. Kumar Menon, Physician In-charge, Center for Digital Health, Amrita Institute of Medical Sciences, Kochi, collaborated on the project Human Anatomy ATLAS for educational purposes
  • T. Senthil Kumar (CSE) and Dr. Sherif Peter, Head and Neck Department, Plastic Reconstructive Surgery, Amrita Institute of Medical Sciences, Kochi, collaborated on the project 3D Modelling for Cosmetic Surgery

Ph. D. Students

  • K. S. Gautam
  • H. Haritha
  • P. Sridhar
  • R. Senthil Nathan
  • N. Prabhu
  • Tirumala Kumar Kandukuru

Training Session Delivered for Industry

  • Completed training in Image Processing for L&T TS, Mysore employees from June 19-23, 2017.

Workshops Organized

  • Organizing member for National Workshop on Computer Vision & Image Processing Techniques - NWCVIPT' 17
  • Coordinator for National Workshop on Computer Vision & Image Processing Techniques - NWCVIPT' 2016

Reviewer-International Journals

  • Wiley – International Journal of Communication Systems
  • Springer – Neural Computing and Applications
  • Elsevier – Computers and Electrical Engineering
  • IEEE Access
  • International Journal of Intelligent Information Systems
  • Springer – Data Analysis
  • Journal of Electronic Imaging
  • CMES – Computer Modeling in Engineering & Sciences
  • Inderscience – International Journal of Computational Medicine and Healthcare
  • Elsevier – Computer Methods and Programs in Biomedicine

Publications

Publication Type: Conference Proceedings

Year of Publication Title

2021

S. Nandhini, Suganya, R., Nandhana, K., Varsha, S., Deivalakshmi, S., and Dr. Senthil Kumar T., “Automatic Detection of Leaf Disease Using CNN Algorithm”, Machine Learning for Predictive Analysis, vol. 141. Springer Singapore, Singapore, pp. 237-244, 2021.[Abstract]


In the Indian market, the highest commercial staple is the tomato crop. The production of apples constituted 2.40% of the total fruits produced in India, and maize is one of the highest yielding crops in the world, thus known as the 'miracle crop'. These plants' health and growth are usually affected by diseases. There are various types of tomato, maize and apple leaf diseases that affect the crop. This paper uses a convolutional neural network to detect and identify the diseases in the leaves by image classification. The main objective of the proposed system is to find a solution for the problem of tomato, corn and apple leaf diseases using the neural network. The proposed convolutional neural network model has eight layers, including five convolution and three max-pooling layers. The proposed system has achieved accuracy in the range of 96-98% for three different types of leaf images, indicating the feasibility of the neural network method.

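The layer layout mentioned in the abstract (five convolution and three max-pooling layers feeding a classifier) can be illustrated with a short Keras sketch. This is only a rough sketch: the input size, filter counts and the three-class output are assumptions, not the values used in the paper.

```python
# Illustrative sketch of a small CNN with five convolution and three
# max-pooling layers, as described in the abstract. Filter counts, input
# size and the 3-class output are assumptions, not the paper's values.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_leaf_cnn(input_shape=(128, 128, 3), num_classes=3):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.Flatten(),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_leaf_cnn()
model.summary()
```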

2021

K. K. R. Sanj Kumar, Subramani, G., Dr. Senthil Kumar T., and Parameswaran, L., “A Mobile-Based Framework for Detecting Objects Using SSD-MobileNet in Indoor Environment”, Intelligence in Big Data Technologies – Beyond the Hype, vol. 1167. Springer Singapore, Singapore, pp. 65-76, 2021.[Abstract]


Object detection has a prominent role in image recognition and identification. Neural network approaches are increasingly applied to image processing, classification and detection on ever larger and more complex datasets. With the collection of large amounts of data, faster and more efficient GPUs and better algorithms, computers can be trained conveniently to detect and classify multiple objects within an image with high accuracy. Single-shot detector-MobileNet (SSD) is predominantly used as it is a gateway to other tasks/problems such as delineating the object boundaries, classifying/categorizing the object, identifying sub-objects, tracking and estimating object's parameters and reconstructing the object. This research demonstrates an approach to train convolutional neural network (CNN) based multiclass as well as single-class object detection classifiers and then deploy the model to an Android device. SSD achieves a good balance between speed and accuracy. SSD runs a convolution network on the image, which is fed into the system only once, and produces a feature map. SSD on MobileNet has the highest mAP among the models targeted for real-time processing. This algorithm includes the SSD architecture and MobileNets for faster processing and a greater detection ratio.

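As a rough, generic illustration of running a pre-trained SSD-MobileNet detector on a single image (the paper itself targets an Android deployment), OpenCV's DNN module can be used as below. The model file names, test image and confidence threshold are placeholders.

```python
# Illustrative sketch: running a pre-trained SSD-MobileNet detector with
# OpenCV's DNN module. Model files, image path and the 0.5 confidence
# threshold are placeholders; the paper's pipeline targets an Android device.
import cv2
import numpy as np

net = cv2.dnn.readNetFromTensorflow("frozen_inference_graph.pb",
                                    "ssd_mobilenet.pbtxt")
image = cv2.imread("indoor_scene.jpg")
h, w = image.shape[:2]

# SSD-MobileNet expects a 300x300 input blob.
blob = cv2.dnn.blobFromImage(image, size=(300, 300), swapRB=True)
net.setInput(blob)
detections = net.forward()              # shape: (1, 1, N, 7)

for det in detections[0, 0]:
    confidence = float(det[2])
    if confidence > 0.5:
        box = (det[3:7] * np.array([w, h, w, h])).astype(int)
        x1, y1, x2, y2 = [int(v) for v in box]
        cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), 2)

cv2.imwrite("detections.jpg", image)
```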

2021

P. Sridhar, Dr. Senthil Kumar T., and Parameswaran, L., “A New Approach for Fire Pixel Detection in Building Environment Using Vision Sensor”, Innovations in Computational Intelligence and Computer Vision, vol. 1189. Springer Singapore, Singapore, pp. 392-400, 2021.[Abstract]


Computer Vision based approaches are a significant area of research for detecting and segmenting anomalies in a building environment. Vision sensor approaches enable automation in detection and localization. Existing fire detection frameworks have overcome the constraints of conventional approaches such as threshold limits, environmental pollution, proximity to fire, etc. In this paper we propose a new method for fire detection in smart buildings with a vision sensor that is inspired by computer vision approaches. This proposed method identifies fire pixels using three steps: the first step is based on a Gaussian probability distribution; the second uses a hybrid background subtraction method; and the third is based on temporal variation. These three steps are essentially used to address distinct challenges such as Gaussian noise in frames and different resolutions of videos. Experimental results show good detection accuracy for video frames under various illuminations and robustness to noise like smoke.

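A loose sketch of the kind of pipeline the abstract outlines, combining a simple colour rule, background subtraction and a temporal-variation check with OpenCV. The colour rule and thresholds below are illustrative placeholders, not the paper's Gaussian model parameters.

```python
# Illustrative pipeline: colour rule + background subtraction + temporal
# variation to find candidate fire pixels. Thresholds are placeholders.
import cv2
import numpy as np

cap = cv2.VideoCapture("building_cam.mp4")   # placeholder video path
bg_sub = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
prev_gray = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    b, g, r = cv2.split(frame.astype(np.float32))

    # Simple chromatic rule: fire pixels tend to have R > G > B and high R.
    color_mask = ((r > 180) & (r > g) & (g > b)).astype(np.uint8) * 255

    # Moving-region mask from the background model.
    motion_mask = bg_sub.apply(frame)

    # Temporal variation: flicker shows up as frame-to-frame intensity change.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if prev_gray is None:
        prev_gray = gray
        continue
    temporal_mask = (cv2.absdiff(gray, prev_gray) > 15).astype(np.uint8) * 255
    prev_gray = gray

    fire_mask = cv2.bitwise_and(color_mask,
                                cv2.bitwise_and(motion_mask, temporal_mask))
    print("candidate fire pixels:", int(cv2.countNonZero(fire_mask)))

cap.release()
```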

2020

Dr. Senthil Kumar T., Rajeevan, T. V., Rajendrakumar, S., Subramaniam, P., Kumar, U., and Meenakshi, B., “A Collaborative Mobile Based Interactive Framework for Improving Tribal Empowerment”, Computational Vision and Bio-Inspired Computing, vol. 1108. Springer International Publishing, Cham, pp. 827-842, 2020.[Abstract]


Handling the user interactions and improving the responsiveness of the system through a collaborative recommendation system is the proposed framework. The application will have an interactive form that takes the user details. On valid user access, the user is provided with options under a gallery: Location-wise Statistics, Tribes Culture, Tribes Request and Tribal Activities. The tribal users should be able to present their food products as part of the system. The tourist guests should be able to view the food products and place orders. Recommendations can be provided to the guest based on past experience. The application will also facilitate geo-tagging of the users, including tribes, to improve their participation. The application will be developed using Android with a cloud back end. The user statistics will be presented region-wise in a more visual manner. Smart phones play a vital role in everyone's life. In smart phones we have many applications, and the applications need to be tested. The mobile applications are tested for functionality, usability, and consistency. There are two types of mobile application testing: non-automatic and automatic testing. In this study the automated approach is discussed. Android-based mobile testing needs native test applications, which are used for testing a single platform.


2020

U. Subbiah, Kumar, D. K., Dr. Senthil Kumar T., and Parameswaran, L., “An Extensive Study and Comparison of the Various Approaches to Object Detection using Deep Learning”, 2020 Third International Conference on Smart Systems and Inventive Technology (ICSSIT). 2020.[Abstract]


Smart spaces are specialized environments developed to enable the automatic monitoring of events in a monitored setting. Smart surveillance uses deep learning for object detection, to detect any hazards or predict potential threats in the designated smart space. Deep learning improves the accuracy of the dataset and even humans in tasks like image classification, speech recognition, and predictive tasks. In smart spaces, deep learning can be used for actions like voice recognition, to identify trends in collected data and smart surveillance. Deep learning algorithms are capable of locating a region of interest in a frame and predicting a label for the object in the region of interest. There are a wide variety of architectures available, each with its advantages and limitations. This paper aims to provide a study of deep learning architecture performance tuning. After an extensive comparison, considering the given evaluation metrics and time constraints of a real-time smart surveillance system, the YOLO architecture and its variants are found to be the most efficient. This architecture has been implemented on a smart space dataset and the results have been documented.


2019

S. Rudra and Dr. Senthil Kumar T., “A Robust Q-Learning and Differential Evolution Based Policy Framework for Key Frame Extraction”, Springer Advances in Intelligent Systems and Computing, vol. 1039. Springer International Publishing, Cham, pp. 716-728, 2019.[Abstract]


With the recent development in multimedia technologies, the volume of digital video data on the internet and the web has increased rapidly. For this reason, content based video retrieval (CBVR) has become a wide and vast area of research throughout the last decade. The objective of this work is to present applications for temporal video frame analysis based on performance evaluation of key frames, and video sequence retrieval from the extracted key frames based on different mathematical models. In this work, through performance analysis, we extracted the key frames from a video into its constituent units. This is achieved by identifying transitions between adjacent temporal features. The proposed algorithm aims to extract the key frames based on the validation measures and a cross mutation function through the modified differential evolution algorithm. Given the size of the vector containing image pixels, it can be modeled by a parameter based cross evaluation function of the parent vector. The proposed system, designed for extracting key frames, has led to a reliable algorithm achieving high performance for object re-identification. In addition, the low computational time allows for key frame analysis in real time. In our research, we opted for a global method based on local optimization. The proposed methodology is validated against various state of the art key frame extraction algorithms, which proves this methodology to be a reliable and faster process for object re-identification.


2019

T. M. Manickam, Yogesh, M., Sridhar, P., Dr. Senthil Kumar T., and Parameswaran, L., “Video-Based Fire Detection by Transforming to Optimal Color Space”, Proceedings of 3rd International Conference On Computational Vision and Bio Inspired Computing, vol. 1108. Springer International Publishing, Cham, 2019.[Abstract]


With the increase in the number of fire accidents, the need for fire detection systems is growing every year. Detecting fire at early stages can prevent both material loss and loss of human lives. Sensor based fire detection systems are commonly used for detecting fire, but they have drawbacks like time delay and the need for close proximity. Vision based fire detectors are cost efficient and can potentially detect fire in its early stages. Here we propose a lightweight pixel based fire detection model to extract frames from videos and identify frames with fire in them. We use matrix multiplication to transform the input frame to a new color space in which separation of fire pixels from non-fire pixels is easier. The optimal value for the matrix to be multiplied is obtained using fuzzy c-means clustering and particle swarm optimization. Otsu thresholding is applied on the transformed image in the new color space to classify the fire pixels in the frame. Our results show high accuracy on forest fire videos with very low inference time.

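The core idea (multiply each RGB pixel by a learned 3x3 matrix to reach a new colour space, then apply Otsu thresholding on a transformed channel) can be sketched as follows. The identity matrix used here is only a placeholder for the matrix the paper learns with fuzzy c-means clustering and particle swarm optimization.

```python
# Illustrative sketch: map frames to a new colour space with a 3x3 matrix and
# apply Otsu thresholding on one transformed channel. The identity matrix is a
# placeholder for the matrix learned with fuzzy c-means and PSO in the paper.
import cv2
import numpy as np

T = np.eye(3, dtype=np.float32)              # placeholder transform matrix

frame = cv2.imread("forest_frame.jpg")       # placeholder frame
pixels = frame.reshape(-1, 3).astype(np.float32)
transformed = (pixels @ T.T).reshape(frame.shape)

# Rescale the first transformed channel to 8 bits and threshold it with Otsu.
channel = transformed[:, :, 0]
span = channel.max() - channel.min()
channel8 = ((channel - channel.min()) / (span + 1e-9) * 255).astype(np.uint8)
_, fire_mask = cv2.threshold(channel8, 0, 255,
                             cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imwrite("fire_mask.png", fire_mask)
```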

2017

Dr. Senthil Kumar T. and Murthi, M., “A semi automated system for smart harvesting of tea leaves”, 2017 4th International Conference on Advanced Computing and Communication Systems (ICACCS). 2017.[Abstract]


The traditional methods of tea leaf harvesting are hand plucking with a knife and hand plucking without a knife. In recent years harvesting machines have been introduced which can be operated by a single person or multiple persons, as well as a robotic vehicle. The challenges faced in the above systems are insufficient human resources, intrusion of wild animals into the fields, the machines not having intelligence of their own, and the robotic vehicle being terrain dependent, usable only on plain terrain, whereas the fields in India are on irregular terrain. This paper proposes a semi-automatic system where the tea leaves are harvested automatically by a robotic arm which plucks the tea leaves based on their grade. The grade identification is done using image processing techniques such as key frame extraction, rice counting, optical flow, and a noise model with segmentation. The proposed work is novel because it combines motion with key frame capabilities and the noise model.


2017

K. S. Gautam and Dr. Senthil Kumar T., “Hidden object detection for classification of threat”, 2017 4th International Conference on Advanced Computing and Communication Systems (ICACCS). 2017.[Abstract]


The paper proposes an intelligent K-means segmentation algorithm that clearly segments foreground objects and completely occluded objects. When a person completely occludes an object while entering into the area of video surveillance, it is considered as an anomaly. The paper comes up with a robust technical solution to address this. The proposed algorithm chooses an optimal value for K and segments the object. The scope of the system extends to the area such as prison, airport etc. where there is a need to monitor completely occluded objects and other objects in the foreground. The system is tested with images from Stereo Thermal Dataset and achieves a precision rate of 88.89% while segmenting objects. From the experimental results, we infer that the proposed algorithm is robust in segmenting the objects without losing its shape and number.

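As a rough illustration, colour-based K-means segmentation of a frame can be done with OpenCV as below; K is fixed here, whereas the paper chooses an optimal K automatically.

```python
# Illustrative K-means colour segmentation of a frame with OpenCV.
# K is fixed here; the paper selects an optimal K automatically.
import cv2
import numpy as np

image = cv2.imread("scene.jpg")                       # placeholder image
samples = image.reshape(-1, 3).astype(np.float32)

K = 4
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, centers = cv2.kmeans(samples, K, None, criteria, 5,
                                cv2.KMEANS_PP_CENTERS)

# Paint every pixel with its cluster centre to visualise the segments.
segmented = centers[labels.flatten()].astype(np.uint8).reshape(image.shape)
cv2.imwrite("segmented.png", segmented)
```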

2016

K. S. Sahla and Dr. Senthil Kumar T., “Classroom Teaching Assessment Based on Student Emotions”, Intelligent Systems Technologies and Applications 2016. Springer International Publishing, Cham, pp. 475–486, 2016.[Abstract]


Classroom teaching assessments are designed to give useful feedback on the teaching-learning process as it is happening. The best classroom evaluations additionally serve as significant sources of data for instructors, helping them recognize what they taught well and what they have to deal with. In the paper, we propose a deep learning method for emotion analysis. This work focuses on students of a classroom and thus understands their facial emotions. The methodology includes a preprocessing phase in which face detection is performed, LBP encoding and mapping of LBPs using deep convolutional neural networks, and finally emotion prediction.

2016

R. Suganya, Rajaram, S., Vishalini, S., Meena, R., and Dr. Senthil Kumar T., “Dental image retrieval using fused local binary pattern & scale invariant feature transform”, Advances in Intelligent Systems and Computing, vol. 425. Springer, pp. 215-224, 2016.[Abstract]


In the field of dental biometrics, textural information very often plays a significant role in tissue characterization and gum disease diagnosis, in addition to morphology and intensity. Failure to diagnose gum diseases in their early stages may lead to oral cancer. Dental biometrics has emerged as vital biometric information of human beings due to its stability, invariant nature and uniqueness. The objective of this paper is to improve the classification accuracy based on fused LBP and SIFT textural features for the development of a computer assisted screening system. The swift expansion of dental images has enforced the requirement of an efficient dental image retrieval system for retrieving images that are visually similar to a query image. This paper implements a dental image retrieval system using fused LBP & SIFT features. The fused LBP & SIFT features identify gum diseases from the epithelial layer, classifying normal dental images about 91.6% more accurately compared to other features. © Springer International Publishing Switzerland 2016.

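One plausible way to realise the fusion of LBP texture features with SIFT descriptors into a single retrieval feature vector is sketched below; the libraries, histogram size and pooling choice are assumptions, not the paper's exact settings.

```python
# Illustrative fusion of an LBP texture histogram with mean-pooled SIFT
# descriptors into one retrieval feature vector. Parameters are assumptions.
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def fused_feature(gray):
    # LBP histogram: uniform patterns, 8 neighbours, radius 1 -> 10 bins.
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

    # SIFT keypoint descriptors, mean-pooled into a single 128-D vector.
    sift = cv2.SIFT_create()
    _, descriptors = sift.detectAndCompute(gray, None)
    if descriptors is None:
        sift_vec = np.zeros(128, dtype=np.float32)
    else:
        sift_vec = descriptors.mean(axis=0)

    return np.concatenate([lbp_hist.astype(np.float32), sift_vec])

gray = cv2.imread("dental_image.png", cv2.IMREAD_GRAYSCALE)   # placeholder
print(fused_feature(gray).shape)    # 10 LBP bins + 128 SIFT dimensions
```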

2016

Dr. Senthil Kumar T. and Narmatha, G., “Suspicious human activity detection in classroom examination”, Advances in Intelligent Systems and Computing, vol. 412. Springer , pp. 99-108, 2016.[Abstract]


The proposed work aims at developing a system that analyzes and detects suspicious activities that often occur in a classroom environment. Video analytics provides an optimal solution for this, as it helps in pointing out an event and retrieving the relevant information from the recorded video. The system framework consists of three parts to monitor student activity during an examination. Firstly, the face regions of the students are detected and monitored using Haar feature extraction. Secondly, hand contact is detected, by grid formation, when two students exchange papers or any other foreign objects between them. Thirdly, hand signaling by a student is recognized using the convex hull and an alert is given to the invigilator. The system is built using C/C++ and the OpenCV library and shows better performance on real-time video frames. © Springer Science+Business Media Singapore 2016.

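The first stage described in the abstract, locating student faces with Haar features, corresponds to OpenCV's standard Haar cascade detector. A minimal sketch is shown below in Python, although the paper's system is written in C/C++ with the same library; the frame path is a placeholder.

```python
# Minimal Haar-cascade face detection on one classroom frame (Python/OpenCV;
# the system in the paper is built in C/C++ with the same library).
import cv2

cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

frame = cv2.imread("classroom.jpg")                 # placeholder frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
cv2.imwrite("classroom_faces.jpg", frame)
```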

2016

Dr. Senthil Kumar T., Gautam, K. S., and Haritha, H., “Debris detection and tracking system in water bodies using motion estimation technique”, Advances in Intelligent Systems and Computing, vol. 424. Springer, pp. 275-284, 2016.

2016

S. Sreelakshmi, Anupa Vijai, and Dr. Senthil Kumar T., “Detection and Segmentation of Cluttered Objects from Texture Cluttered Scene”, Proceedings of the International Conference on Soft Computing Systems , vol. 398. Springer, pp. 249-257, 2016.[Abstract]


The aim of this paper is to segment an object from a texture-cluttered image. Segmentation is achieved by extracting the local information of the image and embedding it in a region-based active contour model. Images with inhomogeneous intensity can be segmented using this model by extracting the local information of the image. The level set function [1] can be smoothened by introducing Gaussian filtering into the current model, and the need for resetting the contour at every iteration can be eliminated. Evaluation showed that the results obtained from the proposed method are similar to the results obtained from the LBF [2] (local binary fitting) energy model, but the proposed method is found to be more efficient in computational terms. Moreover, the method maintains the sub-pixel reliability and boundary fixing properties. The approach is presented with metrics of visual similarity and could be further extended with quantitative metrics.


2014

Dr. Senthil Kumar T. and Anupa Vijai, “3D Reconstruction of Face: A comparison of Marching Cube and Improved Marching Cube Algorithm”, Proceedings of the International Conference of Advances in Engineering and Technology, vol. 1. pp. 6-9, 2014.[Abstract]


3D reconstruction of the face is one of the advancements in physical modeling techniques which uses engineering methods in the field of medicine. The systems in development propose a software tool that will help in craniofacial surgery. The existing approaches for 3D reconstruction have different applications, from real scenery to parts of the human body. The analysis of the different algorithms allows developers to make vital decisions in understanding the modelling of the face. The human face has different regions, including tissue and hard bones. The paper presents a comparison of two surface rendering techniques, the Marching Cube (MC) and Improved Marching Cube (IMC) algorithms, and draws conclusions for analysing the suitable approach for a specific range of applications.


2012

Dr. Senthil Kumar T. and Anupa Vijai, “3D Reconstruction of Face from 2D CT Scan Images”, Procedia Engineering, vol. 30. pp. 970-977, 2012.[Abstract]


3D reconstruction of the face is one of the advancements in physical modeling techniques which uses engineering methods in the field of medicine. The systems in development propose a software tool that will help in craniofacial surgery. The existing approaches for 3D reconstruction have different applications, from real scenery to parts of the human body. The analysis of the different algorithms allows developers to make vital decisions in understanding the modelling of the face. The human face has different regions, including tissue and hard bones. The paper presents a survey on different 3D reconstruction approaches and draws conclusions for analysing the suitable approach for a specific range of applications.


Publication Type: Journal Article

Year of Publication Title

2020

K. Gautam, Dr. Latha Parameswaran, and Dr. Senthil Kumar T., “Computer Vision Based Asset Surveillance for Smart Buildings”, Journal of Computational and Theoretical Nanoscience, vol. 17, no. 1, pp. 456 - 463, 2020.[Abstract]


Unraveling meaningful patterns from video offers a solution to many real-world problems, especially surveillance and security. Detecting and tracking an object under video surveillance not only automates security but also leverages the smart nature of buildings. The objective of the manuscript is to detect and track assets inside the building using a vision system. In this manuscript, the strategies involved in asset detection and tracking are discussed with their pros and cons. In addition, a novel approach has been proposed that detects and tracks the object of interest across all the frames using the correlation coefficient. The proposed approach is significant since the user has the option to select the object of interest from any two frames in the video, and the correlation coefficient is calculated for the region of interest. Based on the arrived correlation coefficient, the object of interest is tracked across the rest of the frames. Experimentation is carried out using 10 videos acquired from an IP camera inside the building.

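The tracking idea described above (select a region of interest once, then locate its best match in each later frame via the correlation coefficient) can be sketched with OpenCV's normalised cross-correlation template matching. The video path and the ROI coordinates are placeholders.

```python
# Illustrative correlation-coefficient tracking: an ROI chosen in one frame is
# located in subsequent frames via normalised cross-correlation.
import cv2

cap = cv2.VideoCapture("asset_cam.mp4")     # placeholder IP-camera recording
ok, first = cap.read()
x, y, w, h = 100, 80, 60, 60                # placeholder region of interest
template = cv2.cvtColor(first[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    scores = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(scores)
    print("best correlation %.2f at %s" % (max_val, max_loc))

cap.release()
```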

2019

K. S. Gautam and Dr. Senthil Kumar T., “Video analytics-based facial emotion recognition system for smart buildings”, International Journal of Computers and Applications, pp. 1-10, 2019.[Abstract]


Video surveillance within prisons monitors the emotional status of inmates, as human emotions provide insight into their intended actions. This work attempts to build an automated system that cognizes human emotion from the pattern of pixels in a facial image. In this paper, a solution based on an Iterative Optimization Strategy is proposed to minimize the loss function. The proposed strategy is applied in the fully connected layer of a deep ConvNet. To evaluate the performance of the system we use two benchmark datasets, the Japanese Female Facial Expression database and the Kaggle Facial Expression Recognition dataset. The system was manually tested with captured video and video from a real documentary on YouTube. From the results, we see that the proffered system achieves a precision (the closeness of agreement among a set of results) of 0.93.


2019

Dr. Senthil Kumar T., S, R., and Rajevan, T., “Deep Learning based Emotion Analysis Approach for Strengthening Teaching-Learning process in Schools of Tribal Regions”, Journal of Advanced Research in Dynamical and Control Systems, vol. 11, pp. 621-635, 2019.[Abstract]


Emotions are individual traits that differ across people, especially under deprivation in society, age-long marginalization, inaccessibility of basic infrastructure, poverty, etc. The vulnerabilities among tribal students can be understood and analyzed using sophisticated technologies. Face detection and recognition of a human is an essential step in a deep learning framework. Human emotion recognition is assumed to be vital in finding out the vulnerabilities of individuals. The automatic identification of emotions has been a consistent research theme from early time periods, and several advancements have been made in this field using automatic identification of face emotion. Feelings are reflected in facial expressions, hand and body signals, and outward appearance. Thus, understanding and extraction of these emotions have high significance in the collaboration between human and machine correspondence. The paper aims to detect face emotions and annotate every human in a given video using a combination of computational analysis, manual annotation, and experimental validation. The technique can be applied to tribal school kids for understanding their emotions. The system can recognize the emotion of kids and can authenticate the person. It can also be used for detecting and annotating unauthorized entries in restricted areas. Through gaming platforms it is easy to attract the tribal students, analyze their emotions and plan developmental aspects based on the outcome. It will also help to understand the specific reasons for school dropouts and other issues students in the tribal community face. Teachers can develop a framework using the report from the application to improve the educational qualities of students. It can be concluded that the deep learning architecture, the Convolutional Neural Network, is able to accurately classify the emotions.


2019

H. Haritha and Dr. Senthil Kumar T., “A modified deep learning architecture for vehicle detection in traffic monitoring system”, International Journal of Computers and Applications, pp. 1-10, 2019.[Abstract]


Deployment of adequate surveillance and security measures is crucial in today’s scenario. The proposed system is designed to handle the surveillance that detects the vehicles present in the traffic. With the evolution of Deep convolutional neural network, the vision-based vehicle detection has reached a higher level in performance. The proposed method deals with the detection of vehicles and capable of handling the distant small-scale vehicles without bothering the image scalability. The method uses Deep Learning along with ROI pooling for handling image scalability. The method consists of a noise-reducing component and the convolutional layers. The method calculates varying scales of the input and does the filtering according to each scale without generalizing. Thereby it handles the image scalability issue. The filter size is small in the proposed method, hence there is a reduction in system complexity and an increase in performance rate. The global average pooling is introduced to the final layers so that the overfitting is avoided. The performance of the system is assessed by the comparison with already established methods for vehicle detection like MS-CNN and YOLO v2 and provides much better experimental results.


2019

K. S. Gautam and Dr. Senthil Kumar T., “Video analytics-based intelligent surveillance system for smart buildings”, Soft Computing, vol. 23, pp. 2813-2837, 2019.[Abstract]


The goal of the work is to automate video surveillance. The work holds its importance since camera surveillance under manual supervision fails occasionally. Face images of authorized users in the building are trained, and for each face image a weight is calculated. When a test face comes into the building, the weight for the test face is calculated and compared with the existing weights. Based on the similarity between them, the person's face is identified. For successful recognition of a face across frames, first the face image has to be detected. To handle this task, we propose a hybrid algorithm using a Haar cascade classifier and a skin detector. The stand-alone performance of the Haar cascade classifier and the skin detector is analyzed, and the work discusses the need for hybridization. The proposed approach addresses the challenges in detecting faces such as orientation changes, varying illumination and partial occlusion. The performance of the system is comparatively analyzed with videos of frontal and pose-varying faces, and the detection of the face across frames is measured. From the experimental analysis, we infer that the detection rate of the proposed hybrid algorithm is 100% for frontal face video and 99.895% for pose-varying face video. The proposed hybrid algorithm is tested with the VISOR dataset and achieves a precision of 95.20%. We also propose a deep learning framework based on the hybrid algorithm to detect the face across frames. The proposed framework generates more images by an affine warping strategy, thus handling face orientation changes. The test data are labeled based on counting the prediction results of all the affine transformed images. To evaluate the performance of the system, the framework is tested with frames from the VISOR dataset. From the experimental analysis, we infer that the precision of the proposed deep learning framework is 99.10%. Since the person's face has been detected across the maximum number of frames, the next focus of the work is to identify the person whose face has been detected. For addressing the person identification problem, we use principal component analysis to reduce the dimension of the face vectors. The weight matrix or confidence for every face vector is calculated and checked for similarity with the weight matrix of the test face vector. Based on the similarity between the two weight matrices, a person's face is identified. The built automated surveillance system is tested with the NRC-IIT facial video database; from the experimental analysis, we infer that the system detects the face across all the frames and, out of ten videos, correctly identifies the face of the person in nine videos, giving a recognition rate of 90%. © 2019, Springer-Verlag GmbH Germany, part of Springer Nature.

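The identification stage described above, projecting face images onto principal components and comparing the resulting weight vectors, is the classic eigenface recipe. A compact sketch with scikit-learn follows; the gallery data, image size and number of components are placeholders.

```python
# Illustrative eigenface-style recognition: PCA weights of enrolled faces are
# compared with the weights of a probe face. Gallery data are placeholders.
import numpy as np
from sklearn.decomposition import PCA

# Placeholder gallery: 20 flattened 64x64 face images with identity labels.
rng = np.random.default_rng(0)
gallery = rng.random((20, 64 * 64)).astype(np.float32)
labels = np.repeat(np.arange(10), 2)

pca = PCA(n_components=15).fit(gallery)
gallery_weights = pca.transform(gallery)

def identify(probe_face):
    """Return the label whose PCA weight vector is closest to the probe's."""
    probe_weights = pca.transform(probe_face.reshape(1, -1))
    distances = np.linalg.norm(gallery_weights - probe_weights, axis=1)
    return labels[int(np.argmin(distances))]

print(identify(gallery[3]))   # should recover the identity of gallery face 3
```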

2018

K. R. Ramakrishnan and Dr. Senthil Kumar T., “Deep learning for identification and validation of objects and data viewed through vehicle windshield in lab environment – a DCNN approach”, Journal of Advanced Research in Dynamical and Control Systems, vol. 10, pp. 880-891, 2018.[Abstract]


Automobiles are becoming more complicated systems considering safety, environment and user luxury. It is becoming a great challenge considering the complexity of driving and safety in the modern automotive world with human interaction systems. The current display systems on the instrument panel, which are used for displaying messages, force the driver to look away from the road view. To overcome this critical behaviour of the driver, the windshield is used to display the information. This unit is called a Head-up Display (HUD), which reduces the duration and frequency of the driver looking/deviating away from the traffic situation/scene. As an advance on the HUD, the Augmented Reality (AR) concept has come into play to overcome the drawbacks of the HUD, such as the risk of hindering pertinent objects of traffic and phenomena like insight channeling and intellectual capture. In order to validate the HUD content, such as object detection, a deep learning based approach for recognizing and identifying the object types along with predicting and interpreting composite situations is proposed. A Deep Convolutional Neural Network (DCNN) is implemented to identify the object of interest in one evaluation from the full image, and in addition performance analyses are carried out on different dataset scenarios. The network is implemented and deployed in a lab environment aiming for a real-time object detection testing system that is used for testing HUD contents. © 2018, Institute of Advanced Scientific Research, Inc. All rights reserved.


2018

V. U. Kumar and Dr. Senthil Kumar T., “Deep learning for I2V communication using moving QR code”, Journal of Advanced Research in Dynamical and Control Systems, vol. 10, pp. 892-898, 2018.[Abstract]


The main objective of this paper is to study the feasibility of using moving QR (Quick Response) codes as a medium of Infrastructure-to-Vehicle (I2V) communication. Highly automated vehicles are in general connected to the internet to get real-time data to make appropriate decisions. At present, RF technology (WiFi / 4G / 5G / DSRC) is commonly used for the wireless connectivity of highly automated vehicles. There is a potential chance that an increase in RF networks will lead to congested communication channels. It also might cause health hazards to humans, especially children, and to animals and birds in the near future. Fail-operational behaviour is a mandatory requirement in highly automated vehicles, so in order to maintain it, an alternate or redundant connection other than RF connectivity is needed. This paper proposes a low cost alternate method/technology to establish wireless Infrastructure-to-Vehicle (I2V) communication. In the proposed method visible light is used as the medium for transmission and an optical sensor (camera) is used as the receiver. A moving QR code is used for encoding the data. Data rate improvement is discussed by employing a Deep Neural Network (DNN). © 2018, Institute of Advanced Scientific Research, Inc. All rights reserved.


2018

A. M. Geetha and Dr. Senthil Kumar T., “Deep learning for driver assistance using estimated trajectory complexity parameter”, Journal of Advanced Research in Dynamical and Control Systems, vol. 10, pp. 871-879, 2018.[Abstract]


The current work aims at introducing the concept and suggesting a possible implementation methodology for achieving the following objectives, given a video containing vehicles under conditions of normal traffic: 1) Detection of target vehicle using deep learning 2) Trajectory complexity parameter (TCP) derivation. These objectives or phases achieved in sequence would result in TCP which is a parameter introduced to estimate the complexity of the trajectory of vehicle in front. The proposed method, employing Faster R-CNN (Regions with Convolutional Neural Networks) for detection of the vehicle and an x coordinate gradient based logic for deriving TCP, is tested for the feasibility of the concept. TCP is a representation of the driving pattern of the target vehicle’s driver. This measure has its potential usage in aspects like detecting rash drivers travelling in front of us and estimating conditions where more attention from the driver is required due to complex driving pattern of vehicles in front. © 2018, Institute of Advanced Scientific Research, Inc. All rights reserved.


2018

K. D. Kumar and Dr. Senthil Kumar T., “Assisting visually challenged person in the library environment”, Lecture Notes in Computational Vision and Biomechanics, vol. 28, pp. 722-733, 2018.[Abstract]


Nowadays, there are many assistive technologies to support visually impaired people. Among them, computer vision based methods provide a feasible solution. An indoor environment provides challenges like object recognition, character and scene recognition. It is important to understand that people need information about things and places, and they may feel insecure in places like their workplaces or shopping malls because of their challenges in vision. It is essential that a technology based solution be provided to support these people so that they can be guided along their pathways, rooms and shopping malls and can also access things in their living environment. In this paper a model is proposed for detecting text from library book shelf scene video and informing the user about the book name through audio, to assist visually impaired people in accessing a book kept on a shelf in the library. Key frames are extracted using PSNR and the Edge Change Ratio method. Text on the key frame is detected and localized using MSER and projection profiles. A CNN is used to recognize characters from the localized text. This paper gives an outline of the different techniques which are combined to extract key frames and to localize and recognize text from natural library scenes. © 2018, Springer International Publishing AG.

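The text-localisation step mentioned in the abstract (MSER regions on a key frame) maps onto OpenCV's MSER detector. A minimal sketch is given below, with the projection-profile refinement and the CNN recogniser omitted; the image path is a placeholder.

```python
# Illustrative MSER-based text-region candidates on a shelf image.
# Projection-profile refinement and the CNN recogniser are omitted.
import cv2

gray = cv2.imread("bookshelf.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder
mser = cv2.MSER_create()
regions, bboxes = mser.detectRegions(gray)

vis = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
for (x, y, w, h) in bboxes:
    cv2.rectangle(vis, (x, y), (x + w, y + h), (0, 0, 255), 1)
cv2.imwrite("text_candidates.jpg", vis)
```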

2017

H. Haritha and Dr. Senthil Kumar T., “Survey on various traffic monitoring and reasoning techniques”, Advances in Intelligent Systems and Computing, vol. 573, pp. 507-516, 2017.[Abstract]


Traffic monitoring and surveillance have been advancing in recent years. This paper presents a survey of various traffic sensing and monitoring techniques. Several projects have been developed for the detection and tracking of vehicles in multiple scenarios. The vehicle monitoring results depend mainly on the camera positioning. This paper gives a detailed description of the different camera positioning and monitoring setups, which include straight roads and intersections. In this survey a detailed description is given of preprocessing techniques, vehicle detection and tracking methods. Finally, the paper concludes with the challenges and the future scope. © Springer International Publishing AG 2017.


2016

A. Sankar, Bharathi, P. D., Midhun, M., Vijay, K., and Dr. Senthil Kumar T., “A conjectural study on machine learning algorithms”, Advances in Intelligent Systems and Computing, vol. 397, pp. 105-116, 2016.[Abstract]


Artificial Intelligence is a field which deals with the study and design of systems that have the capability of observing their environment and performing actions aimed at maximizing the probability of success in solving problems. AI turned out to be a field which captured wide interest and attention from the scientific world, so that it gained extraordinary growth. This in turn resulted in increased focus on a field which deals with developing the underlying conjectures of learning aspects and learning machines: machine learning. The methodologies and objectives of machine learning played a vital role in the considerable progress gained by AI. Machine learning aims at improving the learning capabilities of intelligent systems. This survey is aimed at providing a theoretical insight into the major algorithms that are used in machine learning and the basic methodology followed in them. © Springer India 2016.


2016

R. G., Dr. Senthil Kumar T., Reyner, P. P. D., Leela, G., Mangayarkarasi, N., Abirami, A., and Vinayaka, K., “3D modelling of a jewellery and its virtual try-on”, Advances in Intelligent Systems and Computing, vol. 397, pp. 157-165, 2016.[Abstract]


Nowadays, everything is becoming automated. So, automation is indeed needed in the world of jewellery. The goldsmith or any jewellery vendor, rather than having all the real patterns of jewellery, can have the model of these jewellery, so that he can display them virtually on the customer’s hand using Augmented Reality. 2D representation of an object deals only with the height and the width of an object. 3D representations include the third dimension of an image which is the depth information of an object. This paper presents an overall approach to 3D modelling of jewellery from the uncalibrated images. The datasets are taken from different viewing planes at different intervals. From these images, we construct the 3D model of an object. 3D model provides a realistic view for the users by projecting it on human hand using the augmented reality technique. © Springer India 2016.


2016

K. S. Gautam and Dr. Senthil Kumar T., “Discrimination and detection of face and non-face using multilayer feedforward perceptron”, Advances in Intelligent Systems and Computing, vol. 397, pp. 89-103, 2016.[Abstract]


The paper proposes a face detection system that locates and extracts faces from the background using the multilayer feedforward perceptron. Facial features are extracted from the local image using filters. In this approach, the feature vector from a Gabor filter acts as the input for the multilayer feedforward perceptron. The points holding high information on the face image are used for extraction of feature vectors. Since the Gabor filter extracts features from varying scales and orientations, the feature points are extracted with high accuracy. Experimental results show the multilayer feedforward perceptron discriminates and detects faces from non-face patterns irrespective of illumination changes. © Springer India 2016.

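A compact sketch of the pipeline described above: Gabor filter responses at a few orientations are pooled into a feature vector and fed to a multilayer feed-forward perceptron. The kernel parameters are illustrative, and the training data are random placeholders that only show the wiring.

```python
# Illustrative Gabor-feature + multilayer perceptron face/non-face classifier.
# Kernel parameters are illustrative; training data are random placeholders.
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

def gabor_features(gray):
    """Mean and standard deviation of Gabor responses at four orientations."""
    feats = []
    for theta in np.arange(0, np.pi, np.pi / 4):
        # arguments: ksize, sigma, theta, lambda, gamma
        kernel = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5)
        response = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kernel)
        feats.extend([response.mean(), response.std()])
    return np.array(feats, dtype=np.float32)

# Placeholder 24x24 patches: label 1 = face, 0 = non-face.
rng = np.random.default_rng(1)
patches = rng.random((40, 24, 24)).astype(np.float32)
y = rng.integers(0, 2, size=40)
X = np.array([gabor_features(p) for p in patches])

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)
print("training accuracy:", clf.score(X, y))
```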

2015

Dr. Senthil Kumar T. and Saivenkateswaran, S., “Evaluation of video analytics for face detection and recognition”, International Journal of Applied Engineering Research, vol. 10, pp. 24003-24016, 2015.[Abstract]


Face detection and recognition present a challenging problem in the field of computer vision and image processing [www.cosy.sbg.ac.at]. To localize and to extract a particular face region in an image or video, face detection is used as the first step in face recognition systems [www.idsia.ch] [1]. Face detection and recognition have several applications: content based image or video retrieval, video coding, video conferencing, crowd analysis and intelligent human computer interfaces [iasir.net][1]. Much research is still ongoing because it is very tough to find the exact face of a person if we need to match the face region in the database, which makes face detection a tough problem in computer vision [iasir.net]. This paper analyzes how face detection and recognition approaches can be used for a wide variety of applications like smart buildings and driver recognition during accidents. © Research India Publications.


2015

Dr. Senthil Kumar T. and Pandey, S., “Customization of recommendation system using collaborative filtering algorithm on cloud using mahout”, Advances in Intelligent Systems and Computing, vol. 321, 2015.[Abstract]


Recommendation systems help people in decision making regarding an item/person. The growth of the World Wide Web and e-commerce are the catalysts for recommendation systems. Due to the large size of the data, recommendation systems suffer from a scalability problem, and Hadoop is one of the solutions for this problem. Collaborative filtering is a machine learning algorithm, and Mahout is an open source Java library which supports collaborative filtering in a Hadoop environment. The paper discusses how a recommendation system using collaborative filtering can be built in the Mahout environment. The performance of the approach has been presented using speedup and efficiency.
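The paper runs collaborative filtering at scale with Mahout on Hadoop; as a small, library-independent illustration of the underlying user-based collaborative filtering step (cosine similarity between users, followed by a similarity-weighted average of neighbours' ratings), here is a toy NumPy sketch.

```python
# Toy user-based collaborative filtering: cosine similarity between users,
# then a similarity-weighted average of their ratings for an unseen item.
# (The paper runs this at scale with Mahout on Hadoop; this is only a sketch.)
import numpy as np

# Rows = users, columns = items; 0 means "not rated yet".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 3, 1],
    [1, 1, 0, 5],
    [0, 1, 4, 4],
], dtype=float)

def predict(user, item):
    others = [u for u in range(len(ratings))
              if u != user and ratings[u, item] > 0]
    sims = []
    for u in others:
        a, b = ratings[user], ratings[u]
        sims.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    sims = np.array(sims)
    neighbour_ratings = ratings[others, item]
    return float(sims @ neighbour_ratings / (sims.sum() + 1e-9))

print("predicted rating of user 0 for item 2: %.2f" % predict(0, 2))
```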

2014

Dr. Senthil Kumar T., Vishak, J., Sanjeev, S., and Sneha, B., “Cloud Based Framework for Road Accident Analysis”, International Journal of Computer Science and Mobile Computing, vol. 3, pp. 1025 - 1032, 2014.

2014

Dr. Senthil Kumar T., Suresh, A., Pai, K. Kiron, and Chinnaswamy, P., “Survey on Predictive medical data analysis”, Journal of Engineering Research & Technology, vol. 3, pp. 2283-2286, 2014.

2014

Dr. Senthil Kumar T., Reddy, P. K. Ajay, Chidambaram, M., Anurag, D. V., Karthik, S., K. Teja, R., and N. Harish, S., “Video Recommender In Open/Closed Systems”, International Journal of Research in Engineering and Technology, vol. 3, pp. 24 - 28, 2014.

2014

Dr. Senthil Kumar T., Suresh, A., and Karumathil, A., “Improvised classification model for cloud based authentication using keystroke dynamics”, Lecture Notes in Electrical Engineering, vol. 309 LNEE, pp. 295-303, 2014.[Abstract]


The etymology of communication is the transmission of data. Data has to be transmitted through different devices, network topologies and geographic locations. The strength of communication has tripled with the advent of cloud technologies providing high scalability and storage on demand. The need for cloud security is increasing at an alarming rate, and using biometric techniques over the traditional password based alternative has proved to be efficient. A behavioral biometric such as keystroke dynamics can be used to strengthen existing security techniques effectively. Due to the semi-autonomous nature of the typing behavior of an individual, it is difficult to validate the identity of the user. This paper proposes a model to validate the identity of the user which acclimatizes to tolerance across multiple devices and provides a robust three dimensional model for classification. As an additional layer of security, the model is transformed after every login to prevent professional intruders from predicting the acceptance region. © 2014 Springer-Verlag Berlin Heidelberg.


2013

N. Susan Thampi, Dr. Senthil Kumar T., and Johnpaul, C. I., “Performance Analysis of Various Recommendation Algorithms Using Apache Hadoop and Mahout”, International Journal of Scientific & Engineering Research, vol. 4, pp. 279-287, 2013.

2013

R. Manoj, Dr. Senthil Kumar T., Maruthi, M., and Vivek, G., “A Survey: Artificial Neural Networks in Surveillance System”, International Journal of Computer Applications, vol. 1, pp. 19-22, 2013.

2013

S. Murugesan, Dr. Senthil Kumar T., Priyanka, U. Sree, and Abinaya, K., “Towards an Approach for Improved Security in Wireless Networks”, International Journal of Computer Applications, vol. 1, pp. 9-13, 2013.

2013

Dr. Senthil Kumar T., Kumar, S., “A Novel Face Recognition Algorithm using PCA”, International Journal of Computer Applications, vol. 3, pp. 8 - 12, 2013.

2013

Dr. Senthil Kumar T., Sivanandam, N., Gokul, M., and Anusha, B., “Logo Classification of Vehicles using SURF based on Low Detailed Feature Recognition”, International Journal of Computer Applications, vol. 3, pp. 5 - 7, 2013.

2012

S. N. Sivanandam, Dr. Senthil Kumar T., Kumar, Krishna, and Ajay, A., “An Improved Approach for Character Recognition in Vehicle Number Plate using Eigenfeature Regularisation and Extraction Method”, International Journal of Research and Reviews in Electrical and Computer Engineering, vol. 2, pp. 64-69, 2012.

2012

Dr. Senthil Kumar T. and Sivanandam, S. N., “A modified approach for detecting car in video using feature extraction techniques”, European Journal of Scientific Research, vol. 77, pp. 134-144, 2012.[Abstract]


Deployment of effective surveillance and security measures is important these days. The proposed approach is able to detect, identify and track different types of vehicles and people entering secured premises, to avoid any mishap from happening. There are many existing approaches used for tracking objects: edge matching, divide-and-conquer search, gradient matching, histograms of receptive field responses, pose clustering, SIFT, SURF, etc. All these methods are either appearance based or feature based, and they lag in one way or the other when it comes to real time applications. So there has been a need for creating a new system that could combine the positive aspects of both methods and increase the efficiency of tracking objects in real life scenarios. A novel approach for car detection and classification is presented, taken to a whole new level by devising a system that takes the video of a vehicle as input, detects the vehicle and classifies it based on its make and model. It takes into consideration four prominent features, namely the logo of the vehicle, its number plate, colour and shape. Logo detector and recognizer algorithms are implemented to find the manufacturer of the vehicle. The detection process is based on the AdaBoost algorithm, which uses a cascade of binary features to rapidly locate and detect logos. The number plate region is localized and extracted using a blob extraction method. Then the colour of the vehicle is retrieved by applying a Haar cascade classifier to first localize the vehicle region and then applying a novel algorithm to find the colour. The shape of the vehicle is also extracted using the blob extraction method. The classification is done by a very efficient algorithm called support vector machines. Experimental results show that our system is a viable approach and achieves good feature extraction and classification rates across a range of videos with vehicles under different conditions. © EuroJournals Publishing, Inc. 2012.


2012

Dr. Senthil Kumar T., Sivanandam, S. N., and Akhila, G. P., “Detection of car in video using soft computing techniques”, Communications in Computer and Information Science, vol. 270 CCIS, pp. 556-565, 2012.[Abstract]


The features indicate the characteristics of the object. The features vary from object to object like colour, size, shape, texture etc. Natural images can be decomposed into constituent objects, which are in turn composed of features. The corners or edges of the object can be considered as part of feature extraction. The edges / corner detection is also complex for certain objects as it has varied characteristics due to other objects in representation. The other examples of features include motion in image sequences, curves, boundaries between different image regions, properties of region. Feature extraction is the process of transforming of high-dimensional data into a meaningful representation of reduced dimensionality. The identified features are beneficial to mitigate the computational complexity and improve the accuracy of a particular classifier. This paper suggests mechanism for selection of appropriate technique for detecting object like car in video. © 2012 Springer-Verlag.


2012

Dr. Senthil Kumar T. and Sivanandam, S. N., “An improved approach for detecting car in video using neural network model”, Journal of Computer Science, vol. 8, pp. 1759-1768, 2012.[Abstract]


The study represents a novel approach towards car detection, feature extraction and classification in a video. Though many methods have been proposed to deal with individual features of a vehicle, like edges, license plate and corners, no system has been implemented to combine features. The combination of four unique features, namely colour, shape, number plate and logo, gives the application a stronghold in various applications like surveillance recording to detect accident percentage (for every make of a company), authentication of a car in the Parliament (for high security), and learning systems (readily available knowledge for automobile enthusiasts), with increased matching accuracy. Video surveillance is a security solution for government buildings, facilities and operations. Installing this system can enhance existing security systems or help start a comprehensive security solution that can keep the building, employees and records safe. The system uses a Haar cascade classifier to detect a car in a video and implements an efficient algorithm to extract its colour along with a confidence rating. An AdaBoost-trained classifier is used to detect the logo (Suzuki/Toyota/Hyundai) of the car, whose accuracy is enhanced by implementing SURF matching. A combination of blobs and contour tracing is applied for shape detection and model classification, while number plate detection is performed by a smart and efficient algorithm which uses morphological operations and contour tracing. Finally, a trained single-perceptron neural network model is integrated with the system for identifying the make of the car. Thorough work on the system has proved it to be efficient and accurate, under different illumination conditions, when tested with a huge dataset which has been collected over a period of six months. © 2012 Science Publications.


Publication Type: Conference Paper

Year of Publication Title

2020

R. Nair, Chugani, M. N., and Dr. Senthil Kumar T., “MetaData: A Tool to Supplement Data Science Education for the First Year Undergraduates”, in Proceedings of the 2020 8th International Conference on Information and Education Technology, New York, NY, USA, 2020.[Abstract]


In the Indian Universities, data science courses are offered to computer science undergraduates only in their higher semesters of under-graduation. Keeping in mind, the growing importance of data science, under-graduates need to be introduced to data science courses in their first year of under-graduation itself. Although first year undergraduates are furnished with the required mathematical and statistical concepts during their higher secondary education, a requirement of understanding sophisticated programming concepts hamstrings universities from offering courses in data science to first year undergraduates. As an outcome of our research, we propose a software named MetaData. MetaData abstracts all levels of implementation and helps students better understand the fundamental concepts in data science by observing the applicability of these concepts on real-world datasets. We justify the effectiveness of the tool through a data science classroom scenario, wherein 44 first year undergraduates were encouraged to use the tool and provide constructive feedback.


2019

P. S. Srishilesh, Parameswaran, L., Tharagesh, R. S. Sanjay, Dr. Senthil Kumar T., and Sridhar, P., “Dynamic and Chromatic Analysis for Fire Detection and Alarm Raising Using Real-Time Video Analysis”, in Proceedings of 3rd International Conference On Computational Vision and Bio Inspired Computing, Cham, 2019.[Abstract]


Fire outbreak has become a common accident that occurs in several places such as forests, manufacturing industries, living houses and widely crowded areas. These incidents cause severe damage to nature as well as to living creatures in the affected surroundings. Due to this, the need for efficient fire detection systems has increased rapidly. Using fire detecting sensors has proved to be an efficient solution, but their effectiveness in delivering quick results depends on proximity to the fire source. In the proposed method, we present an economical and affordable fire detection algorithm using video processing techniques which is compatible with CCTV and other stationary surveillance cameras. The algorithm uses an RGB color model with chromatic and dynamic disorder analysis to detect the fire. Fire pixels are detected by the rules of the color model, which depend mainly on the fire pixel intensity and on the saturation of the red color component in the fire pixel. The extracted fire-like pixels are verified by the growth of the fire regions combined with their disorder. Furthermore, based on iterative checking the real fire is identified, and if it is present the appropriate signals are sent. The proposed method is tested on various datasets acquired in real time environments and from the internet. This methodology can be used for fully automatic fire detection surveillance with reduced false alarms.


2018

S. P, Parameswaran, L., and Dr. Senthil Kumar T., “An Efficient Rule Based Algorithm for Fire Detection on Real Time Videos”, in Proceedings of the first international conference on Intelligent Computing, 2018.[Abstract]


The proposed work presents a generic rule-based fire pixel detection method in the YCbCr color space for smart buildings, which complements conventional electronic sensor based fire detection systems. The YCbCr color model is used because decoupling luminance from chrominance discriminates fire color better than the RGB color model. The algorithm has been tested on fire and fire-like images, achieving 97.95% detection accuracy. The experimental results have been compared with other existing algorithms, and it is observed that the proposed algorithm gives very high detection accuracy and a good true positive rate on fire images. © 2020 American Scientific Publishers.
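A minimal sketch of generic YCbCr fire-pixel rules is shown below; the specific rules and thresholds are typical ones from the literature, assumed here for illustration rather than taken from the paper.

```python
# Sketch of generic YCbCr fire-pixel rules: flame pixels are bright, red-dominant
# and stand out from the frame means. Rules are illustrative, not the paper's.
import cv2
import numpy as np

def fire_pixels_ycbcr(frame_bgr):
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    y, cr, cb = ycrcb[:, :, 0], ycrcb[:, :, 1], ycrcb[:, :, 2]
    y_mean, cr_mean, cb_mean = y.mean(), cr.mean(), cb.mean()
    rules = (
        (y > cb) & (cr > cb) &                           # bright and red-dominant
        (y > y_mean) & (cb < cb_mean) & (cr > cr_mean)   # stands out from the scene
    )
    return rules

img = cv2.imread("candidate.jpg")  # placeholder test image
if img is not None:
    mask = fire_pixels_ycbcr(img)
    print("fire-like pixels: %.2f%%" % (100.0 * mask.mean()))
```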


2016

Dhanya N. M., Dr. Senthil Kumar T., Sujithra, C., Prasanth, S., and Shruthi, U. K., “Pedagogue: A Model for Improving Core Competency Level in Placement Interviews Through Interactive Android Application”, in Proceedings of the International Conference on Soft Computing Systems, 2016.[Abstract]


This paper discusses the development of a mobile application running on a cloud server. The cloud heralds a new era of computing, where application services are provided through the Internet. Though mobile systems are resource-constrained devices with limited computation power, memory, storage, and energy, the use of cloud computing enhances their capability by offering virtually unlimited dynamic resources for computation and storage. The challenge is that traditional smartphone applications do not support the cloud directly; such applications require a specialized mobile cloud application model. The core innovativeness of the application lies in its delivery as an interactive Android application built on emerging technologies such as mobile cloud computing, which improves the core competencies of students by letting them take up online tests posted by the faculty on campus. The performance of the application has been evaluated in terms of scalability, accessibility, portability, security, data consistency, user session migration, and redirection delay.


2013

Dr. Senthil Kumar T., Gajendran, V., Harshad, R., Aswani, S., and Narayanan, D. Sankara, “MEDISCRIPT - MOBILE CLOUD COLLABORATIVE SPEECH RECOGNITION FRAMEWORK”, in IJCA Proceedings on International Conference on Innovation in Communication, Information and Computing 2013, 2013.

Publication Type: Book Chapter

Year of Publication | Title

2019

G. Ganesan, Senthilkumar, S., and Dr. Senthil Kumar T., “ONESTOP: A tool for performing generic operations with visual support”, in Lecture Notes in Computational Vision and Biomechanics, 2019, pp. 1565-1583.[Abstract]


Programming has become tedious for many people these days. Learning programming languages and writing programs for different tasks in various languages is a difficult and time-consuming task. Therefore, modules are used to make programming easier and faster. Cloud computing enables applications to be accessed everywhere. The ‘ONESTOP’ tool will be provided to users as a facility under the ‘Software as a Service’ category. The paper provides directions for enabling this facility; it does not address the challenges of provisioning the tool on the cloud. Every module in ONESTOP consists of the operations under that category. The tool processes the input by removing filler words, identifying the operation to be performed using a trie data structure and synonym mapping, and displaying the result. Users need not write code or define functions; a simple sentence in English is sufficient to perform the task. The tool is easy to use and requires no programming knowledge. All operations are performed quickly, enhancing the performance of the tool. A key aspect of ONESTOP is that it does not produce errors and saves debugging time.
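The operation-identification step (filler removal, synonym mapping, trie lookup) can be sketched as follows; the operation names, synonyms and filler list are hypothetical examples, not the tool's actual vocabulary.

```python
# Sketch of the operation-identification step: drop filler words, normalise each
# remaining word through a synonym map, and search a trie of known operation
# names. Operation names, synonyms and fillers are illustrative placeholders.
FILLERS = {"please", "the", "a", "of", "for", "me"}
SYNONYMS = {"average": "mean", "add": "sum", "total": "sum"}

class TrieNode:
    def __init__(self):
        self.children = {}
        self.operation = None  # set on the node that ends a known operation name

class Trie:
    def __init__(self, words):
        self.root = TrieNode()
        for w in words:
            node = self.root
            for ch in w:
                node = node.children.setdefault(ch, TrieNode())
            node.operation = w

    def lookup(self, word):
        node = self.root
        for ch in word:
            node = node.children.get(ch)
            if node is None:
                return None
        return node.operation

operations = Trie(["mean", "sum", "sort", "count"])

def identify_operation(sentence):
    for token in sentence.lower().split():
        token = SYNONYMS.get(token, token)
        if token in FILLERS:
            continue
        op = operations.lookup(token)
        if op:
            return op
    return None

print(identify_operation("Please find the average of marks"))  # -> "mean"
```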


2019

J. Asharudeen and Dr. Senthil Kumar T., “Multi-insight Monocular Vision System Using a Refractive Projection Model”, in Lecture Notes in Computational Vision and Biomechanics, 2019, pp. 1553-1563.[Abstract]


Recovering the depth information of a scene imaged from inside a patient’s body is a difficult task with a monocular vision system. A multi-perception vision system is proposed as a solution in this work. The vision system of the camera has been altered with a refractive projection model. The developed lens model views the scene from multiple perspectives: motion parallax is observed under the different lenses for a single shot captured through the monocular vision system. The presence of multiple lenses refracts the light from the scene at different angles. Consequently, the apparent object dimensions are augmented with additional spatial cues that help in capturing 3D information in a single shot. The affine transformations between the lenses have been estimated to calibrate the multi-insight monocular vision system, and a geometrical model of the refractive projection is proposed. The multi-insight lens plays a significant role in spatial user interaction.
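The calibration step mentioned above, estimating affine transformations between lens views, might be sketched with standard OpenCV feature matching as below; ORB features and RANSAC are assumptions for illustration, not the paper's procedure.

```python
# Illustrative sketch of estimating the affine transform between two lens
# sub-views of the same shot using ORB matches and cv2.estimateAffine2D.
# This is a generic calibration stand-in, not the paper's method.
import cv2
import numpy as np

def affine_between_views(view_a, view_b):
    orb = cv2.ORB_create(1000)
    kp_a, des_a = orb.detectAndCompute(view_a, None)
    kp_b, des_b = orb.detectAndCompute(view_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    src = np.float32([kp_a[m.queryIdx].pt for m in matches])
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches])
    # RANSAC rejects mismatches; the 2x3 matrix maps view_a points into view_b.
    matrix, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
    return matrix

a = cv2.imread("lens_view_a.png", cv2.IMREAD_GRAYSCALE)  # placeholder crops
b = cv2.imread("lens_view_b.png", cv2.IMREAD_GRAYSCALE)
if a is not None and b is not None:
    print(affine_between_views(a, b))
```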


2019

Dr. Senthil Kumar T., Rajendran, N., and Vaiapury, K., “Defocus Map-Based Segmentation of Automotive Vehicles”, in Lecture Notes in Computational Vision and Biomechanics, 2019, pp. 1537-1552.[Abstract]


Defocus estimation plays a vital role in segmentation and computer vision applications. Most existing work uses the defocus map for segmentation, matting, decolorization and salient region detection. In this paper, we propose to use both the defocus map and GrabCut with wavelets for reliable segmentation of the image. The results show a comparative analysis between the bi-orthogonal and Haar wavelet functions using wavelets, GrabCut and the defocus map. Experimental results are promising, and hence this algorithm can be used to obtain the defocus map of the scene.
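The GrabCut half of the proposed pipeline can be sketched as follows; here a plain rectangle stands in for the wavelet defocus map that would normally supply the foreground prior, so this is an illustration rather than the authors' implementation.

```python
# Sketch of GrabCut-based vehicle segmentation. A rectangle seeds the foreground
# here; in the paper the defocus map (Haar / bi-orthogonal wavelets) would
# provide that prior instead.
import cv2
import numpy as np

def grabcut_segment(image_bgr, rect, iterations=5):
    mask = np.zeros(image_bgr.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image_bgr, mask, rect, bgd_model, fgd_model,
                iterations, cv2.GC_INIT_WITH_RECT)
    # Pixels marked as definite or probable foreground form the vehicle mask.
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)
    return image_bgr * fg[:, :, np.newaxis]

img = cv2.imread("vehicle.jpg")  # placeholder image
if img is not None:
    h, w = img.shape[:2]
    seed_rect = (w // 4, h // 4, w // 2, h // 2)  # rough in-focus region
    segmented = grabcut_segment(img, seed_rect)
    cv2.imwrite("vehicle_segmented.jpg", segmented)
```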


2016

S. V. Girish, Prakash, R., Swetha, S. N. H., Pareek, G., and Dr. Senthil Kumar T., “A Network Model of GUI-Based Implementation of Sensor Node for Indoor Air Quality Monitoring”, in Advances in Intelligent Systems and Computing, vol. 397, New Delhi: Springer India, 2016, pp. 209–217.[Abstract]


This paper describes a wireless sensor network-based indoor air quality monitoring system. Indoor air quality defines the quality of the environment in which people live, and here the wireless sensor network serves as the tool for estimating it. The WSN comprises sensor nodes and a coordinator node which communicate using the IEEE 802.15.4 ZigBee wireless module. Indoor air quality estimation is done by interfacing CO2, temperature and RH (relative humidity) sensors with the sensor node. The sensor node gathers the sensor data and reports it to the coordinator for real-time monitoring through a GUI (graphical user interface) developed in Java NetBeans to run on a Windows PC. The collected data can be used to maintain the environmental parameters by interfacing with an HVAC (heating, ventilation and air conditioning) system.
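A minimal coordinator-side monitoring loop in the spirit of the system above might look like the sketch below; the Python/pyserial stand-in and the comma-separated frame format are assumptions for illustration, since the paper's GUI is written in Java NetBeans and its frame layout is not reproduced here.

```python
# Minimal stand-in for the coordinator-side monitoring loop. Assumes the ZigBee
# coordinator is attached over a serial port and emits hypothetical
# "co2,temperature,rh" text lines; this is not the paper's actual frame format.
import serial  # pyserial

PORT = "/dev/ttyUSB0"  # placeholder serial port for the coordinator

def monitor(port=PORT, baudrate=9600):
    with serial.Serial(port, baudrate, timeout=2) as link:
        while True:
            line = link.readline().decode("ascii", errors="ignore").strip()
            if not line:
                continue
            try:
                co2_ppm, temp_c, rh_pct = (float(v) for v in line.split(","))
            except ValueError:
                continue  # skip malformed frames
            print(f"CO2 {co2_ppm:.0f} ppm  T {temp_c:.1f} C  RH {rh_pct:.1f} %")
            # Threshold checks driving an HVAC hook could be added here.

if __name__ == "__main__":
    monitor()
```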

2016

V. Reghu and Dr. Senthil Kumar T., “Gesture Controlled Automation for Physically Impaired”, P. L. Suresh and Panigrahi, K. Bijaya, Eds. New Delhi: Springer India, 2016, pp. 673–683.[Abstract]


Hand gesture recognition systems are widely used for interfacing between computers and humans using hand gestures. This work presents a room automation system controlled by hand gestures, intended for the physically impaired. The objective of the project is to develop an algorithm for recognition of hand gestures with reasonable accuracy. Most gesture recognition systems fail when the background is complex; our method therefore performs hand detection against complex backgrounds. Hu moments are used as the features to classify the gestures. A Kinect sensor provides the input video along with a depth map, from which the location of the person performing the gesture is obtained. Two gestures are used to switch an electrical appliance on and off, and an Arduino board interfaces between the computer and the appliances.
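The Hu-moment classification step can be sketched as below; the stored gesture templates and the pre-segmented hand mask are hypothetical placeholders for the Kinect-based pipeline described in the abstract.

```python
# Sketch of the Hu-moment feature step: take the largest contour of a segmented
# hand mask and compare its log-scaled Hu moments against stored templates for
# the two gestures. Templates and the mask file are illustrative placeholders.
import cv2
import numpy as np

def hu_features(binary_hand_mask):
    contours, _ = cv2.findContours(binary_hand_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    hu = cv2.HuMoments(cv2.moments(hand)).flatten()
    # Log scaling makes the widely ranging Hu moments comparable.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def classify(features, templates):
    """Nearest-template classification of a gesture ('switch_on' or 'switch_off')."""
    return min(templates, key=lambda name: np.linalg.norm(features - templates[name]))

# Hypothetical stored templates, e.g. averaged from training masks.
TEMPLATES = {
    "switch_on": np.array([3.2, 7.1, 10.4, 10.9, 21.8, 14.5, 22.0]),
    "switch_off": np.array([2.9, 6.0, 9.8, 10.2, 20.3, 13.3, 21.1]),
}

mask = cv2.imread("hand_mask.png", cv2.IMREAD_GRAYSCALE)  # placeholder mask
if mask is not None:
    feats = hu_features(mask)
    if feats is not None:
        print("gesture:", classify(feats, TEMPLATES))
        # The recognised gesture would then be sent to the Arduino over serial.
```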


2016

S. V. Girish, Prakash, R., Swetha, S. N. H., Pareek, G., Dr. Senthil Kumar T., and A. Ganesh, B., “Video Analysis for Malpractice Detection in Classroom Examination”, P. L. Suresh and Panigrahi, K. Bijaya, Eds. New Delhi: Springer India, 2016, pp. 209–217.

2015

Dr. Senthil Kumar T. and Prakash, K. I. Ohhm, “A Queueing Model for e-Learning System”, P. L. Suresh, Dash, S. Subhransu, and Panigrahi, K. Bijaya, Eds. New Delhi: Springer India, 2015, pp. 89–94.[Abstract]


Much has been written about e-Learning practice; however, little attention has been given to deriving a mathematical model for e-Learning. As the lack of a proper mathematical model hinders providing better service to customers, we study which of the existing mathematical models could fit e-Learning. We argue with statistical data that (M/M/C):(∞/FIFO) is one of the models that best fits e-Learning. This paper aims to provide evidence that the suggested queueing model can be used for an e-Learning system under real conditions.
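The standard (M/M/C):(∞/FIFO) performance measures that such a fit relies on can be computed as in the sketch below; the arrival rate, service rate and server count are illustrative numbers, not the paper's data.

```python
# Worked sketch of standard (M/M/c):(inf/FIFO) performance measures.
# The arrival rate, service rate and server count are illustrative only.
from math import factorial

def mmc_metrics(lam, mu, c):
    """Return (P_wait, Lq, Wq) for an M/M/c queue with arrival rate lam,
    per-server service rate mu and c servers; requires lam < c*mu."""
    rho = lam / (c * mu)   # server utilisation
    a = lam / mu           # offered load in Erlangs
    if rho >= 1:
        raise ValueError("queue is unstable: lam must be < c*mu")
    p0 = 1.0 / (sum(a**n / factorial(n) for n in range(c))
                + a**c / (factorial(c) * (1 - rho)))
    p_wait = (a**c / (factorial(c) * (1 - rho))) * p0  # Erlang-C: P(request waits)
    lq = p_wait * rho / (1 - rho)                      # mean number waiting
    wq = lq / lam                                      # mean waiting time (Little's law)
    return p_wait, lq, wq

# Example: 120 requests/hour, each server handles 50 requests/hour, 3 servers.
p_wait, lq, wq = mmc_metrics(lam=120, mu=50, c=3)
print(f"P(wait) = {p_wait:.3f}, Lq = {lq:.2f}, Wq = {wq * 60:.2f} min")
```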
