
Understanding vision-based continuous sign language recognition

Publication Type : Journal Article

Publisher : Springer

Source : Multimedia Tools and Applications, vol. 79, no. 31, pp. 22177 - 22209, 2020.

Url : https://doi.org/10.1007/s11042-020-08961-z

Campus : Amritapuri

School : School of Computing, Department of Computer Science and Engineering, School of Engineering

Center : AI and Disability Studies, Computer Vision and Robotics

Department : Computer Science

Year : 2020

Abstract : Real-time sign language translation systems, which convert continuous sign sequences to text or speech, will facilitate communication between the deaf-mute community and the normal-hearing majority. A translation system can be vision-based or sensor-based, depending on the type of input it receives. To date, most commercial systems for this purpose are sensor-based, which makes them expensive and not user-friendly. Vision-based sign translation systems are the need of the hour, but many challenges must be overcome before a full-fledged working system can be built. Preliminary investigations in this work have revealed that traditional approaches to continuous sign language recognition (CSLR) using HMM, CRF and DTW tried to solve the problem of Isolated Sign Language Recognition (ISLR) and extended that solution to CSLR, leading to reduced performance. The main challenge, identifying Movement Epenthesis (ME) segments in continuous utterances, was handled explicitly in these traditional methods. With the advent of technologies like Deep Learning, more feasible solutions for vision-based CSLR are emerging, which has led to an increase in research on vision-based approaches. In this paper, a detailed review of the work on vision-based CSLR is presented, organized by the methods followed. The challenges posed by continuous sign recognition are also discussed in detail, followed by a brief overview of sensor-based systems and benchmark databases. Finally, a performance evaluation of all the associated methods is presented, leading to a short discussion of the overall study, and the paper concludes by pointing out future research directions in the field.

Cite this Research Publication : N. Aloysius and M. Geetha, “Understanding vision-based continuous sign language recognition”, Multimedia Tools and Applications, vol. 79, no. 31, pp. 22177 - 22209, 2020.
