American Sign Language Static Gesture Recognition using Deep Learning and Computer Vision

Publication Type : Conference Paper

Publisher : IEEE

Source : Proceedings - 2nd International Conference on Smart Electronics and Communication, ICOSEC 2021

Url : https://ieeexplore.ieee.org/document/9591845

Campus : Bengaluru

School : School of Engineering

Department : Electrical and Electronics

Year : 2021

Abstract : Specially-abled people (the speech- and hearing-impaired) rely on hand gestures for day-to-day communication. However, the majority of people are not familiar with the universally accepted hand-gesture alphabet, making communication between the two groups difficult. In an attempt to fill this void, this research work proposes a real-time hand-gesture recognition system based on the American Sign Language (ASL) dataset, capturing frames through a webcam (read in BGR format) and processing them using Computer Vision (OpenCV). The 29 static gestures (the alphabet) from the official, standard ASL dataset were trained using a Vision Transformer (ViT) model. The model achieved an accuracy of 99.99% after being trained on 87,000 RGB samples for one epoch (2,719 batches of 32 images each).
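The following is a minimal Python sketch of the pipeline the abstract describes: frames are captured with OpenCV (which returns images in BGR channel order), converted to RGB, and classified by a Vision Transformer. The model checkpoint, preprocessing, and on-screen label shown here are illustrative assumptions, not details taken from the paper.

```python
# Sketch: real-time webcam gesture classification with OpenCV + ViT.
# Assumes a ViT checkpoint fine-tuned on the 29-class ASL alphabet dataset;
# the checkpoint name below is a placeholder, not the authors' model.
import cv2
import torch
from transformers import ViTImageProcessor, ViTForImageClassification

MODEL_NAME = "google/vit-base-patch16-224-in21k"  # hypothetical base checkpoint
processor = ViTImageProcessor.from_pretrained(MODEL_NAME)
model = ViTForImageClassification.from_pretrained(MODEL_NAME, num_labels=29)
model.eval()

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame_bgr = cap.read()
    if not ok:
        break
    # OpenCV captures frames in BGR; the ViT expects RGB input.
    frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    inputs = processor(images=frame_rgb, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    pred = logits.argmax(-1).item()  # index into the 29 gesture classes
    cv2.putText(frame_bgr, f"class {pred}", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("ASL gesture recognition", frame_bgr)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```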

Cite this Research Publication : S. Yadav, Sai Nikhilesh, Jaya Surya, Nadipalli Suneel, "American Sign Language Static Gesture Recognition using Deep Learning and Computer Vision," Proceedings - 2nd International Conference on Smart Electronics and Communication, ICOSEC 2021.