A sign language translator is a valuable tool for facilitating communication between the deaf community and the hearing majority. This paper proposes Sign-Net, a specialized convolutional neural network (CNN) model that recognizes hand gesture signs by incorporating scale-space theory into a deep learning framework. The proposed model is an ensemble of two CNNs: a low-resolution network (LRN) and a high-resolution network (HRN). This architecture allows the ensemble to operate at different spatial resolutions and at varying CNN depths. The Sign-Net model was assessed on static signs of American Sign Language (ASL): alphabets and digits. Since no existing sign dataset is suited to deep learning, the ensemble's performance is evaluated on a synthetic dataset collected for this task. On this dataset, Sign-Net achieved an accuracy of 74.5%, notably superior to other existing models.
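To make the two-stream idea concrete, the sketch below shows one plausible way a low-resolution and a high-resolution network could be combined at prediction time. The abstract does not specify the fusion rule, so simple probability averaging is assumed here; the stub networks are hypothetical placeholders, not the actual Sign-Net components.

```python
# Hedged sketch of a scale-space CNN ensemble (assumed fusion: averaging).
# In the real model, lrn_predict/hrn_predict would be trained CNNs operating
# on downsampled and full-resolution versions of the input image.

def lrn_predict(image):
    # Placeholder low-resolution network: returns class probabilities
    # over three hypothetical sign classes.
    return [0.6, 0.3, 0.1]

def hrn_predict(image):
    # Placeholder high-resolution network.
    return [0.2, 0.7, 0.1]

def ensemble_predict(image):
    """Fuse LRN and HRN outputs by averaging their probability vectors,
    then return (predicted class index, fused probabilities)."""
    p_lrn = lrn_predict(image)
    p_hrn = hrn_predict(image)
    fused = [(a + b) / 2 for a, b in zip(p_lrn, p_hrn)]
    return fused.index(max(fused)), fused
```

Because each stream sees the input at a different spatial scale, their errors tend to differ, and averaging their class posteriors is a common way to exploit that diversity.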
N. Aloysius and M. Geetha, “An Ensembled Scale-Space Model of Deep Convolutional Neural Networks for Sign Language Recognition”, in Advances in Artificial Intelligence and Data Engineering, Singapore, 2020.