
YOLO vs. CNN: Deep Learning Approaches for ASL Alphabet Classification

Publication Type : Conference Paper

Publisher : IEEE

Source : 2025 IEEE Recent Advances in Intelligent Computational Systems (RAICS)

Url : https://doi.org/10.1109/raics66191.2025.11330938

Campus : Coimbatore

School : School of Artificial Intelligence

Year : 2025

Abstract :

American Sign Language (ASL) is a system of hand gestures used by deaf and mute individuals to communicate their thoughts and opinions with the world. This paper presents a comparative analysis of deep learning models, namely You Only Look Once (YOLO) and a Convolutional Neural Network (CNN), for translating ASL gestures into English using computer vision techniques. A dataset consisting of images of ASL gestures captured under various lighting conditions was chosen for this study. Existing ASL translation systems often rely on expensive hardware or limited datasets, whereas this study explores an efficient and scalable solution using deep learning. YOLO enables real-time detection and the CNN captures spatial features, achieving accuracies of 99% and 83%, respectively.
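As a minimal illustration of the spatial feature extraction the CNN-based approach relies on, the core convolution operation can be sketched in plain NumPy. The toy image and edge-detection kernel below are illustrative assumptions, not the paper's actual architecture or data:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: the basic operation a CNN
    layer uses to capture spatial features such as edges in a gesture image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Illustrative toy 5x5 "gesture" image with a vertical boundary,
# convolved with a vertical-edge filter (values are assumptions).
image = np.array([
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
], dtype=float)
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]], dtype=float)

features = conv2d(image, kernel)
print(features.shape)  # (3, 3): each entry responds to a local 3x3 patch
```

A full classifier stacks many such learned filters with nonlinearities and pooling; YOLO additionally predicts bounding boxes so the hand can be localized in real time before classification.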

Cite this Research Publication : Anirudh Jayan, Abhinav Variyath, S Tara Samiksha, Sarvesh Ram Kumar, Aravind S Harilal, Lekshmi C. R, Neethu Mohan, YOLO vs. CNN: Deep Learning Approaches for ASL Alphabet Classification, 2025 IEEE Recent Advances in Intelligent Computational Systems (RAICS), IEEE, 2025, https://doi.org/10.1109/raics66191.2025.11330938
