- Title
Performance Comparison of Deep Learning Models for Real Time Sign Language Recognition.
- Authors
Khan, Irtika; Kumar, Sheetal; Rai, Pratham; Rastogi, Aarnav; Ragha, Leena
- Abstract
Communicating with speech- or hearing-impaired people has long been a major challenge. Deaf and speech-impaired people use sign language, consisting of various hand gestures, as a means of communication. However, most hearing people cannot understand it, since learning sign language is tedious and is needed only in the rare situations when they encounter sign-language users. Understanding what is being conveyed is essential to responding appropriately. There is therefore a need for technology that eases communication with speech- and hearing-impaired people, and surveys show an urgent requirement for robust real-time sign language translators. We aim to build a machine learning model that recognizes hand gestures through real-time video processing and outputs the meaning of each gesture as text, which can further be played back as an audio signal. In this paper, we describe the process of creating an Indian Sign Language dataset, train two models on it, YOLOv5 and YOLOv7, and carry out a comparative study of the two with respect to object detection, accuracy, precision, recall, and other metrics. Tested on a locally created database of short phrases and sentences, the proposed YOLOv5 system attains an accuracy of 87.6%, while the YOLOv7 system attains 96.5%.
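The comparative study reports accuracy, precision, and recall for the two detectors. As a minimal sketch (not the authors' code), these metrics can be computed from true-positive, false-positive, and false-negative counts, which in object detection would come from matching predicted boxes to ground-truth boxes at some IoU threshold; the counts below are hypothetical:

```python
def detection_metrics(tp: int, fp: int, fn: int) -> dict:
    """Compute precision, recall, and F1 from detection counts.

    tp: predictions matched to a ground-truth gesture (true positives)
    fp: spurious detections (false positives)
    fn: ground-truth gestures that were missed (false negatives)
    """
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

# Hypothetical example: 90 correct detections, 10 false alarms, 5 misses
m = detection_metrics(tp=90, fp=10, fn=5)
print(round(m["precision"], 3), round(m["recall"], 3))  # 0.9 0.947
```

Comparing such per-class counts across YOLOv5 and YOLOv7 outputs on the same test set is one standard way to carry out the kind of side-by-side evaluation the abstract describes.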
- Subjects
DEEP learning; SIGN language; MACHINE learning; HEARING impaired; VIDEO processing; DEAF people
- Publication
Grenze International Journal of Engineering & Technology (GIJET), 2024, Vol 10, Issue 1, p386
- ISSN
2395-5287
- Publication type
Article