
Deep Learning Based American Sign Language Translation System
Sign language is a form of visual communication that involves complex combinations of hand movements. Certain placements of the fingers represent individual letters of the alphabet, while sequences of signed characters and phrases translate into full sentences or gestures. The National Center for Health Statistics estimates that 28 million Americans (about 10% of the population) use sign language gestures as a means of non-verbal communication to express their thoughts and emotions (Jay, 2021). However, non-signers find sign language extremely difficult to understand, so trained sign language interpreters are needed during medical and legal appointments as well as educational and training sessions. An efficient solution would be an application that can recognize sign language gestures and convert them into English speech. With recent advances in deep learning and computer vision, there has been promising progress in the fields of motion and gesture recognition. The purpose of this project is to study and build a sign-language-to-text-and-speech translation system with OpenCV and Keras/TensorFlow, using deep learning and computer vision concepts to recognize American Sign Language (ASL) gestures in real-time video streams.
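As a rough illustration of the kind of real-time pipeline described above, the sketch below captures webcam frames with OpenCV, crops a fixed hand region, and classifies each crop with a Keras CNN. The model file `asl_cnn.h5`, the 64x64 grayscale input size, the fixed region of interest, and the A-Z label set are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a real-time ASL letter recognizer, assuming a pre-trained
# Keras CNN ("asl_cnn.h5") that maps 64x64 grayscale hand crops to letters.
import string

import cv2
import numpy as np
from tensorflow.keras.models import load_model

model = load_model("asl_cnn.h5")           # hypothetical trained classifier
labels = list(string.ascii_uppercase)       # assumed one class per letter

cap = cv2.VideoCapture(0)                   # webcam stream
while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = frame[100:300, 100:300]           # fixed region of interest for the hand
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    x = cv2.resize(gray, (64, 64)).astype("float32") / 255.0
    probs = model.predict(x.reshape(1, 64, 64, 1), verbose=0)[0]
    letter = labels[int(np.argmax(probs))]

    # Overlay the predicted letter and the capture region on the live frame.
    cv2.rectangle(frame, (100, 100), (300, 300), (0, 255, 0), 2)
    cv2.putText(frame, letter, (100, 90), cv2.FONT_HERSHEY_SIMPLEX, 1.5, (0, 255, 0), 2)
    cv2.imshow("ASL recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

A full system as described would feed the recognized text to a text-to-speech engine; that step is omitted here.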
Files
Files are only accessible to users logged in with a Penn State Access ID.
Metadata
| Work Title | Deep Learning Based American Sign Language Translation System |
| --- | --- |
| Access | |
| Creators | |
| Keyword | |
| License | In Copyright (Rights Reserved) |
| Work Type | Research Paper |
| Acknowledgments | |
| Publication Date | 2022 |
| DOI | doi:10.26207/7hyc-6777 |
| Deposited | April 12, 2022 |