Deep Learning Based American Sign Language Translation System

Sign language is a form of visual communication that involves complex combinations of hand movements. Certain placements of the fingers represent individual letters, while complete motions of characters and phrases translate to full sentences or gestures. The National Center for Health Statistics estimates that 28 million Americans (about 10% of the population) use sign language gestures as a means of non-verbal communication to express their thoughts and emotions (Jay, 2021). However, non-signers find sign language extremely difficult to understand, so trained sign language interpreters are needed during medical and legal appointments and educational and training sessions. An efficient solution would be an application that can recognize sign language gestures and convert them into English speech. With recent advances in deep learning and computer vision, there has been promising progress in the fields of motion and gesture recognition. The purpose of this project is to study and build a sign-language-to-text-and-speech translation system with OpenCV and Keras/TensorFlow, using deep learning and computer vision concepts to communicate with American Sign Language (ASL) gestures in real-time video streams.



Work Title: Deep Learning Based American Sign Language Translation System
Penn State
Creators:
  1. Vysnavi Mathavaraj
  2. Emily Mross
Keywords:
  1. Sign language translation
  2. ASL
  3. American Sign Language
  4. Gesture recognition
  5. Convolutional neural networks
  6. Deep learning
  7. Computer vision
License: In Copyright (Rights Reserved)
Work Type: Research Paper
Acknowledgments:
  1. Girish Subramanian
Publication Date: 2022
DOI: doi:10.26207/7hyc-6777
Deposited: April 12, 2022
