
Real-time Intelligent Sign Language Translator: Accessibility through AI and Deep Learning

Accessibility is a key concept in today's digital world, and it has enabled us to remove many barriers.

This sign language translator enables communication between the deaf community and hearing people by interpreting the signs performed and converting them into text.

Carried out by: Esteban Bardolet

Qualification: Bachelor of Engineering in Software Development

Technologies: Computer Vision | Artificial Intelligence (AI) | Machine Learning | LSTM (Long Short-Term Memory) Neural Networks

This deep learning model is capable of interpreting sign language and transforming it into speech in real time. To achieve this, an LSTM (Long Short-Term Memory) neural network has been implemented.
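In broad strokes, a real-time pipeline of this kind reads webcam frames, turns each frame into a feature vector, keeps a sliding window of the most recent frames, and feeds that window to the trained network; when a sign is recognized with enough confidence, it is spoken aloud. The sketch below illustrates that loop using OpenCV for capture and pyttsx3 for speech. The extract_keypoints helper, the window length, the sign vocabulary, and the confidence threshold are illustrative placeholders, not details taken from the project.

```python
from collections import deque

import cv2          # webcam capture (assumed dependency)
import numpy as np
import pyttsx3      # offline text-to-speech (assumed dependency)

SEQUENCE_LENGTH = 30        # frames per prediction window (illustrative)
FEATURES_PER_FRAME = 1662   # feature values per frame (illustrative)
CONFIDENCE_THRESHOLD = 0.8  # only speak confident predictions (illustrative)
SIGNS = ["hello", "thanks", "please"]  # placeholder vocabulary


def extract_keypoints(frame: np.ndarray) -> np.ndarray:
    """Hypothetical helper: convert a video frame into a fixed-size feature
    vector (e.g. hand/pose landmarks from a tracking library). Here it only
    returns zeros as a stand-in."""
    return np.zeros(FEATURES_PER_FRAME, dtype=np.float32)


def run_translator(model) -> None:
    """Read webcam frames, classify the last SEQUENCE_LENGTH frames with the
    trained model, and speak any sign predicted above the threshold."""
    engine = pyttsx3.init()
    window = deque(maxlen=SEQUENCE_LENGTH)  # sliding window of recent frames
    cap = cv2.VideoCapture(0)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            window.append(extract_keypoints(frame))
            if len(window) == SEQUENCE_LENGTH:
                # Predict over the most recent window of frames.
                batch = np.expand_dims(np.array(window), axis=0)
                probabilities = model.predict(batch, verbose=0)[0]
                best = int(np.argmax(probabilities))
                if probabilities[best] >= CONFIDENCE_THRESHOLD:
                    engine.say(SIGNS[best])  # speak the recognized sign
                    engine.runAndWait()
                    window.clear()           # avoid repeating the same sign
            cv2.imshow("Sign language translator", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    finally:
        cap.release()
        cv2.destroyAllWindows()
```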

Why LSTM?

LSTM networks are well suited to long sequences of data thanks to their memory cells and the gates that control the flow of information:

  • They decide which information to keep, forget, or pass on at each time step.
  • They allow complex patterns to be learned across sequences, which is essential when processing sign language (see the sketch below).
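To make this concrete, here is a minimal sketch of what a sequence classifier built from stacked LSTM layers could look like in Keras. The input shape (30 frames of 1662 feature values, e.g. flattened landmark coordinates) and the tiny sign vocabulary are assumptions made for illustration, not details taken from the project.

```python
import numpy as np
from tensorflow.keras.layers import Dense, Input, LSTM
from tensorflow.keras.models import Sequential

# Illustrative dimensions: 30 frames per sign, 1662 feature values per frame,
# and a tiny placeholder vocabulary of signs.
SEQUENCE_LENGTH = 30
FEATURES_PER_FRAME = 1662
SIGNS = ["hello", "thanks", "please"]

model = Sequential([
    Input(shape=(SEQUENCE_LENGTH, FEATURES_PER_FRAME)),
    # Stacked LSTM layers learn temporal patterns across the frame sequence;
    # return_sequences=True hands the full sequence to the next LSTM layer.
    LSTM(64, return_sequences=True),
    LSTM(128, return_sequences=True),
    LSTM(64, return_sequences=False),
    # Dense layers map the sequence summary to one probability per sign.
    Dense(64, activation="relu"),
    Dense(len(SIGNS), activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["categorical_accuracy"])

# A dummy input just to show the expected shape: (batch, frames, features).
dummy = np.zeros((1, SEQUENCE_LENGTH, FEATURES_PER_FRAME), dtype=np.float32)
print(SIGNS[int(np.argmax(model.predict(dummy, verbose=0)))])
```

The final softmax layer produces one probability per sign, so the prediction for a window of frames is simply the sign with the highest probability.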

This is only the beginning: with more data and continued improvements in deep learning models, further advances in sign language interpretation and accessibility are within reach.
