SignBridge: Real-Time Sign Language Translator with Python and TensorFlow


  • Nur Erlida Ruslan, University Malaysia of Computer Science & Engineering (UNIMY)
  • Lee Boon Teck, University Malaysia of Computer Science & Engineering (UNIMY)
  • Tang Shao Ming, University Malaysia of Computer Science & Engineering (UNIMY)


Sign language is the primary means by which deaf individuals express themselves, yet many people, including some deaf individuals, do not understand it. This research develops a sign language translator based on computer vision to narrow the communication gap between hearing and deaf individuals. SignBridge focuses on the user's hand gestures and body pose: it extracts key points with MediaPipe Holistic and performs real-time detection with OpenCV. The translator is trained as a Sequential model consisting of Long Short-Term Memory (LSTM) layers and a Dense layer, with a set of recorded videos, stored as NumPy arrays, serving as the training data. After training, SignBridge predicts the performed sign and displays the resulting text at the top of the interface. The system runs on a laptop with a webcam that captures the user's gestures and pose. A series of data analyses and comparisons was conducted to determine the optimal prediction model across four categories: alphabets, numbers, basic gestures, and a combination of the three. In these comparisons, SignBridge achieved the highest accuracy among the models, with overall accuracy ranging from 96.97 to 100 percent.
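As a rough illustration of the keypoint-extraction step described above, the sketch below flattens MediaPipe Holistic's per-frame landmarks into a fixed-length feature vector and stacks frames into a sequence of the kind an LSTM consumes. The landmark counts (33 pose with visibility, 468 face, 21 per hand) are MediaPipe Holistic's published output sizes; the sequence length of 30 frames and the helper names are assumptions for illustration, not values stated in the paper.

```python
import numpy as np

# MediaPipe Holistic landmark counts:
# 33 pose landmarks (x, y, z, visibility), 468 face landmarks (x, y, z),
# and 21 landmarks per hand (x, y, z).
POSE, FACE, HAND = 33, 468, 21
FRAME_FEATURES = POSE * 4 + FACE * 3 + HAND * 3 + HAND * 3  # 1662

def flatten_keypoints(pose, face, left_hand, right_hand):
    """Concatenate one frame's landmark arrays into a single vector.

    Missing detections (None) are replaced with zeros so every frame
    yields the same fixed-length vector, as an LSTM input requires.
    """
    parts = [
        np.asarray(pose).flatten() if pose is not None else np.zeros(POSE * 4),
        np.asarray(face).flatten() if face is not None else np.zeros(FACE * 3),
        np.asarray(left_hand).flatten() if left_hand is not None else np.zeros(HAND * 3),
        np.asarray(right_hand).flatten() if right_hand is not None else np.zeros(HAND * 3),
    ]
    return np.concatenate(parts)

# One training sample: a short run of consecutive frames for one sign.
SEQUENCE_LENGTH = 30  # assumed; the abstract does not state the window size
frame = flatten_keypoints(None, None, None, None)      # shape (1662,)
sequence = np.stack([frame] * SEQUENCE_LENGTH)         # shape (30, 1662)
```

Sequences shaped this way can be saved with `np.save` per sign and fed to a Sequential model of LSTM layers followed by a Dense softmax over the sign classes, matching the training setup the abstract describes.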



How to Cite

Ruslan, N. E., Lee, B. T., & Tang, S. M. (2023). SignBridge: Real-Time Sign Language Translator with Python and TensorFlow. Journal of Innovation and Emerging Digital Technologies (JIEDT), 1(2), 1–9. Retrieved from