[GitHub] [Paper] [Full Report]
This project presents a bi-directional (full-duplex) communication system designed to bridge the gap between hearing-impaired individuals and the general public. It translates spoken English into visual Indian Sign Language (ISL) gestures in real time, while simultaneously converting ISL gestures captured from live video back into audible speech.
Communication barriers persist due to a shortage of qualified ISL interpreters and limited public awareness of ISL, which can lead to social isolation and reduced access to information. Our project addresses this challenge with a low-cost, accessible technological communication aid.
Key Technologies: Python | TensorFlow & Keras | OpenCV | Google Speech Recognition API
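In the gesture-to-speech direction, a frame-by-frame classifier (e.g. a Keras CNN over OpenCV video frames) inevitably produces flickering predictions. A common stabilization step, sketched below as an illustrative assumption rather than the project's confirmed implementation, is majority voting over a sliding window of per-frame labels before handing a gesture to the speech synthesizer. The function name and window size are hypothetical.

```python
from collections import Counter, deque

def smooth_predictions(frame_labels, window=5):
    """Majority-vote over a sliding window of per-frame gesture labels.

    Per-frame classifier outputs flicker; voting over the last
    `window` frames yields a stable gesture stream suitable for
    driving audible speech output. (Illustrative sketch; the
    window size is an assumed parameter.)
    """
    recent = deque(maxlen=window)
    smoothed = []
    for label in frame_labels:
        recent.append(label)
        # Most frequent label in the current window wins.
        majority, _ = Counter(recent).most_common(1)[0]
        smoothed.append(majority)
    return smoothed

# An isolated misclassification ("thanks") is suppressed by its neighbors.
noisy = ["hello", "hello", "thanks", "hello", "hello", "hello"]
print(smooth_predictions(noisy))  # → all "hello"
```

The same idea applies to any per-frame recognizer: the window length trades responsiveness (short windows) against stability (long windows).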