An integrated communication system for the deaf and mute with sign language interpretation
Over 5% of the world's population, or 430 million (43 crore) people, are deaf and mute. According to the World Health Organization (WHO), by the year 2050 over 700 million (70 crore) people, or one in every ten, will have disabling hearing loss. Deaf and mute people use sign language for communication, but it is difficult for a normal person to learn sign language in order to communicate with them. As a result, deaf and mute people often feel excluded from general society due to this communication gap.
To solve this problem, we have developed an embedded standalone device that acts as a sign language translator between a deaf-mute person and a normal person. The device translates sign language in real time using computer vision and deep learning techniques. Transfer learning, an efficient and faster training approach, is used to train the deep learning models that recognize sign language and translate it to speech.
Use cases include one-to-one communication, public presentations, classroom teaching, and more. Our project enables efficient two-way communication between deaf-mute and normal people, thus bridging the communication gap.