| dc.description.abstract |
In today’s world, communication is essential for conveying and understanding information accurately and quickly, yet it remains difficult for deaf and mute people to communicate with hearing people. They convey their messages using sign language, a combination of gestures, orientations, movements of the hands, arms, or body, and facial expressions, but most hearing people cannot understand that language. Our system helps people who are unable to hear or speak to converse with others through an Android application that translates sign language into ordinary language. The proposed system comprises five modules: hand segmentation, preprocessing, feature extraction, sign identification, and translation of the sign into text. The user simply makes a gesture in front of the mobile application’s camera; the camera captures frames of the gesture, and after a single click a background process analyzes the images using TensorFlow libraries and, within a few seconds, converts them into a meaningful sentence or phrase. The output is presented as text. Our system handles the signs that we train during the project and translates them into English. We also create our own dataset by capturing 100 images per sign. With the help of mobile vision and neural networks, the system detects gestures and provides the corresponding text output. |
en_US |