Abstract:
Recent advances in technology have given researchers the opportunity to develop new methods for translating spoken languages into sign languages. Sign languages are the primary languages of the deaf community and of many people with speech disorders, so translation systems can assist these users in their day-to-day communication. Generating sign-language images with GANs is an exceptionally difficult and under-investigated task: output correctness and visual quality are critical issues, and sign languages also pose the risk of semantic irregularity and inconsistency. Moreover, translation into sign languages is difficult because their grammar rules have not yet been standardized. Our project presents a way to generate sign-language images from English text using recent advances in a class of machine learning frameworks called Generative Adversarial Networks. To address the aforementioned problems, we divide the GAN-based image generation task into sub-processes. We first tokenize the user's input text. In the second phase, the tokens are fed to a trained model that produces sign-language images. A nearest-neighbors approach is used to evaluate the generated images. The output images closely resemble the real data and satisfy our threshold for acceptable output.
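The two-phase pipeline described above (tokenize, then generate, then evaluate with nearest neighbors) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the whitespace tokenizer, the stand-in `fake_generator` callable, and the Euclidean nearest-neighbor metric are all assumptions introduced here for demonstration.

```python
import numpy as np

def tokenize(text):
    # Phase 1: split the English input into lowercase word tokens
    # (a simple whitespace tokenizer; the paper's tokenizer is unspecified)
    return text.lower().split()

def generate_sign_images(tokens, generator):
    # Phase 2: map each token to a sign-language image via the trained
    # GAN generator (here, any callable returning an image array)
    return [generator(tok) for tok in tokens]

def mean_nearest_neighbor_distance(generated, real_dataset):
    # Evaluation: for each generated image, find the closest real image
    # (Euclidean distance) and average; a lower mean distance suggests the
    # outputs more closely resemble the real data
    dists = [
        min(np.linalg.norm(img - real) for real in real_dataset)
        for img in generated
    ]
    return float(np.mean(dists))

# Toy demo with a random stand-in "generator" and "real" images
# (hypothetical data; a real system would load a trained GAN and dataset)
rng = np.random.default_rng(0)
fake_generator = lambda tok: rng.random((64, 64))
real_images = [rng.random((64, 64)) for _ in range(10)]

tokens = tokenize("Hello world")
images = generate_sign_images(tokens, fake_generator)
score = mean_nearest_neighbor_distance(images, real_images)
```

In a real system, `fake_generator` would be replaced by the trained GAN's generator network, and `score` would be compared against the acceptance threshold mentioned above.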