Sign Language Translator Application using AI

dc.contributor.author Muhammad Haris Sheikh, 01-134171-042
dc.date.accessioned 2023-03-03T06:33:03Z
dc.date.available 2023-03-03T06:33:03Z
dc.date.issued 2023
dc.identifier.uri http://hdl.handle.net/123456789/15060
dc.description Supervised by Dr. Asfand Yar en_US
dc.description.abstract According to the World Health Organization, about 5 percent of every country's population is considered deaf or mute. These people are called "Signers" because they use sign language to communicate with others. In Pakistan, the preferred language of Signers is Pakistan Sign Language (PSL). By the WHO's statistics, Pakistan has an estimated 8 to 10 million Signers, which is a huge figure. The real issue is that sign language is not widely known among hearing people, which creates a large communication gap between Signers and non-signers. Since it is difficult to educate the whole country about PSL, but following the current trend of creating intelligent computer software to solve human problems, we can design a translator that converts sign-language gestures into text, which a non-signer can read to develop communication with Signers. The app's interface will be familiar to people, much like watching a movie while reading the subtitles at the same time. As people nowadays use smartphones along with laptops, tablets, etc., we should create software that is ready to use on any kind of screen or platform, so a lightweight web application is a good choice. The system will work by taking a video of a person performing a gesture; a machine learning model will interpret that sequence and translate it into local-language text so that the hearing person can read and understand it. One point worth noticing is that the system takes a video as input: it processes the whole video action and is thus able to recognize the entire sequence, rather than taking a single image of the hands and interpreting it as a complete sign. This removes ambiguity from predictions, since many gestures start from, or contain, similar signs and are differentiated from each other only by minor differences in hand shape or motion.
This application will be able to handle these issues by utilizing deep learning techniques with the best possible accuracy. en_US
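The abstract argues that matching a whole gesture sequence, rather than a single frame, removes ambiguity between gestures that begin with the same hand pose. The record itself contains no implementation; as a minimal illustration of that idea (not the project's actual deep learning model), the sketch below matches an observed sequence of per-frame features against gesture templates using dynamic time warping. All gesture names and feature values are hypothetical; a real PSL system would extract hand keypoints from video frames instead.

```python
# Minimal sketch: classifying a gesture *sequence* by dynamic time warping
# (DTW) against templates. All gestures and feature values are hypothetical;
# a real system would use per-frame hand keypoints and a learned model.

def dtw_distance(a, b):
    """DTW distance between two 1-D feature sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch sequence a
                                 cost[i][j - 1],      # stretch sequence b
                                 cost[i - 1][j - 1])  # match frames 1:1
    return cost[n][m]

def classify(sequence, templates):
    """Return the template label with the smallest DTW distance."""
    return min(templates, key=lambda label: dtw_distance(sequence, templates[label]))

# Two hypothetical gestures that *start* identically (frames 0.0, 0.1) but
# diverge later -- a single-frame classifier could not tell them apart.
templates = {
    "hello":  [0.0, 0.1, 0.4, 0.8, 1.0],
    "thanks": [0.0, 0.1, 0.2, 0.1, 0.0],
}
observed = [0.0, 0.1, 0.1, 0.35, 0.75, 1.0]  # a slower performance of "hello"
print(classify(observed, templates))  # prints "hello"
```

Because DTW aligns sequences elastically in time, the slower performance still matches its template; a single-frame approach looking only at the shared opening pose would be ambiguous between both gestures.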
dc.language.iso en en_US
dc.publisher Computer Sciences en_US
dc.relation.ispartofseries BS (CS);P-1830
dc.subject Sign Language en_US
dc.subject Translator Application en_US
dc.title Sign Language Translator Application using AI en_US
dc.type Project Reports en_US

