Abstract:
One of the primary areas of public concern is social communication. Language is, without a doubt, the best means to communicate and connect with one another, both verbally and nonverbally. Because hearing persons generally have little comprehension of sign languages, there is a persistent communication gap between the deaf and hearing communities. As a result, numerous strategies have been used to address this problem, including translating sign language to text or audio and vice versa. In recent years, research into the use of computers, artificial intelligence, and machine learning to detect and translate sign language has advanced steadily. The proposed system is an interactive prototype built with a deep learning model trained on a dataset of photographs of PSL signs. We trained an SSD MobileNet model on this dataset and achieved an accuracy of 80% in detecting the gestures in real time.
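The real-time detection described above typically relies on thresholding the raw outputs of the SSD MobileNet detector: for each frame, the model emits candidate boxes, class labels, and confidence scores, and only high-confidence detections are kept. The paper does not show this step; the sketch below illustrates the idea with invented sign labels and scores, not the authors' actual code.

```python
def filter_detections(boxes, classes, scores, threshold=0.5):
    """Keep only detections whose confidence score passes the threshold.

    boxes:   list of (ymin, xmin, ymax, xmax) tuples (normalized coords)
    classes: list of predicted sign labels (hypothetical names here)
    scores:  list of confidence scores in [0, 1]
    """
    return [
        (box, cls, score)
        for box, cls, score in zip(boxes, classes, scores)
        if score >= threshold
    ]

# Illustrative per-frame detector output (values are made up):
boxes = [(0.10, 0.10, 0.40, 0.40), (0.50, 0.50, 0.90, 0.90)]
classes = ["hello", "thanks"]
scores = [0.92, 0.30]

kept = filter_detections(boxes, classes, scores, threshold=0.5)
# Only the "hello" detection (score 0.92) survives the 0.5 threshold.
```

In a live pipeline, this filter would run on every camera frame after model inference, and the surviving boxes and labels would be drawn on the video feed.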