Abstract:
In our society, hearing-impaired people are often ignored because communication between
signers and non-signers is difficult. Hearing people generally cannot understand sign language,
while hearing-impaired people cannot hear and often cannot speak the spoken language used
around them. Text-to-gesture visualization will help hearing-impaired people learn common
English words and phrases. It also allows hearing people to learn common signs used in sign
language, which they can later use to communicate with hearing-impaired people. The main goal
of this project is to lessen the communication barrier between hearing-impaired and hearing
people. In Pakistan, very little work has been done on text-to-sign-language visualization
software, although some Kinect-based gesture recognition systems have been developed. Such a
system can play a great role in educating hearing-impaired people in our society. This project
involves two main components: coordinate extraction (skeleton data of joints) through the
Kinect V2 sensor, and real-time rendering of those coordinates through OpenTK, which wraps
OpenGL in the .NET Framework. One subsystem of this project uses Kinect V2 for Windows to
extract the coordinates of hand and wrist joint movements. It provides information about hand
states and the positions of joints relative to the origin of the camera's coordinate space. The
Kinect SDK provides various classes and libraries that support skeletal tracking and can detect
up to 25 skeletal joints. Classes such as BodyFrame can be used to extract the coordinates,
which can then be stored in a database or a file. OpenGL (Open Graphics Library) is a
cross-language, multi-platform application programming interface (API) with different wrappers,
including OpenTK (Open Toolkit Library), an advanced, low-level C# library used for 2D and
3D rendering.