DSpace Repository

Signify: Gesture to Speech Sign Language Recognition

dc.contributor.author Zia Ul Hassan, 01-134212-199
dc.contributor.author Abdullah Khalil, 01-134212-007
dc.date.accessioned 2026-02-19T07:12:00Z
dc.date.available 2026-02-19T07:12:00Z
dc.date.issued 2025
dc.identifier.uri http://hdl.handle.net/123456789/20633
dc.description Supervised by Dr. Arif Ur Rahman en_US
dc.description.abstract Individuals with speech impairments often face considerable barriers in expressing themselves, leading to communication challenges in daily life. This project presents Signify, an Android application designed to bridge this gap by converting hand gestures into audible speech. Instead of relying on image-based classification, the system uses MediaPipe to extract 63 hand landmarks in real-time, normalizes them through translation and scale transformations, and feeds them into a pre-trained Dense Neural Network (DNN) converted to TensorFlow Lite for on-device inference. Recognized gestures are collected into a sequence, which is then passed to Cohere’s Lite language model to generate grammatically correct sentences. Finally, the generated text is spoken aloud using Text-to-Speech (TTS) APIs. The app is optimized for Android using CameraX and MediaPipe’s asynchronous frame processing to ensure low latency. With a focus on accuracy, responsiveness, and accessibility, Signify offers a practical communication tool for non-verbal users, promoting greater social inclusion. en_US
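The abstract states that the 63 extracted hand-landmark values (21 MediaPipe points × 3 coordinates) are normalized through translation and scale transformations before DNN inference. The record does not give the exact scheme, so the following is a minimal sketch under a common assumption: translate so the wrist landmark becomes the origin, then divide by the largest point distance so the hand fits in a unit sphere.

```python
import numpy as np

def normalize_landmarks(landmarks):
    """Hypothetical normalization of a 63-value MediaPipe hand-landmark vector.

    Assumes: 21 landmarks x 3 coords, wrist at index 0 (MediaPipe's convention),
    translation to the wrist origin, then scaling by the farthest point distance.
    """
    pts = np.asarray(landmarks, dtype=np.float32).reshape(21, 3)
    pts -= pts[0]                            # translation: wrist to origin
    scale = np.linalg.norm(pts, axis=1).max()
    if scale > 0:
        pts /= scale                         # scale: farthest point at unit distance
    return pts.flatten()                     # back to a flat 63-value vector
```

The resulting vector is invariant to hand position and distance from the camera, which is what allows a compact DNN (rather than an image classifier) to handle real-time gesture recognition on-device.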
dc.language.iso en en_US
dc.publisher Computer Sciences en_US
dc.relation.ispartofseries BS(CS);P-3111
dc.subject Signify en_US
dc.subject Gesture to Speech en_US
dc.subject Sign Language Recognition en_US
dc.title Signify: Gesture to Speech Sign Language Recognition en_US
dc.type Project Reports en_US

