Volume 8, Issue 1, January 2024 | Pages: 62-68
International Research Journal of Innovations in Engineering and Technology
OPEN ACCESS | Research Article | Published Date: 23-01-2024
Individuals with hearing or speech impairments are typically far more familiar with sign language than the general public, which creates a noticeable communication divide between the two groups. Human sign language interpreters can bridge this gap, but interpreters are scarce worldwide relative to the number of people with hearing or speech impairments, and many individuals cannot afford an interpreter for every conversation. Addressing this issue requires automating communication so that the deaf community is no longer dependent on human translators. This article centers on the development of an application capable of real-time conversion between American Sign Language (ASL) and text, along with supplementary features aimed at removing the communication barriers faced by individuals with hearing or speech challenges when interacting with the broader public. The primary objective of the application is to accurately perceive and interpret the user's sign language. In the initial stage, the system is trained to recognize and interpret signs using object recognition and motion tracking algorithms; for this purpose, a convolutional neural network (CNN) model was trained on a carefully curated American Sign Language dataset. The recognized signs are then translated into grammatically sound English words. A language translator and a virtual sign keyboard are incorporated to extend the system's capabilities. In addition, a text-to-text transformer built on an encoder-decoder architecture is used to detect grammatical errors and generate coherent sentences. The process is elaborated in detail in the following sections.
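The sign-recognition stage described above can be illustrated with a minimal CNN classifier sketch. The dataset directory ("asl_dataset"), image size, class count, and layer sizes below are illustrative assumptions, not the authors' exact configuration; the sketch assumes an ASL alphabet dataset arranged as one folder per sign.

```python
# Minimal sketch of a CNN-based ASL sign classifier (illustrative, not the authors' model).
import tensorflow as tf

IMG_SIZE = (64, 64)   # assumed input resolution
NUM_CLASSES = 29      # assumed: A-Z plus "space", "delete", "nothing"

# Load images from a hypothetical folder-per-class ASL dataset.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "asl_dataset",
    image_size=IMG_SIZE,
    batch_size=32,
)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(*IMG_SIZE, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```

The encoder-decoder grammar-correction step can likewise be sketched with a generic text-to-text model. The checkpoint name "t5-small" and the input gloss are placeholders; in practice a checkpoint fine-tuned for grammar correction of recognized sign glosses would be substituted.

```python
# Hedged sketch of the text-to-text grammar-correction step (placeholder model).
from transformers import pipeline

corrector = pipeline("text2text-generation", model="t5-small")

glossed = "me go store yesterday"  # hypothetical sequence of recognized signs
result = corrector("grammar: " + glossed, max_length=32)
print(result[0]["generated_text"])
```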
American Sign Language, Convolutional neural network, Grammatically correct sentences, Object Detection, Real-time, Speech and hearing impairments, Voice-to-sign language
Suranjini Silva, Thilini Jayalath, Madhushani E.A.Y.C., Amarasooriya H.D., Elpitiya S.N., Frank Perera, "Machine Learning Based Customized Solution for Deafness and Mute People," published in International Research Journal of Innovations in Engineering and Technology - IRJIET, Volume 8, Issue 1, pp. 62-68, January 2024. Article DOI: https://doi.org/10.47001/IRJIET/2024.801008
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.