Individuals with hearing or speech impairments are generally far more familiar with sign language than the rest of the population, since it is the language they use most. As a result, a noticeable communication divide exists between them and the general public. The conventional way to bridge this gap is to employ a human sign language interpreter; however, interpreters are scarce worldwide relative to the number of people with hearing or speech impairments, and many individuals cannot afford one for every conversation. Addressing this problem requires automating communication so that the deaf community no longer depends on human translators. This article centers on the development of an application that performs real-time conversion between American Sign Language and text, along with supplementary features aimed at dismantling the communication barriers faced by people with hearing or speech challenges when interacting with the broader public. The primary objective of the application is to accurately perceive and interpret the user's sign language. In the initial stage, the system is trained to recognize and interpret signs using object-recognition and motion-tracking algorithms; for this purpose, a convolutional neural network model was trained on a carefully curated American Sign Language dataset.
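As a rough illustration, the sketch below shows how such a sign classifier could be defined and trained in PyTorch. The input resolution (64x64 grayscale frames), the number of classes (29, covering the ASL alphabet plus a few control signs), and all layer sizes are illustrative assumptions, not the exact architecture used in this work.

```python
# Minimal sketch of a CNN sign classifier; sizes are illustrative assumptions.
import torch
import torch.nn as nn

class SignCNN(nn.Module):
    def __init__(self, num_classes: int = 29):
        super().__init__()
        # Three conv/pool stages reduce a 64x64 frame to 8x8 feature maps.
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# One training step on a dummy batch, just to show the shape of the loop.
model = SignCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 1, 64, 64)      # batch of 8 grayscale frames
labels = torch.randint(0, 29, (8,))     # dummy class indices
loss = criterion(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In the actual pipeline, the dummy batch would be replaced by frames cropped from the camera feed by the object-recognition and motion-tracking stage.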
The recognized signs are then translated into English to form grammatically sound words. The system also incorporates a language translator and a virtual sign keyboard, extending its capabilities. In addition, a text-to-text transformer built on an encoder-decoder architecture is used to detect grammatical errors and generate coherent phrases.
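A minimal sketch of this correction stage is shown below, using a T5-style encoder-decoder from the transformers library. The "t5-small" checkpoint, the task prefix, and the example sentence are stand-ins: in practice the model would be fine-tuned (or replaced by one already fine-tuned) for grammatical error correction, which is an assumption here rather than the system's actual configuration.

```python
# Sketch of grammar correction with a T5-style encoder-decoder.
# "t5-small" and the "fix grammar:" prefix are illustrative placeholders.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

def correct(sentence: str) -> str:
    """Rewrite a raw sign-gloss sentence into a grammatical English phrase."""
    inputs = tokenizer("fix grammar: " + sentence, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

print(correct("me go store yesterday"))
```

The encoder consumes the raw word sequence produced by the sign recognizer, and the decoder generates the corrected phrase token by token.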
The process is elaborated in detail in the sections that follow.