Impact Factor (2025): 6.9
DOI Prefix: 10.47001/IRJIET
This project focuses on developing an AI-based system for real-time sign language detection
using computer vision and deep learning techniques. The primary
objective is to bridge the communication gap between the deaf and
hearing communities by accurately recognizing hand gestures and converting
them into text or speech. The approach uses MediaPipe
Hands, OpenCV, and a deep learning model trained on a dataset of sign
language gestures. Techniques such as convolutional neural networks (CNNs) and
recurrent neural networks (RNNs) are employed to improve gesture
recognition accuracy.
The MediaPipe Hands framework, combined with OpenCV, enables robust
real-time hand tracking and keypoint extraction. Deep learning models,
particularly CNN-based architectures, achieve high accuracy in classifying sign
language gestures. The system performs well in controlled environments but faces
challenges with variations in lighting, background clutter, and hand occlusions.
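As a minimal sketch of the keypoint-extraction stage described above: MediaPipe Hands emits 21 (x, y) landmarks per detected hand, and a common preprocessing step (assumed here, not specified in the paper) is to normalize them for position and scale before feeding them to a CNN classifier. The function name and normalization scheme below are illustrative assumptions.

```python
import numpy as np

def normalize_landmarks(landmarks):
    """Normalize 21 (x, y) hand keypoints as produced by MediaPipe Hands.

    Translates the points so the wrist (landmark 0) sits at the origin,
    then scales by the largest wrist-to-landmark distance, making the
    resulting 42-dim feature vector invariant to hand position and size
    in the frame. (Illustrative preprocessing, not the paper's exact method.)
    """
    pts = np.asarray(landmarks, dtype=np.float64).reshape(21, 2)
    pts = pts - pts[0]                          # wrist at the origin
    scale = np.linalg.norm(pts, axis=1).max()   # largest distance from wrist
    if scale > 0:
        pts = pts / scale                       # scale-invariant coordinates
    return pts.flatten()                        # 42-dim feature vector
```

In a live pipeline, OpenCV's `VideoCapture` would supply frames to MediaPipe Hands, and each frame's normalized vector would be passed to the trained classifier.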
Expanding the dataset and integrating more sophisticated temporal models (e.g., LSTMs
or Transformers) can further improve recognition accuracy. Future work should improve
dataset diversity by incorporating more hand shapes, skin tones, and
lighting conditions, and implement temporal modeling techniques (e.g., LSTMs,
Transformers) to better recognize continuous sign language.
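Temporal models such as LSTMs or Transformers consume fixed-length sequences of per-frame features rather than single frames. A simple way to build those sequences is a sliding window over the stream of keypoint vectors; the window and stride values below are illustrative assumptions, not parameters reported in the paper.

```python
import numpy as np

def make_sequences(frame_features, window=30, stride=10):
    """Stack per-frame feature vectors into overlapping fixed-length
    windows suitable as input to an LSTM or Transformer classifier.

    frame_features: array of shape (num_frames, feature_dim),
    e.g. 42-dim normalized keypoint vectors.
    Returns an array of shape (num_windows, window, feature_dim).
    """
    frames = np.asarray(frame_features)
    seqs = [frames[i:i + window]
            for i in range(0, len(frames) - window + 1, stride)]
    if not seqs:                 # clip shorter than one window
        return np.empty((0, window, frames.shape[1]))
    return np.stack(seqs)
```

Overlapping windows (stride smaller than the window) give the temporal model multiple views of each gesture, which is a common choice for continuous sign recognition.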
Country: India
IRJIET, Volume 9, Special Issue of ICCIS-2025 May 2025 pp. 144-149