Impact Factor (2025): 6.9
DOI Prefix: 10.47001/IRJIET
This paper
presents an AI-driven assistive system designed to support autistic individuals
by leveraging multi-modal artificial intelligence (AI) technologies for
communication enhancement, emotional well-being, and social interaction. The
system comprises four core modules: Speak and Learn, See and Learn,
Collaborations, and ASD Community, each integrating advanced AI methodologies.
The Speak and Learn module utilizes speech-to-text (STT) and text-to-speech
(TTS) technologies, reinforced by a Natural Language Processing (NLP) model, to
facilitate real-time, adaptive communication. A custom chatbot trained on
predefined conversational patterns achieves 95% response accuracy, enhancing
user engagement and interaction. The See and Learn module employs a TensorFlow
Lite (TFLite)-based deep learning model for real-time emotion detection,
classifying emotions into angry, sad, and happy categories with 95% accuracy.
Based on the detected emotional state, the system dynamically suggests curated
videos and GIFs to promote emotional regulation and engagement. The Collaborations
module features a secure, low-latency real-time messaging system, enabling
direct communication between autistic individuals and psychologists for
tailored professional support. Lastly, the ASD Community module serves as an
interactive, AI-powered social engagement platform where users can share
experiences, provide feedback, and connect with peers. The system is optimized
for low-latency performance using resource-efficient AI models, making it compatible
with mobile and embedded platforms. By integrating NLP-driven conversational
agents, deep learning-based emotion recognition, and secure communication
channels, this application creates a comprehensive, intelligent ecosystem
tailored to the needs of autistic users. Experimental results demonstrate the
system's efficiency, accuracy, and real-time performance, ensuring seamless
user experiences across diverse deployment environments.
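The Speak and Learn chatbot described above responds from predefined conversational patterns. A minimal text-to-text sketch of that idea is shown below; the pattern rules and replies are illustrative assumptions, not the paper's actual training data, and in the full system STT would supply the input and TTS would voice the reply.

```python
import re

# Illustrative pattern/response pairs standing in for the paper's
# "predefined conversational patterns" (assumed examples).
PATTERNS = [
    (re.compile(r"\b(hello|hi)\b", re.I), "Hello! How are you feeling today?"),
    (re.compile(r"\bhelp\b", re.I), "I can help you practice a conversation."),
]
FALLBACK = "Can you tell me more?"

def reply(utterance: str) -> str:
    """Return the response for the first matching pattern.

    In the deployed pipeline, `utterance` would come from the STT
    component and the returned string would be passed to TTS.
    """
    for pattern, response in PATTERNS:
        if pattern.search(utterance):
            return response
    return FALLBACK
```

A rule table like this keeps latency low and is easy to run on mobile or embedded hardware, which matches the paper's emphasis on resource-efficient, low-latency models.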
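The See and Learn module maps the TFLite model's three-way emotion output to curated media. The post-processing step can be sketched as follows; the class ordering, logit values, and content suggestions are assumptions for illustration, and the actual TFLite inference call is omitted.

```python
import math

# Assumed output ordering of the three-class emotion model.
EMOTIONS = ["angry", "sad", "happy"]

# Placeholder curated content per detected emotion (illustrative only).
SUGGESTIONS = {
    "angry": ["calming breathing GIF", "slow-tempo music video"],
    "sad":   ["uplifting animal GIF", "encouraging short video"],
    "happy": ["celebration GIF", "favorite-topic video"],
}

def softmax(logits):
    """Convert raw model outputs into probabilities (numerically stable)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def suggest_content(logits):
    """Pick the most probable emotion and return matching curated media."""
    probs = softmax(logits)
    label = EMOTIONS[probs.index(max(probs))]
    return label, SUGGESTIONS[label]

# Example: logits favoring the third class yield "happy" suggestions.
label, media = suggest_content([0.2, 0.1, 2.5])
```

Keeping this mapping outside the model lets the curated video/GIF catalog be updated without retraining or redeploying the TFLite classifier.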
IRJIET, Volume 9, Special Issue of INSPIRE’25 April 2025 pp. 14-23