AI-Driven Emotion and Sentiment Analysis

Abstract

This research introduces an AI-driven framework for multimodal emotion recognition and sentiment analysis that combines facial analysis with text-based affective modeling to enhance personalized emotional healthcare. Facial data is analyzed with the DeepFace library to estimate emotion, age, gender, and face count, supported by preprocessing steps such as face detection, normalization, and alignment. For text analysis, transformer-based models are employed: a DistilRoBERTa model for multi-class emotion recognition and a RoBERTa model for sentiment-polarity detection. The system includes fallback mechanisms that generate plausible outputs in resource-limited environments by sampling randomized distributions of age, gender, face count, and text-based emotions. The framework was trained and validated on datasets such as FER-2013 and AffectNet, enabling identification of a range of emotions beyond simple binary sentiment. A user interface offers emotion diaries, visual analytics, and long-term mood tracking, providing actionable insights and personalized recommendations. By integrating DeepFace-based facial analysis with transformer-based text modeling and robust fallback strategies, the system moves toward a comprehensive, context-aware, and empathetic AI platform for mental wellness.
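The fallback mechanism described above can be illustrated with a minimal sketch. The paper does not specify its implementation, so the following is an illustrative assumption: face count, age, and gender are drawn from simple uniform ranges, and the emotion output is a pseudo-random probability distribution over the seven FER-2013 emotion categories, normalized to sum to one. All function names and value ranges here are hypothetical.

```python
import random

# Seven emotion categories used by FER-2013 (also common in DeepFace output).
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def fallback_emotion_distribution(rng):
    """Sample a normalized pseudo-random distribution over emotion labels."""
    weights = [rng.random() for _ in EMOTIONS]
    total = sum(weights)
    return {emotion: w / total for emotion, w in zip(EMOTIONS, weights)}

def fallback_face_analysis(seed=None):
    """Return randomized facial-analysis output for limited environments.

    Mimics the shape of a real analysis result (face count, age, gender,
    emotion distribution) so downstream UI components keep working even
    when the DeepFace pipeline is unavailable.
    """
    rng = random.Random(seed)
    return {
        "num_faces": rng.randint(1, 3),          # assumed range
        "age": rng.randint(18, 60),              # assumed range
        "gender": rng.choice(["Man", "Woman"]),
        "emotions": fallback_emotion_distribution(rng),
    }

result = fallback_face_analysis(seed=42)
```

Seeding the generator makes the fallback reproducible for testing while still appearing varied across sessions when no seed is supplied.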

Country: India

¹Ms. Kiran Likhar, ²Abhiruchi Yeole, ³Tanvi Ninawe, ⁴Kashish Meshram, ⁵Vinay Lahoti, ⁶Vaishnavi Agrawal

  1–6. Department of Computer Science and Engineering, G. H. Raisoni College of Engineering and Management, Nagpur, Maharashtra, India

IRJIET, Volume 9, Issue 10, October 2025, pp. 122-127

DOI: https://doi.org/10.47001/IRJIET/2025.910016
