MindMirror - Mirror that Reads Emotions

Abstract

Automatic depression assessment based on visual and vocal cues is a rapidly growing research domain. This exhaustive review covers approaches reported in over sixty publications from the last ten years, focusing on image processing and machine learning algorithms. It summarizes visual manifestations of depression, procedures used for data collection, and existing datasets, and outlines methods and algorithms for visual feature extraction, dimensionality reduction, decision methods for classification and regression, and different fusion strategies.

A quantitative meta-analysis of reported results, relying on performance metrics robust to chance, is included, identifying general trends and key unresolved issues to be considered in future studies of automatic depression assessment using visual and vocal cues, alone or in combination. The proposed work also predicts depression level from input videos using deep learning as well as NLP.
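The video-to-severity pipeline implied above (per-frame visual features, temporal pooling, a regression head producing a severity score) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the random-projection stand-in for a learned CNN extractor, the 16-dimensional descriptor, and the 0-63 output range (as in BDI-II-style scales) are all assumptions made for the example.

```python
import numpy as np

# Hypothetical sketch of a video-based depression-severity pipeline.
# All names, shapes, and the score range are illustrative assumptions,
# not the system described in the paper.

def extract_frame_features(frames, rng):
    # Stand-in for a learned CNN feature extractor: one feature
    # vector per frame, mocked here with a random linear projection.
    proj = rng.standard_normal((frames.shape[1], 16))
    return frames @ proj  # shape: (n_frames, 16)

def temporal_pool(features):
    # Collapse per-frame features into a single clip-level
    # descriptor (mean pooling, a common temporal-fusion baseline).
    return features.mean(axis=0)

def predict_depression_score(descriptor, weights, bias):
    # Linear regression head mapping the clip descriptor to a
    # severity score, clipped to a BDI-II-like 0-63 range.
    return float(np.clip(descriptor @ weights + bias, 0.0, 63.0))

rng = np.random.default_rng(0)
frames = rng.standard_normal((30, 64))   # 30 frames, 64 raw values each
feats = extract_frame_features(frames, rng)
descriptor = temporal_pool(feats)
weights = rng.standard_normal(16)
score = predict_depression_score(descriptor, weights, bias=20.0)
print(score)
```

In a real system the random projection would be a pretrained visual backbone, and mean pooling could be replaced by the sequence models or fusion strategies surveyed in the review.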


Sudhanva Kale
Student, Electronics and Telecommunication, College of Engineering, Pune, Maharashtra, India

IRJIET, Volume 9, Issue 11, November 2025, pp. 129-133
https://doi.org/10.47001/IRJIET/2025.911016
