Music Recommendation System Leveraging Facial Emotion

Abstract

Music plays a significant role in improving and elevating one's mood, as it is an important source of entertainment and inspiration. Recent studies have shown that humans respond and react to music in a very positive manner and that music has a strong impact on human brain activity. Nowadays, people often prefer to listen to music based on their mood and interests. This work presents a system that suggests songs to users based on their state of mind. In this system, computer vision components determine the user's emotion from facial expressions; once the emotion is recognized, the system suggests a song for that emotion, saving the user considerable time over selecting and playing songs manually. The conventional method of playing music according to a person's mood requires human interaction, and migrating to computer vision technology enables automation of such a system. To achieve this goal, an algorithm classifies human expressions and plays a music track according to the currently detected emotion, reducing the effort and time required to search a song list manually based on one's present state of mind. The facial features of a person are extracted using the PCA algorithm, and the expression is classified with a Euclidean distance classifier. An inbuilt camera captures the person's facial expressions, which reduces the design cost of the system compared with other methods. The results show that the proposed system achieves up to 84.82% accuracy.
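The pipeline the abstract describes (PCA for feature extraction, then nearest-neighbour matching by Euclidean distance) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses random arrays as stand-ins for flattened grayscale face crops, and all function and variable names are hypothetical.

```python
import numpy as np

def pca_fit(X, n_components):
    """Fit PCA on row-vector samples X (n_samples x n_features)."""
    mean = X.mean(axis=0)
    # SVD of the centered data yields the principal axes (eigenfaces).
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def pca_project(x, mean, components):
    """Project a flattened face image onto the PCA subspace."""
    return components @ (x - mean)

def classify(x, mean, components, train_proj, train_labels):
    """Nearest-neighbour emotion label by Euclidean distance in PCA space."""
    q = pca_project(x, mean, components)
    dists = np.linalg.norm(train_proj - q, axis=1)
    return train_labels[int(np.argmin(dists))]

# Synthetic stand-ins for flattened 48x48 grayscale face images.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(20, 48 * 48))
y_train = np.array(["happy", "sad", "angry", "neutral"] * 5)

mean, components = pca_fit(X_train, n_components=10)
train_proj = (X_train - mean) @ components.T

# Classify a slightly perturbed copy of a training face.
query = X_train[3] + 0.01 * rng.normal(size=48 * 48)
print(classify(query, mean, components, train_proj, y_train))
```

In a real deployment, `X_train` would hold face regions cropped from camera frames, and the predicted label would index into a playlist mapped to each emotion.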


¹Gayatri Kale, ²Smita Birajdar, ³Divya Suryawanshi, ⁴Dr. Sarika Khope

  1. Student, Electronics and Telecommunication Engineering, G H Raisoni College of Engineering and Management, Pune, Maharashtra, India
  2. Student, Electronics and Telecommunication Engineering, G H Raisoni College of Engineering and Management, Pune, Maharashtra, India
  3. Student, Electronics and Telecommunication Engineering, G H Raisoni College of Engineering and Management, Pune, Maharashtra, India
  4. Professor, Electronics and Telecommunication Engineering, G H Raisoni College of Engineering and Management, Pune, Maharashtra, India

IRJIET, Volume 9, Issue 5, May 2025, pp. 447-451

DOI: https://doi.org/10.47001/IRJIET/2025.905050
