Nihada Mithura - Interactive and Entertaining Model for Mute Sri Lankan Primary Students

Abstract

Inclusive education is a core societal ideal aimed at providing equitable learning opportunities for all pupils, regardless of their circumstances or abilities. The proposed system, Nihada Mithura, integrates technology, language, and entertainment to deliver a comprehensive learning experience. It consists of four major components: (1) a system that translates Sri Lankan Sign Language (SLiSL) into Sinhala natural language, complete with voice output and graphic representation; (2) an interactive 3D model that enables bidirectional translation between Sinhala and SLiSL; (3) games designed specifically for deaf students; and (4) quizzes that teach essential skills such as the Sinhala alphabet and basic mathematical operations. The approach aims to improve academic performance, strengthen well-being and confidence, and empower marginalized children. It makes an important contribution to the field of inclusive education and has the potential to transform the educational landscape.
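The sign-to-text component described above can be pictured, in very simplified form, as classifying hand-landmark feature vectors (as produced by a hand-tracking library) against per-gloss templates. The sketch below is purely illustrative and is not the paper's actual method: the glosses, the 4-dimensional vectors, and the nearest-centroid classifier are hypothetical placeholders standing in for a full recognition pipeline.

```python
import math

def euclidean(a, b):
    # Straight-line distance between two equal-length feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def train_centroids(samples):
    """samples: dict mapping gloss -> list of feature vectors.
    Returns a dict mapping gloss -> mean (centroid) vector."""
    centroids = {}
    for gloss, vectors in samples.items():
        dim, n = len(vectors[0]), len(vectors)
        centroids[gloss] = [sum(v[i] for v in vectors) / n for i in range(dim)]
    return centroids

def classify(vector, centroids):
    """Return the gloss whose centroid is closest to the input vector."""
    return min(centroids, key=lambda g: euclidean(vector, centroids[g]))

# Toy 4-dimensional "landmark" vectors for two illustrative Sinhala glosses
# (real systems would use e.g. 21 landmarks x 2-3 coordinates per hand).
training = {
    "ayubowan": [[0.10, 0.20, 0.10, 0.20], [0.12, 0.18, 0.11, 0.21]],
    "sthuthi":  [[0.80, 0.90, 0.80, 0.90], [0.82, 0.88, 0.79, 0.91]],
}
centroids = train_centroids(training)
print(classify([0.11, 0.19, 0.10, 0.20], centroids))  # → ayubowan
```

In a deployed pipeline the recognized gloss would then feed a Sinhala text-to-speech stage to produce the voice output the abstract mentions; a learned classifier would replace the nearest-centroid step.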

Country: Sri Lanka

¹Jayasinghe L.D., ²Senadheera T.D., ³Suriyaarachchi S.A., ⁴Samarasinghe A.V., ⁵Dinithi Pandithage, ⁶Buddhima Attanayaka

  1-4. Department of Information Technology, Sri Lanka Institute of Information Technology, Malabe, Sri Lanka
  5-6. Lecturer, Faculty of Computing, Sri Lanka Institute of Information Technology, Malabe, Sri Lanka

IRJIET, Volume 7, Issue 12, December 2023 pp. 137-143

DOI: https://doi.org/10.47001/IRJIET/2023.712019
