Understanding and Supporting People with Hearing and Speech Impairments

Abstract

This research article examines the complex topic of understanding and supporting people in Sri Lanka who live with hearing and speech impairments. These impairments pose serious difficulties for those affected, limiting their communication abilities, social connections, and overall quality of life. Alongside examining methods and treatments for offering practical support, this study aims to deepen our understanding of the experiences, needs, and circumstances of people with these impairments. Through an extensive review of the existing literature, the paper compiles current knowledge on hearing and speech impairments, including their causes, prevalence, and potential impacts on individuals' psychological, emotional, and social well-being. Insight into these factors allows intervention strategies, educational programs, and support systems to be adapted more effectively to the specific requirements of people with hearing and speech impairments. The study also emphasizes the importance of collaboration among academics, healthcare workers, educators, and communities in fostering inclusive settings. Ultimately, it seeks to advance the body of knowledge and guide the creation of comprehensive support systems that enable people with hearing and speech impairments to live full, fulfilling lives. By recognizing the value of inclusion and adopting a person-centered approach, we can close gaps in understanding and improve the assistance available to people with hearing and speech impairments, thereby fostering a more inclusive and equitable society.

Country: Sri Lanka

W. M. D. C. Wanasooriya¹, B. G. M. S. Thilakawardhana², P. H. S. Y. De Silva³, M. M. Y. S. Menikhitiya⁴, Samadhi Rathnayake⁵

  1. Faculty of Computing, Sri Lanka Institute of Information Technology, Malabe, Sri Lanka
  2. Faculty of Computing, Sri Lanka Institute of Information Technology, Malabe, Sri Lanka
  3. Faculty of Computing, Sri Lanka Institute of Information Technology, Malabe, Sri Lanka
  4. Faculty of Computing, Sri Lanka Institute of Information Technology, Malabe, Sri Lanka
  5. Department of Information Technology, Sri Lanka Institute of Information Technology, Malabe, Sri Lanka

IRJIET, Volume 7, Issue 11, November 2023 pp. 113-119

https://doi.org/10.47001/IRJIET/2023.711016
