Visual Assistance Glasses for Visually Impaired

Abstract

Advances in technology continue to improve the daily lives of visually impaired people. B-Fit Glasses are smart glasses that combine cameras, microphones, sensors, and onboard processing units with machine learning algorithms, all coordinated by a server. The glasses automatically detect surrounding obstacles and alert the user, improving spatial awareness. Their capabilities extend further: voice commands switch the glasses into facial recognition mode, which identifies and names familiar people, or into document recognition mode, which reads printed or handwritten text aloud. Detecting currency denominations and announcing them is another notable feature. Onboard processors analyze and store data rapidly, ensuring real-time responsiveness and accuracy. Comprehensive and user-centric, B-Fit Glasses give visually impaired users an enriched, more autonomous, and safer way to interact with their surroundings, advancing wearable assistive technology.
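The mode switching described above can be pictured as a small state machine: obstacle detection runs by default, and a transcribed voice command selects facial recognition, document reading, or currency detection. The sketch below is a minimal illustration of that dispatch logic; the mode names, command phrases, and class structure are assumptions for illustration, not the authors' actual implementation.

```python
from enum import Enum

class Mode(Enum):
    OBSTACLE = "obstacle"    # default: continuous obstacle alerts
    FACE = "face"            # recognize and name familiar people
    DOCUMENT = "document"    # read printed or handwritten text aloud
    CURRENCY = "currency"    # detect and verbalize note denominations

# Assumed mapping from spoken command phrases to modes (hypothetical).
COMMAND_KEYWORDS = {
    "who is this": Mode.FACE,
    "read this": Mode.DOCUMENT,
    "what note is this": Mode.CURRENCY,
    "navigate": Mode.OBSTACLE,
}

class GlassesController:
    """Minimal state machine: starts in obstacle-detection mode; a
    recognized voice command switches the active recognition mode."""

    def __init__(self):
        self.mode = Mode.OBSTACLE

    def handle_command(self, transcript: str) -> Mode:
        # Match the transcribed command against known phrases; an
        # unrecognized command leaves the current mode unchanged.
        key = transcript.strip().lower()
        self.mode = COMMAND_KEYWORDS.get(key, self.mode)
        return self.mode

controller = GlassesController()
print(controller.mode.value)                          # obstacle (default)
print(controller.handle_command("Read this").value)   # document
print(controller.handle_command("gibberish").value)   # document (unchanged)
```

In a real system the transcript would come from a speech-recognition model and each mode would trigger its own vision pipeline; keeping the dispatch table separate from the handlers makes adding a new mode a one-line change.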

Country: Sri Lanka

¹S.M.D.N.S. Senarath, ²P.M. Kekulandara, ³K.D.H.N.D.A.T. Divarathna, ⁴Y.M.W.H.C. Samarasekara

  1,2,3,4. Department of Information Technology, Sri Lanka Institute of Information Technology, Malabe, Sri Lanka

IRJIET, Volume 7, Issue 11, November 2023 pp. 106-112

DOI: https://doi.org/10.47001/IRJIET/2023.711015
