Drone Face.AI

Abstract

The convergence of drone technology and virtual reality (VR) offers transformative potential across many domains. This abstract provides a concise overview of an exploration of the synergy between drone and VR technology, underscoring the possibilities it holds for redefining human interaction and applications. The project investigates the symbiotic relationship between drone technology, with its capacity for real-world presence and operation, and VR, which immerses users in digital environments. The primary goal is to harness the combined strengths of these technologies, creating a holistic experience that transcends conventional boundaries. The project's key objectives include:

Data Visualization and Analysis: Investigating how VR improves real-time processing and visualization of drone sensor data, offering better insight and efficiency in areas such as surveillance, environmental monitoring, and research (an illustrative sketch follows this list).

Application Diversity: Demonstrating applications of this fusion across diverse sectors, from agriculture and construction to gaming and disaster response, highlighting how it can revolutionize industries and address real-world challenges.
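As a concrete illustration of the data-visualization objective, the minimal Python sketch below simulates a drone telemetry stream and plots it. The sensor model, sample count, and plot layout are assumptions invented for illustration; a deployed system would instead read live telemetry from the flight controller (for example over a MAVLink connection) and render the stream inside the VR environment rather than as a desktop plot.

import math
import random
import matplotlib.pyplot as plt

def read_telemetry(t):
    # Simulated sensor sample: altitude (m) and battery (%).
    # A real system would pull these from the flight controller
    # (e.g., over MAVLink) instead of synthesizing them here.
    altitude = 20.0 + 5.0 * math.sin(t / 10.0) + random.uniform(-0.5, 0.5)
    battery = max(0.0, 100.0 - 0.1 * t)
    return altitude, battery

# Collect 300 samples, standing in for the live feed the VR layer would render.
times = list(range(300))
altitudes, batteries = zip(*(read_telemetry(t) for t in times))

fig, (ax_alt, ax_bat) = plt.subplots(2, 1, sharex=True)
ax_alt.plot(times, altitudes)
ax_alt.set_ylabel("Altitude (m)")
ax_bat.plot(times, batteries)
ax_bat.set_ylabel("Battery (%)")
ax_bat.set_xlabel("Time step")
fig.suptitle("Simulated drone telemetry stream")
plt.show()

In the envisioned system, the same stream would drive an in-headset dashboard, so the operator sees altitude and battery state without leaving the VR view.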

Country: India

¹Deesha Korche, ²Aniket Chile, ³Poorva Padave, ⁴Arjun Mhatre, ⁵Prof. Swati Vyas

  1. Student, Smt. Indira Gandhi College of Engineering, Ghansoli, Navi Mumbai, Maharashtra, India
  2. Student, Smt. Indira Gandhi College of Engineering, Ghansoli, Navi Mumbai, Maharashtra, India
  3. Student, Smt. Indira Gandhi College of Engineering, Ghansoli, Navi Mumbai, Maharashtra, India
  4. Student, Smt. Indira Gandhi College of Engineering, Ghansoli, Navi Mumbai, Maharashtra, India
  5. Professor, Dept. of AI & ML, Smt. Indira Gandhi College of Engineering, Ghansoli, Navi Mumbai, Maharashtra, India

IRJIET, Volume 8, Issue 4, April 2024, pp. 135-142

https://doi.org/10.47001/IRJIET/2024.804018
