Comparison of Generative AI and Artificial Intelligence

Abstract

Generative AI represents a transformative branch of artificial intelligence focused on creating new data, such as images, text, or audio, based on patterns learned from existing data. Unlike traditional AI, which primarily focuses on classification, prediction, or optimization tasks, generative AI models, such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), aim to simulate creative processes by generating outputs that resemble real-world data. This paper reviews the current state of generative AI technologies, exploring the underlying architectures, including the deep learning techniques that power models like GPT and DALL·E. It also examines applications across fields such as healthcare, art, entertainment, and natural language processing. Moreover, the ethical considerations surrounding AI-generated content, including issues of bias, authenticity, and misuse, are critically analyzed. By synthesizing current research and advancements, this paper highlights both the opportunities and challenges that generative AI presents for the future of AI development and its societal impact.

In recent years, the study of artificial intelligence (AI) has undergone a paradigm shift, propelled by the groundbreaking capabilities of generative models in both supervised and unsupervised learning scenarios. Generative AI has shown state-of-the-art performance on challenging real-world problems in fields such as image translation, medical diagnostics, text-image fusion, natural language processing, and beyond. This paper presents a systematic review and analysis of recent advancements and techniques in generative AI, with a detailed discussion of their applications, including application-specific models. To date, generative AI has had its greatest impact in language generation, through the development of large language models, as well as in image translation and several other interdisciplinary applications. The primary contribution of this paper lies in its coherent synthesis of the latest advancements in these areas and its exploration of the future trajectory of generative AI. The paper concludes with a discussion of Responsible AI principles and the ethical considerations necessary for the sustainable growth of these generative models.
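To make the adversarial training behind GANs concrete, a minimal sketch from the standard literature (not a formulation introduced by this paper): the generator G and discriminator D are trained against each other in the two-player minimax game

$$\min_{G}\,\max_{D}\; \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_{z}}\big[\log\big(1 - D(G(z))\big)\big],$$

where D learns to distinguish real samples x from generated samples G(z), and G learns to produce samples that D accepts as real; at the optimum, the generator's distribution matches p_data.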

Country: India

¹Rohan Tayade, ²Abhishek Khodke, ³Shruti Jaiswal, ⁴Dr. Shilpa B. Sarvaiya

  1. MCA-II, Department of MCA, Vidya Bharti Mahavidyalaya, Amravati, Maharashtra, India
  2. MCA-II, Department of MCA, Vidya Bharti Mahavidyalaya, Amravati, Maharashtra, India
  3. MCA-II, Department of MCA, Vidya Bharti Mahavidyalaya, Amravati, Maharashtra, India
  4. Head, Department of MCA, Vidya Bharti Mahavidyalaya, Amravati, Maharashtra, India

IRJIET, Volume 8, Issue 10, October 2024, pp. 213-220

https://doi.org/10.47001/IRJIET/2024.810028
