Smart Automation Using LLM

Priya Ethape, Smt. Indira Gandhi College of Engineering, Navi Mumbai, Maharashtra, India
Riya Kane, Smt. Indira Gandhi College of Engineering, Navi Mumbai, Maharashtra, India
Ghanashyam Gadekar, Smt. Indira Gandhi College of Engineering, Navi Mumbai, Maharashtra, India
Sahil Chimane, Smt. Indira Gandhi College of Engineering, Navi Mumbai, Maharashtra, India

Volume 7, Issue 11, November 2023 | Pages: 603-610

International Research Journal of Innovations in Engineering and Technology

OPEN ACCESS | Research Article | Published Date: 25-11-2023

DOI: https://doi.org/10.47001/IRJIET/2023.711080

Abstract

Large language models (LLMs) have emerged as a transformative AI technology, achieving state-of-the-art performance across natural language processing tasks. This survey summarizes development progress on the General Computer Automation Using Large Language Models project, which aims to create an intelligent agent for automating computer tasks by leveraging LLMs. The modular architecture includes components for conversational intelligence, document handling, and application control, and OpenAI's GPT-3 is integrated for natural language capabilities. Trust in AI agents has been studied extensively in the literature, producing significant advances in our understanding of the field. However, the rapid progress of LLMs and the emergence of LLM-based AI agent frameworks pose new challenges and opportunities for further research. In the field of process automation, a new generation of AI-based agents has emerged, enabling the execution of complex tasks.
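The modular architecture described above (conversational intelligence, document handling, and application control, coordinated by an LLM) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the class names, the keyword-based `classify` heuristic (standing in for the GPT-3 intent-classification call), and the routing logic are all assumptions introduced for clarity.

```python
# Hypothetical sketch of an LLM-driven automation agent with three
# modules, mirroring the architecture named in the abstract.
# All names and logic here are illustrative assumptions.

class ConversationModule:
    """Handles open-ended dialogue with the user."""
    def handle(self, command: str) -> str:
        return f"chat: {command}"

class DocumentModule:
    """Handles reading and summarizing documents."""
    def handle(self, command: str) -> str:
        return f"document: {command}"

class AppControlModule:
    """Handles launching and controlling applications."""
    def handle(self, command: str) -> str:
        return f"app: {command}"

class AutomationAgent:
    """Routes a natural-language command to the matching module.

    In the paper's design an LLM (GPT-3) would classify the user's
    intent; a simple keyword heuristic stands in for that call here.
    """
    def __init__(self) -> None:
        self.modules = {
            "chat": ConversationModule(),
            "document": DocumentModule(),
            "app": AppControlModule(),
        }

    def classify(self, command: str) -> str:
        # Stand-in for an LLM intent-classification request.
        text = command.lower()
        if any(w in text for w in ("open", "launch", "close")):
            return "app"
        if any(w in text for w in ("summarize", "read", "file", "pdf")):
            return "document"
        return "chat"

    def run(self, command: str) -> str:
        return self.modules[self.classify(command)].handle(command)
```

In a real deployment, `classify` would be replaced by a prompt to the LLM asking it to select the appropriate module, which is what makes the agent extensible to new task types without rewriting the routing rules.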

Keywords

LLM, AutoGPT, general computer automation


Citation of this Article

Priya Ethape, Riya Kane, Ghanashyam Gadekar, Sahil Chimane, “Smart Automation Using LLM,” published in International Research Journal of Innovations in Engineering and Technology (IRJIET), Volume 7, Issue 11, pp. 603-610, November 2023. Article DOI: https://doi.org/10.47001/IRJIET/2023.711080
