Smart Automation Using LLM

Abstract

Large language models (LLMs) have emerged as a transformative AI technology, achieving state-of-the-art performance across natural language processing tasks. This survey summarizes progress on the General Computer Automation Using Large Language Models project, which aims to create an intelligent agent that automates computer tasks by leveraging LLMs. The modular architecture includes components for conversational intelligence, document handling, and application control, with OpenAI's GPT-3 integrated for natural language capabilities. Trust in AI agents has been studied extensively, yielding significant advances in our understanding of the field; however, the rapid progress of LLMs and the emergence of LLM-based agent frameworks pose new challenges and opportunities for further research. In the field of process automation, a new generation of AI-based agents has emerged that can execute complex tasks.
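The abstract describes a modular agent with components for conversational intelligence, document handling, and application control, driven by an LLM. As a minimal sketch of that structure, the snippet below routes a natural-language command to one of those three modules; all names are illustrative, and the LLM intent classifier is stubbed out (a real system would call an API such as OpenAI's) so the skeleton runs offline.

```python
def stub_llm_classify(command: str) -> str:
    """Stand-in for an LLM-based intent classifier (hypothetical).

    A production agent would send the command to a model such as GPT-3
    and parse the returned intent; here we use keyword rules instead.
    """
    lowered = command.lower()
    if any(word in lowered for word in ("open", "launch", "close")):
        return "app_control"
    if any(word in lowered for word in ("read", "summarize", "document", "file")):
        return "documents"
    return "conversation"


class AutomationAgent:
    """Dispatches each command to the matching module handler."""

    def __init__(self) -> None:
        # One handler per module named in the abstract.
        self.handlers = {
            "conversation": lambda c: f"chat: {c}",
            "documents": lambda c: f"doc task: {c}",
            "app_control": lambda c: f"app task: {c}",
        }

    def run(self, command: str) -> str:
        intent = stub_llm_classify(command)
        return self.handlers[intent](command)


agent = AutomationAgent()
print(agent.run("open the calculator"))
```

The design point is that swapping the stub for a real LLM call changes only the classifier, not the module handlers, which is what makes the architecture modular.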

¹Priya Ethape, ²Riya Kane, ³Ghanashyam Gadekar, ⁴Sahil Chimane

  1–4. Smt. Indira Gandhi College of Engineering, Navi Mumbai, Maharashtra, India

IRJIET, Volume 7, Issue 11, November 2023 pp. 603-610

https://doi.org/10.47001/IRJIET/2023.711080
