Data Mining-Based Transaction Labelling: Enhancing Financial Insights through Automated Techniques

Abstract

In recent years, it has been proposed that financial accounts be maintained so that natural-language text inputs describing financial transactions (debit and credit entries) can automatically generate labelled records and answer user queries. AI-powered categorization in this setting rests on machine learning and sharpens its algorithms with deep learning and natural language processing techniques. Data mining methods are needed to perform predictive modelling founded on data analytics, and the machine learning involved must accommodate a wide variety of data types. This paper surveys approaches that apply natural language processing to transaction processing driven by textual data descriptions. Several real-time applications specific to the financial sector are then discussed, each built on a different implementation technique. Finally, open problems and candidate solutions are examined in light of the performance the various methodologies achieve under different performance measures.
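To make the categorization idea concrete, the following is a minimal, self-contained sketch of labelling free-text transaction descriptions with a bag-of-words naive Bayes classifier. The merchant strings, category names, and the `NaiveBayesLabeller` class are all illustrative assumptions, not the paper's actual system; production systems would use the deep learning and NLP pipelines discussed in the survey.

```python
# Illustrative sketch: naive Bayes categorization of transaction descriptions.
# All data and names below are hypothetical examples, not the paper's method.
import math
import re
from collections import Counter, defaultdict


def tokenize(text):
    """Lowercase and keep alphabetic tokens from a transaction description."""
    return re.findall(r"[a-z]+", text.lower())


class NaiveBayesLabeller:
    """Multinomial naive Bayes over bag-of-words transaction descriptions."""

    def __init__(self, alpha=1.0):
        self.alpha = alpha  # Laplace smoothing constant

    def fit(self, descriptions, labels):
        self.class_counts = Counter(labels)
        self.word_counts = defaultdict(Counter)
        self.vocab = set()
        for text, label in zip(descriptions, labels):
            for tok in tokenize(text):
                self.word_counts[label][tok] += 1
                self.vocab.add(tok)
        return self

    def predict(self, text):
        tokens = tokenize(text)
        total = sum(self.class_counts.values())
        best_label, best_logprob = None, float("-inf")
        for label, count in self.class_counts.items():
            logprob = math.log(count / total)  # class prior
            denom = (sum(self.word_counts[label].values())
                     + self.alpha * len(self.vocab))
            for tok in tokens:
                logprob += math.log(
                    (self.word_counts[label][tok] + self.alpha) / denom)
            if logprob > best_logprob:
                best_label, best_logprob = label, logprob
        return best_label


# Tiny hypothetical training set of labelled debit-entry descriptions.
train = [
    ("STARBUCKS COFFEE #1234", "dining"),
    ("UBER TRIP HELP.UBER.COM", "transport"),
    ("PAYROLL DEPOSIT ACME CORP", "income"),
    ("SHELL OIL 5567 FUEL", "transport"),
    ("MCDONALDS 4411", "dining"),
]
model = NaiveBayesLabeller().fit([t for t, _ in train],
                                 [lbl for _, lbl in train])
print(model.predict("UBER EATS ORDER"))   # → transport
print(model.predict("STARBUCKS LATTE"))   # → dining
```

The sketch shows only the core mechanism (tokenize, count, score by smoothed log-probability); the surveyed systems extend this with deep learning models and richer NLP features.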

Country: USA

Praneeth Reddy Amudala Puchakayala
Data Scientist, Regions Bank, USA

IRJIET, Volume 7, Issue 5, May 2023, pp. 362–376

DOI: https://doi.org/10.47001/IRJIET/2023.705054
