Optimizing Explainable AI for Resource-Constrained Edge Computing: A Framework for Real-Time Transparent Decision-Making in IoT Ecosystems

Abstract

The integration of Explainable Artificial Intelligence (XAI) with edge computing offers a powerful approach to transparent real-time decision-making in Internet of Things (IoT) ecosystems. However, deploying complex XAI models on resource-constrained edge devices remains a significant challenge. This study proposes a novel framework that optimizes XAI methods for edge environments by simplifying model architectures and applying techniques such as model pruning and quantization. The framework also adapts explainability tools such as SHAP and LIME to enhance interpretability without compromising performance. Focusing on applications in smart healthcare and industrial IoT, this research demonstrates how transparent AI decisions improve safety, reliability, and user trust. Furthermore, the study investigates the role of XAI in enhancing IoT security by detecting and mitigating anomalies in real time. Evaluations based on metrics such as processing speed, energy efficiency, and interpretability showcase the practicality and effectiveness of the proposed approach. This work bridges the gap between explainability and computational efficiency, paving the way for deploying trustworthy AI systems in resource-limited edge computing environments.
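The model-compression techniques the abstract names, magnitude pruning and 8-bit quantization, can be illustrated with a short NumPy sketch. This is a minimal, generic illustration of the two techniques, not the paper's implementation; the function names, the 50% sparsity target, and the affine uint8 scheme are assumptions chosen for clarity.

```python
import numpy as np

def prune_by_magnitude(weights, sparsity=0.5):
    """Zero out the smallest-magnitude weights until at least
    `sparsity` fraction of the tensor is zero."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

def quantize_uint8(weights):
    """Affine 8-bit quantization: w ~= scale * (q - zero_point)."""
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / 255.0 if w_max > w_min else 1.0
    zero_point = int(np.round(-w_min / scale))
    q = np.clip(np.round(weights / scale) + zero_point, 0, 255)
    return q.astype(np.uint8), scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover an approximate float tensor from the uint8 codes."""
    return scale * (q.astype(np.float32) - zero_point)
```

Pruning shrinks the model by discarding low-magnitude weights, while quantization stores the survivors in one byte instead of four; the dequantization error is bounded by one quantization step (half the scale), which is what makes such compression attractive on low-power edge hardware.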

Country: India

  1. Kali Rama Krishna Vucha, Independent Software Researcher, Acharya Nagarjuna University, India
  2. Karthik Kamarapu, Independent Software Researcher, Osmania University, India

IRJIET, Volume 9, Issue 2, February 2025 pp. 91-95

doi.org/10.47001/IRJIET/2025.902014
