Nipuna Thalpage
Cite: Thalpage, N. Explainable AI for Cybersecurity Applications: A Review Article on Techniques, Deployments, and Usability Challenges. JDS, 7(1), 3-10, (2025). https://doi.org/10.33847/2686-8296.7.1_1
Abstract. The growing reliance on Artificial Intelligence (AI) in cybersecurity has elevated concerns about the interpretability and transparency of automated decision-making systems. In environments where trust, accountability, and real-time responsiveness are critical, the “black box” nature of many AI models poses significant barriers to their adoption and operational effectiveness. This systematic literature review examines recent developments in Explainable Artificial Intelligence (XAI) within the cybersecurity domain, focusing on its role in enhancing transparency, trust, and human-AI collaboration. A structured search was conducted across six major academic databases and preprint repositories, yielding nine peer-reviewed studies that met rigorous inclusion criteria. These studies were analyzed across five quality dimensions: relevance, clarity of XAI methods, empirical grounding, human factors consideration, and deployment realism. Findings reveal that while technical innovations—such as SHAP, LIME, Grad-CAM, and lightweight edge-based models—offer substantial gains in model transparency, these advances often fail to translate into actionable insights for end-users due to limitations in cognitive usability and system integration. The review identifies a recurring gap between the theoretical promise of XAI and its practical implementation in real-world security infrastructures. Studies highlight issues such as user disengagement, underutilization of explanation tools, and inadequate alignment with operational workflows. Emerging directions emphasize the need for user-centered design, co-explainability frameworks, and interdisciplinary approaches that incorporate cognitive science and human-computer interaction. In conclusion, the future of XAI in cybersecurity hinges on its ability to go beyond algorithmic transparency and embed interpretability within the social, cognitive, and organizational contexts in which security professionals operate.
Bridging these gaps will be essential for realizing the full potential of explainable AI systems as trustworthy and effective tools in modern cybersecurity operations.
Keywords: Explainable AI, Cybersecurity, Artificial Intelligence.
References
Barletta, V.S. et al. (2023) ‘Serious Games for Cybersecurity: How to Improve Perception and Human Factors’, in 2023 IEEE International Conference on Metrology for eXtended Reality, Artificial Intelligence and Neural Engineering (MetroXRAINE). IEEE, pp. 1110–1115. Available at: https://doi.org/10.1109/MetroXRAINE58569.2023.10405607.
Capuano, N. et al. (2022) ‘Explainable Artificial Intelligence in CyberSecurity: A Survey’, IEEE Access, 10, pp. 93575–93600. Available at: https://doi.org/10.1109/ACCESS.2022.3204171.
Desai, B. et al. (2024) ‘Explainable AI in Cybersecurity: A Comprehensive Framework for enhancing transparency, trust, and Human-AI Collaboration’, in 2024 International Seminar on Application for Technology of Information and Communication (iSemantic). Semarang, Indonesia: IEEE, pp. 135–150. Available at: https://doi.org/10.1109/iSemantic63362.2024.10762690.
Rahmati, M. (2025) ‘Towards Explainable and Lightweight AI for Real-Time Cyber Threat Hunting in Edge Networks’. arXiv preprint. Available at: https://doi.org/10.48550/arXiv.2504.16118.
Nazim, S. et al. (2025) ‘Advancing malware imagery classification with explainable deep learning: A state-of-the-art approach using SHAP, LIME and Grad-CAM’, PLOS One, 20(5), p. e0318542. Available at: https://doi.org/10.1371/journal.pone.0318542.
Nyre-Yu, M. et al. (2022) ‘Explainable AI in Cybersecurity Operations: Lessons Learned from xAI Tool Deployment’, in Proceedings 2022 Symposium on Usable Security. Reston, VA: Internet Society. Available at: https://doi.org/10.14722/usec.2022.23014.
Pan, Z. and Mishra, P. (2023) Explainable AI for Cybersecurity. Cham: Springer Nature Switzerland. Available at: https://doi.org/10.1007/978-3-031-46479-9.
Ravikumar, C. et al. (2024) ‘Advancing Malware Detection Using Memory Analysis and Explainable AI Approach’, in 2024 Second International Conference on Intelligent Cyber Physical Systems and Internet of Things (ICoICI). IEEE, pp. 518–523. Available at: https://doi.org/10.1109/ICoICI62503.2024.10696406.
Sarker, I.H. et al. (2024) ‘Explainable AI for cybersecurity automation, intelligence and trustworthiness in digital twin: Methods, taxonomy, challenges and prospects’, ICT Express, 10(4), pp. 935–958. Available at: https://doi.org/10.1016/j.icte.2024.05.007.
Thalpage, N. (2024) ‘The Integration of Machine Learning and Explainable AI and Business Digitization: Unleashing the Power of Data – A Review’, Journal of Digital Science, 6(1), pp. 18–27. Available at: https://doi.org/10.33847/2686-8296.6.1_2.
Thalpage, N.S. and Nisansala, T.A.D. (2023) ‘Exploring the Opportunities of Applying Digital Twins for Intrusion Detection in Industrial Control Systems of Production and Manufacturing – A Systematic Review’, in Data Protection in a Post-Pandemic Society. Cham: Springer International Publishing, pp. 113–143. Available at: https://doi.org/10.1007/978-3-031-34006-2_4.
Published online 25.06.2025
