
Counterfactual Explanations for Enhanced Interpretability in Cross-Site Scripting (XSS) Detection

Publication Type : Conference Paper

Publisher : IEEE

Source : 2025 7th International Conference on Innovative Data Communication Technologies and Application (ICIDCA)

URL : https://doi.org/10.1109/icidca66325.2025.11280340

Campus : Amritapuri

School : Centre for Cybersecurity Systems and Networks

Year : 2025

Abstract :

Machine learning (ML) models are vital for detecting Cross-Site Scripting (XSS) attacks in web security, but their opaque decision-making processes often hinder adoption. To address this, we propose a novel interpretability framework combining Local Interpretable Model-agnostic Explanations (LIME) and Diverse Counterfactual Explanations (DiCE) to enhance the transparency of an XSS detection model. Our approach uses lexical and structural URL features to train four ML models: Random Forest (RF), Support Vector Machine (SVM), Multi-Layer Perceptron (MLP), and Logistic Regression. RF achieves superior performance (99.28% accuracy, 99.11% precision, 98.29% recall, and a 98.70% F1-score) after hyperparameter optimization via 5-fold cross-validation and Grid Search. LIME identifies critical features such as html_attr_background, html_tag_svg, and url_special_characters as key indicators of XSS attacks, while DiCE generates actionable counterfactuals, demonstrating how minimal feature adjustments can shift predictions from malicious to benign. This framework enhances trust and provides actionable insights for security practitioners, improving the deployment of ML in security-critical contexts.
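The counterfactual idea described in the abstract can be illustrated with a minimal sketch: given a binary feature vector for a URL and a toy linear classifier, greedily flip the fewest active features needed to move the prediction from malicious to benign. The feature names follow the abstract; the weights, threshold, and greedy search below are hypothetical stand-ins for the paper's trained Random Forest and the DiCE optimizer, not the authors' implementation.

```python
# Toy counterfactual search in the spirit of DiCE-style explanations.
# Weights and threshold are illustrative assumptions, not learned values.

FEATURES = ["html_attr_background", "html_tag_svg", "url_special_characters"]
WEIGHTS = {"html_attr_background": 0.5,
           "html_tag_svg": 0.4,
           "url_special_characters": 0.3}
THRESHOLD = 0.5  # score >= THRESHOLD -> "malicious"

def predict(x):
    """Classify a {feature: 0/1} vector with the toy linear model."""
    score = sum(WEIGHTS[f] * x[f] for f in FEATURES)
    return "malicious" if score >= THRESHOLD else "benign"

def counterfactual(x):
    """Greedily zero out active features (largest weight first) until
    the prediction flips; return the changed features and new vector."""
    cf = dict(x)
    flipped = []
    for f in sorted(FEATURES, key=lambda f: -WEIGHTS[f]):
        if predict(cf) == "benign":
            break
        if cf[f] == 1:
            cf[f] = 0
            flipped.append(f)
    return flipped, cf

sample = {"html_attr_background": 1, "html_tag_svg": 1,
          "url_special_characters": 0}
flipped, cf = counterfactual(sample)
print(flipped, predict(cf))  # removing html_attr_background flips the label
```

In this toy setting, removing the single highest-weight active feature (html_attr_background) drops the score below the threshold, which is the kind of minimal, actionable adjustment a DiCE counterfactual reports.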

Cite this Research Publication : Alphin Kayalathu Mathew, Ashwin M, Vysakh Kani Kolil, Devi Rajeev, Counterfactual Explanations for Enhanced Interpretability in Cross-Site Scripting (XSS) Detection, 2025 7th International Conference on Innovative Data Communication Technologies and Application (ICIDCA), IEEE, 2025, https://doi.org/10.1109/icidca66325.2025.11280340
