
Evaluation of Saliency-based Explainability Method

Publication Type : Journal Article

Thematic Areas : Center for Computational Engineering and Networking (CEN)

Source : ICML Workshop on Theoretic Foundation, Criticism, and Application Trend of Explainable AI, 2021

Url : https://doi.org/10.48550/arXiv.2106.12773

Campus : Coimbatore

Year : 2021

Abstract : A particular class of Explainable AI (XAI) methods provides saliency maps that highlight the parts of an image a Convolutional Neural Network (CNN) model looks at when classifying it, as a way of explaining how the model works. These methods give users an intuitive way to understand predictions made by CNNs. Beyond quantitative computational tests, however, the vast majority of evidence that these methods are valuable is anecdotal. Given that humans would be the end users of such methods, we devise three human-subject experiments through which we gauge the effectiveness of these saliency-based explainability methods.
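For context, a saliency map of the kind the abstract describes can be obtained from input gradients. The following is a minimal sketch of vanilla gradient saliency using PyTorch and a pretrained ResNet-18; the model choice, preprocessing, and the "input.jpg" path are illustrative assumptions, not the authors' exact experimental setup.

# Minimal sketch of a gradient-based saliency map (vanilla gradients),
# illustrating the class of methods the abstract describes. Model,
# preprocessing, and image path are assumptions for illustration only.
import torch
from torchvision import models, transforms
from PIL import Image

# Pretrained CNN in evaluation mode (a stand-in for the models studied).
model = models.resnet18(pretrained=True).eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# "input.jpg" is a placeholder path for the image being explained.
image = preprocess(Image.open("input.jpg").convert("RGB")).unsqueeze(0)
image.requires_grad_(True)

# Forward pass; back-propagate the predicted class score to the input pixels.
scores = model(image)
scores[0, scores.argmax()].backward()

# Saliency: max absolute gradient across colour channels, one value per pixel.
saliency = image.grad.abs().max(dim=1)[0].squeeze(0)  # shape (224, 224)

Overlaying `saliency` on the original image yields the kind of highlight map whose usefulness to human end users the paper's three experiments evaluate.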

Cite this Research Publication : Sam Zabdiel Sunder Samuel, Vidhya Kamakshi, Namrata Lodhi, Narayanan C Krishnan, "Evaluation of Saliency-based Explainability Method", ICML Workshop on Theoretic Foundation, Criticism, and Application Trend of Explainable AI, 2021.
