Publication Type : Journal Article
Thematic Areas : Center for Computational Engineering and Networking (CEN)
Source : IEEE Transactions on Artificial Intelligence, 2021
Url : https://ieeexplore.ieee.org/abstract/document/9536400
Campus : Coimbatore
Year : 2021
Abstract : Deep convolutional networks have been quite successful at various image classification tasks. The current methods to explain the predictions of a pretrained model rely on gradient information, often resulting in saliency maps that focus on the foreground object as a whole. However, humans typically reason by dissecting an image and pointing out the presence of smaller concepts. The final output is often an aggregation of the presence or absence of these smaller concepts. In this work, we propose MACE: a model agnostic concept extractor, which can explain the working of a convolutional network through smaller concepts. The MACE framework dissects the feature maps generated by a convolutional network for an image to extract concept-based prototypical explanations. Furthermore, it estimates the relevance of the extracted concepts to the pretrained model’s predictions, a critical aspect of explaining individual class predictions that is missing in existing approaches. We validate our framework using the VGG16 and ResNet50 CNN architectures and datasets such as Animals With Attributes 2 (AWA2) and Places365. Our experiments demonstrate that the concepts extracted by the MACE framework increase the human interpretability of the explanations and are faithful to the underlying pretrained black-box model.
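The abstract describes dissecting a CNN's feature maps into smaller concept-like groups. The following is only an illustrative sketch of that general idea, not the MACE algorithm itself: it takes a feature-map tensor from one image and clusters channels with similar spatial activation patterns into "concepts" using a plain k-means on unit-normalized maps (the function name, cluster count, and initialization are assumptions for illustration).

```python
import numpy as np

def extract_concept_maps(feature_maps, n_concepts=3, n_iter=20):
    """Group CNN channels into concept-like clusters (illustrative only).

    feature_maps: array of shape (C, H, W) for one image.
    Returns (labels, concept_maps): a cluster label per channel and
    one averaged spatial map per concept.
    """
    C, H, W = feature_maps.shape
    flat = feature_maps.reshape(C, -1).astype(float)
    # Normalize each channel's spatial map so clustering compares
    # activation *patterns* rather than magnitudes.
    norms = np.linalg.norm(flat, axis=1, keepdims=True)
    unit = flat / np.maximum(norms, 1e-8)
    # Deterministic farthest-point initialization of cluster centers.
    centers = [unit[0]]
    for _ in range(1, n_concepts):
        d = np.min([((unit - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(unit[d.argmax()])
    centers = np.stack(centers)
    # Standard k-means iterations.
    for _ in range(n_iter):
        dists = ((unit[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = dists.argmin(1)
        for k in range(n_concepts):
            if (labels == k).any():
                centers[k] = unit[labels == k].mean(0)
    # Each concept's map is the mean activation of its member channels.
    concept_maps = np.stack([
        feature_maps[labels == k].mean(0) if (labels == k).any()
        else np.zeros((H, W))
        for k in range(n_concepts)
    ])
    return labels, concept_maps
```

In the paper's setting, a relevance score would further tie each extracted concept back to the pretrained model's class predictions; the sketch above covers only the grouping step.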
Cite this Research Publication : Ashish Kumar, Karan Sehgal, Prerna Garg, Vidhya Kamakshi, and Narayanan Chatapuram Krishnan, "MACE: Model Agnostic Concept Extractor for Explaining Image Classification Networks," IEEE Transactions on Artificial Intelligence, 2021.