Publication Type : Conference Proceedings
Publisher : IEEE
Source : 2025 8th International Conference on Computing Methodologies and Communication (ICCMC)
Url : https://doi.org/10.1109/iccmc65190.2025.11140777
Campus : Kochi
School : School of Computing
Year : 2025
Abstract : Bone cancer is a dangerous disease that must be identified early and accurately to be treated properly. Conventional diagnostics such as X-rays, MRIs, and biopsies serve as a starting point for identifying abnormalities but are operator dependent, slow, and prone to variability. Deep learning has shown great promise in automated medical image analysis, achieving high accuracy in cancer identification. Despite this accuracy, the lack of interpretability remains a major obstacle to clinical adoption. This work proposes an Explainable AI (XAI) framework for deep learning based detection of bone cancer that delivers both accurate and interpretable predictions. A ResNet-101 model is fine-tuned for binary classification of bone cancer on a dataset from Hugging Face, with preprocessing steps including resizing, normalization, and brightness equalization to facilitate learning. Grad-CAM is incorporated to highlight the regions of the image that most influence the model's decision, allowing medical experts to assess and trust the explanations behind AI predictions. The proposed method achieves an accuracy of 94% and an F1-score of 0.94, making it suitable for bone cancer detection. By integrating deep learning and explainability, the research presents a diagnostic system that not only attains high classification accuracy but also provides visual explanations of model predictions using Grad-CAM. This twofold emphasis on performance and interpretability is the essence of the 'Explainable AI Perspective' in the title, with the goal of fostering trust among clinicians and enabling adoption in clinical practice. The findings indicate that integrating interpretability into AI-based diagnostics can make them more readily adopted in clinical practice, which in turn can lead to better patient outcomes.
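The following is a minimal sketch, in PyTorch/torchvision, of the kind of pipeline the abstract describes; it is not the authors' implementation. A pretrained ResNet-101 is adapted to a two-class output after resizing, brightness equalization, and normalization, and Grad-CAM is computed by weighting the last convolutional block's activations with the gradients of the predicted class score. The specific transforms, layer choice (layer4), and hyperparameters are illustrative assumptions.

# Minimal sketch, not the authors' code: ResNet-101 fine-tuning setup and Grad-CAM.
import torch
import torch.nn as nn
from torchvision import models, transforms

# Preprocessing as described in the abstract: resize, brightness equalization
# (histogram equalization used here as a stand-in), and ImageNet normalization.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomEqualize(p=1.0),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Pretrained ResNet-101 with the final layer replaced for binary classification.
model = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)

# Grad-CAM: capture activations and gradients of the last convolutional block.
store = {}
model.layer4.register_forward_hook(lambda m, i, o: store.update(act=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: store.update(grad=go[0]))

def grad_cam(image_tensor):
    """Return a coarse heatmap (7x7 for 224x224 input) for the predicted class."""
    model.eval()
    logits = model(image_tensor.unsqueeze(0))                  # shape (1, 2)
    cls = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, cls].backward()                                  # gradient of class score
    weights = store["grad"].mean(dim=(2, 3), keepdim=True)     # channel-wise pooling
    cam = (weights * store["act"]).sum(dim=1).relu()           # weighted activation map
    return (cam / (cam.max() + 1e-8)).squeeze(0).detach()      # normalized to [0, 1]

In practice the resulting low-resolution heatmap would be upsampled to the input size and overlaid on the original radiograph so a clinician can see which regions drove the prediction.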
Cite this Research Publication : Riya P Reji, Devika Shine, Remya Nair T, "Interpretable Deep Learning for Bone Cancer Diagnosis: An Explainable AI Perspective," 2025 8th International Conference on Computing Methodologies and Communication (ICCMC), IEEE, 2025, https://doi.org/10.1109/iccmc65190.2025.11140777