
Understanding Privacy Risks in Typical Deep Learning Models for Medical Image Analysis

Publication Type : Conference Paper

Thematic Areas : Wireless Networks and Applications

Publisher : SPIE

Source : Proceedings Volume 11601, Medical Imaging 2021: Imaging Informatics for Healthcare, Research, and Applications, San Diego, February 2021.

Url : https://www.spiedigitallibrary.org/conference-proceedings-of-spie/11601/116010E/Understanding-privacy-risks-in-typical-deep-learning-models-for-medical/10.1117/12.2582014.short?SSO=1

Campus : Amritapuri

School : School of Engineering

Center : Amrita Center for Wireless Networks and Applications (AmritaWNA)

Department : Wireless Networks and Applications (AWNA)

Year : 2021

Abstract : Deep learning in medical imaging typically requires sensitive and confidential patient data for model training. Recent research in computer vision has shown that training data can be recovered from trained models using model inversion techniques. In this paper, we investigate the degree to which encoder-decoder-like architectures (e.g., U-Nets) commonly used in medical imaging are vulnerable to simple model inversion attacks. Using a database of 20 MRI datasets from acute ischemic stroke patients, we trained an autoencoder model for image reconstruction and a U-Net model for lesion segmentation. In a second step, model inversion decoders were developed and trained to reconstruct the original MRIs from the low-dimensional representations of the trained autoencoder and the U-Net model. The inversion decoders were trained using 24 independent MRI datasets of acute stroke patients not used for training of the original models. Skull-stripped datasets as well as the full original datasets including the skull and other non-brain tissues were used for model training and evaluation. The results show that, after skull stripping, the trained inversion decoder can reconstruct training datasets given the latent space of the autoencoder trained for image reconstruction (mean correlation coefficient = 0.49), while it was not possible to fully reconstruct the original images used for training of the U-Net trained for the segmentation task (mean correlation coefficient = 0.18). These results are further supported by the structural similarity index measure (SSIM) scores: a mean SSIM of 0.51 ± 0.14 for the autoencoder trained for image reconstruction versus 0.28 ± 0.12 for the U-Net trained for the lesion segmentation task. The same experiments were then conducted on the same images without skull stripping. In this case, the U-Net trained for segmentation shows significantly worse results, while the autoencoder trained for image reconstruction is not affected. Our results suggest that an autoencoder model trained for image compression can be inverted with high accuracy, while this is much harder to achieve for a U-Net trained for lesion segmentation.
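To make the attack setting concrete, the following is a minimal PyTorch sketch of the kind of model inversion described in the abstract: an attacker trains a decoder to map a frozen target model's latent codes back to images, then measures leakage with a correlation coefficient. Everything here is an illustrative assumption, not the authors' implementation: the toy encoder and decoder architectures, the 64x64 input size, and the random surrogate tensors stand in for the paper's actual autoencoder, U-Net, and MRI datasets.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Stand-in for the trained (frozen) target encoder under attack."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class InversionDecoder(nn.Module):
    """Attacker's decoder: maps latent codes back to images."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 32 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 32, 16, 16)
        return self.net(h)

# The target model is frozen: the attacker can only query it for latent codes.
encoder = Encoder().eval()
for p in encoder.parameters():
    p.requires_grad_(False)

decoder = InversionDecoder()
optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Independent surrogate data controlled by the attacker
# (here: random 64x64 "slices"; the paper used 24 separate stroke MRIs).
surrogate = torch.rand(24, 1, 64, 64)

for _ in range(100):
    z = decoder(encoder(surrogate))        # query target, then invert
    loss = loss_fn(z, surrogate)           # train decoder to undo the encoder
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

def pearson(a, b):
    """Pearson correlation between an original image and its reconstruction."""
    a, b = a.flatten() - a.mean(), b.flatten() - b.mean()
    return (a @ b) / (a.norm() * b.norm() + 1e-8)

with torch.no_grad():
    recon = decoder(encoder(surrogate))
    r = torch.stack([pearson(x, y) for x, y in zip(surrogate, recon)]).mean()
print(f"mean correlation coefficient: {r.item():.2f}")
```

In an evaluation like the paper's, the correlation coefficient would be complemented by SSIM (e.g., skimage.metrics.structural_similarity) computed between each original training image and its reconstruction.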

Cite this Research Publication : N. K. Subbanna, A. Tuladhar, M. Wilms, and N. D. Forkert, "Understanding Privacy Risks in Typical Deep Learning Models for Medical Image Analysis," Proc. SPIE 11601, Medical Imaging 2021: Imaging Informatics for Healthcare, Research, and Applications, 116010E, San Diego, February 2021, doi: 10.1117/12.2582014.
