Cross-Modality Medical Image Retrieval with Deep Features

Ashery Mbilinyi, Heiko Schuldt
Appears in:
Proceedings of the 3rd Workshop on Artificial Intelligence Techniques for BioMedicine and HealthCare (AIBH 2020)
Seoul, South Korea (held virtually)

In medical imaging, modality refers to the technique and process used to create visual representations of a particular part of the body, its organs, or tissues. Conventional modalities include X-ray, computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, and positron emission tomography (PET). Depending on the modality used, the same disease can appear differently, making modality an essential filter when evaluating the relevance of search results in medical image retrieval. Traditionally, texture features have been used for content-based medical image retrieval. However, texture features are limited in capturing the semantic similarity between medical images, let alone their modalities. This paper explores deep features, i.e., features extracted by deep convolutional neural networks (CNNs), and analyzes their effectiveness in retrieving medical images that are similar both semantically and in modality from a collection spanning multiple modalities. To extract deep features, we examine CNNs of different architectures pre-trained on natural images, as well as CNNs that we fine-tuned or fully trained from scratch on medical images. Based on a retrieval performance evaluation, we show that deep features outperform texture features even when extracted by a CNN pre-trained only on natural images. Furthermore, we show that deep features extracted by a smaller, simpler, and computationally more efficient CNN trained on medical images can compete with those of large, complex ImageNet CNNs fine-tuned or fully trained on medical images.


Co-located with the 2020 IEEE International Conference on Bioinformatics and Biomedicine (IEEE BIBM 2020).
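Once deep features have been extracted (e.g., by a pre-trained or fine-tuned CNN as described in the abstract), retrieval reduces to a nearest-neighbor search in feature space. The following minimal sketch, assuming L2-normalized feature vectors and cosine similarity as the relevance measure (the paper does not specify these details; the 512-dimensional random features here merely stand in for real CNN embeddings), illustrates that step:

```python
import numpy as np

def retrieve(query_feat, db_feats, k=3):
    """Rank database images by cosine similarity to the query feature vector."""
    q = query_feat / np.linalg.norm(query_feat)
    db = db_feats / np.linalg.norm(db_feats, axis=1, keepdims=True)
    sims = db @ q                      # cosine similarity per database image
    order = np.argsort(-sims)[:k]      # indices of the k most similar images
    return order, sims[order]

# Hypothetical example: 100 database images with 512-d stand-in features.
rng = np.random.default_rng(0)
db = rng.normal(size=(100, 512))
query = db[42] + 0.01 * rng.normal(size=512)  # slightly perturbed copy of image 42
idx, scores = retrieve(query, db)
print(idx[0])  # the near-duplicate, image 42, ranks first
```

In practice the database features would be computed once offline by a forward pass through the chosen CNN, so query time is dominated by a single matrix-vector product.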