Retrieving Chest X-rays for Differential Diagnosis: A Deep Metric Learning Approach
When interpreting chest X-rays, differential diagnosis, which distinguishes a particular disease from others presenting similar visual cues, is one of the most complex tasks for radiologists. Given an ambiguous image, it is common practice to search medical image archives for confirmed similar cases for guidance. Content-based image retrieval (CBIR) approaches have been proposed to automate this search. However, searching on image content alone may retrieve images that do not support the comparative analysis required for differential diagnosis. In this work, we investigate an end-to-end method that augments differential diagnosis by allowing a radiologist to submit a query comprising an ambiguous image plus text specifying the diagnoses the retrieved images should exhibit. This enables the retrieval of visually similar images with the specific diagnoses needed for comparative observation in differential diagnosis. We propose a new method based on deep metric learning: it learns a query representation from both the image and the radiologist's text, then moves this query in the metric space closer to the images to be retrieved and further away from images that are not needed. We evaluated our approach on a large chest X-ray dataset and compared it with the traditional CBIR approach. The results show that our method outperforms traditional CBIR, providing a promising way to augment the differential diagnosis task.
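The metric-learning objective described above, pulling a fused image-plus-text query toward relevant image embeddings and pushing it away from irrelevant ones, can be sketched with a standard triplet margin loss. The fusion step and function names below are illustrative assumptions (the paper's actual fusion and embedding networks are learned end to end), not the authors' implementation:

```python
import numpy as np

def fuse_query(img_emb, txt_emb):
    # Hypothetical fusion: concatenate the image and text embeddings,
    # then L2-normalize. The real query encoder is a learned network.
    q = np.concatenate([img_emb, txt_emb])
    return q / np.linalg.norm(q)

def triplet_loss(query, positive, negative, margin=0.2):
    # Standard triplet margin loss in Euclidean space: the loss is zero
    # once the query is closer to the positive image embedding than to
    # the negative one by at least `margin`.
    d_pos = np.linalg.norm(query - positive)
    d_neg = np.linalg.norm(query - negative)
    return max(0.0, d_pos - d_neg + margin)

# Toy example with random embeddings standing in for network outputs.
rng = np.random.default_rng(0)
img_emb, txt_emb = rng.normal(size=64), rng.normal(size=64)
query = fuse_query(img_emb, txt_emb)
positive = query + 0.05 * rng.normal(size=128)  # embedding near the query
negative = rng.normal(size=128)                 # unrelated embedding
loss = triplet_loss(query, positive, negative)
```

Minimizing this loss over many (query, positive, negative) triplets is what shapes the metric space so that nearest-neighbor search around the fused query returns images with the requested diagnoses.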