Dealing with Ambiguous Queries in Multimodal Video Retrieval

Authors
Luca Rossetto, Claudiu Tănase, Heiko Schuldt
Type
In Proceedings
Date
January 2016
Appears in
Proceedings of the 22nd International Conference on Multimedia Modeling (MMM 2016)
Location
Miami, FL, USA
Abstract

Dealing with ambiguous queries is an important challenge in information retrieval (IR). While this problem is well understood in text retrieval, this is not the case in video retrieval, especially when multimodal queries have to be considered, as is the case, for instance, in Query-by-Example or Query-by-Sketch. Systems supporting such query types usually consider dedicated features for the different modalities. These can be intrinsic object features such as color, edges, or texture for the visual modality, or motion for the kinesthetic modality. Sketch-based queries are naturally inclined to be ambiguous, as they lack specification in some information channels. In this case, the IR system has to deal with the lack of information in a query, as it cannot deduce whether this information should be absent in the result or whether it has simply not been specified, and it needs to properly select the features to be considered. In this paper, we present an approach that deals with such ambiguous queries in sketch-based multimodal video retrieval. This approach anticipates the intent(s) of a user based on the information specified in a query and selects the features to be considered for query execution accordingly. We have evaluated our approach with Cineast, a sketch-based video retrieval system. The evaluation results show that disregarding certain features based on the anticipated query intent(s) can improve retrieval quality by more than 25% over a generic query execution strategy.
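The core idea of the abstract — infer the user's likely intent(s) from which query channels were actually specified, then score only the features relevant to those intents — can be sketched as follows. This is a minimal illustration, not code from Cineast; the intent names, feature names, and channel keys are all hypothetical.

```python
# Hypothetical sketch of intent-based feature selection for a
# sketch-based multimodal query. Channel keys ("color", "edges",
# "motion"), intent names, and feature names are illustrative
# assumptions, not taken from the Cineast system.

# Map each anticipated intent to the feature categories worth scoring.
FEATURES_BY_INTENT = {
    "visual": ["average_color", "color_histogram"],
    "structural": ["edge_grid", "texture"],
    "kinesthetic": ["motion_histogram"],
}

def anticipate_intents(query):
    """Guess the user's intent(s) from which channels the query specifies.

    An unspecified channel is treated as "not expressed" rather than
    "must be absent", so its features are simply left out of scoring.
    """
    intents = set()
    if query.get("color"):
        intents.add("visual")
    if query.get("edges"):
        intents.add("structural")
    if query.get("motion"):
        intents.add("kinesthetic")
    return intents

def select_features(query):
    """Return only the features matching the anticipated intents,
    instead of scoring every feature for every query."""
    features = []
    for intent in anticipate_intents(query):
        features.extend(FEATURES_BY_INTENT[intent])
    return sorted(features)

# Example: a color sketch with no motion specified only triggers
# the visual features; motion features are disregarded entirely.
print(select_features({"color": True, "motion": False}))
# → ['average_color', 'color_histogram']
```

The point of the sketch is the asymmetry the abstract describes: an empty motion channel does not mean "retrieve static shots", it means the motion features should not contribute to the ranking at all.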