Task-Dependent and Query-Dependent Subspace Learning for Cross-Modal Retrieval
by: Li Wang, Lei Zhu, En Yu, Jiande Sun, Huaxiang Zhang
| Format: | Article |
|---|---|
| Published: | IEEE 2018-01-01 |
Description
Most existing cross-modal retrieval approaches learn a single pair of projection matrices shared across different sub-retrieval tasks (such as image-to-text and text-to-image retrieval) and across all queries. They ignore the important fact that, in practice, different sub-retrieval tasks and queries have distinct characteristics of their own. To tackle this problem, we propose a task-dependent and query-dependent subspace learning approach for cross-modal retrieval. Specifically, we first develop a unified cross-modal learning framework in which task-specific and category-specific subspaces are learned simultaneously via an efficient iterative optimization. From this step, a task-category-projection mapping table is built. Subsequently, an efficient linear classifier is trained to learn a semantic mapping function from multimedia documents to their potential categories. In the online retrieval stage, the task-dependent and query-dependent matching subspace is adaptively identified by considering the specific sub-retrieval task type, the potential semantic category of the query, and the task-category-projection mapping table. Experimental results demonstrate the superior performance of the proposed approach compared with several state-of-the-art techniques.
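The online retrieval stage described above can be sketched as follows. This is a minimal illustrative reconstruction, not the authors' implementation: the mapping-table layout, the linear classifier weights, all dimensions, and the random stand-in matrices are assumptions introduced for demonstration. It shows the lookup logic only — predict the query's potential category, fetch the task- and category-specific projection pair from the mapping table, project both sides, and rank by cosine similarity.

```python
import numpy as np

# Hypothetical sketch of the online retrieval stage; all names,
# dimensions, and matrices below are illustrative stand-ins.
rng = np.random.default_rng(0)
D_IMG, D_TXT, D_SUB, N_CAT = 64, 32, 16, 3

# Task-category-projection mapping table built offline:
# (task, category) -> (query-side projection, gallery-side projection).
mapping_table = {
    (task, c): (
        rng.standard_normal((D_SUB, D_IMG if task == "img2txt" else D_TXT)),
        rng.standard_normal((D_SUB, D_TXT if task == "img2txt" else D_IMG)),
    )
    for task in ("img2txt", "txt2img")
    for c in range(N_CAT)
}

# Weights of the efficient linear classifier mapping an image-modality
# query to category scores (random stand-in for the trained classifier).
W_img = rng.standard_normal((N_CAT, D_IMG))

def retrieve(query, gallery, task="img2txt"):
    """Adaptively pick the task- and query-dependent subspace, then rank."""
    category = int(np.argmax(W_img @ query))      # predict potential category
    P_q, P_g = mapping_table[(task, category)]    # look up projection pair
    q = P_q @ query                               # project query into subspace
    G = gallery @ P_g.T                           # project gallery documents
    sims = (G @ q) / (np.linalg.norm(G, axis=1) * np.linalg.norm(q) + 1e-12)
    return np.argsort(-sims)                      # indices, most similar first

query = rng.standard_normal(D_IMG)        # one image-feature query
gallery = rng.standard_normal((100, D_TXT))  # text-feature gallery
ranking = retrieve(query, gallery, task="img2txt")
```

Because the subspace is selected per query, two queries of the same task type but different predicted categories are matched in different subspaces, which is the key departure from the shared-projection approaches the abstract criticizes.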