Quantized Correlation Hashing for Fast Cross-Modal Search

Authors: Botong Wu, Qiang Yang, Wei-Shi Zheng, Yizhou Wang, Jingdong Wang

IJCAI 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results on three real-world datasets demonstrate that our approach outperforms the state-of-the-art multi-modal hashing methods."
Researcher Affiliation | Collaboration | School of Information Science and Technology, Sun Yat-sen University, China; Nat'l Eng. Lab. for Video Technology, Cooperative Medianet Innovation Center, Sch'l of EECS, Peking University; Collaborative Innovation Center of High Performance Computing, National University of Defense Technology; Guangdong Provincial Key Laboratory of Computational Science; Microsoft Research Asia, China
Pseudocode | Yes | Algorithm 1: Quantized Correlation Hashing
Open Source Code | No | The paper does not provide any explicit statement or link for the open-source availability of its code.
Open Datasets | Yes | "To verify the efficiency and effectiveness of QCH, a series of experiments are carried out on two benchmark multi-modal datasets, Wiki [Rasiwasia et al., 2010] and NUS-WIDE [Chua et al., 2009], and a large-scale dataset 58W-CIFAR [Krizhevsky and Hinton, 2009], for which we extracted two types of features to build multi-view data, so that cross-view retrieval can be performed." (A retrieval sketch follows this table.)
Dataset Splits | No | The paper specifies training and testing sets, but does not explicitly mention a separate validation set or split for model tuning.
Hardware Specification | Yes | "All the experiments were conducted on a workstation with 24 Intel(R) Xeon(R) E5-2620@2.0GHz CPUs, 96 GB RAM and 64-bit Ubuntu system."
Software Dependencies | No | The paper mentions a "64-bit Ubuntu system" but does not specify any software dependencies (e.g., libraries, frameworks) with version numbers.
Experiment Setup | Yes | "Firstly, we investigate the influence of two parameters introduced in QCH: α and β. α controls the tradeoff between the hash function learning stage and the quantization stage, and β is a regularizer coefficient. During this experiment, c = 16 is used. [...] Setting α = 0.05 and β = 0.02 is reasonable for QCH on the Wiki and NUS-WIDE datasets. QCH performs better on the 58W-CIFAR dataset when α and β are larger. Since multi-view data have stronger correlation than cross-modal data, we set larger values, i.e. α = 1 and β = 0.1, on multi-view datasets such as 58W-CIFAR." (See the configuration sketch below.)
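
To make the reported experiment setup concrete, here is a minimal configuration sketch in Python. Only the numeric values (α, β, and c = 16) come from the paper's quoted text; the dictionary layout and the names `alpha`, `beta`, and `code_length` are illustrative assumptions, not the authors' code.

```python
# Hyperparameter settings quoted from the paper's experiment setup.
# Only the values come from the paper; names and structure are assumed.
QCH_CONFIG = {
    # alpha trades off hash-function learning vs. quantization;
    # beta is a regularizer coefficient; c = 16 bits in the parameter study.
    "Wiki":      {"alpha": 0.05, "beta": 0.02, "code_length": 16},
    "NUS-WIDE":  {"alpha": 0.05, "beta": 0.02, "code_length": 16},
    # Larger values for the multi-view 58W-CIFAR dataset, which the paper
    # reports has stronger correlation than cross-modal data.
    "58W-CIFAR": {"alpha": 1.0,  "beta": 0.1,  "code_length": 16},
}
```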
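The cross-view retrieval mentioned in the datasets row ultimately reduces, for any such hashing method, to ranking binary codes by Hamming distance. The sketch below is not QCH itself (its learning procedure is the paper's Algorithm 1); assuming already-learned 0/1 codes, it only illustrates the fast-search step. The random codes and the function name `hamming_rank` are hypothetical.

```python
import numpy as np

def hamming_rank(query_code: np.ndarray, db_codes: np.ndarray) -> np.ndarray:
    """Rank database items by Hamming distance to a query code.

    query_code: (c,) array of 0/1 bits for one query (e.g., a text item).
    db_codes:   (n, c) array of 0/1 bits for the database (e.g., images).
    Returns database indices, nearest first.
    """
    dists = np.count_nonzero(db_codes != query_code, axis=1)
    return np.argsort(dists, kind="stable")

# Toy usage with random codes standing in for QCH's learned codes.
rng = np.random.default_rng(0)
c, n = 16, 1000                      # 16-bit codes, matching c = 16 above
image_codes = rng.integers(0, 2, size=(n, c))
text_query = rng.integers(0, 2, size=c)
print(hamming_rank(text_query, image_codes)[:10])  # 10 nearest images
```

In the actual method, the text and image codes would come from QCH's learned hash functions, so semantically related items from different modalities land close together in Hamming space.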