Coordinate Discrete Optimization for Efficient Cross-View Image Retrieval

Authors: Yadong Mu, Wei Liu, Cheng Deng, Zongting Lv, Xinbo Gao

IJCAI 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Comprehensive evaluations are conducted on three image benchmarks. The clearly superior experimental results faithfully prove the merits of the proposed method.
Researcher Affiliation | Collaboration | Yadong Mu (1), Wei Liu (2), Cheng Deng (3), Zongting Lv (2), Xinbo Gao (3); (1) Institute of Computer Science and Technology, Peking University, Beijing, 100871, China; (2) Didi Research, Beijing, 100193, China; (3) Xidian University, Shaanxi, 710126, China
Pseudocode | Yes | Algorithm 1: Coordinate Discrete Optimization Procedure; Algorithm 2: Active Set Reduction
Open Source Code | No | The paper does not provide an explicit link or statement about open-source code for the described methodology. It only thanks authors for sharing code for baseline algorithms.
Open Datasets | Yes | Datasets: We adopt three datasets, including Wiki [Rasiwasia et al., 2010], NUS-WIDE [Chua et al., 2009] and MIRFlickr [Huiskes and Lew, 2008]. For all three, our data preparation is essentially identical to other relevant works.
Dataset Splits | No | The paper specifies training and querying (test) splits for the datasets, but it does not mention a distinct validation set split.
Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., CPU, GPU models).
Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies used in the experiments.
Experiment Setup | Yes | The parameter γ in our proposed CDH plays a role of balancing the learned hash code quality and the generalization ability to unseen data. We use a grid search scheme to find the optimal γ on all datasets. Particularly, multiple trials are conducted with different γ/n values from the candidate set [0, 10^-2, 10^-1, 1, 5, 10, 20, 50]. We initialize the hashing parameters w_{k,v} using random numbers drawn from a 1-D Gaussian distribution, and afterwards initialize the hash bit by b_k(x) = sign[ Σ_{v=1}^{V} 1(w_{k,v}^T x^{(v)} > 0) − 0.5 ]. The largest dataset NUS-WIDE contains more than 180,000 images... We thus randomly split the data into several chunks, each of which contains 10,000 images.
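
As a concrete illustration of the quoted setup, the sketch below mirrors the described protocol in Python: a grid search over the γ/n candidates, Gaussian initialization of the w_{k,v}, a sign-vote initialization of the hash bits, and random chunking of a large dataset. This is a minimal sketch, not the authors' code: `train_and_evaluate` is a hypothetical placeholder for one CDH training run plus a retrieval metric (the actual objective and optimizer are not given in this excerpt), and the exact form of the bit initialization is reconstructed from the formula quoted above.

```python
import numpy as np

# Hypothetical sketch of the quoted experiment setup; it is NOT the authors' code.
# `train_and_evaluate` is a placeholder for one full CDH training run plus a
# retrieval metric (e.g., MAP) -- the actual objective is not given in this excerpt.

GAMMA_OVER_N_CANDIDATES = [0, 1e-2, 1e-1, 1, 5, 10, 20, 50]  # grid for gamma/n
CHUNK_SIZE = 10_000  # NUS-WIDE is processed in random chunks of 10,000 images


def init_hash_parameters(num_bits, feature_dims, rng):
    """Draw every w_{k,v} from a 1-D Gaussian (bit k, view v)."""
    return [[rng.standard_normal(d) for d in feature_dims] for _ in range(num_bits)]


def init_hash_bits(views, weights):
    """Initialize b_k(x) = sign[sum_v 1(w_{k,v}^T x^{(v)} > 0) - 0.5].

    views:   list of V feature matrices, each of shape (n, d_v)
    weights: weights[k][v] is the vector w_{k,v}
    Returns an (n, K) matrix with entries in {-1, +1}.
    """
    n, num_bits = views[0].shape[0], len(weights)
    bits = np.empty((n, num_bits))
    for k in range(num_bits):
        # Count the views whose linear response is positive, then threshold at 0.5.
        votes = sum((x @ weights[k][v] > 0).astype(float) for v, x in enumerate(views))
        bits[:, k] = np.sign(votes - 0.5)
    return bits


def grid_search_gamma(views, num_bits, train_and_evaluate, seed=0):
    """Try every gamma/n candidate and keep the best-scoring one."""
    rng = np.random.default_rng(seed)
    feature_dims = [x.shape[1] for x in views]
    best_gamma, best_score = None, -np.inf
    for gamma_over_n in GAMMA_OVER_N_CANDIDATES:
        weights = init_hash_parameters(num_bits, feature_dims, rng)
        bits = init_hash_bits(views, weights)
        score = train_and_evaluate(views, weights, bits, gamma_over_n)
        if score > best_score:
            best_gamma, best_score = gamma_over_n, score
    return best_gamma, best_score


def iter_chunks(num_samples, chunk_size=CHUNK_SIZE, seed=0):
    """Randomly split sample indices into chunks of at most `chunk_size`."""
    perm = np.random.default_rng(seed).permutation(num_samples)
    for start in range(0, num_samples, chunk_size):
        yield perm[start:start + chunk_size]
```

The chunking helper reflects the paper's stated workaround for NUS-WIDE's size (processing roughly 10,000 images at a time); the specific seed handling and the placeholder scoring function are illustrative choices, not details taken from the paper.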