Deep Joint Semantic-Embedding Hashing

Authors: Ning Li, Chao Li, Cheng Deng, Xianglong Liu, Xinbo Gao

Venue: IJCAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on three benchmark datasets show that the proposed model outperforms current state-of-the-art methods. ... (Section 4: Experiments)
Researcher Affiliation | Academia | 1 School of Electronic Engineering, Xidian University, Xi'an 710071, China; 2 Beihang University, Beijing 100191, China
Pseudocode | Yes | Algorithm 1: The learning algorithm for our DSEH (see the training-loop sketch after the table)
Open Source Code | No | The paper states 'Our model is implemented on TensorFlow [Abadi et al., 2016]', but it does not provide any explicit statement or link to open-source code for the DSEH method.
Open Datasets | Yes | The experiments are conducted on three benchmark image retrieval datasets: NUS-WIDE [Chua et al., 2009], ImageNet [Russakovsky et al., 2015], and MS-COCO [Lin et al., 2014].
Dataset Splits | No | The paper states 'The learning rate is chosen from 10^-2 to 10^-6 with a validation set', implying that a validation set was used, but it does not specify the size or percentage of the validation split for any of the datasets.
Hardware Specification | Yes | Our model is implemented on TensorFlow [Abadi et al., 2016] on a server with two NVIDIA TITAN X GPUs.
Software Dependencies | No | The paper states 'Our model is implemented on TensorFlow [Abadi et al., 2016]', but it does not specify a version of TensorFlow or of any other software dependency.
Experiment Setup | Yes | The learning rate is chosen from 10^-2 to 10^-6 with a validation set. The batch sizes of LabNet and ImgNet are set to 32 and 128, respectively. For the hyper-parameters in LabNet, cross-validation is used to search α and γ from 10^-3 to 10^2 and β from 10^-6 to 10^-1; the optimal result is obtained with α = γ = 1 and β = 0.005. η is then searched from 10^-3 to 10^2, and η = 1 is found to be best for ImgNet. (A hedged configuration sketch follows the table.)
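The Pseudocode row points to Algorithm 1, the learning algorithm for DSEH, which the paper presents but this report does not reproduce. Below is a minimal structural sketch of the two-network training that the table implies (LabNet learning codes from label vectors first, then ImgNet trained on image features under LabNet's supervision), written in TensorFlow since that is the paper's framework. The architectures, the placeholder loss, feature dimensions, and learning rates are illustrative assumptions, not the authors' definitions; only the LabNet/ImgNet batch sizes (32 and 128) come from the paper.

```python
# Minimal structural sketch (not the authors' code) of the two-stage training
# that Algorithm 1 and the separate LabNet/ImgNet batch sizes suggest:
# LabNet learns relaxed hash codes from label vectors, then ImgNet is trained
# on image features with LabNet's codes as supervision. Architectures, the
# loss (placeholder_loss), feature dimensions, and learning rates here are
# assumptions; only the batch sizes (32 for LabNet, 128 for ImgNet) are from
# the paper.
import tensorflow as tf

BIT_LENGTH = 32        # hash-code length: illustrative choice
NUM_CLASSES = 21       # label-vector dimension: illustrative choice
IMG_FEAT_DIM = 4096    # pre-extracted image-feature dimension: assumption


def make_net(in_dim, out_dim):
    """Small placeholder MLP standing in for LabNet / ImgNet."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(in_dim,)),
        tf.keras.layers.Dense(512, activation="relu"),
        tf.keras.layers.Dense(out_dim, activation="tanh"),  # relaxed codes in (-1, 1)
    ])


def placeholder_loss(codes, targets):
    """Stand-in for the paper's semantic/similarity losses (not reproduced)."""
    return tf.reduce_mean(tf.square(codes - targets))


lab_net = make_net(NUM_CLASSES, BIT_LENGTH)
img_net = make_net(IMG_FEAT_DIM, BIT_LENGTH)
lab_opt = tf.keras.optimizers.Adam(1e-4)  # learning rate: assumption
img_opt = tf.keras.optimizers.Adam(1e-4)

# Stage 1: one LabNet step on a batch of 32 multi-hot label vectors.
labels = tf.cast(tf.random.uniform((32, NUM_CLASSES)) > 0.5, tf.float32)
with tf.GradientTape() as tape:
    lab_codes = lab_net(labels, training=True)
    # Quantization-style placeholder: push relaxed codes toward +/-1.
    lab_loss = placeholder_loss(lab_codes, tf.sign(lab_codes))
grads = tape.gradient(lab_loss, lab_net.trainable_variables)
lab_opt.apply_gradients(zip(grads, lab_net.trainable_variables))

# Stage 2: one ImgNet step on a batch of 128 image features, supervised by
# the codes LabNet produces for the corresponding label vectors.
img_feats = tf.random.normal((128, IMG_FEAT_DIM))
img_labels = tf.cast(tf.random.uniform((128, NUM_CLASSES)) > 0.5, tf.float32)
target_codes = tf.stop_gradient(lab_net(img_labels, training=False))
with tf.GradientTape() as tape:
    img_codes = img_net(img_feats, training=True)
    img_loss = placeholder_loss(img_codes, target_codes)
grads = tape.gradient(img_loss, img_net.trainable_variables)
img_opt.apply_gradients(zip(grads, img_net.trainable_variables))

# Binary codes for retrieval would then be tf.sign(img_net(features)).
```

The tf.stop_gradient call reflects only the assumption that LabNet's codes act as fixed targets during ImgNet's step; the paper's actual supervision terms may differ.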
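The Experiment Setup row reports the search ranges and chosen values for the learning rate and the hyper-parameters α, γ, β (LabNet) and η (ImgNet). The sketch below only restates those reported ranges and values as a log-spaced grid; the search loop and the train_and_validate placeholder are hypothetical, since the paper does not release its tuning code.

```python
# Restatement of the reported tuning setup as a log-spaced grid. The ranges
# and final values are taken from the paper's text; the search loop and the
# train_and_validate() placeholder are hypothetical.
import itertools

# Values the paper reports as optimal.
REPORTED_BEST = {"alpha": 1.0, "gamma": 1.0, "beta": 0.005, "eta": 1.0,
                 "batch_size_labnet": 32, "batch_size_imgnet": 128}

# Search ranges described in the paper (log-spaced grids are an assumption).
ALPHA_GRID = GAMMA_GRID = ETA_GRID = [10.0 ** p for p in range(-3, 3)]  # 1e-3 .. 1e2
BETA_GRID = [10.0 ** p for p in range(-6, 0)]                           # 1e-6 .. 1e-1
LR_GRID = [10.0 ** p for p in range(-6, -1)]                            # 1e-6 .. 1e-2


def train_and_validate(alpha, gamma, beta, lr):
    """Placeholder: train LabNet with these weights and return a validation
    score (e.g. MAP). The paper uses cross-validation for alpha/gamma/beta and
    a validation set for the learning rate; both are folded into this stub."""
    raise NotImplementedError


def search_labnet_hyperparams():
    """Exhaustive grid search over the reported ranges (eta for ImgNet would
    be searched the same way over ETA_GRID once LabNet is fixed)."""
    best_score, best_cfg = float("-inf"), None
    for alpha, gamma, beta, lr in itertools.product(
            ALPHA_GRID, GAMMA_GRID, BETA_GRID, LR_GRID):
        score = train_and_validate(alpha, gamma, beta, lr)
        if score > best_score:
            best_score, best_cfg = score, {"alpha": alpha, "gamma": gamma,
                                           "beta": beta, "lr": lr}
    return best_cfg


if __name__ == "__main__":
    # The paper reports alpha = gamma = 1, beta = 0.005, and eta = 1 as optimal.
    print("Reported optimal settings:", REPORTED_BEST)
```

Because the tuning procedure itself is not published, the stub raises NotImplementedError; only the reported optimum is printed when the module is run.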