Semi-Supervised Deep Hashing with a Bipartite Graph

Authors: Xinyu Yan, Lijun Zhang, Wu-Jun Li

Venue: IJCAI 2017

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on real datasets show that our BGDH outperforms state-of-the-art hashing methods. |
| Researcher Affiliation | Academia | Xinyu Yan, Lijun Zhang, Wu-Jun Li, National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China. {yanxy, zhanglj}@lamda.nju.edu.cn, liwujun@nju.edu.cn |
| Pseudocode | Yes | Algorithm 1: Context generation based on random walk (a hedged sketch follows the table) |
| Open Source Code | No | The paper does not include an explicit statement about releasing the source code for the described method, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We conduct experiments on two widely used benchmark datasets: CIFAR-10 and NUS-WIDE. The CIFAR-10 dataset^1 consists of 60,000 images from 10 classes... (^1 https://www.cs.toronto.edu/~kriz/cifar.html) |
| Dataset Splits | No | The paper describes query and training set sizes but does not explicitly define a separate validation split (e.g., "X% for validation" or "Y samples for validation"). |
| Hardware Specification | Yes | All the experiments are performed on a NVIDIA K80 GPU server with MatConvNet [Vedaldi and Lenc, 2014]. |
| Software Dependencies | No | The paper mentions MatConvNet as software used for experiments but does not specify a version number for it or any other software dependencies. |
| Experiment Setup | Yes | The bipartite graph of BGDH is constructed based on handcrafted features with heat kernel, where the hyper-parameter ρ is set as 1 for CIFAR-10 and 10 for NUS-WIDE. The hyper-parameter η in BGDH is set as 10 for CIFAR-10 and 100 for NUS-WIDE, similar to DPSH [Li et al., 2016]. We simply set T1 = 10, T2 = 5, λ = 0.1 in all the experiments. (See the configuration sketch after the table.) |
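The Pseudocode row refers to Algorithm 1, a random-walk procedure that generates vertex contexts from the bipartite graph. The paper's exact algorithm is not reproduced in this report, so the following is only a minimal sketch of bipartite random-walk context generation under assumed conventions: an anchor-based bipartite graph given as (data vertex, anchor vertex) edges, with the function name `generate_contexts` and the parameters `num_walks`, `walk_length`, and `window` chosen purely for illustration.

```python
import random
from collections import defaultdict

def generate_contexts(edges, num_walks=10, walk_length=4, window=2, seed=0):
    """Sketch of random-walk context generation on a bipartite graph.

    `edges` is an iterable of (data_vertex, anchor_vertex) pairs; walks
    alternate between the two vertex sets, and vertices that co-occur within
    `window` steps of each other are emitted as (vertex, context) pairs.
    Names and defaults are illustrative, not the paper's settings.
    """
    rng = random.Random(seed)
    adj = defaultdict(list)
    for u, v in edges:          # bipartite adjacency: data <-> anchors
        adj[u].append(v)
        adj[v].append(u)

    contexts = []
    for start in adj:
        for _ in range(num_walks):
            walk = [start]
            while len(walk) < walk_length:
                nbrs = adj[walk[-1]]
                if not nbrs:
                    break
                walk.append(rng.choice(nbrs))
            # emit (vertex, context) pairs within the sliding window
            for i, u in enumerate(walk):
                for j in range(max(0, i - window), min(len(walk), i + window + 1)):
                    if j != i:
                        contexts.append((u, walk[j]))
    return contexts

# Toy usage: three data points connected to two anchor points.
pairs = generate_contexts([("x1", "u1"), ("x2", "u1"), ("x2", "u2"), ("x3", "u2")])
```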
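The Experiment Setup row reports the hyper-parameters used to build the bipartite graph and train BGDH. Below is a minimal sketch that collects those reported values and builds heat-kernel affinities in the common form exp(-||x - u||² / ρ); the paper does not restate its exact kernel definition here, so that form, the placeholder feature and anchor matrices, and the name `heat_kernel_affinity` are assumptions for illustration only.

```python
import numpy as np

# Reported hyper-parameters, per dataset (values quoted from the table above).
BGDH_SETUP = {
    "CIFAR-10": {"rho": 1,  "eta": 10},
    "NUS-WIDE": {"rho": 10, "eta": 100},
    "shared":   {"T1": 10, "T2": 5, "lambda": 0.1},
}

def heat_kernel_affinity(X, anchors, rho):
    """Heat-kernel weights between data points X (n x d) and anchors (m x d).

    W[i, j] = exp(-||x_i - u_j||^2 / rho). This standard form is an assumption;
    the exact kernel used by BGDH is not restated in this report.
    """
    sq_dists = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / rho)

# Example: toy handcrafted features and anchors with the reported CIFAR-10 rho.
X = np.random.rand(6, 512)        # placeholder handcrafted features
anchors = np.random.rand(3, 512)  # placeholder anchor points
W = heat_kernel_affinity(X, anchors, rho=BGDH_SETUP["CIFAR-10"]["rho"])
```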