Deep Hashing Network for Efficient Similarity Retrieval

Authors: Han Zhu, Mingsheng Long, Jianmin Wang, Yue Cao

AAAI 2016

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Extensive experiments on standard image retrieval datasets show the proposed DHN model yields substantial boosts over latest state-of-the-art hashing methods." "We conduct extensive experiments to evaluate the efficacy of the proposed DHN model against several state-of-the-art hashing methods on three widely-used benchmark datasets." |
| Researcher Affiliation | Academia | Han Zhu, Mingsheng Long, Jianmin Wang, and Yue Cao; School of Software, Tsinghua University, Beijing, China; Tsinghua National Laboratory for Information Science and Technology. {zhuhan10,caoyue10}@gmail.com, {mingsheng,jimwang}@tsinghua.edu.cn |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | "The codes and configurations will be made available online." |
| Open Datasets | Yes | "NUS-WIDE is a public web image dataset." "CIFAR-10 is a dataset containing 60,000 color images in 10 classes." "Flickr consists of 25,000 images collected from Flickr." |
| Dataset Splits | Yes | "We cross-validate the learning rate from 10^-5 to 10^-2 with a multiplicative step-size 10." "We choose the quantization penalty parameter λ by cross-validation from 10^-5 to 10^2 with a multiplicative step-size 10." (A sketch of these search grids follows the table.) |
| Hardware Specification | No | The paper mentions using the Caffe framework and the AlexNet architecture but does not specify any hardware (e.g., CPU or GPU models, or cloud computing resources) used for the experiments. |
| Software Dependencies | No | The paper mentions using the "Caffe framework (Jia et al. 2014)" but does not provide version numbers for Caffe or any other software dependency. |
| Experiment Setup | Yes | "As the fch layer is trained from scratch, we set its learning rate to be 10 times that of the lower layers. We use the mini-batch stochastic gradient descent (SGD) with 0.9 momentum and the learning rate annealing strategy implemented in Caffe... We fix the mini-batch size of images as 64 and the weight decay parameter as 0.0005." (An optimizer sketch follows the table.) |
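
The hyperparameter search quoted under Dataset Splits amounts to two log-scale grids searched with a multiplicative step of 10. Below is a minimal Python/NumPy sketch of those grids; the `eval_map` scorer is hypothetical, since the paper releases no code.

```python
import itertools
import numpy as np

# Log-scale grids with a multiplicative step-size of 10, as quoted above.
learning_rates = 10.0 ** np.arange(-5, -1)  # 1e-05, 1e-04, 1e-03, 1e-02
lambdas = 10.0 ** np.arange(-5, 3)          # 1e-05 ... 1e+02 (8 values)

def eval_map(lr, lam):
    """Hypothetical scorer: would train DHN with (lr, lam) and return
    retrieval MAP on the validation split. Returns a dummy 0.0 here so
    the sketch runs end to end; plug in a real trainer to use it."""
    return 0.0

# Exhaustive 4 x 8 grid search over both hyperparameters.
best_lr, best_lam = max(itertools.product(learning_rates, lambdas),
                        key=lambda pair: eval_map(*pair))
print(f"selected lr={best_lr:g}, lambda={best_lam:g}")
```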
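The optimizer settings quoted under Experiment Setup can likewise be made concrete. The following is a sketch of equivalent settings in PyTorch, assuming a 48-bit hash layer and a placeholder backbone; the paper itself used Caffe's SGD solver on AlexNet and does not report the exact annealing schedule, so the scheduler parameters below are guesses.

```python
import torch
from torch import nn, optim

batch_size = 64   # mini-batch size fixed at 64
base_lr = 1e-3    # actual value was cross-validated in [1e-5, 1e-2]

backbone = nn.Linear(9216, 4096)  # stand-in for the pre-trained AlexNet layers
fch = nn.Linear(4096, 48)         # hash layer trained from scratch (48 bits assumed)

optimizer = optim.SGD(
    [
        {"params": backbone.parameters(), "lr": base_lr},  # lower layers
        {"params": fch.parameters(), "lr": 10 * base_lr},  # fch at 10x the rate
    ],
    lr=base_lr,
    momentum=0.9,        # 0.9 momentum, as reported
    weight_decay=5e-4,   # weight decay parameter 0.0005
)

# Stand-in for Caffe's learning-rate annealing; step_size and gamma are
# assumptions, as the paper does not report its annealing parameters.
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)
```

The per-parameter-group learning rates mirror Caffe's lr_mult convention of giving a freshly initialized layer a larger step size than the fine-tuned pre-trained layers.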