Deep Hashing: A Joint Approach for Image Signature Learning

Authors: Yadong Mu, Zhu Liu

AAAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Comprehensive quantitative evaluations are conducted. On all adopted benchmarks, our proposed algorithm generates new performance records by significant improvement margins.
Researcher Affiliation | Collaboration | Yadong Mu (Institute of Computer Science and Technology, Peking University, China); Zhu Liu (Multimedia Department, AT&T Labs, U.S.A.). Email: myd@pku.edu.cn, zliu@research.att.com
Pseudocode | Yes | Algorithm 1: Deep Hash Algorithm
Open Source Code | No | The paper mentions implementing a customized version of the open-source Caffe but does not explicitly state that the custom code for their proposed method is open-source or provide access details.
Open Datasets | Yes | Description of Datasets: We conduct quantitative comparisons over four image benchmarks which represent different visual classification tasks. They include MNIST (Lecun et al. 1998) for handwritten digit recognition; CIFAR10 (Krizhevsky 2009), a subset of the 80 Million Tiny Images dataset consisting of images from ten animal or object categories; Kaggle-Face, a Kaggle-hosted facial expression classification dataset intended to stimulate research on facial feature representation learning; and SUN397 (Xiao et al. 2010), a large-scale scene image dataset of 397 categories.
Dataset Splits | No | The paper provides Train/Query Set sizes in Table 1 but does not explicitly describe a separate validation split or its size.
Hardware Specification | Yes | All the evaluations are conducted on a large-scale private cluster, equipped with 12 NVIDIA Tesla K20 GPUs and 8 K40 GPUs.
Software Dependencies | No | The paper mentions using "open-source Caffe (Jia 2013)" but does not specify its version number or any other software dependencies with their respective versions.
Experiment Setup | Yes | In all cases, the learning rate in gradient descent drops at a constant factor (0.1 in all of our experiments) until the training converges.
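
The experiment-setup quote above describes a stepwise learning-rate schedule. Below is a minimal Python sketch of such a schedule, assuming a Caffe-style "step" decay policy; only the 0.1 drop factor comes from the paper, while the initial learning rate and the drop interval are illustrative assumptions.

    # Minimal sketch of the constant-factor learning-rate drop quoted above.
    # Only the decay factor 0.1 is taken from the paper; the base rate and the
    # drop interval are hypothetical, illustrative values.

    def stepped_learning_rate(base_lr, decay_factor, step_size, iteration):
        """Drop the learning rate by `decay_factor` every `step_size` iterations
        (Caffe-style 'step' policy)."""
        return base_lr * decay_factor ** (iteration // step_size)

    if __name__ == "__main__":
        base_lr = 0.01       # assumed initial learning rate
        decay_factor = 0.1   # constant drop factor reported in the paper
        step_size = 10000    # assumed number of iterations between drops
        for it in (0, 10000, 20000, 30000):
            print(it, stepped_learning_rate(base_lr, decay_factor, step_size, it))

In the Caffe framework the paper builds on, such a schedule corresponds to the "step" lr_policy with gamma set to 0.1; training is then simply run until the loss stops improving.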