Deep Semantic-Preserving and Ranking-Based Hashing for Image Retrieval

Authors: Ting Yao, Fuchen Long, Tao Mei, Yong Rui

IJCAI 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We have conducted experiments on CIFAR-10 and NUS-WIDE image benchmarks, demonstrating that our approach can provide superior image search accuracy than other state-of-the-art hashing techniques.
Researcher Affiliation | Collaboration | Microsoft Research, Beijing, China; University of Science and Technology of China, Hefei, China. {tiyao, tmei, yongrui}@microsoft.com, longfc.ustc@gmail.com
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement or a direct link to open-source code for the described methodology.
Open Datasets | Yes | The CIFAR-10 dataset consists of 60,000 real-world tiny images in 10 classes. Each class has 6,000 images of size 32×32. We randomly select 1,000 images (100 images per class) as the test query set. For the unsupervised setting, all the rest images are used as training samples. For the supervised setting, we additionally sample 500 images from each class in the training samples and constitute a subset of 5,000 labeled images for training. (CIFAR-10: http://www.cs.toronto.edu/~kriz/cifar.html; NUS-WIDE: http://lms.comp.nus.edu.sg/research/NUS-WIDE.htm) A sketch of this split protocol appears after the table.
Dataset Splits | No | The paper states that "The hyper-parameter λ is determined by using a validation set" but does not provide specific details about the size or method of splitting for this validation set.
Hardware Specification | No | The paper does not explicitly describe the hardware used to run its experiments.
Software Dependencies | No | The paper mentions implementing the proposed method based on the open-source Caffe [Jia et al., 2014] framework, but does not provide specific version numbers for Caffe or other software dependencies.
Experiment Setup | Yes | The hyper-parameter λ is determined by using a validation set and set to 0.25 finally. We implement the proposed method based on the open-source Caffe [Jia et al., 2014] framework. In all experiments, our networks are trained by stochastic gradient descent with 0.9 momentum. The start learning rate is set to 0.01, and we decrease it to 10% after 5,000 iterations on CIFAR-10 and after 20,000 iterations on NUS-WIDE. The mini-batch size of images is 64. The weight decay parameter is 0.0002. (An equivalent optimizer configuration is sketched after the table.)
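
The split protocol quoted in the Open Datasets row is mechanical enough to spell out. Below is a minimal Python/NumPy sketch of how such splits could be constructed; it is not the authors' code, and the `labels` array and the fixed seed are assumptions introduced for illustration.

```python
import numpy as np

# Illustrative sketch (not the authors' code) of the CIFAR-10 split
# protocol quoted above: 1,000 test queries (100 per class), the
# remaining 59,000 images as the unsupervised training pool, and a
# further 5,000 labeled images (500 per class) for the supervised
# setting, drawn from the training samples.

rng = np.random.default_rng(0)  # assumed fixed seed for repeatability

def build_splits(labels, n_query_per_class=100, n_labeled_per_class=500):
    """labels: (60000,) int array of CIFAR-10 class ids in [0, 10)."""
    query_idx, labeled_idx = [], []
    for c in range(10):
        cls = rng.permutation(np.where(labels == c)[0])
        query_idx.append(cls[:n_query_per_class])              # 100 queries
        labeled_idx.append(cls[n_query_per_class:
                               n_query_per_class + n_labeled_per_class])
    query_idx = np.concatenate(query_idx)                      # 1,000 total
    labeled_idx = np.concatenate(labeled_idx)                  # 5,000 total
    train_idx = np.setdiff1d(np.arange(len(labels)), query_idx)  # 59,000
    return query_idx, train_idx, labeled_idx
```

Because the 500 labeled images per class are taken from the non-query portion of each class, the labeled subset is by construction disjoint from the query set, matching the quoted protocol.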
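
The Experiment Setup row fully specifies the solver, which the paper runs in Caffe. As a hedged illustration, here is an equivalent configuration expressed in PyTorch (a stand-in, since the original Caffe prototxt is not available); `net` is a hypothetical placeholder, as the row does not reproduce the network architecture.

```python
import torch

# PyTorch stand-in for the reported Caffe solver settings. All numbers
# come from the Experiment Setup row; nothing here is the authors' code.
net = torch.nn.Linear(4096, 48)  # placeholder for the hashing network

optimizer = torch.optim.SGD(
    net.parameters(),
    lr=0.01,             # start learning rate
    momentum=0.9,        # SGD momentum
    weight_decay=0.0002  # weight decay parameter
)

# The learning rate is decreased to 10% of its value once: after 5,000
# iterations on CIFAR-10 (use milestones=[20000] for NUS-WIDE).
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[5000], gamma=0.1
)

# Mini-batches of 64 images; step the scheduler once per iteration,
# since the paper's schedule is given in iterations rather than epochs.
```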