Supervised Hashing for Image Retrieval via Image Representation Learning
Authors: Rongkai Xia, Yan Pan, Hanjiang Lai, Cong Liu, Shuicheng Yan
AAAI 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive empirical evaluations on three benchmark datasets with different kinds of images show that the proposed method has superior performance gains over several state-of-the-art supervised and unsupervised hashing methods. |
| Researcher Affiliation | Academia | Rongkai Xia¹, Yan Pan¹, Hanjiang Lai¹,², Cong Liu¹, and Shuicheng Yan². ¹Sun Yat-sen University, Guangzhou, China; ²National University of Singapore, Singapore |
| Pseudocode | Yes | Algorithm 1: Coordinate descent algorithm for hash bit learning. (A hedged sketch of this stage appears after the table.) |
| Open Source Code | No | The paper states that 'The results of the other six baseline methods are obtained by the open-source implementations provided by their respective authors,' but does not provide open-source code for its own proposed method. |
| Open Datasets | Yes | We evaluate the proposed method on three benchmark datasets with different kinds of images. (1) The MNIST dataset consists of 70K 28×28 greyscale images of handwritten digits from 0 to 9. (2) CIFAR-10 consists of 60K 32×32 color tiny images categorized into 10 classes (6K images per class). (3) The NUS-WIDE dataset has nearly 270K images collected from the web. |
| Dataset Splits | No | In MNIST and CIFAR-10, we randomly select 1K images (100 images per class) as the test query set. For the unsupervised methods, we use the remaining images as training samples. For the supervised methods, we randomly select 5K images (500 images per class) from the remaining images as the training set. The paper specifies train and test sets but does not mention a separate validation set or its split. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU/CPU models or system specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions software components like 'deep convolutional neural networks (CNNs)' and 'LBFGS-B solver' but does not provide specific version numbers for any of them. |
| Experiment Setup | Yes | We use 32, 64, and 128 filters (of size 5×5) in the 1st, 2nd, and 3rd convolutional layers, and dropout (Hinton et al. 2012) in the fully connected layer with a rate of 0.5. (A hedged architecture sketch follows the table.) |
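
The Algorithm 1 row refers to Stage 1 of the method, which learns approximate hash codes by coordinate descent on the factorization objective ‖S − (1/q)HHᵀ‖²_F, where S is the pairwise similarity matrix and H holds the relaxed hash bits in [−1, 1]. The sketch below illustrates this idea only: the bounded one-dimensional line search (via SciPy) and the final sign binarization are simplifications assumed here, not the paper's closed-form per-entry update.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def learn_hash_codes(S, q, n_sweeps=5, seed=0):
    """Coordinate-descent sketch for learning approximate hash codes.

    Approximates the n x n similarity matrix S (entries in {-1, +1})
    by (1/q) * H @ H.T, where H is n x q with entries relaxed to [-1, 1].
    """
    rng = np.random.default_rng(seed)
    n = S.shape[0]
    H = rng.uniform(-1.0, 1.0, size=(n, q))  # random initialization

    def loss():
        return np.sum((S - (H @ H.T) / q) ** 2)

    entries = np.array([(i, j) for i in range(n) for j in range(q)])
    for _ in range(n_sweeps):
        # Visit the entries of H in random order, updating each one
        # with all other entries held fixed.
        for i, j in rng.permutation(entries):
            def f(x, i=i, j=j):  # 1-D objective in H[i, j]
                old, H[i, j] = H[i, j], x
                val = loss()
                H[i, j] = old
                return val

            res = minimize_scalar(f, bounds=(-1.0, 1.0), method="bounded")
            H[i, j] = res.x

    return np.sign(H)  # sign binarization is an assumption of this sketch

# Toy usage: 6 items in 2 similarity blocks, 4-bit codes.
S = np.kron(np.eye(2) * 2 - 1, np.ones((3, 3)))  # +1 within a block, -1 across
print(learn_hash_codes(S, q=4))
```

Recomputing the full loss for every candidate value of one entry costs O(n²q), which is fine for a toy example but far from the incremental per-entry updates an efficient implementation would use.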
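
The experiment-setup row fixes only the filter counts (32/64/128), the 5×5 kernels, and the 0.5 dropout rate. The PyTorch sketch below fills in everything else with assumptions: CIFAR-10-sized 32×32 RGB inputs, ReLU activations, 2×2 max pooling, a fully connected output layer of hypothetical width n_bits, and a tanh squashing the outputs toward the [−1, 1] range of hash bits.

```python
import torch
import torch.nn as nn

class CNNHSketch(nn.Module):
    """Rough sketch of the CNN described in the experiment setup.

    Only the filter counts (32/64/128), the 5x5 kernel size, and the
    0.5 dropout rate come from the paper; the pooling layers, activations,
    input size, and output width are assumptions for a runnable example.
    """
    def __init__(self, n_bits=48):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),                       # 32x32 -> 16x16
            nn.Conv2d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),                       # 16x16 -> 8x8
            nn.Conv2d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),                       # 8x8 -> 4x4
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(p=0.5),                     # dropout rate from the paper
            nn.Linear(128 * 4 * 4, n_bits),
            nn.Tanh(),                             # squash toward [-1, 1]
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Example: a batch of four CIFAR-10-sized images -> 48 relaxed hash bits.
model = CNNHSketch(n_bits=48)
codes = model(torch.randn(4, 3, 32, 32))
print(codes.shape)  # torch.Size([4, 48])
```

The 2014 paper predates PyTorch, so this is a modern restatement of the layer shapes rather than the authors' implementation.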