Learning Deep Unsupervised Binary Codes for Image Retrieval
Authors: Junjie Chen, William K. Cheung, Anran Wang
IJCAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To evaluate the performance of our proposed DeepQuan method for image retrieval, we conduct extensive experiments on two publicly available datasets. The details of the experiments and the results are described in the following subsections. |
| Researcher Affiliation | Academia | Junjie Chen¹, William K. Cheung¹ and Anran Wang². ¹Department of Computer Science, Hong Kong Baptist University, Hong Kong, China; ²Institute for Infocomm Research, A*STAR, Singapore |
| Pseudocode | No | The paper describes the model and optimization steps in text but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | For more evaluation results, readers can refer to https://github.com/chenjunjie1994/IJCAI-18. |
| Open Datasets | Yes | To make our performance comparison consistent with some recent related work [Lu et al., 2017; Huang et al., 2017; Erin Liong et al., 2015], we here conduct our experiments on two widely used benchmark datasets: CIFAR-10 and MNIST. |
| Dataset Splits | No | The paper specifies 'training set' and 'test set' for datasets but does not explicitly mention a 'validation set' or details about how validation data was used for hyperparameter tuning or model selection. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies or libraries used in the experiments. |
| Experiment Setup | Yes | Following [Xie et al., 2016], we configure our model as a deep autoencoder with the number of units in the encoder set as [D, 500, 500, 2000, L]... The trade-off parameter η = 1.0 is adopted for all the experiments. We have also tested the cases empirically with s = 1.0, 1.0, 0.001 and λ = 0.1, 0.2, 0.2 for the datasets MNIST, CIFAR-10-CNN and CIFAR-10-GIST respectively. We set the batch size as 512 and the learning rate as 0.01 for the stochastic gradient descent. |
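To make the reported setup concrete, below is a minimal sketch of the quoted configuration: a deep autoencoder with encoder units [D, 500, 500, 2000, L], a mirrored decoder, and SGD with learning rate 0.01 and batch size 512. The framework (PyTorch here), the ReLU activations, and the example values of D and L are assumptions for illustration only; the paper does not name its software stack, and the actual DeepQuan objective (quantization and manifold terms) is not reproduced here.

```python
# Hedged sketch of the autoencoder backbone described in the Experiment Setup row.
# Assumptions: PyTorch, ReLU between hidden layers, D = input dimensionality,
# L = binary code length. None of these specifics are confirmed by the paper.
import torch
import torch.nn as nn


def build_autoencoder(D: int, L: int) -> nn.Module:
    """Deep autoencoder with encoder layer sizes [D, 500, 500, 2000, L]."""
    dims = [D, 500, 500, 2000, L]

    def mlp(sizes):
        layers = []
        for i, (n_in, n_out) in enumerate(zip(sizes[:-1], sizes[1:])):
            layers.append(nn.Linear(n_in, n_out))
            if i < len(sizes) - 2:  # no activation after the final layer
                layers.append(nn.ReLU())
        return nn.Sequential(*layers)

    encoder = mlp(dims)                  # D -> 500 -> 500 -> 2000 -> L
    decoder = mlp(list(reversed(dims)))  # L -> 2000 -> 500 -> 500 -> D
    return nn.Sequential(encoder, decoder)


# Hypothetical usage with the hyperparameters reported in the table
# (D=784 corresponds to flattened MNIST images; L=32 is a placeholder code length).
model = build_autoencoder(D=784, L=32)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
batch_size = 512
```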