Deep Recurrent Quantization for Generating Sequential Binary Codes
Authors: Jingkuan Song, Xiaosu Zhu, Lianli Gao, Xin-Shun Xu, Wu Liu, Heng Tao Shen
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on the benchmark datasets show that our model achieves comparable or even better performance compared with the state-of-the-art for image retrieval. We perform extensive experiments on three public datasets: CIFAR-10, NUS-WIDE and ImageNet. We perform an ablation study on NUS-WIDE and show results in Tab. 5. |
| Researcher Affiliation | Collaboration | Jingkuan Song¹, Xiaosu Zhu¹, Lianli Gao¹, Xin-Shun Xu², Wu Liu³ and Heng Tao Shen¹. ¹Center for Future Media, University of Electronic Science and Technology of China; ²Shandong University; ³JD AI Research |
| Pseudocode | No | The paper includes diagrams and mathematical formulations of the proposed method but does not provide pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is published online: https://github.com/cfm-uestc/DRQ. |
| Open Datasets | Yes | We perform extensive experiments on three public datasets: CIFAR-10, NUS-WIDE and ImageNet. CIFAR-10 is a public dataset labeled in 10 classes. NUS-WIDE is a public dataset consisting of 81 concepts... On ImageNet, we follow [Cao et al., 2017]... |
| Dataset Splits | Yes | CIFAR-10... It consists of 50,000 images for training and 10,000 images for validation. We follow [Yu et al., 2018] to combine the training and validation set together, and randomly sample 5,000 images per class as database. The remaining 10,000 images are used as queries. Meanwhile, we use the whole database to train the network. On NUS-WIDE, we randomly sample 1,000 images per concept as the query set, and use the remaining images as the database. Furthermore, we randomly sample 5,000 images per concept from the database as the training set. On ImageNet, we use all the images of these classes in the training set as the database, and use all the images of these classes in the validation set as the queries. Furthermore, we randomly select 100 images for each class in the database for training. |
| Hardware Specification | No | The paper mentions implementing the model with TensorFlow and using a pre-trained AlexNet, but it does not specify any particular hardware (e.g., GPU model, CPU type) used for the experiments. |
| Software Dependencies | No | The paper states: "We implement our model with Tensorflow," but it does not specify the version number of TensorFlow or any other software dependencies. |
| Experiment Setup | Yes | We implement our model with TensorFlow, using a pre-trained AlexNet and construct intermediate layers on top of the fc7 layer. Meanwhile, we randomly initialize codebook with specified M and K, which will be described below. We use Adam optimizer with lr = 0.001, β1 = 0.9, β2 = 0.999 for training. |
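The CIFAR-10 split protocol quoted in the Dataset Splits row (pool train and validation, sample 5,000 images per class as the database, use the remaining 10,000 as queries) can be sketched as follows. This is an illustrative reconstruction, not the authors' released code; the function name and fixed seed are assumptions.

```python
import numpy as np

def split_cifar10(labels, per_class=5000, seed=0):
    """Sketch of the CIFAR-10 retrieval split described in the paper:
    the 60,000 pooled images are split per class into a database of
    per_class images (which also serves as the training set) and a
    query set of the remainder."""
    rng = np.random.default_rng(seed)
    database, queries = [], []
    for c in np.unique(labels):
        idx = rng.permutation(np.flatnonzero(labels == c))
        database.extend(idx[:per_class])   # 5,000 per class -> database
        queries.extend(idx[per_class:])    # remainder -> queries
    return np.array(database), np.array(queries)
```

With 6,000 images per class, this yields the 50,000-image database and 10,000 queries the paper reports.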
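The Experiment Setup row mentions a randomly initialized codebook with specified M and K. A generic sketch of sequential (residual-style) multi-codebook quantization is shown below to make those symbols concrete: M codebooks of K codewords each, applied one after another so each stage encodes the residual of the previous stages. This is a minimal illustration of the general technique, not the paper's recurrent architecture.

```python
import numpy as np

def quantize(x, codebooks):
    """Quantize vector x (shape (D,)) with M codebooks, each of shape
    (K, D). Each stage picks the nearest codeword to the current
    residual; the reconstruction is the sum of the chosen codewords."""
    codes, approx = [], np.zeros_like(x)
    residual = x.copy()
    for C in codebooks:                       # M sequential stages
        dists = np.linalg.norm(residual[None, :] - C, axis=1)
        idx = int(np.argmin(dists))           # index in 0..K-1
        codes.append(idx)
        approx = approx + C[idx]
        residual = x - approx                 # next stage encodes residual
    return codes, approx
```

Each code is an integer in [0, K), so an M-stage code costs M·log2(K) bits, which is what makes the binary codes "sequential": prefixes of the code give coarser approximations.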
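The optimizer hyperparameters in the Experiment Setup row (Adam with lr = 0.001, β1 = 0.9, β2 = 0.999) can be made concrete with a plain NumPy implementation of one Adam step. This is a textbook sketch for reference, not the authors' training code; the epsilon value is the common default and an assumption here.

```python
import numpy as np

def adam_step(theta, grad, m, v, t,
              lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update with the paper's quoted hyperparameters.
    t is the 1-based step count used for bias correction."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias-corrected moments
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

Running this on a toy objective such as f(θ) = θ² shows the parameter moving toward the minimum at the quoted learning rate.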