Beyond Product Quantization: Deep Progressive Quantization for Image Retrieval

Authors: Lianli Gao, Xiaosu Zhu, Jingkuan Song, Zhou Zhao, Heng Tao Shen

IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results on the benchmark datasets show that our model significantly outperforms the state-of-the-art for image retrieval. Our model is trained once for different code lengths and therefore requires less computation time. Additional ablation study demonstrates the effect of each component of our proposed model."
Researcher Affiliation | Academia | "Lianli Gao¹, Xiaosu Zhu¹, Jingkuan Song¹, Zhou Zhao² and Heng Tao Shen¹. ¹Center for Future Media, University of Electronic Science and Technology of China. ²Zhejiang University."
Pseudocode | No | The paper describes the optimization process but does not present it in a formally labeled pseudocode or algorithm block.
Open Source Code | Yes | "Our code is released at https://github.com/cfm-uestc/DPQ."
Open Datasets | Yes | "We conduct the experiments on three public benchmark datasets: CIFAR-10, NUS-WIDE and ImageNet. CIFAR-10 [Krizhevsky and Hinton, 2009] is a public dataset labeled in 10 classes. NUS-WIDE [Chua et al., 2009] consists of 81 concepts... ImageNet [Deng et al., 2009] contains 1.2M images labeled with 1,000 classes."
Dataset Splits | Yes | "CIFAR-10 [Krizhevsky and Hinton, 2009]... It consists of 50,000 images for training and 10,000 images for validation. We follow [Cao et al., 2016; Cao et al., 2017] to combine all images together. We randomly select 500 images per class as the training set, and 100 images per class as the query set. The remaining images are used as the database. NUS-WIDE [Chua et al., 2009]... We randomly sample 5,000 images as the query set, and use the remaining images as the database. Furthermore, we randomly select 10,000 images from the database as the training set. ImageNet [Deng et al., 2009]... We use all images of these classes in the training set and validation set as the database and queries, respectively." (A sketch of the CIFAR-10 split protocol appears after this table.)
Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU models, CPU types) used for running the experiments. It only mentions 'Our implementation is based on Tensorflow,' which is a software framework.
Software Dependencies | No | The paper states 'Our implementation is based on Tensorflow.' However, it does not specify a version number for TensorFlow or any other software library.
Experiment Setup | Yes | "We set the number of epochs to 64 and the batch size to 16. We use the Adam optimizer with default values. We tune the learning rate η from 10^-4 to 10^-1. As for λ, τ, µ, ν in the loss function (Eq. 24), we empirically set them as λ = 0.1, τ = 1, µ = 1, ν = 0.1." (A hedged sketch of this setup appears after this table.)
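
For concreteness, below is a minimal sketch of the per-class CIFAR-10 split quoted under Dataset Splits. It assumes the 60,000 class labels are available as a NumPy array; the function name, seed handling, and exclusion of training images from the database reflect our reading of the quoted protocol, not the authors' released code.

    import numpy as np

    def split_cifar10(labels, n_train=500, n_query=100, n_classes=10, seed=0):
        # Quoted protocol: 500 training and 100 query images per class;
        # all remaining images form the retrieval database.
        rng = np.random.default_rng(seed)
        train_idx, query_idx, db_idx = [], [], []
        for c in range(n_classes):
            idx = rng.permutation(np.flatnonzero(labels == c))
            train_idx.extend(idx[:n_train])
            query_idx.extend(idx[n_train:n_train + n_query])
            db_idx.extend(idx[n_train + n_query:])
        return np.asarray(train_idx), np.asarray(query_idx), np.asarray(db_idx)

With 60,000 images this yields 5,000 training, 1,000 query, and 54,000 database indices.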
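
Likewise, here is a minimal sketch of the quoted Experiment Setup, assuming TensorFlow 2.x. The four arguments to total_loss are placeholders: this report does not reproduce Eq. 24, so the mapping of λ, τ, µ, ν onto specific loss terms is an illustrative assumption.

    import tensorflow as tf

    EPOCHS, BATCH_SIZE = 64, 16             # quoted training schedule
    ETA = 1e-4                              # learning rate, tuned over 10^-4 .. 10^-1
    LAM, TAU, MU, NU = 0.1, 1.0, 1.0, 0.1   # quoted weights in Eq. 24

    # "Adam optimizer with default values" aside from the learning rate.
    optimizer = tf.keras.optimizers.Adam(learning_rate=ETA)

    def total_loss(term_1, term_2, term_3, term_4):
        # Placeholder weighted sum in the shape of Eq. 24; the actual
        # terms are defined in the paper and not reproduced here.
        return LAM * term_1 + TAU * term_2 + MU * term_3 + NU * term_4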