Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Similarity Preserving Deep Asymmetric Quantization for Image Retrieval

Authors: Junjie Chen, William K. Cheung (pp. 8183-8190)

AAAI 2019 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments conducted on four widely-used benchmark datasets demonstrate the superiority of our proposed SPDAQ model.
Researcher Affiliation | Academia | Junjie Chen, William K. Cheung, Department of Computer Science, Hong Kong Baptist University, Hong Kong, China
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described.
Open Datasets | Yes | Four datasets are adopted: CIFAR-10 (Krizhevsky and Hinton 2009), NUS-WIDE-21, NUS-WIDE-81 (Chua et al. 2009), and MS-COCO (Lin et al. 2014).
Dataset Splits | Yes | For CIFAR-10, we randomly select 1,000 images (100 images per class) to form the testing query set and take the remaining 59,000 images as the database, as in (Liu et al. 2018a; Li, Wang, and Kang 2015). Since training with the whole image database is time-consuming for the existing deep quantization methods, we follow the original settings in their papers and sample a subset of 5,000 images (500 images per class) from the database for training. For NUS-WIDE-21, we adopt the widely-used protocol and randomly sample 2,100 images (100 images per class) as the testing query set, while the remaining images form the retrieval database. A subset of 10,500 images (500 images per class) is further sampled for training.
Hardware Specification | Yes | All the models are evaluated with an Nvidia Tesla K80 Dual GPU Module.
Software Dependencies | No | The paper mentions TensorFlow but does not specify version numbers for any software dependency, library, or solver.
Experiment Setup | Yes | The learning rate is fine-tuned in the range [10^-3, 10^-7] for each dataset. For the composite quantization, we set the number of codewords in each codebook to K = 256. We set the number of epochs to 50 for all the datasets.
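The per-class split protocol quoted under Dataset Splits (for CIFAR-10: 1,000 query images, 59,000 database images, 5,000 training images) can be sketched as a random sampling routine. This is a minimal illustration assuming integer class labels; the helper name `make_cifar10_splits` and the seeding are hypothetical, not from the paper or any released code.

```python
import numpy as np

def make_cifar10_splits(labels, seed=0):
    """Sketch of the CIFAR-10 split described in the report.

    Returns index arrays: 1,000 query images (100 per class), the
    remaining 59,000 as the database, and 5,000 training images
    (500 per class) sampled from the database.
    """
    rng = np.random.default_rng(seed)
    query, database = [], []
    for c in range(10):
        idx = rng.permutation(np.flatnonzero(labels == c))
        query.extend(idx[:100])     # 100 query images per class
        database.extend(idx[100:])  # the rest form the database
    database = np.asarray(database)
    train = []
    for c in range(10):
        # sample 500 training images per class from the database
        pool = database[labels[database] == c]
        train.extend(rng.choice(pool, size=500, replace=False))
    return np.asarray(query), database, np.asarray(train)
```

The same pattern, with 100 query and 500 training images per class over 21 classes, reproduces the NUS-WIDE-21 counts (2,100 and 10,500) quoted above.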
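The hyperparameters quoted under Experiment Setup can be collected into a small configuration sketch. The dictionary and its key names are illustrative assumptions; the paper does not release code, so this reflects only the reported values.

```python
# Hypothetical configuration mirroring the quoted setup; names are
# illustrative, not taken from the authors' (unreleased) code.
spdaq_config = {
    "lr_search_range": (1e-3, 1e-7),  # learning rate fine-tuned per dataset
    "num_codewords": 256,             # K codewords per codebook (composite quantization)
    "num_epochs": 50,                 # fixed for all four datasets
}
```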