Pairwise-Label-Based Deep Incremental Hashing with Simultaneous Code Expansion

Authors: Dayan Wu, Qinghang Su, Bo Li, Weiping Wang

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments on three widely-used image retrieval benchmarks, demonstrating that our method can significantly reduce the time required to expand existing database codes, while maintaining state-of-the-art retrieval performance.
Researcher Affiliation | Academia | Dayan Wu¹, Qinghang Su¹,², Bo Li¹*, Weiping Wang¹ (¹Institute of Information Engineering, Chinese Academy of Sciences; ²School of Cyber Security, University of Chinese Academy of Sciences)
Pseudocode | No | The paper describes its optimization steps and formulas but provides no structured pseudocode or algorithm blocks. (A generic illustration of a pairwise-label hashing objective is sketched after the table.)
Open Source Code | No | The paper makes no statement about releasing its source code and provides no link to a code repository.
Open Datasets | Yes | We conduct extensive experiments on three public benchmark image retrieval datasets: CIFAR-10 (Krizhevsky and Hinton 2009), NUS-WIDE (Chua et al. 2009) and ImageNet (Lin et al. 2014).
Dataset Splits | Yes | Following (Lai et al. 2015), we randomly select 1,000 images (100 images per class) as the test query set, and 5,000 images (500 images per class) as the training set. (A sketch of this class-balanced split follows the table.)
Hardware Specification | No | The paper mentions running experiments 'with GPU' but does not specify any particular GPU model, CPU, or other hardware.
Software Dependencies | No | The paper does not provide version numbers for any software dependencies or libraries used in the experiments.
Experiment Setup | Yes | We tune λ in the range of [0.1, 1000] by fixing {γ = 1, q = 2000} for CIFAR-10, NUS-WIDE, and ImageNet. Similarly, we set {λ = 1, q = 2000} for CIFAR-10, NUS-WIDE, and ImageNet when tuning γ. When tuning q, we set {λ = 1, γ = 1} for CIFAR-10, NUS-WIDE, and ImageNet. (A sketch of this one-at-a-time tuning loop follows the table.)
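
Since the paper provides no pseudocode, the following is a minimal, hypothetical sketch of the kind of pairwise-label objective that deep hashing methods in this family typically optimize: relaxed binary codes are trained so that their scaled inner products match pairwise similarity labels, with a quantization term pushing outputs toward {-1, +1}. This is illustrative only, not the authors' actual loss; the loss form and the quantization weight are assumptions.

```python
import torch
import torch.nn as nn

class PairwiseHashingLoss(nn.Module):
    """Generic pairwise-label hashing loss (illustrative; not the paper's formulation).

    For relaxed codes u in [-1, 1]^k, fits the scaled inner product
    u_i . u_j / k to +1 for similar pairs and -1 for dissimilar pairs,
    and penalizes distance from the binary vertices {-1, +1}^k.
    """
    def __init__(self, code_length: int, quant_weight: float = 0.1):
        super().__init__()
        self.k = code_length
        self.quant_weight = quant_weight

    def forward(self, codes: torch.Tensor, sim: torch.Tensor) -> torch.Tensor:
        # codes: (batch, k) outputs of a tanh-activated hashing layer
        # sim:   (batch, batch) pairwise labels, 1 = similar, 0 = dissimilar
        inner = codes @ codes.t() / self.k        # scaled inner products in [-1, 1]
        target = 2.0 * sim - 1.0                  # map {0, 1} labels to {-1, +1}
        pair_loss = ((inner - target) ** 2).mean()
        quant_loss = ((codes.abs() - 1.0) ** 2).mean()  # push codes toward +/-1
        return pair_loss + self.quant_weight * quant_loss
```

In use, `codes` would typically be `torch.tanh(net(images))`, and `sim[i, j] = 1` whenever images i and j share a class label.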
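
The CIFAR-10 protocol quoted under Dataset Splits (1,000 queries at 100 per class, 5,000 training images at 500 per class) amounts to a class-balanced random draw. Below is a minimal NumPy sketch assuming integer labels 0-9; treating the leftover images as the retrieval database is a common convention under the (Lai et al. 2015) protocol, but is an assumption here.

```python
import numpy as np

def split_cifar10(labels: np.ndarray, n_query: int = 100,
                  n_train: int = 500, seed: int = 0):
    """Class-balanced split: 100 query + 500 training images per class,
    with the remaining images kept as the retrieval database (assumed)."""
    rng = np.random.default_rng(seed)
    query_idx, train_idx, db_idx = [], [], []
    for c in range(10):  # the ten CIFAR-10 classes
        idx = rng.permutation(np.flatnonzero(labels == c))
        query_idx.extend(idx[:n_query])
        train_idx.extend(idx[n_query:n_query + n_train])
        db_idx.extend(idx[n_query + n_train:])
    return np.array(query_idx), np.array(train_idx), np.array(db_idx)
```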
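
The Experiment Setup row describes a one-at-a-time sensitivity study: each of λ, γ, and q is swept while the other two stay at their defaults (λ = 1, γ = 1, q = 2000). A hedged sketch of that loop follows; `train_and_evaluate` is a hypothetical stand-in for the paper's training pipeline, and since only the λ range [0.1, 1000] is quoted above, the concrete grids below are assumptions.

```python
def train_and_evaluate(lam: float, gamma: float, q: int) -> float:
    """Hypothetical placeholder for the paper's training/evaluation pipeline."""
    return 0.0  # substitute the real training run and mAP computation

defaults = {"lam": 1.0, "gamma": 1.0, "q": 2000}
sweeps = {
    "lam":   [0.1, 1.0, 10.0, 100.0, 1000.0],  # range [0.1, 1000] as quoted
    "gamma": [0.1, 1.0, 10.0, 100.0, 1000.0],  # assumed grid
    "q":     [500, 1000, 2000, 4000],          # assumed grid around q = 2000
}

for name, values in sweeps.items():
    for v in values:
        params = dict(defaults, **{name: v})   # vary one knob, fix the rest
        result = train_and_evaluate(**params)
        print(f"{name}={v}: mAP={result:.4f}")
```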