Dynamic Multi-View Hashing for Online Image Retrieval

Authors: Liang Xie, Jialie Shen, Jungong Han, Lei Zhu, Ling Shao

IJCAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results on two real-world image datasets demonstrate superior performance of DMVH over several state-of-the-art hashing methods."
Researcher Affiliation | Academia | (1) Wuhan University of Technology, China; (2) Northumbria University, United Kingdom; (3) Lancaster University, United Kingdom; (4) The University of Queensland, Australia; (5) University of East Anglia, United Kingdom
Pseudocode | Yes | Algorithm 1: Online learning process of DMVH at step t.
  Input: x_t^m|_{m=1}^M, D_{t-1}^m|_{m=1}^M, K̃_{t-1}^m|_{m=1}^M, α
  Output: H_t, D_t^m|_{m=1}^M, K̃_t^m|_{m=1}^M, α
  1: Compute h_t by Eq. (5);
  2: Compute l_t(h_t, D_{t-1}^m|_{m=1}^M) by Eq. (6);
  3: if l_t(h_t, D_{t-1}^m|_{m=1}^M) < δ then
  4:   H_t = [H_{t-1}^T, sgn(h_t^T)]^T;
  5:   K̃_t^m = K̃_{t-1}^m and D_t^m = D_{t-1}^m;
  6: else if l_t(h_t, D_{t-1}^m|_{m=1}^M) ≥ δ then
  7:   Add t into the buffer, and use Algorithm 2 to optimize H_t, α and K̃_t^m;
  8: end if
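The branching logic of Algorithm 1 can be sketched in a few lines. This is a minimal illustrative sketch only: the hash-code computation (Eq. (5)), the loss (Eq. (6)), and the full re-optimization pass (Algorithm 2) are not given in this excerpt, so the loss is passed in as a precomputed value and re-optimization is reduced to buffering the sample index.

```python
import numpy as np

def online_update(h_t, H, buffer, loss, delta):
    """One step of an Algorithm-1-style online update (illustrative sketch).

    If the loss of the new hash code h_t is below the threshold delta,
    sgn(h_t) is appended to the code matrix H and the kernels/dictionaries
    are kept fixed; otherwise the sample index is added to a buffer for a
    later full re-optimization (Algorithm 2 in the paper, omitted here).
    """
    if loss < delta:
        # Low loss: append the signed code as a new row of H.
        H = np.vstack([H, np.sign(h_t)])
    else:
        # High loss: defer the update by buffering this sample's index.
        buffer.append(len(H))
    return H, buffer
```

In the paper the buffer additionally triggers Algorithm 2 once it fills up to the maximum buffer size; that batch step is deliberately left out of this sketch.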
Open Source Code | No | The paper provides no repository link or explicit statement about the availability of source code for the described method.
Open Datasets | Yes | "We use two multi-view image datasets: MIR Flickr [Huiskes and Lew, 2008] and NUS-WIDE [Chua et al., 2009]"
Dataset Splits | No | For both MIR Flickr and NUS-WIDE the paper states "We select 1% images as queries, and the rest images are added to the database sequentially." This defines the query set but gives no explicit training/validation/test splits (percentages or counts), nor does it refer to predefined standard splits for these datasets.
Hardware Specification | Yes | "All the experiments are conducted on a computer with Intel Core(TM) i5 2.6GHz 2 processors and 12.0GB RAM."
Software Dependencies | No | The paper mentions mathematical functions and parameters but gives no specific software dependencies or library versions (e.g., Python, PyTorch, scikit-learn versions) needed to replicate the experiment.
Experiment Setup | Yes | "In the implementation of DMVH, we use a Gaussian kernel for all visual features and a histogram intersection kernel for the text feature. DMVH does not contain many parameters to set. The regularization λ is set to 10^-3; it is used to avoid matrix singularity and has little influence on the results. The maximum buffer size is set to 1000 on MIR Flickr and 5000 on NUS-WIDE respectively, and the threshold ρ is set to 0.5."
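The two kernels named in the experiment setup are standard and can be stated concretely. The sketch below is illustrative only: the bandwidth sigma for the Gaussian kernel is an assumed placeholder, since the review does not report the value used in the paper.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Gaussian (RBF) kernel, used in the paper for all visual features.
    # sigma is an illustrative default; the paper's setting is not given here.
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def histogram_intersection_kernel(x, y):
    # Histogram intersection kernel, used in the paper for the text feature:
    # sum of element-wise minima of the two (nonnegative) histograms.
    return np.sum(np.minimum(x, y))
```

For identical inputs the Gaussian kernel evaluates to 1, while the histogram intersection kernel returns the total mass of the histogram, which is why text features are typically L1-normalized before applying it.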