Deep Unified Cross-Modality Hashing by Pairwise Data Alignment
Authors: Yimu Wang, Bo Xue, Quan Cheng, Yuhui Chen, Lijun Zhang
IJCAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on three representative image-text datasets demonstrate the superiority of our DUCMH over several state-of-the-art cross-modality hashing methods. |
| Researcher Affiliation | Academia | National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China {wangym, xueb, chengq, chenyuhui, zhanglj}@lamda.nju.edu.cn |
| Pseudocode | Yes | Algorithm 1 The alternative learning algorithm |
| Open Source Code | No | No explicit statement or link providing access to open-source code was found. |
| Open Datasets | Yes | Three datasets, MIRFLICKR-25K [Huiskes and Lew, 2008], IAPR TC-12 [Escalante et al., 2010], and NUS-WIDE [Chua et al., 2009] are used for evaluation. |
| Dataset Splits | No | For MIRFLICKR-25K and IAPR TC-12, 2,000 data points are randomly sampled as the test (query) set, while for NUS-WIDE, 2,100 data points are selected. The remaining data points serve as the retrieval set (database). A split sketch appears after the table. |
| Hardware Specification | Yes | Our DUCMH method is implemented based on PyTorch [Paszke et al., 2019] with eight NVIDIA V100 GPUs and optimized by the mini-batch SGD with the size of 64 and weight decay. |
| Software Dependencies | No | The paper mentions 'PyTorch [Paszke et al., 2019]' but does not specify a version number for it or other software dependencies. |
| Experiment Setup | Yes | Our DUCMH method is implemented based on PyTorch [Paszke et al., 2019] with eight NVIDIA V100 GPUs and optimized by the mini-batch SGD with the size of 64 and weight decay. The learning rate is initialized as 0.0001 for the image-to-text mapping f_i2t(·) and 0.004 for the unified hash function h_y(·). Hyper-parameters ϵ, α and ρ are empirically set to 5000, 50 and 200 for scaling the order of each loss. A training-setup sketch appears after the table. |
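
The query/retrieval split quoted in the Dataset Splits row can be illustrated with a simple random partition. The sketch below is a minimal, assumed reconstruction rather than the authors' code: the query-set sizes (2,000 for MIRFLICKR-25K and IAPR TC-12, 2,100 for NUS-WIDE) come from the quoted text, while the function name, array handling, and seeding are assumptions.

```python
import numpy as np

def split_query_retrieval(num_samples: int, num_query: int, seed: int = 0):
    """Randomly sample a query (test) set; the remaining points form the retrieval set.

    num_query is 2,000 for MIRFLICKR-25K and IAPR TC-12 and 2,100 for NUS-WIDE,
    following the split quoted from the paper.
    """
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_samples)
    query_idx = perm[:num_query]        # test (query) set
    retrieval_idx = perm[num_query:]    # retrieval set (database)
    return query_idx, retrieval_idx

# Example: MIRFLICKR-25K has 25,000 image-text pairs; 2,000 serve as queries.
query_idx, retrieval_idx = split_query_retrieval(25000, 2000)
```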
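
The Experiment Setup row can likewise be translated into a PyTorch optimizer configuration. This is a hedged sketch, not the authors' implementation: the two `nn.Linear` placeholders stand in for the paper's image-to-text mapping f_i2t(·) and unified hash function h_y(·) (whose architectures and code length are not given here and are assumed), and the weight-decay value is not reported, so 5e-4 is an assumption. Only the batch size of 64 and the per-module learning rates (0.0001 and 0.004) are taken from the quoted setup.

```python
import torch
from torch import nn

# Hypothetical placeholder networks standing in for the paper's modules;
# the real DUCMH architectures are not reproduced here.
f_i2t = nn.Linear(4096, 512)   # image-to-text mapping f_i2t(·), dimensions assumed
h_y = nn.Linear(512, 64)       # unified hash function h_y(·); 64-bit codes assumed

# Mini-batch SGD with batch size 64 and weight decay, as quoted above.
# The h_y parameter group overrides the default learning rate of 1e-4 with 4e-3.
# The weight-decay value itself is not reported in the paper, so 5e-4 is an assumption.
batch_size = 64
optimizer = torch.optim.SGD(
    [
        {"params": f_i2t.parameters()},            # uses the default lr below (1e-4)
        {"params": h_y.parameters(), "lr": 4e-3},  # unified hash function lr
    ],
    lr=1e-4,
    weight_decay=5e-4,
)

# Loss-scaling hyper-parameters quoted from the paper.
eps, alpha, rho = 5000, 50, 200
```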