Unsupervised Generative Adversarial Cross-Modal Hashing
Authors: Jian Zhang, Yuxin Peng, Mingkuan Yuan
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments compared with 6 state-of-the-art methods on 2 widely-used datasets verify the effectiveness of our proposed approach. |
| Researcher Affiliation | Academia | Jian Zhang, Yuxin Peng, Mingkuan Yuan; Institute of Computer Science and Technology, Peking University, Beijing 100871, China; pengyuxin@pku.edu.cn |
| Pseudocode | No | The paper describes the model components and their operations using mathematical equations and textual descriptions, but does not provide structured pseudocode or an algorithm block. |
| Open Source Code | No | The paper states 'We implement the proposed UGACH by tensorflow' (a footnote links to the general TensorFlow website), but does not offer concrete access (e.g., a specific repository link or a clear statement of code release) for their UGACH implementation. |
| Open Datasets | Yes | In the experiments, we conduct cross-modal hashing on 2 widely-used datasets: NUS-WIDE (Chua et al. 2009) and MIRFLICKR (Huiskes and Lew 2008). |
| Dataset Splits | No | The paper defines a 'retrieval database' used as a training set and a 'query set' for testing, but does not explicitly mention a separate validation set for the proposed UGACH method. It mentions a training set for 'supervised methods' (baselines), not for UGACH itself. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | The paper only mentions TensorFlow (via a footnote link) but does not specify a version number or any other software dependencies with their versions. |
| Experiment Setup | Yes | The dimension of the common representation layer is set to 4096, while the hashing layer's dimension is set to be the same as the hash code length. Moreover, we train the proposed UGACH in a mini-batch way, with a batch size of 64 for both the discriminative and generative models. UGACH is trained iteratively: after the discriminative model is trained for 1 epoch, the generative model is trained for 1 epoch. The learning rate is initialized to 0.01 and decreased by a factor of 10 every two epochs. |
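
The setup reported above maps onto a simple alternating training schedule. Below is a minimal, hypothetical Python sketch of that schedule; the placeholder training functions, the total epoch count, and the 64-bit code length are assumptions on our part, since the authors' implementation is not publicly available.

```python
# Hypothetical sketch of the UGACH training schedule described in the paper.
# The function names below are placeholders, not the authors' code.

BATCH_SIZE = 64          # mini-batch size for both models (from the paper)
COMMON_DIM = 4096        # dimension of the common representation layer
HASH_CODE_LENGTH = 64    # hashing layer dim equals the code length (64 assumed)
INITIAL_LR = 0.01        # initial learning rate (from the paper)
NUM_EPOCHS = 10          # assumed; the paper does not state a total epoch count


def learning_rate(epoch: int) -> float:
    """Decay the learning rate by a factor of 10 every two epochs."""
    return INITIAL_LR * (0.1 ** (epoch // 2))


def train_discriminator_epoch(lr: float) -> None:
    pass  # placeholder: one epoch of discriminative-model updates


def train_generator_epoch(lr: float) -> None:
    pass  # placeholder: one epoch of generative-model updates


for epoch in range(NUM_EPOCHS):
    lr = learning_rate(epoch)
    # Alternating scheme: the discriminative model is trained for one
    # epoch, then the generative model is trained for one epoch.
    train_discriminator_epoch(lr)
    train_generator_epoch(lr)
```

Under this schedule, epochs 0-1 run at lr = 0.01, epochs 2-3 at 0.001, and so on, matching the reported decay rule.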