Extensible Cross-Modal Hashing
Authors: Tian-yi Chen, Lan Zhang, Shi-cong Zhang, Zi-long Li, Bai-chuan Huang
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The paper states "Our extensive experiments show the effectiveness of our design." and includes Section 3, "Experiment". |
| Researcher Affiliation | Academia | (1) School of Computer Science and Technology, University of Science and Technology of China, China; (2) School of Data Science, University of Science and Technology of China, China; (3) School of Information Science and Engineering, Northeastern University, China; (4) Department of Physics, University of California Berkeley, USA |
| Pseudocode | Yes | The paper provides Algorithm 1 "Core Algorithm" and Algorithm 2 "Extending Model for New Tasks". |
| Open Source Code | No | Not found. The paper does not provide an explicit statement or link to open-source code for the described methodology. |
| Open Datasets | Yes | The MIRFLICKR-25k dataset [Huiskes and Lew, 2008]... We also adopt the MSCOCO-2014 dataset [Lin et al., 2014] |
| Dataset Splits | Yes | For MIRFLICKR-25k, 10,015 instances are randomly chosen as the training set, and the remaining 10,000 are used for validation: 2,000 for the query set and 8,000 for the database. For MSCOCO, 16,869 randomly chosen instances are used for training, and the remaining 5,000 and 15,000 instances are used as the query set and database, respectively (see the split sketch after this table). |
| Hardware Specification | Yes | All experiments are conducted on a server with 4 TITAN X GPUs. |
| Software Dependencies | No | Only the framework is named, with no version information: "We implement ECMH via Pytorch." |
| Experiment Setup | Yes | All learning rates are set to 1.5 and decreased by 5% every 100 steps. α is set to the range [0.1, 0.15] and β is set to 0.5. Batch size is fixed at 500 (see the schedule sketch after this table). |
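
The quoted splits can be reproduced, up to the unknown random seed, with a plain random permutation of instance indices. Below is a minimal sketch for the MIRFLICKR-25k split, assuming uniform random sampling; the seed and variable names are hypothetical, and only the counts (10,015 / 2,000 / 8,000) come from the paper.

```python
import numpy as np

# Sketch of the quoted MIRFLICKR-25k split: 10,015 train,
# 2,000 query, 8,000 database. The paper does not publish its
# sampling code, so a uniform random permutation is assumed.
rng = np.random.default_rng(seed=0)  # seed is hypothetical, not from the paper
indices = rng.permutation(20_015)    # 10,015 train + 10,000 validation instances

train_idx = indices[:10_015]
query_idx = indices[10_015:10_015 + 2_000]
database_idx = indices[10_015 + 2_000:]

assert len(train_idx) == 10_015
assert len(query_idx) == 2_000
assert len(database_idx) == 8_000
```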
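
The quoted schedule (initial learning rate 1.5, decayed by 5% every 100 steps, batch size 500) maps directly onto PyTorch's `StepLR` scheduler with `step_size=100` and `gamma=0.95`. The sketch below shows this wiring; the linear model and the choice of SGD are placeholders, since the paper does not state which network configuration or optimizer it uses.

```python
import torch

# Sketch of the quoted schedule: lr starts at 1.5 and is multiplied
# by 0.95 after every 100 optimizer steps. Model and optimizer are
# hypothetical stand-ins for the paper's hashing network.
model = torch.nn.Linear(4096, 64)
optimizer = torch.optim.SGD(model.parameters(), lr=1.5)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100, gamma=0.95)

for step in range(300):
    optimizer.zero_grad()
    loss = model(torch.randn(500, 4096)).pow(2).mean()  # batch size 500, dummy loss
    loss.backward()
    optimizer.step()
    scheduler.step()  # applies the 5% decay at each 100-step boundary
```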