Equally-Guided Discriminative Hashing for Cross-modal Retrieval

Authors: Yufeng Shi, Xinge You, Feng Zheng, Shuo Wang, Qinmu Peng

IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Performance evaluation was conducted on two benchmark datasets: MIRFLICKR-25K [Huiskes and Lew, 2008] and MS COCO [Lin et al., 2014]." (Section 3.1, Datasets)
Researcher Affiliation | Academia | "1 School of Electronic Information and Communications, Huazhong University of Science and Technology; 2 Department of Computer Science and Engineering, Southern University of Science and Technology"
Pseudocode | Yes | "Algorithm 1: Equally-Guided Discriminative Hashing"
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the method is publicly available.
Open Datasets | Yes | "Performance evaluation was conducted on two benchmark datasets: MIRFLICKR-25K [Huiskes and Lew, 2008] and MS COCO [Lin et al., 2014]."
Dataset Splits | No | "For both datasets, 10000 image-text pairs are randomly chosen from retrieval set for training." The paper notes that the original MS COCO release has training and validation images, but it does not specify a validation split for the authors' own experiments.
Hardware Specification | Yes | "We implement all deep learning methods with Tensorflow on a NVIDIA 1080ti GPU server."
Software Dependencies | No | "We implement all deep learning methods with Tensorflow on a NVIDIA 1080ti GPU server." TensorFlow is named, but no version number or other software dependencies with versions are specified.
Experiment Setup | Yes | "We set hyper-parameters as: α = β = γ = 1. To learn neural network parameters, we apply the Adam solver with a learning rate within 10⁻² to 10⁻⁶ and set batch size as 128."
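The reported split procedure (10000 image-text pairs randomly chosen from the retrieval set for training) can be sketched as below. This is a minimal sketch, not the authors' code: the total pair count passed in and the fixed seed are illustrative assumptions, since the paper does not specify either.

```python
import random

def sample_training_split(num_pairs, train_size=10_000, seed=0):
    """Randomly draw `train_size` pair indices from a retrieval set of
    `num_pairs` image-text pairs; the remainder stays in the retrieval set.

    The fixed `seed` is only for reproducibility of this sketch; the paper
    does not report a seed.
    """
    rng = random.Random(seed)
    train = sorted(rng.sample(range(num_pairs), train_size))
    train_lookup = set(train)
    retrieval = [i for i in range(num_pairs) if i not in train_lookup]
    return train, retrieval
```

For example, with a hypothetical retrieval set of 20000 pairs, `sample_training_split(20000)` returns 10000 disjoint training indices and leaves 10000 in the retrieval set.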
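The reported setup (α = β = γ = 1, the Adam solver, a learning rate within 10⁻² to 10⁻⁶, batch size 128) can be captured as a small configuration sketch. The log-spaced candidate grid is an assumption about how a range like 10⁻² to 10⁻⁶ might be searched; the paper states only the range, not a search procedure.

```python
# Hyper-parameters as reported in the paper.
alpha = beta = gamma = 1.0
batch_size = 128

# The learning rate is given only as a range (10^-2 down to 10^-6);
# a log-spaced candidate grid is one plausible way to cover it.
learning_rates = [1e-2, 1e-3, 1e-4, 1e-5, 1e-6]
```

In a TensorFlow implementation, each candidate would typically be passed to the Adam optimizer in turn and the best-performing rate kept.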