Dual Deep Neural Networks Cross-Modal Hashing

Authors: Zhen-Duo Chen, Wan-Jin Yu, Chuan-Xiang Li, Liqiang Nie, Xin-Shun Xu

AAAI 2018

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | DDCMH is tested on several benchmark datasets. The results demonstrate that it outperforms both deep and shallow state-of-the-art hashing methods. |
| Researcher Affiliation | Academia | School of Computer Science and Technology, Shandong University; School of Software, Shandong University |
| Pseudocode | No | The paper includes a schematic illustration in Figure 1, but no formal pseudocode or algorithm blocks with structured steps. |
| Open Source Code | No | The paper does not include any explicit statement about releasing source code, nor does it provide a link to a code repository. |
| Open Datasets | Yes | "To justify our proposed method, we carried out extensive experiments on two public benchmark datasets, i.e., MIRFlickr-25K (Huiskes and Lew 2008) and NUS-WIDE (Chua et al. 2009)." |
| Dataset Splits | No | The paper describes training and test sets but does not explicitly specify a separate validation set, or its size/split, for either dataset. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions using AlexNet and COSDISH but does not specify version numbers for any software, libraries, or dependencies used in the experiments. |
| Experiment Setup | No | The paper describes the network architectures (e.g., a modified AlexNet and an MLP) and loss functions, but it does not specify concrete training details such as hyperparameters (learning rate, batch size, number of epochs) or optimizer settings. |
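For context on the retrieval setup being assessed: cross-modal hashing methods such as DDCMH map each modality (images via a CNN, text via an MLP) to binary codes, then rank database items by Hamming distance to a query from the other modality. The sketch below is a hedged, generic illustration of that pipeline with random stand-in features; it is not the paper's method, and the function names are our own.

```python
import numpy as np

# Generic cross-modal hashing retrieval sketch (NOT the DDCMH architecture).
# Real systems would replace the random features with the outputs of trained
# modality-specific networks (e.g., a modified AlexNet for images, an MLP for text).

def binarize(features):
    """Map real-valued network outputs to {-1, +1} hash codes via sign()."""
    return np.sign(features)

def hamming_distance(query_code, db_codes):
    """Hamming distance between one code and a matrix of codes in {-1, +1}.

    For +/-1 codes, dot product = bits - 2 * (number of differing bits),
    so distance = (bits - dot) / 2.
    """
    bits = query_code.shape[0]
    return (bits - db_codes @ query_code) / 2

rng = np.random.default_rng(0)
image_feats = rng.standard_normal((5, 16))  # stand-in for image-network outputs
text_feat = rng.standard_normal(16)         # stand-in for text-network output

db_codes = binarize(image_feats)            # 5 database codes, 16 bits each
query_code = binarize(text_feat)            # one query code from the text side
ranking = np.argsort(hamming_distance(query_code, db_codes))
print(ranking)  # database items ordered by Hamming similarity to the text query
```

The `(bits - dot) / 2` identity is a standard trick for computing Hamming distances on +/-1 codes with a single matrix multiplication, which is what makes hashing-based retrieval fast at scale.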