SADIH: Semantic-Aware DIscrete Hashing

Authors: Zheng Zhang, Guo-sen Xie, Yang Li, Sheng Li, Zi Huang

AAAI 2019, pp. 5853-5860

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experimental results on multiple large-scale datasets demonstrate that our SADIH can clearly outperform the state-of-the-art baselines with the additional benefit of lower computational costs."
Researcher Affiliation | Collaboration | Zheng Zhang (1), Guo-sen Xie (2), Yang Li (1), Sheng Li (3), Zi Huang (1); (1) The University of Queensland, Australia; (2) Inception Institute of Artificial Intelligence, UAE; (3) University of Georgia, USA
Pseudocode | No | The paper describes an 'alternating optimization algorithm' with four steps (B-Step, W-Step, F-Step, P-Step) given as mathematical equations, but it provides no formally labeled pseudocode or algorithm block. (A schematic sketch of such an alternating loop is given below the table.)
Open Source Code | No | The paper states: 'To make fair comparison, all the compared methods were reimplemented using the released source codes given by the corresponding authors.' This refers to the baseline methods, not the code for the proposed SADIH. There is no statement that SADIH's own source code is openly available.
Open Datasets | Yes | "We evaluate the proposed SADIH and SADIH-L1 on three publicly available benchmark databases: CIFAR-10 (Krizhevsky and Hinton 2009), Sun397 (Xiao et al. 2010), and ImageNet (Deng et al. 2009)."
Dataset Splits | Yes | "We randomly split the CIFAR-10 dataset into a training set (59K images) and a test query set (1,000 images)... To make fair comparison, all the compared methods were reimplemented using the released source codes given by the corresponding authors. Specifically, we searched the best parameters carefully for each algorithm by five-fold cross-validation." (An illustrative random split is sketched below the table.)
Hardware Specification | No | "All the experiments were implemented on Matlab 2013a on a standard Windows PC with 64 GB RAM." The paper does not name the specific CPU or GPU models used.
Software Dependencies | No | "All the experiments were implemented on Matlab 2013a on a standard Windows PC with 64 GB RAM." No other software dependencies with specific version numbers are mentioned.
Experiment Setup | Yes | "For our SADIH, the parameter γ was empirically set to 0.001. The parameters α and β were tuned by cross-validation from the candidate set {0.01, 0.1, 1.0, 5, 10}. The maximum iteration number t was set to 5, which assured the best performance." (An illustrative five-fold grid search over α and β is sketched below the table.)
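
The Pseudocode row notes that the optimization alternates over B-, W-, F-, and P-steps without a printed algorithm block. Below is a minimal NumPy sketch of such a four-block alternating loop; the roles assigned to each variable (binary codes B, real-valued embedding F, regressors W and P) and every closed-form update are illustrative assumptions, not the paper's derived solutions.

```python
import numpy as np

def alternating_discrete_hashing(X, S, r, t=5, gamma=1e-3):
    """Schematic four-block alternating loop (B / W / F / P steps).

    All update rules below are generic illustrative placeholders:
      B : (n, r) binary codes, F : (n, r) real embedding with S ~ B F^T,
      W : (d, r) regression from features X to the embedding F,
      P : (d, r) out-of-sample hash projection with sign(X P) ~ B.
    """
    n, d = X.shape
    rng = np.random.default_rng(0)
    F = rng.standard_normal((n, r))               # real-valued embedding
    I_r, I_d = np.eye(r), np.eye(d)
    for _ in range(t):                            # t = 5 in the paper's setup
        # B-step: binarize the similarity-propagated embedding.
        B = np.where(S @ F >= 0, 1.0, -1.0)
        # F-step: ridge-style update so that B F^T approximates S.
        F = S.T @ B @ np.linalg.inv(B.T @ B + gamma * I_r)
        # W-step: ridge regression from features X to the embedding F.
        W = np.linalg.solve(X.T @ X + gamma * I_d, X.T @ F)
        # P-step: linear hash function for unseen queries, sign(X P) ~ B.
        P = np.linalg.solve(X.T @ X + gamma * I_d, X.T @ B)
    return B, W, P
```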
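The Dataset Splits row quotes a random 59K/1K train/query split of CIFAR-10. A minimal sketch of such a split, assuming the standard 60,000-image pool and NumPy rather than the authors' Matlab environment:

```python
import numpy as np

# Illustrative CIFAR-10-style split: 60,000 images -> 59,000 training
# images and a 1,000-image test query set, chosen uniformly at random.
n_total, n_query = 60_000, 1_000
rng = np.random.default_rng(0)
perm = rng.permutation(n_total)
query_idx, train_idx = perm[:n_query], perm[n_query:]
assert len(train_idx) == 59_000 and len(query_idx) == 1_000
```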
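The Experiment Setup row describes tuning α and β over {0.01, 0.1, 1.0, 5, 10} by five-fold cross-validation. The sketch below shows what that search could look like; `train_and_score` is a hypothetical callable standing in for training the model on one fold and returning a retrieval score (e.g. MAP) on the held-out fold.

```python
import itertools
import numpy as np

def five_fold_grid_search(X, S, train_and_score,
                          candidates=(0.01, 0.1, 1.0, 5, 10)):
    """Pick (alpha, beta) maximizing the mean validation score over 5 folds.

    `train_and_score(X_tr, S_tr, X_va, S_va, alpha, beta)` is a hypothetical
    stand-in for one train/evaluate round; it is not from the paper.
    """
    n = X.shape[0]
    folds = np.array_split(np.random.default_rng(0).permutation(n), 5)
    best, best_score = None, -np.inf
    for alpha, beta in itertools.product(candidates, repeat=2):
        scores = []
        for k in range(5):
            va = folds[k]
            tr = np.concatenate([folds[j] for j in range(5) if j != k])
            scores.append(train_and_score(X[tr], S[np.ix_(tr, tr)],
                                          X[va], S[np.ix_(va, va)],
                                          alpha, beta))
        mean_score = float(np.mean(scores))
        if mean_score > best_score:
            best, best_score = (alpha, beta), mean_score
    return best, best_score
```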