Natural Supervised Hashing

Authors: Qi Liu, Hongtao Lu

IJCAI 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In our experiment, training 16-bit and 96-bit code on NUS-WIDE cost respectively only 3 and 6 minutes. ... We evaluate our method on 4 datasets: NUS-WIDE, IAPRTC-12, ESPGAME, MIRFLICKR25K. For NUS-WIDE [Chua et al., 2009], we use the 500-dimensional bag-of-words vectors provided by the authors. For the other three, we use the 512-dimensional GIST features provided by [Guillaumin et al., 2010]. Each dataset is split into a database and a query set. Statistics of datasets are given in table 1.
Researcher Affiliation | Academia | Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering, Department of Computer Science and Engineering, Shanghai Jiao Tong University, P.R. China; luoguliu@gmail.com, lu-ht@cs.sjtu.edu.cn
Pseudocode | No | No pseudocode or algorithm blocks are explicitly labeled or provided in a structured format within the paper.
Open Source Code | No | The paper does not provide any links to open-source code or explicitly state that the code for its method is available.
Open Datasets | Yes | We evaluate our method on 4 datasets: NUS-WIDE, IAPRTC-12, ESPGAME, MIRFLICKR25K. For NUS-WIDE [Chua et al., 2009], we use the 500-dimensional bag-of-words vectors provided by the authors. For the other three, we use the 512-dimensional GIST features provided by [Guillaumin et al., 2010]. ... Footnoted dataset URLs: IAPRTC-12: http://www.imageclef.org/photodata; ESPGAME: http://www.hunch.net/~jl/; MIRFLICKR25K: http://press.liacs.nl/mirflickr/
Dataset Splits | No | The paper mentions splitting each dataset into a 'database' and a 'query' set for evaluation and sampling 5000 points from the databases to train some methods. However, it does not explicitly specify a distinct 'validation' set or its size/split for hyperparameter tuning or early stopping, only 'training' and 'query' (testing) sets. (An illustrative split sketch is given after this table.)
Hardware Specification | Yes | All our results are obtained on a laptop with Intel Core i5-3210M 2.50 GHz and 12 GB RAM.
Software Dependencies | No | The paper mentions using "Pegasos [Shalev-Shwartz et al., 2011] as our SVM" but does not provide specific version numbers for any software dependencies or libraries. (A generic Pegasos sketch is given after this table.)
Experiment Setup | Yes | In experiments, we give NSH 50 training iterations and set λ = 10^4 for all cases. ... The kernel we choose is RBF(x, y) = exp(-||x - y||/σ^2), where σ is set to an appropriate value. 1000 anchors are sampled from databases for all datasets except MIRFLICKR25K, for which we use only 500 anchors considering it is not large-scale and contains less labels. (An anchor-kernel sketch is given after this table.)
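
The "Dataset Splits" row above refers to a database/query split with 5000 training points sampled from the database. A minimal sketch of such a split, assuming numpy arrays X (features) and Y (labels); the function name, the seed, and the default sizes are illustrative and not taken from the paper:

```python
import numpy as np

def database_query_split(X, Y, n_query, n_train=5000, seed=0):
    """Split a dataset into a retrieval database and a query set,
    then sample a training subset from the database. Sizes are
    illustrative; the paper only states a database/query split and
    5000 sampled training points for some methods."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(X.shape[0])
    query_idx, db_idx = perm[:n_query], perm[n_query:]
    train_idx = rng.choice(db_idx, size=n_train, replace=False)
    return ((X[db_idx], Y[db_idx]),
            (X[query_idx], Y[query_idx]),
            (X[train_idx], Y[train_idx]))
```

For NUS-WIDE, for example, X would hold the 500-dimensional bag-of-words vectors and Y the tag matrix; the actual query-set sizes used per dataset are given in the paper's table 1.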
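
The only named dependency is Pegasos, the stochastic sub-gradient SVM solver of Shalev-Shwartz et al. (2011). As a reminder of what that solver does (this is the generic Pegasos update for a linear binary SVM, not code released by the authors; lam and n_iter are illustrative defaults):

```python
import numpy as np

def pegasos_train(X, y, lam=1e-4, n_iter=100000, seed=0):
    """Generic Pegasos: stochastic sub-gradient descent on the
    L2-regularized hinge loss. X: (n, d) features, y: (n,) labels in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, n_iter + 1):
        i = rng.integers(n)
        eta = 1.0 / (lam * t)            # step size 1/(lambda * t)
        margin = y[i] * (X[i] @ w)
        w *= (1.0 - eta * lam)           # shrinkage from the regularizer
        if margin < 1.0:                 # hinge sub-gradient is active
            w += eta * y[i] * X[i]
        # optional projection onto the ball of radius 1/sqrt(lambda)
        norm = np.linalg.norm(w)
        radius = 1.0 / np.sqrt(lam)
        if norm > radius:
            w *= radius / norm
    return w
```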
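
The "Experiment Setup" row describes mapping inputs through an RBF kernel against anchors sampled from the database (1000 anchors, or 500 for MIRFLICKR25K). A minimal sketch of that anchor-feature construction, following the kernel as reconstructed in the quote; the function names and the choice of distance form are assumptions, not the authors' code:

```python
import numpy as np

def sample_anchors(X_db, n_anchors=1000, seed=0):
    """Sample anchor points uniformly from the database
    (1000 anchors, or 500 for MIRFLICKR25K, per the quoted setup)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(X_db.shape[0], size=n_anchors, replace=False)
    return X_db[idx]

def rbf_anchor_features(X, anchors, sigma):
    """Kernel feature map K[i, j] = exp(-||x_i - a_j|| / sigma^2),
    matching the kernel as quoted above; swap in a squared distance
    if the paper's original formula squares the norm."""
    # pairwise Euclidean distances between rows of X and the anchors
    d2 = (np.sum(X**2, axis=1)[:, None]
          + np.sum(anchors**2, axis=1)[None, :]
          - 2.0 * X @ anchors.T)
    dist = np.sqrt(np.maximum(d2, 0.0))
    return np.exp(-dist / sigma**2)
```

The paper only says σ "is set to an appropriate value"; a common heuristic (not stated in the paper) is to use the mean distance between training points and anchors.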