Class-Wise Supervised Hashing with Label Embedding and Active Bits

Authors: Long-Kai Huang, Sinno Jialin Pan

IJCAI 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results verify the superior effectiveness of our proposed method over other baseline hashing methods. The datasets we employ to test the performance of the proposed CSH method include the Animals with Attributes dataset (AwA) and the CIFAR-100 dataset [Krizhevsky, 2009].
Researcher Affiliation | Academia | Long-Kai Huang and Sinno Jialin Pan, Nanyang Technological University, Singapore. lhuang018@e.ntu.edu.sg, sinnopan@ntu.edu.sg
Pseudocode | No | The paper describes its optimization approach in detail with mathematical formulations (e.g., in Section 4, 'Optimization') but does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any statements or links indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | The datasets we employ to test the performance of the proposed CSH method include the Animals with Attributes dataset (AwA) and the CIFAR-100 dataset [Krizhevsky, 2009]. Footnote 2: http://attributes.kyb.tuebingen.mpg.de/
Dataset Splits | No | The paper describes training and query (test) sets: 'On this dataset, 5% of data (i.e. 1,457 instances) are randomly picked up as queries, and the remained instances form a training set. CIFAR-100... 58K instances are randomly selected from the whole set to comprise a training set while the remained 2K instances are used as queries.' However, it does not explicitly mention a separate validation set or details for a validation split.
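The random query/training split quoted above can be sketched as follows. This is an illustrative reconstruction, not code from the paper; the function name and seed are assumptions.

```python
import numpy as np

def random_query_split(n_instances, n_query, seed=0):
    # Shuffle all instance indices, reserve n_query of them as queries,
    # and let the remaining instances form the training set.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_instances)
    return idx[n_query:], idx[:n_query]  # (training indices, query indices)

# CIFAR-100-style split: 58K training instances, 2K queries out of 60K total.
train_idx, query_idx = random_query_split(60_000, 2_000)
```

For the AwA split, the paper instead fixes the query fraction (5% of the data, i.e. 1,457 instances) rather than an absolute count.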
Hardware Specification | No | The paper mentions 'With the computational resources we have, we can generate an instance-pairwise similarity matrix of around 30K instances.' but does not specify any hardware details such as CPU or GPU models or memory.
Software Dependencies | No | The paper does not provide specific ancillary software details, such as library names with version numbers (e.g., 'Python 3.8', 'PyTorch 1.9').
Experiment Setup | Yes | The paper specifies its parameter settings: the optimization problem (2) for CSH involves three parameters, namely two tradeoff parameters that balance the impact of the three terms in the objective (the first is set to n/L² and β to 1) and the sparsity parameter m, which is set to m = (3/4)r, where r is the code length. W is initialized with the same strategy as in LSH; to avoid bias in initialization, CSH and LSH are run 10 times with different random initializations on W, and the averaged results are reported.