Greedy Hash: Towards Fast Optimization for Accurate Hash Coding in CNN
Authors: Shupeng Su, Chao Zhang, Kai Han, Yonghong Tian
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on benchmark datasets show that our scheme outperforms state-of-the-art hashing methods in both supervised and unsupervised tasks. We evaluate the efficacy of our proposed Greedy Hash in this section and the source code is available at: https://github.com/ssppp/GreedyHash. Section 3 comprises: 3.1 Datasets; 3.2 Implementation details; 3.3 Comparison on fast optimization; 3.4 Comparison on accurate coding. |
| Researcher Affiliation | Collaboration | Shupeng Su (1), Chao Zhang (1), Kai Han (1,3), Yonghong Tian (1,2). (1) Key Laboratory of Machine Perception (MOE), School of EECS, Peking University; (2) National Engineering Laboratory for Video Technology, School of EECS, Peking University; (3) Huawei Noah's Ark Lab |
| Pseudocode | Yes | Our method has been summarized in Algorithm 1 (Greedy Hash): prepare training set X and neural network F_Θ, where Θ denotes the parameters of the network; then repeat until convergence: (1) H = F_Θ(X), B = sgn(H) [forward propagation of our hash layer]; (2) calculate the loss function Loss = L(B) + α‖H − sgn(H)‖_p^p, where L can be any learning function such as the cross-entropy loss; (3) calculate ∂Loss/∂B [backward propagation of our hash layer]; (4) calculate ∂Loss/∂H = ∂L/∂B + α·∂‖H − sgn(H)‖_p^p/∂H = ∂L/∂B + αp(H − sgn(H))^(p−1); (5) calculate ∂Loss/∂Θ and update the whole network's parameters. (A PyTorch sketch of this layer appears after the table.) |
| Open Source Code | Yes | We evaluate the efficacy of our proposed Greedy Hash in this section and the source code is available at: https://github.com/ssppp/GreedyHash. |
| Open Datasets | Yes | CIFAR-10: The CIFAR-10 dataset [14] consists of 60,000 32×32 color images in 10 classes. ... ImageNet: ImageNet [25], which consists of 1,000 classes, is a benchmark image set for object category classification and detection in the Large Scale Visual Recognition Challenge (ILSVRC). |
| Dataset Splits | No | For CIFAR-10, the paper defines training, query, and database sets, but no explicit validation set split for hyperparameter tuning. For ImageNet, it states 'use all the images in the validation set as the queries', indicating the validation set is used as the test/query set rather than a typical validation split for training. |
| Hardware Specification | No | The paper states 'Our model is implemented with Pytorch [23] framework' but does not provide any specific details about the hardware used for experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper mentions 'Our model is implemented with Pytorch [23] framework' but does not specify the version number of Pytorch or any other software dependencies. |
| Experiment Setup | Yes | Our model is implemented with the Pytorch [23] framework. We set the batch size to 32 and use SGD as the optimizer with a weight decay of 0.0005 and a momentum of 0.9. For supervised experiments we use 0.001 as the initial learning rate while for unsupervised experiments we use 0.0001, and we divide both by 10 when the loss stops decreasing. In addition, we cross-validate the hyper-parameters α and p in the penalty term α‖H − sgn(H)‖_p^p, which are finally fixed at p = 3 and α = 0.1 · 1/(N·K) for CIFAR-10 (the 1/(N·K) factor removes the impact of varying encoding length and input size), while for ImageNet α = 1 · 1/(N·K). (A sketch of this configuration appears after the table.) |
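
Algorithm 1 maps directly onto a custom autograd function: a `sgn` forward with a straight-through backward, plus the ‖H − sgn(H)‖_p^p penalty added to the task loss. Below is a minimal PyTorch sketch under that reading; `GreedySign`, `greedy_hash_loss`, and the `backbone`/`classifier`/`criterion` placeholders are illustrative names of ours, not the authors' released implementation (see https://github.com/ssppp/GreedyHash for that).

```python
import torch


class GreedySign(torch.autograd.Function):
    """Hash layer of Algorithm 1: B = sgn(H) forward, greedy backward."""

    @staticmethod
    def forward(ctx, h):
        # Forward propagation of the hash layer. Note torch.sign maps exact
        # zeros to 0; a tie-break could be added if strictly binary codes
        # are required.
        return torch.sign(h)

    @staticmethod
    def backward(ctx, grad_output):
        # Greedy/straight-through step: dLoss/dH receives dLoss/dB unchanged.
        # The penalty's own gradient is contributed by autograd via the loss.
        return grad_output


def greedy_hash_loss(task_loss, h, alpha, p=3):
    """Loss = L(B) + alpha * ||H - sgn(H)||_p^p, with the penalty on H.

    Autograd supplies the penalty gradient alpha * p * |H - sgn(H)|^(p-1)
    * sgn(H - sgn(H)) with respect to H, matching step (4) of Algorithm 1.
    """
    penalty = (h - torch.sign(h)).abs().pow(p).sum()
    return task_loss + alpha * penalty
```

A hypothetical training step then reads:

```python
h = backbone(images)                            # real-valued codes H (N x K)
b = GreedySign.apply(h)                         # binary codes B
task_loss = criterion(classifier(b), labels)    # e.g. cross-entropy loss
loss = greedy_hash_loss(task_loss, h, alpha=0.1 / h.numel())  # 1/(N*K) scaling
loss.backward()
```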
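The quoted training configuration also translates almost line for line into PyTorch. Here is a sketch under the reported numbers; `model` and the 48-bit code length K are placeholders of ours, and mapping "divide by 10 when the loss stops decreasing" onto `ReduceLROnPlateau` is an assumption (the paper does not name a scheduler).

```python
import torch

# Batch size 32; SGD with momentum 0.9 and weight decay 5e-4.
optimizer = torch.optim.SGD(
    model.parameters(),
    lr=1e-3,            # supervised runs; the paper uses 1e-4 unsupervised
    momentum=0.9,
    weight_decay=5e-4,
)

# "Divide by 10 when the loss stops decreasing":
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.1)

# Penalty hyper-parameters: p = 3 and alpha scaled by 1/(N*K).
N, K = 32, 48           # K = 48 is an illustrative code length
alpha = 0.1 / (N * K)   # CIFAR-10; the paper sets alpha = 1/(N*K) on ImageNet
p = 3
```

After each epoch one would call `scheduler.step(epoch_loss)` so the plateau detection tracks the training loss.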