Neurons Merging Layer: Towards Progressive Redundancy Reduction for Deep Supervised Hashing

Authors: Chaoyou Fu, Liangchen Song, Xiang Wu, Guoli Wang, Ran He

IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on four datasets demonstrate that our proposed method outperforms state-of-the-art hashing methods. |
| Researcher Affiliation | Collaboration | NLPR & CRIPAC, Institute of Automation, Chinese Academy of Sciences; University of Chinese Academy of Sciences; Center for Excellence in Brain Science and Intelligence Technology, CAS; Horizon Robotics |
| Pseudocode | No | The paper describes the methods and processes in narrative text and with diagrams (Figure 2, Figure 3), but it does not include any explicit pseudocode blocks or algorithm listings. |
| Open Source Code | No | The paper does not provide any explicit statement about making the source code available, nor does it include a link to a code repository. |
| Open Datasets | Yes | We evaluate our method on four datasets, including CIFAR-10 [Krizhevsky and Hinton, 2009], NUS-WIDE [Chua et al., 2009], MS-COCO [Lin et al., 2014b] and Clothing1M [Xiao et al., 2015]. |
| Dataset Splits | Yes | The division of CIFAR-10 and NUS-WIDE follows [Li et al., 2016], and the division of MS-COCO and Clothing1M follows [Jiang and Li, 2018] and [Jiang et al., 2018], respectively. In addition, since a validation set is needed to calculate neuron scores in the active phase, the original training set is split into two parts: a new training set and a validation set. The validation sets of CIFAR-10, NUS-WIDE, MS-COCO and Clothing1M contain 200, 420, 400 and 280 samples, respectively (see the split sketch below the table). |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments (e.g., GPU models, CPU types, or cloud instance specifications). |
| Software Dependencies | No | The paper mentions using a "CNN-F network [Chatfield et al., 2014]" and "Stochastic Gradient Descent (SGD)" but does not specify software versions for any libraries, frameworks, or programming languages (e.g., PyTorch, TensorFlow, Python version). |
| Experiment Setup | Yes | The parameters in our algorithm are experimentally set as follows: the number of neurons B_in in the hashing layer is set to 60; the number of edges truncated per step, m, is set to 4; during training, the batch size is 128 and the backbone network is optimized with SGD using a 10^-4 learning rate and a 10^-5 weight decay; the learning rate of the NMLayer and the hyper-parameter η in Eq. (7) are set to 10^-2 and 1200, respectively; and the parameters N0 and N1 are set to 5 and 40, respectively (collected in the configuration sketch below the table). |
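
The Dataset Splits row reports only the resulting validation-set sizes, not how the split is drawn. The sketch below is a minimal illustration of carving such a validation set out of the original training indices, assuming a uniform random split; the function name, the `VAL_SIZES` mapping and the seed handling are our own and are not taken from the paper.

```python
import random

# Validation-set sizes reported in the paper, per dataset.
VAL_SIZES = {
    "cifar10": 200,
    "nuswide": 420,
    "mscoco": 400,
    "clothing1m": 280,
}

def split_train_val(train_indices, dataset_name, seed=0):
    """Carve a small validation set (used to score neurons in the active
    phase) out of the original training indices.

    The split strategy (uniform random, without replacement) is an
    assumption; the paper only reports the resulting sizes.
    """
    rng = random.Random(seed)
    indices = list(train_indices)
    rng.shuffle(indices)
    n_val = VAL_SIZES[dataset_name]
    return indices[n_val:], indices[:n_val]  # new training set, validation set

# Example: split 5,000 CIFAR-10 training indices into 4,800 / 200.
new_train, val = split_train_val(range(5000), "cifar10")
print(len(new_train), len(val))  # 4800 200
```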
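
The Experiment Setup row lists the reported hyper-parameters; the following sketch gathers them in one place and wires the two learning rates into SGD parameter groups, assuming a PyTorch implementation (the paper does not name a framework). The `backbone` and `hashing_layer` modules are placeholders standing in for CNN-F and the NMLayer, not reproductions of them, and all identifiers are our own.

```python
import torch
import torch.nn as nn

# Hyper-parameters as reported in the paper (identifiers are ours).
B_IN = 60          # number of neurons in the hashing layer
M_TRUNCATE = 4     # number of edges truncated per merging step
BATCH_SIZE = 128
ETA = 1200         # hyper-parameter eta in Eq. (7)
N0, N1 = 5, 40     # schedule parameters of the merging procedure

# Placeholder modules: CNN-F and the NMLayer are not reproduced here.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 4096), nn.ReLU())
hashing_layer = nn.Linear(4096, B_IN)

# SGD with a 1e-4 learning rate and 1e-5 weight decay for the backbone,
# and a 1e-2 learning rate for the NMLayer parameters (its weight decay
# is not reported, so it is left at the default of zero).
optimizer = torch.optim.SGD(
    [
        {"params": backbone.parameters(), "lr": 1e-4, "weight_decay": 1e-5},
        {"params": hashing_layer.parameters(), "lr": 1e-2},
    ],
    lr=1e-4,
)
```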