Binarized Neural Networks for Resource-Efficient Hashing with Minimizing Quantization Loss
Authors: Feng Zheng, Cheng Deng, Heng Huang
IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Numerous experiments show that the proposed method can achieve fast code generation without sacrificing accuracy. [...] We validate our proposed method in two tasks: semantic retrieval and object matching. The common characteristic of these two tasks is that their application in real systems requires efficient prediction and matching. [...] In this section, CIFAR-10 and MNIST datasets are used to compare our method with the state-of-the-arts. |
| Researcher Affiliation | Collaboration | Feng Zheng (1), Cheng Deng (2) and Heng Huang (3,4): (1) Department of Computer Science and Engineering, Southern University of Science and Technology; (2) School of Electronic Engineering, Xidian University; (3) Department of Electrical and Computer Engineering, University of Pittsburgh; (4) JD Finance America Corporation |
| Pseudocode | Yes | Algorithm 1 Training binary neural networks algorithm (an illustrative training sketch is given after this table) |
| Open Source Code | No | All the proofs will be provided in an anonymous website (https://github.com/AI-2019/IJCAI2019.git). The paper explicitly states that the link is for 'proofs' and does not mention source code. |
| Open Datasets | Yes | We validate our proposed method in two tasks: semantic retrieval and object matching. [...] In this section, CIFAR-10 and MNIST datasets are used to compare our method with the state-of-the-arts. The experimental setting for CIFAR-10 and MNIST, in which label information is provided, is the same as that in [Liong et al., 2015]. |
| Dataset Splits | Yes | Generally, there are two types of dataset partitions, and the difference between them that affects MAP performance is the size of the gallery. The first setting is the same as the one in DH [Liong et al., 2015], in which, for both datasets, 1000 samples, 100 per class, are randomly selected as the query data, and the remaining samples are used as the gallery set. The second setting is the same as the one in BDNN [Do et al., 2016], in which the query set contains 10,000 samples, and the others are treated as the gallery set. (Both split settings are sketched in code after this table.) |
| Hardware Specification | No | The paper discusses model size, memory, and computation time for different architectures (AlexNet, GoogLeNet, VGG-16, DeepBit, REDH) but does not specify the particular hardware (e.g., GPU, CPU models, or RAM) used to perform their own experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. |
| Experiment Setup | Yes | In our method, the balancing parameters λ1 = 0.1, λ2 = 3.5 and λ3 = 125 are fixed in all experiments. Moreover, we use Eq. 10 to capture the special properties of the data. [...] In our model, a deep architecture based on the GoogLeNet [Szegedy et al., 2015] style Inception model is used as the structure of the convolutional neural network. The output layer is replaced by a binarizing layer which is the same as the previous activations in intermediate layers and can produce the required number of binary codes. The parameters will be tuned and updated according to the objective function in Eq. 12 for some specific tasks. (A hedged sketch of how these pieces might fit together appears after this table.) |
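
The paper's Algorithm 1 is given only as pseudocode. As a rough, non-authoritative illustration of how a binarizing output layer is commonly trained end to end, here is a minimal PyTorch-style sketch using sign binarization with a straight-through estimator (STE); the class names, the STE clipping rule, and the head structure are our assumptions and are not taken from the paper.

```python
import torch


class BinarizeSTE(torch.autograd.Function):
    """Sign binarization with a straight-through estimator (STE).

    Forward: codes = sign(x). Backward: pass gradients through where
    |x| <= 1 (a common STE choice; the paper's exact update rule may differ).
    """

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output * (x.abs() <= 1).float()


class BinaryHashHead(torch.nn.Module):
    """Hypothetical binarizing output layer: backbone features -> binary codes."""

    def __init__(self, in_dim: int, num_bits: int):
        super().__init__()
        self.fc = torch.nn.Linear(in_dim, num_bits)

    def forward(self, features):
        pre_binary = self.fc(features)          # real-valued pre-activations
        codes = BinarizeSTE.apply(pre_binary)   # {-1, +1} hash codes
        return pre_binary, codes
```

During training the real-valued pre-activations stay in the graph, so a quantization penalty between `pre_binary` and `codes` can be minimized alongside the task loss; at test time only the binary codes are kept.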
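
The two query/gallery partitions quoted in the "Dataset Splits" row can be reproduced from a label array alone. Below is a small NumPy sketch of both settings; the function names and the fixed seed are ours, and the paper does not specify how the random selection was seeded.

```python
import numpy as np


def split_queries_per_class(labels, per_class=100, seed=0):
    """Setting 1 (as in DH [Liong et al., 2015]): sample `per_class` query
    items from each class; everything else becomes the gallery."""
    rng = np.random.default_rng(seed)
    query_idx = []
    for c in np.unique(labels):
        cls_idx = np.flatnonzero(labels == c)
        query_idx.extend(rng.choice(cls_idx, size=per_class, replace=False))
    query_idx = np.asarray(query_idx)
    gallery_idx = np.setdiff1d(np.arange(len(labels)), query_idx)
    return query_idx, gallery_idx


def split_queries_random(labels, num_queries=10_000, seed=0):
    """Setting 2 (as in BDNN [Do et al., 2016]): a flat random query set of
    `num_queries` items; the remainder is the gallery."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(len(labels))
    return perm[:num_queries], perm[num_queries:]
```

For CIFAR-10 or MNIST, `labels` would be the class labels of the full dataset; setting 1 yields a 1000-item query set (100 per class), setting 2 a 10,000-item query set.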
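
The only numeric hyperparameters reported in the "Experiment Setup" row are the three balancing weights. The sketch below simply fixes those values and combines generic loss terms in a weighted sum; which regularizer each λ actually multiplies is defined by the paper's Eq. 12 and is not reproduced here, so the pairing is purely illustrative.

```python
# Balancing parameters reported as fixed across all experiments.
LAMBDA1, LAMBDA2, LAMBDA3 = 0.1, 3.5, 125.0


def combined_objective(main_term, reg_term1, reg_term2, reg_term3):
    """Weighted sum of a task loss and three regularizers.

    The assignment of each lambda to a specific term (e.g. quantization loss,
    bit balance, decorrelation) is governed by Eq. 12 in the paper and is only
    guessed at here; treat this as placeholder wiring, not the authors' method.
    """
    return (main_term
            + LAMBDA1 * reg_term1
            + LAMBDA2 * reg_term2
            + LAMBDA3 * reg_term3)
```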