BinGAN: Learning Compact Binary Descriptors with a Regularized GAN
Authors: Maciej Zieba, Piotr Semberecki, Tarek El-Gaaly, Tomasz Trzcinski
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the resulting binary image descriptors on two challenging applications, image matching and retrieval, and achieve state-of-the-art results. We conduct experiments on two benchmark datasets, Brown gray-scale patches [3] and CIFAR-10 color images [12]. The table in Fig. 1 shows the CIFAR-10 retrieval results based on the mean Average Precision (mAP) of the top 1000 returned images with respect to different bit lengths. |
| Researcher Affiliation | Collaboration | Maciej Zieba, Wroclaw University of Science and Technology, Tooploox, maciej.zieba@pwr.edu.pl; Piotr Semberecki, Wroclaw University of Science and Technology, Tooploox, piotr.semberecki@pwr.edu.pl; Tarek El-Gaaly, Voyage, tarek@voyage.auto; Tomasz Trzcinski, Warsaw University of Technology, Tooploox, t.trzcinski@ii.pw.edu.pl |
| Pseudocode | No | The paper describes the training procedure in paragraph text but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Last but not least, we release the code of the method along with the evaluation scripts to enable reproducible research. The code is available at: github.com/maciejzieba/binGAN |
| Open Datasets | Yes | We conduct experiments on two benchmark datasets, Brown gray-scale patches [3] and CIFAR-10 color images [12]. [3] M. Brown, G. Hua, and S. Winder. Discriminative learning of local image descriptors. TPAMI, 33(1):43–57, 2011. [12] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. 2009. |
| Dataset Splits | Yes | The data is split into training and test sets according to the provided ground truth, with 50,000 training pairs (25,000 matched and 25,000 non-matched pairs) and 10,000 test pairs (5,000 matched and 5,000 non-matched pairs), respectively. The CIFAR-10 dataset has 10 categories, each composed of 6,000 color images with a resolution of 32×32. The whole dataset has 50,000 training and 10,000 testing images. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments, such as GPU models, CPU models, or memory specifications. It only describes model architectures and parameter settings. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies or libraries used in the implementation or experimentation (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | There are 4 hyperparameters in our method: γ, β and regularization parameters λDMR, λBRE. In all our experiments, we fix the parameters to: λDMR = 0.05, λBRE = 0.01, γ = 0.001 and β = 0.5. The hyperparameter γ controls the softness of the sign(·) function and the value was set according to suggestions provided in [5]; therefore, additional tuning was not needed. The value of the scaling parameter β was set according to prior assumptions based on the analysis of the impact of the scaling factor for the Laplace distribution. We scale the distances by the number of units (M), therefore the value of β can be constant across various applications. The values of the regularization terms λBRE and λDMR were fixed empirically following the methodology provided in [6]. |
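The evaluation metric quoted in the Research Type row, mean Average Precision over the top 1000 retrieved images, can be sketched as below. This is a minimal illustrative implementation, not the authors' released evaluation script; the function names and the assumption that retrieval is ranked by ascending Hamming distance over the binary descriptors are ours.

```python
import numpy as np

def average_precision_at_k(retrieved_labels, query_label, k=1000):
    """AP over the top-k retrieved items for a single query.

    `retrieved_labels` holds the class labels of the database items,
    already sorted by ascending distance to the query descriptor.
    """
    retrieved = np.asarray(retrieved_labels[:k])
    relevant = (retrieved == query_label).astype(float)
    if relevant.sum() == 0:
        return 0.0
    # Precision at each rank, counted only where a relevant item occurs.
    cum_relevant = np.cumsum(relevant)
    ranks = np.arange(1, len(retrieved) + 1)
    precisions = cum_relevant / ranks
    return float((precisions * relevant).sum() / relevant.sum())

def mean_average_precision(distances, db_labels, query_labels, k=1000):
    """mAP over all queries given a (num_queries x num_db) distance matrix,
    e.g. pairwise Hamming distances between binary codes."""
    order = np.argsort(distances, axis=1)  # nearest database items first
    aps = [average_precision_at_k(db_labels[order[i]], query_labels[i], k)
           for i in range(len(query_labels))]
    return float(np.mean(aps))
```

For CIFAR-10, `db_labels` and `query_labels` would be the 10 class labels of the database and query images, and `k=1000` reproduces the mAP@1000 setting mentioned in the paper.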