Optimizing affinity-based binary hashing using auxiliary coordinates

Authors: Ramin Raziperchikolaei, Miguel Á. Carreira-Perpiñán

NeurIPS 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Compared to this, our optimization is guaranteed to obtain better hash functions while being not much slower, as demonstrated experimentally in various supervised datasets."
Researcher Affiliation | Academia | Ramin Raziperchikolaei, EECS, University of California, Merced (rraziperchikolaei@ucmerced.edu); Miguel Á. Carreira-Perpiñán, EECS, University of California, Merced (mcarreira-perpinan@ucmerced.edu)
Pseudocode | No | "The supplementary material gives the overall MAC algorithm to learn a hash function by optimizing an affinity-based loss function."
Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository for the methodology described.
Open Datasets | Yes | "(1) CIFAR [13] contains 60 000 images in 10 classes. We use D = 320 GIST features [23] from each image. We use 58 000 images for training and 2 000 for test. (2) Infinite MNIST [20]. We generated, using elastic deformations of the original MNIST handwritten digit dataset, 1 000 000 images for training and 2 000 for test, in 10 classes."
Dataset Splits | Yes | "(1) CIFAR [13] contains 60 000 images in 10 classes. ... We use 58 000 images for training and 2 000 for test. (2) Infinite MNIST [20]. ... We generated ... 1 000 000 images for training and 2 000 for test, in 10 classes. We train the hash functions in a subset of 10 000 points of the training set, and report precision and recall by searching for a test query on the entire dataset (the base set)." (These splits are sketched in code after the table.)
Hardware Specification | No | "The runtime per iteration for our 10 000-point training sets with b = 48 bits and κ+ = 100 and κ− = 500 neighbors in a laptop is 2′ for both MACcut and MACquad." (About two minutes per iteration; a laptop is mentioned, but no CPU, memory, or GPU details are given.)
Software Dependencies | No | "As hash functions (for each bit), we use linear SVMs (trained with LIBLINEAR [9]) and kernel SVMs (with 500 basis functions)." (A library is named but no version; a per-bit SVM sketch follows the table.)
Experiment Setup | Yes | "We use the following schedule for the penalty parameter µ in the MAC algorithm (regardless of the hash function type or dataset). We initialize Z with µ = 0, i.e., the result of quad or cut. Starting from µ1 = 0.3 (MACcut) or 0.01 (MACquad), we multiply µ by 1.4 after each iteration (Z and h step)." (The schedule is sketched in code below.)
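
The Dataset Splits row gives concrete numbers for CIFAR: 58 000 training and 2 000 test images, a 10 000-point training subset for learning the hash function, and retrieval measured against the full training set. As a rough aid to reproduction, here is a minimal NumPy sketch of those splits; the random matrix is only a stand-in for the D = 320 GIST features, since the paper released no preprocessing code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for CIFAR's 60 000 images as D = 320 GIST features
# (the paper's actual feature extraction is not reproduced here).
X = rng.standard_normal((60000, 320))

# 58 000 training / 2 000 test images, as stated in the paper.
idx = rng.permutation(len(X))
X_train, X_test = X[idx[:58000]], X[idx[58000:]]

# Hash functions are trained on a 10 000-point subset of the training
# set; precision/recall are then measured by searching each test query
# against the entire training set (the "base set").
subset = rng.choice(len(X_train), size=10000, replace=False)
X_subset = X_train[subset]   # used to learn the hash function
base_set = X_train           # searched at query time
```

The same pattern applies to Infinite MNIST with 1 000 000 training and 2 000 test images.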
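The Software Dependencies row names LIBLINEAR for the per-bit linear SVMs but gives no version. Below is a hedged sketch of that per-bit setup using scikit-learn's LinearSVC, which wraps LIBLINEAR, as a stand-in; the features and binary codes are synthetic placeholders, not outputs of the paper's Z step.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
b = 48                                    # code length in bits, as in the paper
X = rng.standard_normal((1000, 320))      # placeholder training features
Z = rng.integers(0, 2, size=(1000, b))    # placeholder binary codes (Z-step output)

# One linear SVM per bit: the j-th SVM learns to predict the j-th bit.
svms = [LinearSVC(dual=True).fit(X, Z[:, j]) for j in range(b)]

def hash_codes(Xq):
    """Apply the b learned hyperplanes to map inputs to binary codes."""
    return np.stack([s.predict(Xq) for s in svms], axis=1).astype(np.uint8)

codes = hash_codes(X[:5])   # e.g. a (5, 48) array of 0/1 codes
```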
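The Experiment Setup row fully specifies the penalty schedule, so it can be transcribed directly. In the sketch below, only the initial value (µ1 = 0.3 for MACcut, 0.01 for MACquad) and the 1.4 growth factor come from the quote; the iteration count is arbitrary, and the Z and h steps are placeholders rather than the paper's optimizers.

```python
def mu_schedule(mu1, factor=1.4, n_iters=20):
    """Yield mu_1, mu_1*1.4, mu_1*1.4**2, ... (geometric schedule)."""
    mu = mu1
    for _ in range(n_iters):
        yield mu
        mu *= factor

# Z is initialized with mu = 0, i.e., the plain quad/cut solution; then:
for mu in mu_schedule(mu1=0.3):      # MACcut; MACquad uses mu1 = 0.01
    # Placeholder for one MAC iteration: a Z step (re-optimize the binary
    # codes under penalty mu) followed by an h step (refit the hash function).
    pass
```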