An ensemble diversity approach to supervised binary hashing

Authors: Miguel Á. Carreira-Perpiñán, Ramin Raziperchikolaei

NeurIPS 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Section 3 gives evidence with image retrieval datasets that this simple approach indeed works very well, and section 4 further discusses the connection between hashing and ensembles." "We use the following labeled datasets (all using the Euclidean distance in feature space): (1) CIFAR [19]... (2) Infinite MNIST [29]."
Researcher Affiliation | Academia | Miguel Á. Carreira-Perpiñán, EECS, University of California, Merced (mcarreira-perpinan@ucmerced.edu); Ramin Raziperchikolaei, EECS, University of California, Merced (rraziperchikolaei@ucmerced.edu)
Pseudocode | No | The paper describes its algorithms in text (e.g., "min-cut algorithm (as implemented in [4])") but contains no structured pseudocode or explicitly labeled algorithm block.
Open Source Code | No | The paper makes no explicit statement about releasing source code and gives no link to a code repository for the described methodology.
Open Datasets | Yes | "We use the following labeled datasets (all using the Euclidean distance in feature space): (1) CIFAR [19] contains 60 000 images in 10 classes... (2) Infinite MNIST [29]. We generated, using elastic deformations of the original MNIST handwritten digit dataset, 1 000 000 images for training and 2 000 for test, in 10 classes." (A sketch of such elastic deformations appears below the table.)
Dataset Splits | No | The paper specifies training and test splits for CIFAR (58,000 training, 2,000 test) and Infinite MNIST (1,000,000 training, 2,000 test), but does not describe a separate validation split used for model selection or early stopping in the main experiments.
Hardware Specification | No | The paper mentions training "in a single processor" but gives no specifics about the hardware used for the experiments, such as CPU/GPU models or memory.
Software Dependencies | No | The paper mentions using LIBLINEAR [12] but does not provide version numbers for any software dependency.
Experiment Setup | Yes | "We use linear and kernel SVMs as hash functions. Without loss of generality (see later), we use the Laplacian objective (1), which for a single bit takes the form $E(\mathbf{z}) = \sum_{n,m=1}^{N} y_{nm}\,(z_n - z_m)^2$, $z_n = h(\mathbf{x}_n) \in \{-1, 1\}$, $n = 1, \dots, N$ (2). To optimize it, we use a two-step approach... We train the hash functions in a subset of 5 000 points of the training set... As hash functions (for each bit), we use linear SVMs (trained with LIBLINEAR [12]) and kernel SVMs (with 500 basis functions centered at a random subset of training points)." (A sketch of this two-step, per-bit training appears below the table.)
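
The Infinite MNIST set quoted in the Open Datasets row is built by elastically deforming MNIST digits [29]. The paper does not include its generation code, so the following is a minimal sketch of one standard elastic-deformation recipe (in the style of Simard et al.); the function name and the alpha/sigma values are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_deform(image, alpha=8.0, sigma=3.0, rng=None):
    """Randomly deform a 2-D image (e.g., a 28x28 MNIST digit).
    alpha scales the displacement field and sigma smooths it; both
    values are illustrative, not taken from the paper."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape
    # Smooth random displacement fields for rows and columns.
    dy = gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma) * alpha
    dx = gaussian_filter(rng.uniform(-1, 1, (h, w)), sigma) * alpha
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Bilinear resampling at the displaced coordinates.
    return map_coordinates(image, [rows + dy, cols + dx],
                           order=1, mode="reflect")

# Usage: grow a training set the way Infinite MNIST does, by emitting
# fresh deformations of each original digit (mnist_train is assumed to
# be an available (N, 28, 28) array).
# augmented = np.stack([elastic_deform(d) for d in mnist_train[:1000]])
```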
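
The Experiment Setup row outlines the method: each bit independently minimizes the single-bit Laplacian objective (2) via a two-step approach (optimize the binary codes, then fit a classifier to reproduce them), with every hash function trained on its own subset of points. The sketch below follows that outline under stated assumptions: step 1 uses a simple spectral relaxation rather than the paper's min-cut optimizer, scikit-learn's LinearSVC (which wraps LIBLINEAR, the solver the paper cites) stands in for the paper's SVM training, and the subset size, bit count, and C value are illustrative.

```python
import numpy as np
from scipy.sparse.linalg import eigsh
from sklearn.svm import LinearSVC

def train_one_bit(X, labels):
    """Two-step training of one hash bit on one data subset (assumed
    to contain at least two classes).
    Step 1: minimize E(z) = sum_{n,m} y_nm (z_n - z_m)^2 over binary
    codes z, here via a spectral relaxation (NOT the paper's min-cut
    optimizer). Step 2: fit an SVM to reproduce the optimized codes."""
    # Label-based similarities: +1 for same-class pairs, -1 otherwise.
    Y = np.where(labels[:, None] == labels[None, :], 1.0, -1.0)
    np.fill_diagonal(Y, 0.0)
    # E(z) = 2 z^T L z with L = D - Y; relax z to real values, take the
    # eigenvector with the smallest eigenvalue, then threshold its signs.
    L = np.diag(Y.sum(axis=1)) - Y
    _, v = eigsh(L, k=1, which="SA")
    z = np.where(v[:, 0] >= 0, 1, -1)
    # LinearSVC wraps LIBLINEAR; C=1.0 is an illustrative choice.
    return LinearSVC(C=1.0).fit(X, z)

def train_hash(X, labels, n_bits=32, subset_size=5000, seed=0):
    """Train n_bits independent hash functions, each on its own random
    subset of the training set (the source of ensemble diversity)."""
    rng = np.random.default_rng(seed)
    bits = []
    for _ in range(n_bits):
        idx = rng.choice(len(X), size=min(subset_size, len(X)),
                         replace=False)
        bits.append(train_one_bit(X[idx], labels[idx]))
    return bits

def encode(bits, X):
    """Binary codes in {-1, +1}: one column per trained bit."""
    return np.stack([b.predict(X) for b in bits], axis=1)
```

Because every bit is trained on a different random subset, the single-bit classifiers disagree in exactly the way an ensemble needs: each one optimizes the same objective (2), yet the resulting bits differ, which is the diversity mechanism the paper's title refers to.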