Large Margin Discriminant Dimensionality Reduction in Prediction Space

Authors: Mohammad Saberian, Jose Costa Pereira, Can Xu, Jian Yang, Nuno Vasconcelos

NeurIPS 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results show that this embedding can significantly improve performance on tasks such as hashing and image/scene classification.
Researcher Affiliation | Collaboration | Mohammad Saberian (Netflix, esaberian@netflix.com); Jose Costa Pereira (INESCTEC, jose.c.pereira@inesctec.pt); Can Xu (Google, canxu@google.com); Jian Yang (Yahoo Research, jianyang@yahoo-inc.com); Nuno Vasconcelos (UC San Diego, nvasconcelos@ucsd.edu)
Pseudocode | Yes | Algorithm 1 (MCBoost); Algorithm 2 (codeword boosting); Algorithm 3 (LADDER)
Open Source Code | No | The paper's only mention of implementations, "All implementations were provided by [1]", appears when comparing LADDER to classical dimensionality reduction techniques and refers to external baselines, not to the authors' own LADDER implementation. There is no statement or link indicating that the code for LADDER or the experiments is open-sourced.
Open Datasets | Yes | "This experiment was based on 2K instances from 17 different types of traffic signs in the first set of the Summer traffic sign dataset [25]." "We compare this hashing method to a number of popular techniques on CIFAR-10 [21], which contains 60K images of ten classes." For scene classification, "we selected the scene understanding pipeline... on the MIT Indoor dataset [32]."
Dataset Splits | Yes | The traffic sign experiment used "2K instances from 17 different types of traffic signs in the first set of the Summer traffic sign dataset [25], which was split into training and test set." For CIFAR-10, "Evaluation was based on the test settings of [26], using 1,000 randomly selected images. Learning was based on a random set of 2,000 images, sampled from the remaining 59K." MIT Indoor "is a dataset of 67 indoor scene categories where the standard train/test split contains 80 images for training and 20 images for testing per class."
Hardware Specification | No | The paper does not specify the hardware used to run its experiments, such as CPU models, GPU models, or memory.
Software Dependencies | No | The paper does not give version numbers for any software dependencies or libraries used in the experiments.
Experiment Setup | Yes | "The former was implemented by running MCBoost (Algorithm 1) for Nb = 200 iterations, using the optimal solution of (13) as codeword set. LADDER was implemented with Algorithm 3, using Nb = 2, Nc = 4, and Nr = 100."