Max-Margin Invariant Features from Transformed Unlabelled Data

Authors: Dipan Pal, Ashwin Kannan, Gautam Arakalgud, Marios Savvides

NeurIPS 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "As an illustration, we design a framework for face recognition and demonstrate the efficacy of our approach on a large-scale semi-synthetic dataset with 153,000 images and a new challenging protocol on Labelled Faces in the Wild (LFW) while outperforming strong baselines."
Researcher Affiliation | Academia | Dipan K. Pal, Ashwin A. Kannan, Gautam Arakalgud, Marios Savvides, Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, {dipanp,aalapakk,garakalgud,marioss}@cmu.edu
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper contains no explicit statement about releasing source code and no link to a code repository for the described methodology.
Open Datasets | Yes | "We utilize a large-scale semi-synthetic face dataset to generate the sets T_G and X for MMIF. In this dataset, only two major transformations exist: pose variation and subject variation. All other transformations, such as illumination, translation, and rotation, are strictly and synthetically controlled. This provides a very good benchmark for face recognition, where we want to be invariant to pose variation and be discriminative for subject variation. The experiment follows the exact protocol and data as described in [10]. ... MMIF on LFW (deep features): unseen-subject protocol. ... We choose the top 500 subjects with a total of 6,300 images for training MMIF on VGG-Face features and test on the remaining subjects with 7,000 images." (On the sets T_G and X, see the orbit-pooling sketch after the table.)
Dataset Splits | No | The paper mentions training and test sets but does not explicitly describe a validation set or its split.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models or processor types) used for running its experiments.
Software Dependencies | No | The paper mentions the use of 'VGG-Face [12]' but does not provide specific version numbers for any software dependencies or libraries required to replicate the experiments.
Experiment Setup | No | The paper mentions applying MMIF to raw pixels and to deep features from a pre-trained VGG-Face network, and uses normalized cosine distance as the matching metric (see the matching sketch after the table). It refers to external protocols for data, but does not provide specific hyperparameters or system-level training settings for the MMIF method itself.
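
For context on the sets T_G and X quoted in the Open Datasets row: MMIF builds pose-invariant features by comparing an input against orbits of transformed templates (here, the same subject rendered under multiple poses). The sketch below illustrates only that generic orbit-pooling idea under stated assumptions (mean pooling of normalized inner products); it is not the authors' exact formulation, and all function names are hypothetical.

```python
import numpy as np

def invariant_feature(x, template_orbit):
    """Pool similarities of x against one template's orbit.

    x              : (d,) probe vector (raw pixels or deep features).
    template_orbit : (k, d) array whose rows are transformed versions of
                     one template, e.g. one subject under k poses (T_G).
    Pooling over the orbit makes the output change little when x itself
    is transformed, which is the source of the (approximate) invariance.
    """
    x = x / np.linalg.norm(x)
    orbit = template_orbit / np.linalg.norm(template_orbit, axis=1, keepdims=True)
    return float(np.mean(orbit @ x))  # max pooling is another common choice

def encode(x, orbits):
    """Invariant feature vector for x: one pooled similarity per orbit."""
    return np.array([invariant_feature(x, orbit) for orbit in orbits])
```

In the paper, features of this kind then feed a max-margin (SVM-style) classifier, which is the stage that gives MMIF its name; that stage is omitted from the sketch.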
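
The "normalized cosine distance" matching metric quoted in the Experiment Setup row is standard; a minimal sketch follows. The verification threshold tau is a hypothetical placeholder, not a value reported in the paper.

```python
import numpy as np

def cosine_distance(a, b):
    """Normalized cosine distance: 0 for identical directions, up to 2."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return 1.0 - float(a @ b)

def same_identity(f1, f2, tau=0.5):
    """Verification decision; tau is a hypothetical threshold, not from the paper."""
    return cosine_distance(f1, f2) < tau
```

A pair of feature vectors f1, f2 would be matched by thresholding this distance; per the report above, the paper applies such cosine matching to MMIF features computed from both raw-pixel and VGG-Face inputs.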