Characterizing Bias in Classifiers using Generative Models

Authors: Daniel McDuff, Shuang Ma, Yale Song, Ashish Kapoor

NeurIPS 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "4 Experiments and Results"
Researcher Affiliation | Collaboration | Daniel McDuff, Yale Song and Ashish Kapoor, Microsoft, Redmond, WA, USA ({damcduff,yalesong,akapoor}@microsoft.com); Shuang Ma, SUNY Buffalo, Buffalo, NY (shuangma@buffalo.edu)
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | "We release the nationality data, model and code to accompany the image data used in this paper (see the supplementary material)."
Open Datasets | Yes | "We use the MS-CELEB-1M (Guo et al., 2016) dataset for our experimentation."
Dataset Splits | No | The paper mentions sampling images for training and testing, and describes how it curated a subset of MS-CELEB-1M, but it does not provide the specific train/validation/test splits (e.g., percentages or exact counts) needed to reproduce the data partitioning.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types, memory amounts) used for running its experiments.
Software Dependencies | No | The paper mentions software like the NLTK library and implies the use of frameworks for GANs, but it does not provide specific version numbers for these dependencies (e.g., Python 3.8, NLTK 3.x).
Experiment Setup | Yes | "We ran our sampling procedure for a fixed number of iterations (400) in each trial... The above composite function is a convex combination of the misclassification cost with a term that encourages discovering new solutions θ that are diverse from the set of previously found examples Θi... We found the results were not sensitive to the size of this set and fixed it to 50 in our main experiments. In this work, we follow the recommendations in (Snoek et al., 2012), and model the composite function via a GP with a Radial Basis Function (RBF) kernel, and use Expected Improvement (EI) as an acquisition function."
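
The setup quoted above describes Bayesian optimization over the generator's parameters θ: a GP surrogate with an RBF kernel, Expected Improvement as the acquisition function, 400 iterations per trial, and an objective that is a convex combination of the misclassification cost and a diversity term computed against up to 50 previously found examples. The sketch below is a minimal, hypothetical illustration of that loop, not the authors' released code; the names `misclassification_cost`, the candidate sampling range, the mixing weight `lam`, and the "failure found" threshold are all assumptions introduced for illustration.

```python
# Hypothetical sketch of the Bayesian-optimization sampling loop described above.
# `misclassification_cost(theta)` is a placeholder for rendering a face from theta
# with the paper's generator and scoring the target classifier's error.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF


def expected_improvement(gp, candidates, y_best):
    """Expected Improvement acquisition (we maximize the composite objective)."""
    mu, sigma = gp.predict(candidates, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - y_best) / sigma
    return (mu - y_best) * norm.cdf(z) + sigma * norm.pdf(z)


def composite_objective(theta, found_thetas, misclassification_cost, lam=0.5):
    """Convex combination of the misclassification cost and a diversity term
    that rewards distance from the set of previously found examples."""
    cost = misclassification_cost(theta)
    if not found_thetas:
        diversity = 0.0
    else:
        diversity = np.min(np.linalg.norm(np.asarray(found_thetas) - theta, axis=1))
    return lam * cost + (1.0 - lam) * diversity


def run_trial(misclassification_cost, dim, n_iters=400, max_found=50, seed=0):
    """One trial: 400 iterations, keeping at most 50 previously found examples."""
    rng = np.random.default_rng(seed)
    gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True)
    X, y, found = [], [], []
    for _ in range(n_iters):
        candidates = rng.uniform(-1.0, 1.0, size=(256, dim))
        if len(X) < 5:
            # Warm-up: evaluate a few random points before fitting the GP.
            theta = candidates[0]
        else:
            gp.fit(np.asarray(X), np.asarray(y))
            ei = expected_improvement(gp, candidates, max(y))
            theta = candidates[np.argmax(ei)]
        value = composite_objective(theta, found, misclassification_cost)
        X.append(theta)
        y.append(value)
        if misclassification_cost(theta) > 0.5:  # assumed threshold for a found failure
            found = (found + [theta])[-max_found:]
    return found
```

In use, a caller would supply a `misclassification_cost` that maps θ to the target classifier's error on the generated face, so the returned set corresponds to the diverse failure cases the paper's procedure is meant to surface.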