Fair Generative Modeling via Weak Supervision
Authors: Kristy Choi, Aditya Grover, Trisha Singh, Rui Shu, Stefano Ermon
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, we demonstrate the efficacy of our approach which reduces bias w.r.t. latent factors by an average of up to 34.6% over baselines for comparable image generation using generative adversarial networks. |
| Researcher Affiliation | Academia | 1Department of Computer Science, Stanford University 2Department of Statistics, Stanford University. |
| Pseudocode | Yes | Algorithm 1 Learning Fair Generative Models |
| Open Source Code | Yes | We provide reference implementations in PyTorch (Paszke et al., 2017), and the codebase for this work is open-sourced at https://github.com/ermongroup/fairgen. |
| Open Datasets | Yes | We consider the CelebA (Liu et al., 2015) dataset, which is commonly used for benchmarking deep generative models and comprises images of faces with 40 labeled binary attributes. |
| Dataset Splits | Yes | For both models, we use a variant of ResNet18 (He et al., 2016) on the standard train and validation splits of CelebA. |
| Hardware Specification | No | For both models, we use a variant of ResNet18 (He et al., 2016) on the standard train and validation splits of CelebA. For the generative model, we used a BigGAN (Brock et al., 2018) trained to minimize the hinge loss (Lim & Ye, 2017; Tran et al., 2017) objective. |
| Software Dependencies | Yes | We provide reference implementations in PyTorch (Paszke et al., 2017) |
| Experiment Setup | Yes | Additional details regarding the architectural design and hyperparameters in Supplement C. |
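The paper's Algorithm 1 trains a GAN on a large biased pool using importance weights estimated by a classifier that distinguishes a small balanced reference set from the biased data. A minimal sketch of that weighting idea, assuming a calibrated classifier output `c(x) = P(x from reference set | x)` and equal-sized pools (both names are illustrative, not from the paper's codebase):

```python
def importance_weight(p_ref_given_x):
    """Density-ratio estimate p_ref(x) / p_bias(x) from a calibrated
    binary classifier that predicts membership in the small balanced
    reference set. Assumes equal-sized pools; otherwise multiply by
    the pool-size ratio."""
    return p_ref_given_x / (1.0 - p_ref_given_x)


def weighted_hinge_real_loss(d_outputs, weights):
    """Importance-weighted hinge-loss term on real (biased-pool)
    samples, mirroring the hinge objective the paper reports using
    for its BigGAN discriminator."""
    n = len(d_outputs)
    return sum(w * max(0.0, 1.0 - d) for d, w in zip(d_outputs, weights)) / n
```

For example, a classifier output of 0.5 yields a weight of 1.0 (the sample is equally likely under both pools), while outputs above 0.5 up-weight samples that resemble the balanced reference set.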