Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
A Fair Generative Model Using LeCam Divergence
Authors: Soobin Um, Changho Suh
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on benchmark real datasets demonstrate that the proposed framework can significantly improve fairness performance while maintaining realistic sample quality for a wide range of reference set sizes, all the way down to 1% relative to the training set. |
| Researcher Affiliation | Academia | Soobin Um¹, Changho Suh²; ¹ Graduate School of AI, KAIST; ² School of Electrical Engineering, KAIST |
| Pseudocode | Yes | For instance, we parameterize (D, Dref, G) with three neural networks and then employ three-way alternating gradient descent (Goodfellow et al. 2014) for the parameterized neural networks; see Algorithm 1 in the supplementary for details. |
| Open Source Code | No | The paper does not provide an explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We conduct experiments on three benchmark real datasets: CelebA (Liu et al. 2015), UTKFace (Zhang, Song, and Qi 2017), and FairFace (Karkkainen and Joo 2021). |
| Dataset Splits | Yes | CelebA and FairFace classifiers are trained over the standard train and validation splits of CelebA and FairFace, respectively. For training the UTKFace classifier, we use 8:1:1 splits of the UTKFace dataset. |
| Hardware Specification | Yes | We implement our algorithm in PyTorch (Paszke et al. 2019), and all experiments are performed on servers with TITAN RTX and Quadro RTX 8000 GPUs. |
| Software Dependencies | Yes | We implement our algorithm in PyTorch (Paszke et al. 2019) |
| Experiment Setup | No | The paper mentions employing the BigGAN architecture and varying reference set sizes, but does not provide specific hyperparameters such as learning rates, batch sizes, or optimizer configurations in the main text. |
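The Pseudocode row quotes the paper's training scheme: three networks (D, Dref, G) optimized by three-way alternating gradient descent, with full details deferred to Algorithm 1 in the paper's supplementary material. The sketch below illustrates only the alternating-update pattern on a toy quadratic objective; the loss functions, learning rate, and parameter shapes are placeholder assumptions, not the paper's LeCam-divergence objective or architecture.

```python
import numpy as np

# Toy sketch of three-way alternating gradient descent: three parameter
# vectors (discriminator D, reference discriminator Dref, generator G)
# take turns descending their own loss. The quadratic "losses" below are
# stand-ins chosen so the dynamics visibly converge; they are NOT the
# paper's objective.

rng = np.random.default_rng(0)
d, dref, g = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)
lr = 0.1

def grad_d(d, g):        # placeholder gradient: pulls D toward G
    return d - g

def grad_dref(dref, g):  # placeholder gradient: pulls Dref toward G
    return dref - g

def grad_g(g, d, dref):  # placeholder gradient: pulls G toward D/Dref midpoint
    return g - 0.5 * (d + dref)

for step in range(200):
    d    -= lr * grad_d(d, g)        # 1) discriminator step
    dref -= lr * grad_dref(dref, g)  # 2) reference-discriminator step
    g    -= lr * grad_g(g, d, dref)  # 3) generator step

# With these contractive placeholder losses, all three parameter sets
# settle onto a common point.
print(np.allclose(d, g, atol=1e-3), np.allclose(dref, g, atol=1e-3))
```

In practice each of the three updates would be a stochastic gradient step on a minibatch loss, as in the standard GAN alternating scheme of Goodfellow et al. (2014) that the quote cites.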