MaGNET: Uniform Sampling from Deep Generative Network Manifolds Without Retraining
Authors: Ahmed Imtiaz Humayun, Randall Balestriero, Richard Baraniuk
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform a range of experiments on several datasets and DGNs, e.g., for the state-of-the-art StyleGAN2 trained on the FFHQ dataset, uniform sampling via MaGNET increases distribution precision by 4.1% and recall by 3.0% and decreases gender bias by 41.2%, without requiring labels or retraining. |
| Researcher Affiliation | Academia | Ahmed Imtiaz Humayun Rice University imtiaz@rice.edu Randall Balestriero Rice University randallbalestriero@gmail.com Richard Baraniuk Rice University richb@rice.edu |
| Pseudocode | Yes | Algorithm 1: MaGNET Sampling as described in Sec. 3.2 |
| Open Source Code | Yes | Colab and codes at bit.ly/magnet-sampling |
| Open Datasets | Yes | For example, the CelebA dataset contains a large fraction of smiling faces. |
| Dataset Splits | No | The paper refers to using existing datasets for training and evaluation but does not explicitly provide details about the specific training, validation, and test dataset splits used for reproducibility. |
| Hardware Specification | Yes | All the experiments were run on a Quadro RTX 8000 GPU, which has 48 GB of high-speed GDDR6 memory and 576 Tensor cores. |
| Software Dependencies | Yes | In short, we employed TF2 (2.4 at the time of writing), all the usual Python scientific libraries such as NumPy and PyTorch. |
| Experiment Setup | Yes | For StyleGAN2, we use the official config-e provided in the GitHub StyleGAN2 repo, unless specified. We use the recommended default of ψ = 0.5 as the interpolating stylespace truncation, to ensure generation quality of faces for the qualitative experiments. |
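
The pseudocode row above refers to Algorithm 1 (MaGNET Sampling), which reweights latent samples by the generator's local change of volume so that the induced distribution on the learned manifold becomes uniform. The sketch below is a minimal illustration of that resampling idea on a toy generator; it assumes a small `G`, pool size, and latent dimension for tractability and is not the authors' released implementation (see bit.ly/magnet-sampling for that).

```python
# Minimal sketch of volume-weighted latent resampling in the spirit of
# MaGNET Algorithm 1. The toy generator `G`, pool size, and latent
# dimension are illustrative assumptions, not the paper's code.
import torch

torch.manual_seed(0)

# Toy generator: maps a 2-D latent to a 16-D output (stand-in for a DGN).
G = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.LeakyReLU(0.2), torch.nn.Linear(32, 16)
)

def volume_scalar(z):
    """sqrt(det(J^T J)) of G at latent z, i.e. the local volume distortion."""
    J = torch.autograd.functional.jacobian(G, z)  # shape (16, 2)
    return torch.sqrt(torch.det(J.T @ J))

# 1) Draw a large pool of latents from the usual prior.
pool = torch.randn(512, 2)

# 2) Weight each latent by the generator's local change of volume.
weights = torch.stack([volume_scalar(z) for z in pool])

# 3) Resample latents proportionally to these weights; generating from the
#    resampled latents approximates uniform sampling on the learned manifold.
idx = torch.multinomial(weights / weights.sum(), num_samples=128, replacement=True)
uniform_samples = G(pool[idx])
```

For a large DGN such as StyleGAN2, computing full Jacobians this way would be prohibitively expensive; the sketch only conveys the weighting-and-resampling step, not an efficient implementation.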