Decentralized Attribution of Generative Models
Authors: Changhoon Kim, Yi Ren, Yezhou Yang
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our method is validated on MNIST, CelebA, and FFHQ datasets. |
| Researcher Affiliation | Academia | Changhoon Kim¹, Yi Ren², Yezhou Yang¹ — School of Computing, Informatics, and Decision Systems Engineering¹; School for Engineering of Matter, Transport, and Energy²; Arizona State University. {kch, yiren, yz.yang}@asu.edu |
| Pseudocode | Yes | Algorithm 1: Training of Gφ — input: φ, G0; output: Gφ, γ |
| Open Source Code | Yes | https://github.com/ASU-Active-Perception-Group/decentralized_attribution_of_generative_models |
| Open Datasets | Yes | We validate these rules using DCGAN (Radford et al., 2015) and StyleGAN (Karras et al., 2019a) on benchmark datasets including MNIST (LeCun & Cortes, 2010), CelebA (Liu et al., 2015), and FFHQ (Karras et al., 2019a). |
| Dataset Splits | No | The paper uses 'validation' only in the sense of validating theorems, not as a dataset split (e.g., train/validation/test percentages or counts). |
| Hardware Specification | Yes | All experiments are conducted on V100 Tesla GPUs. |
| Software Dependencies | No | The paper mentions software components such as PyTorch (inferred from the Kornia citation) and specific implementations for blurring and JPEG conversion, but it does not provide version numbers for these dependencies. |
| Experiment Setup | Yes | We adopt the Adam optimizer for training. Training hyper-parameters are summarized in Table 3. |