Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Decentralized Attribution of Generative Models
Authors: Changhoon Kim, Yi Ren, Yezhou Yang
ICLR 2021 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our method is validated on MNIST, CelebA, and FFHQ datasets. |
| Researcher Affiliation | Academia | Changhoon Kim¹, Yi Ren², Yezhou Yang¹; ¹School of Computing, Informatics, and Decision Systems Engineering; ²School for Engineering of Matter, Transport, and Energy; Arizona State University |
| Pseudocode | Yes | Algorithm 1: Training of G_φ. Input: φ, G_0; Output: G_φ, γ |
| Open Source Code | Yes | https://github.com/ASU-Active-Perception-Group/decentralized_attribution_of_generative_models |
| Open Datasets | Yes | We validate these rules using DCGAN (Radford et al., 2015) and StyleGAN (Karras et al., 2019a) on benchmark datasets including MNIST (LeCun & Cortes, 2010), CelebA (Liu et al., 2015), and FFHQ (Karras et al., 2019a). |
| Dataset Splits | No | The paper uses the term 'validation' in the context of validating theorems, not as a dataset split (e.g., train/validation/test percentage or count). |
| Hardware Specification | Yes | All experiments are conducted on V100 Tesla GPUs. |
| Software Dependencies | No | The paper mentions software components such as PyTorch (inferred from the Kornia citation) and specific implementations for blurring and JPEG conversion, but it does not provide version numbers for these dependencies. |
| Experiment Setup | Yes | We adopt the Adam optimizer for training. Training hyper-parameters are summarized in Table 3. |
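As the notice above states, each reproducibility variable is classified by an LLM and validated against a manually labeled dataset. A minimal sketch of how per-variable accuracy might be computed in such a validation (the function name and the example labels are hypothetical, not taken from [1]):

```python
from collections import defaultdict


def per_variable_accuracy(llm_labels, human_labels):
    """Compare LLM-assigned labels to manual gold labels, per variable.

    Both inputs map (paper_id, variable) -> label, e.g. "Yes"/"No".
    Returns {variable: accuracy} over keys present in both mappings.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for key, gold in human_labels.items():
        if key in llm_labels:
            _, variable = key
            total[variable] += 1
            if llm_labels[key] == gold:
                correct[variable] += 1
    return {v: correct[v] / total[v] for v in total}


# Hypothetical example: two papers, two variables.
llm = {("p1", "Open Source Code"): "Yes", ("p2", "Open Source Code"): "No",
       ("p1", "Dataset Splits"): "No", ("p2", "Dataset Splits"): "No"}
gold = {("p1", "Open Source Code"): "Yes", ("p2", "Open Source Code"): "Yes",
        ("p1", "Dataset Splits"): "No", ("p2", "Dataset Splits"): "No"}
print(per_variable_accuracy(llm, gold))  # {'Open Source Code': 0.5, 'Dataset Splits': 1.0}
```

Reporting accuracy per variable rather than one pooled number matters here because some variables (e.g. Open Source Code) are far easier for an LLM to classify than others (e.g. Dataset Splits).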