FRAME Revisited: An Interpretation View Based on Particle Evolution
Authors: Xu Cai, Yang Wu, Guanbin Li, Ziliang Chen, Liang Lin (pp. 3256-3263)
AAAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Quantitative and qualitative experiments have been conducted on several widely used datasets. The empirical studies have evidenced the effectiveness and superiority of our method. From the Experiments section: In this section, we intensively compare our proposed method with FRAME from two aspects: one is the confirmatory experiment of model collapse under varied settings with respect to the baseline; the other is the quantitative and qualitative comparison of generated results on extensively used datasets. |
| Researcher Affiliation | Collaboration | Xu Cai¹, Yang Wu¹, Guanbin Li¹, Ziliang Chen¹, Liang Lin¹,²; ¹School of Data and Computer Science, Sun Yat-Sen University, China; ²Dark Matter AI Inc. |
| Pseudocode | Yes | Algorithm 1 Persistent Learning and Synthesizing in Wasserstein FRAME |
| Open Source Code | No | The paper does not provide any explicit statement about open-source code availability or a link to a code repository. |
| Open Datasets | Yes | CelebA (Liu et al. 2015) and LSUN-Bedroom (Yu et al. 2015) images are cropped and resized to 64 × 64. CIFAR-10 (Krizhevsky and Hinton 2009) includes various categories and we learn both algorithms conditioned |
| Dataset Splits | No | The paper mentions using well-known datasets such as CelebA, LSUN-Bedroom, and CIFAR-10, but it does not specify train/validation/test split percentages, sample counts, or a methodology for splitting the data. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions "the first 4 convolutional layers of a pre-learned VGG-16 (Simonyan and Zisserman 2014)" as an implementation detail, but it does not list any specific software components with version numbers (e.g., Python, PyTorch, TensorFlow, or other libraries). |
| Experiment Setup | Yes | As for default experimental settings, σ = 0.01, β = 60, the number of learning iterations is set to T = 100, the step number L of Langevin sampling within each learning iteration is 50, and the batch size is N = M = 9. The hyper-parameters appearing in Algorithm 1 differ on each dataset in order to achieve the best results. As for FRAME, we use the default settings in (Lu, Zhu, and Wu 2015). CelebA (Liu et al. 2015) and LSUN-Bedroom (Yu et al. 2015) images are cropped and resized to 64 × 64. We set λ = 1e-3 in both datasets, δ = 0.2 in CelebA and δ = 0.15 in LSUN-Bedroom. In this experiment, we set δ = 0.15, λ = 2e-3 and images are of size 32 × 32. (A hedged illustration using these defaults follows the table.) |
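The Pseudocode and Experiment Setup rows above name Algorithm 1 ("Persistent Learning and Synthesizing in Wasserstein FRAME") and its reported defaults (σ = 0.01, β = 60, T = 100, L = 50, N = M = 9, δ, λ). The sketch below is a minimal, hypothetical illustration of a persistent-chain Langevin learning loop in that spirit; it is not the authors' implementation. The energy network `energy_fn`, the `params`/`train`/`langevin_step` names, the Adam optimizer and its learning rate, the contrastive loss form, and the exact coupling of σ and δ in the sampling update are assumptions made for illustration, and the role of β = 60 from Algorithm 1 is not reproduced here.

```python
# Hedged sketch of a persistent Langevin learning loop, loosely in the spirit
# of Algorithm 1. NOT the authors' code; loss form, optimizer, and the exact
# use of sigma/delta are assumptions. Constants follow the paper where stated.
import torch

SIGMA = 0.01   # Langevin noise level sigma (paper default)
DELTA = 0.15   # Langevin step size delta (paper uses 0.15-0.2 per dataset)
LAMBDA = 1e-3  # regularization weight lambda (1e-3 or 2e-3 per dataset)
T = 100        # number of learning iterations
L = 50         # Langevin steps within each learning iteration
N = M = 9      # batch sizes for observed / synthesized images


def langevin_step(x, energy_fn):
    """One Langevin update of the persistent synthesized particles."""
    x = x.clone().requires_grad_(True)
    grad = torch.autograd.grad(energy_fn(x).sum(), x)[0]
    # Gradient descent on the energy plus Gaussian noise (sigma/delta coupling
    # here is an assumption, not taken from the paper).
    return (x - 0.5 * DELTA ** 2 * grad + SIGMA * torch.randn_like(x)).detach()


def train(energy_fn, params, data_loader, img_shape=(3, 64, 64), lr=1e-4):
    """Persistent chains: synthesized particles carry over across iterations."""
    opt = torch.optim.Adam(params, lr=lr)   # optimizer choice is an assumption
    x_syn = torch.randn(M, *img_shape)      # persistent synthesized particles
    data_iter = iter(data_loader)           # loader assumed to yield image tensors
    for _ in range(T):
        x_obs = next(data_iter)
        for _ in range(L):                  # L sampling steps per learning step
            x_syn = langevin_step(x_syn, energy_fn)
        # Contrastive update: lower energy on observed data, raise it on the
        # synthesized particles; lambda is used here as simple weight decay.
        loss = energy_fn(x_obs).mean() - energy_fn(x_syn).mean()
        loss = loss + LAMBDA * sum(p.pow(2).sum() for p in params)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return x_syn
```

The persistent aspect is that `x_syn` is initialized once and updated across learning iterations rather than re-sampled from noise each time; everything else (network architecture, conditioning for CIFAR-10, and the Wasserstein-specific terms) would have to come from the paper's Algorithm 1 itself.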