Distribution-Interpolation Trade off in Generative Models
Authors: Damian Leśniak, Igor Sieradzki, Igor Podolak
ICLR 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experiments conducted using the DCGAN model on the CelebA dataset are presented solely to illustrate the problem, not to study the DCGAN itself, theoretically or empirically. All experiments were conducted using a DCGAN model (Radford et al., 2015), in which the generator network consisted of a linear layer with 8192 neurons, followed by four transposed-convolution layers, each using 5×5 filters with stride 2; the number of filters per layer, in order, was 256, 128, 64, 3. |
| Researcher Affiliation | Academia | Damian Leśniak (Jagiellonian University), Igor Sieradzki (Jagiellonian University), Igor Podolak (Jagiellonian University) |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement or link for open-source code for the methodology described. |
| Open Datasets | Yes | The experiments were conducted using the DCGAN model on the CelebA dataset (Liu et al., 2015). |
| Dataset Splits | No | The paper mentions using the CelebA dataset and specific training parameters such as batch size, but does not specify training, validation, or test dataset splits. |
| Hardware Specification | No | The paper describes the model architecture and training parameters, but it does not specify any hardware details (e.g., GPU models, CPU types, or memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using the 'Adam optimiser' and 'No batch normalisation' but does not specify any version numbers for software dependencies such as libraries or frameworks. |
| Experiment Setup | Yes | The Adam optimiser with a learning rate of 2e-4 and momentum set to 0.5 was used. Batch size 64 was used throughout all experiments. If not explicitly stated otherwise, the latent space dimension was set to 100. For the CelebA dataset the input images were resized to 64×64. |
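The generator architecture reported in the table (latent dimension 100, a linear layer with 8192 neurons, then four stride-2 transposed-convolution layers with 5×5 filters and 256/128/64/3 filters) can be checked with a small shape-arithmetic sketch. This is a hedged reconstruction: the reshape of the 8192-unit linear output to a 4×4 map with 512 channels is an assumption consistent with Radford et al. (2015), not stated in the paper, and the padding/output-padding values are chosen so each layer exactly doubles spatial size.

```python
# Shape arithmetic for the DCGAN generator described in the report.
# Assumption: the 8192-neuron linear output is reshaped to (512, 4, 4),
# i.e. 512 * 4 * 4 = 8192, as in Radford et al. (2015).

def transposed_conv_out(size, stride=2, kernel=5, padding=2, output_padding=1):
    """Output spatial size of a stride-2 transposed convolution
    with 5x5 filters, using the standard formula
    (size - 1) * stride - 2 * padding + kernel + output_padding."""
    return (size - 1) * stride - 2 * padding + kernel + output_padding

def generator_shapes(latent_dim=100):
    # Linear: latent vector -> 8192 units, reshaped to (512, 4, 4) [assumed].
    shapes = [(512, 4, 4)]
    size = 4
    for channels in (256, 128, 64, 3):  # filters per layer, from the paper
        size = transposed_conv_out(size)
        shapes.append((channels, size, size))
    return shapes

print(generator_shapes())
# The final feature map is (3, 64, 64), matching the 64x64 resized
# CelebA images reported in the Experiment Setup row.
```

Running this confirms the four stride-2 layers take the 4×4 map to 64×64, so the stated architecture is internally consistent with the stated image size.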