Size-Noise Tradeoffs in Generative Networks
Authors: Bolton Bailey, Matus J. Telgarsky
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | This paper investigates the ability of generative networks to convert their input noise distributions into other distributions. Firstly, we demonstrate a construction that allows ReLU networks to increase the dimensionality of their noise distribution by implementing a space-filling function based on iterated tent maps. We show this construction is optimal by analyzing the number of affine pieces in functions computed by multivariate ReLU networks. Secondly, we provide efficient ways (using polylog(1/ϵ) nodes) for networks to pass between univariate uniform and normal distributions, using a Taylor series approximation and a binary search gadget for computing function inverses. Lastly, we indicate how high dimensional distributions can be efficiently transformed into low dimensional distributions. We ran some simple initial experiments measuring how well GANs of different architectures and noise distributions learned MNIST generation, and we found them inconclusive; in particular, we could not be certain if our empirical observations were a consequence purely of representation, or some combination of representation and training. (Illustrative sketches of the tent-map and binary-search constructions appear after the table.) |
| Researcher Affiliation | Academia | Bolton Bailey, Matus Telgarsky ({boltonb2,mjt}@illinois.edu), University of Illinois, Urbana-Champaign |
| Pseudocode | No | The paper does not contain any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | No | The paper makes no statement and provides no link regarding the release of open-source code for the described methodology. |
| Open Datasets | Yes | We ran some simple initial experiments measuring how well GANs of different architectures and noise distributions learned MNIST generation |
| Dataset Splits | No | The paper is primarily theoretical and does not describe an experimental setup, including dataset splits for training, validation, or testing. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) are mentioned in the paper. |
| Software Dependencies | No | No specific software dependencies with version numbers are mentioned in the paper. |
| Experiment Setup | No | No specific experimental setup details such as hyperparameters, model initialization, or training schedules are provided. |
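
As an illustration of the tent-map construction quoted in the Research Type row, the sketch below (plain NumPy, not the paper's exact network; `tent` and `iterated_tent` are hypothetical helper names) shows that the tent map t(x) = 2x on [0, 1/2] and 2(1 − x) on [1/2, 1] is exactly two ReLU units, t(x) = 2·relu(x) − 4·relu(x − 1/2), and that composing it k times yields a piecewise-affine map with 2^k pieces whose graph against the input approximately fills the unit square:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def tent(x):
    # Tent map on [0, 1], written with two ReLU units:
    # 2x on [0, 1/2], 2(1 - x) on [1/2, 1].
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5)

def iterated_tent(x, k):
    # Composing k tent maps gives a piecewise-affine function with
    # 2^k affine pieces -- realizable as a depth-k ReLU network.
    for _ in range(k):
        x = tent(x)
    return x

# Pushing uniform noise u through (u, tent^k(u)) traces a zigzag that
# oscillates 2^(k-1) times, approximately filling the unit square.
u = np.random.rand(100_000)
samples = np.stack([u, iterated_tent(u, k=10)], axis=1)

# Crude coverage check: how many cells of an 8x8 grid receive samples.
cells = np.clip(np.floor(samples * 8).astype(int), 0, 7)
occupied = np.unique(cells, axis=0).shape[0]
print(f"occupied cells: {occupied}/64")
```

This mirrors the abstract's counting argument: depth buys exponentially many affine pieces, which is what makes the dimension-increasing construction size-efficient.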
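The uniform-to-normal direction can likewise be sketched with the binary-search idea mentioned in the abstract: invert the Gaussian CDF Φ by bisection, where each comparison halves the search interval, so n steps give error proportional to 2^(−n). This is a plain-Python stand-in with hypothetical names, not the authors' gadget (the paper realizes the comparisons with ReLU layers using polylog(1/ϵ) nodes):

```python
import math
import random

def std_normal_cdf(z):
    # Phi(z) via the error function (math.erf is in the standard library).
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def uniform_to_gaussian(u, n_steps=40, lo=-8.0, hi=8.0):
    # Invert Phi by bisection: each step halves the interval, so
    # n_steps comparisons give accuracy (hi - lo) * 2**(-n_steps).
    for _ in range(n_steps):
        mid = 0.5 * (lo + hi)
        if std_normal_cdf(mid) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Push uniform samples through the inverse CDF; the output should be
# approximately standard normal (mean ~ 0, variance ~ 1).
samples = [uniform_to_gaussian(random.random()) for _ in range(10_000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(f"mean ≈ {mean:.3f}, variance ≈ {var:.3f}")
```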