A Theory of Generative ConvNet
Authors: Jianwen Xie, Yang Lu, Song-Chun Zhu, Ying Nian Wu
ICML 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 7. Synthesis and Reconstruction. We show that the generative ConvNet is capable of learning and generating realistic natural image patterns. Such an empirical proof of concept validates the generative capacity of the model. We also show that contrastive divergence learning can indeed reconstruct the observed images, thus empirically validating Proposition 3. |
| Researcher Affiliation | Academia | Jianwen Xie (jianwen@ucla.edu), Yang Lu (yanglv@ucla.edu), Song-Chun Zhu (sczhu@stat.ucla.edu), Ying Nian Wu (ywu@stat.ucla.edu), Department of Statistics, University of California, Los Angeles, CA, USA |
| Pseudocode | Yes | Algorithm 1: Learning and sampling algorithm (a hedged sketch of this loop appears after the table). |
| Open Source Code | Yes | The code and training images can be downloaded from the project page: http://www.stat.ucla.edu/~ywu/GenerativeConvNet/main.html |
| Open Datasets | Yes | The code and training images can be downloaded from the project page: http://www.stat.ucla.edu/~ywu/GenerativeConvNet/main.html |
| Dataset Splits | No | No specific dataset splits for training, validation, or test sets were explicitly provided. The paper mentions 'training images' but does not specify a validation set or its proportion. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory, or cloud instances) used for running experiments were mentioned. |
| Software Dependencies | No | The code in our experiments is based on the MatConvNet package of (Vedaldi & Lenc, 2015). This mentions a software package but does not provide specific version numbers for it or any other dependencies. |
| Experiment Setup | Yes | We use M = 16 parallel chains for Langevin sampling. The number of Langevin iterations between every two consecutive updates of parameters is L = 10. With each new added layer, the number of learning iterations is T = 700. ... The first layer has 100 15×15 filters with sub-sampling size of 3. The second layer has 64 5×5 filters with sub-sampling size of 1. The third layer has 30 3×3 filters with sub-sampling size of 1. ... The number of learning iterations is T = 1200. Starting from the observed images, the number of Langevin iterations is L = 1. (See the architecture and training-loop sketches after this table.) |
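
The three-layer network quoted in the experiment setup row can be written out concretely. Below is a minimal sketch, assuming a PyTorch port (the original code is MATLAB, built on MatConvNet); the three-channel input, the ReLU after each layer, and the choice of summing top-layer responses into a scoring function f(Y; w) are illustrative assumptions rather than details quoted from the paper.

```python
import torch
import torch.nn as nn

# Sketch of the three-layer filter bank from the experiment setup.
# Framework (PyTorch) and activation/padding details are assumptions;
# the authors' implementation is MATLAB, built on MatConvNet.
scoring_net = nn.Sequential(
    nn.Conv2d(3, 100, kernel_size=15, stride=3),  # layer 1: 100 15x15 filters, sub-sampling 3
    nn.ReLU(),
    nn.Conv2d(100, 64, kernel_size=5, stride=1),  # layer 2: 64 5x5 filters, sub-sampling 1
    nn.ReLU(),
    nn.Conv2d(64, 30, kernel_size=3, stride=1),   # layer 3: 30 3x3 filters, sub-sampling 1
    nn.ReLU(),
)

def score(y: torch.Tensor) -> torch.Tensor:
    """Scoring function f(Y; w): sum of top-layer responses per image (an assumption)."""
    return scoring_net(y).sum(dim=(1, 2, 3))
```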
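
Algorithm 1 alternates Langevin sampling on the M = 16 parallel chains with gradient updates of the filters, running L = 10 Langevin steps between consecutive parameter updates for T = 700 learning iterations per added layer (T = 1200 and L = 1 in the reconstruction experiment). The following is a minimal sketch of that loop, assuming a Gaussian reference distribution with standard deviation `sigma`, a Langevin step size `delta`, and a learning rate `lr`; the numeric defaults here are placeholders, not the paper's settings.

```python
import torch

def langevin_step(score, y, sigma=1.0, delta=0.002):
    """One Langevin update toward p(Y; w) proportional to exp(f(Y; w)) * N(0, sigma^2 I)."""
    y = y.detach().requires_grad_(True)
    grad_f = torch.autograd.grad(score(y).sum(), y)[0]
    # y <- y - (delta^2 / 2) * (y / sigma^2 - df/dy) + delta * noise
    return (y + 0.5 * delta ** 2 * (grad_f - y / sigma ** 2)
            + delta * torch.randn_like(y)).detach()

def learn(score, params, observed, M=16, L=10, T=700, lr=0.01):
    """Alternate L Langevin steps on M chains with one maximum-likelihood update."""
    opt = torch.optim.SGD(params, lr=lr)
    # Chains start from noise here; the paper also reports starting from the
    # observed images, which turns the update into contrastive divergence.
    chains = torch.randn(M, *observed.shape[1:])
    for _ in range(T):
        for _ in range(L):
            chains = langevin_step(score, chains)
        opt.zero_grad()
        # Log-likelihood gradient: raise f on the data, lower it on the samples.
        (score(chains).mean() - score(observed).mean()).backward()
        opt.step()
    return chains
```

With the filter bank sketched above, `learn(score, scoring_net.parameters(), images)` would run the loop on an `(n, 3, H, W)` batch of training images.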