Alternating Back-Propagation for Generator Network
Authors: Tian Han, Yang Lu, Song-Chun Zhu, Ying Nian Wu
AAAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 4 Experiments: The code in our experiments is based on the MatConvNet package of (Vedaldi and Lenc 2015). The training images and sounds are scaled so that the intensities are within the range [−1, 1]. We adopt the structure of the generator network of (Radford, Metz, and Chintala 2016; Dosovitskiy, Springenberg, and Brox 2015), where the top-down network consists of multiple layers of deconvolution by linear superposition, ReLU non-linearity, and up-sampling, with tanh non-linearity at the bottom layer (Radford, Metz, and Chintala 2016) to make the signals fall within [−1, 1]. We also adopt batch normalization (Ioffe and Szegedy 2015). We fix σ = 0.3 for the standard deviation of the noise vector ϵ. We use l = 10 or 30 steps of Langevin dynamics within each learning iteration, and the Langevin step size s is set at 0.1 or 0.3. We run T = 600 learning iterations, with learning rate 0.0001 and momentum 0.5. 4.2 Quantitative experiments: Experiment 5, learning from incomplete data. Table 1: Recovery errors in 5 experiments of learning from occluded images. |
| Researcher Affiliation | Academia | Tian Han, Yang Lu, Song-Chun Zhu, Ying Nian Wu Department of Statistics University of California Los Angeles, USA |
| Pseudocode | Yes | Algorithm 1 Alternating back-propagation |
| Open Source Code | Yes | Code, images, sounds, and videos http://www.stat.ucla.edu/~ywu/ABP/main.html |
| Open Datasets | Yes | We evaluate our method on 10,000 images randomly selected from the CelebA dataset (Liu et al. 2015). |
| Dataset Splits | No | The paper mentions training and testing sets, but does not explicitly describe a separate validation set or its split. |
| Hardware Specification | No | The paper does not specify any hardware details such as GPU or CPU models used for experiments. |
| Software Dependencies | No | The code in our experiments is based on the MatConvNet package of (Vedaldi and Lenc 2015). |
| Experiment Setup | Yes | We fix σ = 0.3 for the standard deviation of the noise vector ϵ. We use l = 10 or 30 steps of Langevin dynamics within each learning iteration, and the Langevin step size s is set at 0.1 or 0.3. We run T = 600 learning iterations, with learning rate 0.0001 and momentum 0.5. We use ReLU with a leaking factor 0.2 (Maas, Hannun, and Ng 2013; Xu et al. 2015). |
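The setup quoted above alternates l steps of Langevin dynamics on the latent vectors (inferential back-propagation) with a gradient update of the generator parameters (learning back-propagation). The sketch below illustrates that alternating scheme in NumPy on a toy linear generator; the generator form, dimensions, learning rate, and iteration count are illustrative assumptions (the paper trains a deconvolutional network in MatConvNet with learning rate 0.0001 and momentum 0.5), while σ = 0.3, s = 0.1, and l = 10 follow the quoted values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear generator g(z) = W @ z standing in for the paper's
# deconvolutional network; dimensions, data, and learning rate below
# are illustrative assumptions, not the authors' settings.
d_x, d_z, n = 16, 4, 100
W_true = 0.5 * rng.normal(size=(d_x, d_z))
X = W_true @ rng.normal(size=(d_z, n))      # synthetic "training signals"

sigma = 0.3     # noise std, as quoted from the paper
s = 0.1         # Langevin step size, as quoted
l_steps = 10    # Langevin steps per learning iteration, as quoted
lr = 0.005      # toy learning rate (paper: 0.0001 with momentum 0.5)

W = 0.1 * rng.normal(size=(d_x, d_z))       # generator parameters
Z = rng.normal(size=(d_z, n))               # one latent vector per example

for t in range(300):
    # Inferential back-propagation: l steps of Langevin dynamics on Z,
    # targeting log p(Z | X, W) = -||X - W Z||^2 / (2 sigma^2)
    # - ||Z||^2 / 2 + const.
    for _ in range(l_steps):
        grad_Z = W.T @ (X - W @ Z) / sigma**2 - Z
        Z = Z + 0.5 * s**2 * grad_Z + s * rng.normal(size=Z.shape)
    # Learning back-propagation: gradient ascent on the complete-data
    # log-likelihood with respect to W, holding the inferred Z fixed.
    W = W + lr * ((X - W @ Z) @ Z.T) / (sigma**2 * n)

recon_err = np.mean((X - W @ Z) ** 2)
```

Note that both phases reuse the same residual X − W Z: the inference step back-propagates it to the latent vectors, the learning step to the parameters, which is what makes the two updates share computation in the paper's deconvolutional setting.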