Generative networks as inverse problems with Scattering transforms

Authors: Tomás Angles, Stéphane Mallat

ICLR 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Numerical experiments demonstrate that the resulting Scattering generators have similar properties as GANs or VAEs, without learning a discriminative network or an encoder. This section evaluates generative Scattering networks with several experiments. The accuracy of the inversion given by the generative network is first computed by calculating the reconstruction error of training images. We assess the generalization capabilities by computing the reconstruction error on test images.
Researcher Affiliation | Academia | Tomás Angles & Stéphane Mallat, École normale supérieure, Collège de France, PSL Research University, 75005 Paris, France; tomas.angles@ens.fr
Pseudocode | No | The paper describes the computational steps and models, but it does not include a formally labeled 'Pseudocode' or 'Algorithm' block.
Open Source Code | Yes | The code to reproduce the experiments can be found at https://github.com/tomas-angles/generative-scattering-networks
Open Datasets | Yes | We consider three datasets that have different levels of variability: CelebA (Liu et al., 2015), LSUN (bedrooms) (Yu et al., 2015) and Polygon5.
Dataset Splits | No | The paper states: 'For each dataset, we consider only 65536 training images and 16384 test images.' It does not specify a validation set or explicit train/test/validation split percentages.
Hardware Specification | No | The paper does not specify the hardware used for running experiments, such as CPU or GPU models, or memory details.
Software Dependencies | No | The paper mentions the use of the 'Adam optimizer' and the 'DCGAN architecture' but does not specify software dependencies with version numbers (e.g., Python version, or specific deep learning framework versions such as TensorFlow or PyTorch).
Experiment Setup | Yes | The minimization is done with the Adam optimizer (Kingma & Ba, 2014), using the default hyperparameters. The generator illustrated in Figure 1 is a DCGAN generator (Radford et al., 2016) of depth J + 2: G = ρ W_{J+1} ρ W_J ... ρ W_1 ρ W_0. The non-linearity ρ is a ReLU. The first operator W_0 is linear (fully-connected) plus a bias; it transforms Z into a 4x4 array of 1024 channels. The next operators W_j for 1 <= j <= J perform a bilinear upsampling of their input, followed by a multichannel convolution along the spatial variables, and the addition of a constant bias for each channel. All the convolutional layers have filters of size 7, with symmetric padding at the boundaries.
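The generator G = ρ W_{J+1} ρ W_J ... ρ W_1 ρ W_0 described in the Experiment Setup row can be sketched numerically. This is a minimal, non-authoritative illustration of the layer structure only: random untrained weights, a small channel count (c0 = 8 instead of the paper's 1024), and nearest-neighbour upsampling as a stand-in for the paper's bilinear upsampling; the 7x7 filters and symmetric boundary padding follow the description above.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def upsample2x(x):
    # (C, H, W) -> (C, 2H, 2W); nearest-neighbour stand-in for the
    # bilinear upsampling described in the paper (kept simple here)
    return x.repeat(2, axis=1).repeat(2, axis=2)

def conv_symmetric(x, w, b, k=7):
    # x: (C_in, H, W), w: (C_out, C_in, k, k), b: (C_out,)
    # 7x7 multichannel convolution with symmetric padding at the
    # boundaries, plus a constant bias per output channel
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)), mode="symmetric")
    c_out, (h, wd) = w.shape[0], x.shape[1:]
    out = np.empty((c_out, h, wd))
    for o in range(c_out):
        acc = np.zeros((h, wd))
        for c in range(x.shape[0]):
            for i in range(k):
                for j in range(k):
                    acc += w[o, c, i, j] * xp[c, i:i + h, j:j + wd]
        out[o] = acc + b[o]
    return out

def generator(z, J=3, c0=8):
    # W_0: fully-connected layer plus bias, mapping z to a 4x4 array
    # of c0 channels (the paper uses 1024 channels; c0=8 keeps this fast)
    w0 = rng.standard_normal((c0 * 4 * 4, z.size)) * 0.01
    x = relu((w0 @ z).reshape(c0, 4, 4))
    c = c0
    for _ in range(J):
        # W_j, 1 <= j <= J: upsample, then 7x7 convolution, then ReLU
        c_next = max(c // 2, 3)
        w = rng.standard_normal((c_next, c, 7, 7)) * 0.01
        x = relu(conv_symmetric(upsample2x(x), w, np.zeros(c_next)))
        c = c_next
    # W_{J+1}: final 7x7 convolution to 3 image channels (outermost ρ)
    w = rng.standard_normal((3, c, 7, 7)) * 0.01
    x = relu(conv_symmetric(x, w, np.zeros(3)))
    return x

img = generator(rng.standard_normal(100), J=3)
print(img.shape)  # spatial size 4 * 2**J with J upsampling layers
```

With J = 3 the spatial size grows from 4x4 to 32x32, doubling at each of the J upsampling layers; only W_0 through W_J change the resolution, while W_{J+1} is a plain convolution.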
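The evaluation summarized in the Research Type row reports reconstruction errors on training and test images. As a hedged sketch of that measurement, the snippet below computes a per-image mean squared reconstruction error; the exact error metric used in the paper may differ, and the image and reconstruction arrays here are synthetic placeholders.

```python
import numpy as np

def reconstruction_error(x, x_hat):
    # mean squared error between an image and its reconstruction,
    # averaged over channels and pixels
    return float(np.mean((x - x_hat) ** 2))

# synthetic example: an image and a slightly perturbed "reconstruction"
rng = np.random.default_rng(1)
x = rng.random((3, 32, 32))
x_hat = x + 0.01 * rng.standard_normal(x.shape)
err = reconstruction_error(x, x_hat)
print(err)
```

Averaging this quantity over the 65536 training images measures inversion accuracy, and averaging it over the 16384 held-out test images measures generalization, mirroring the protocol quoted above.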