Neural Photo Editing with Introspective Adversarial Networks

Authors: Andrew Brock, Theodore Lim, J.M. Ritchie, Nick Weston

ICLR 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We validate our contributions on CelebA, SVHN, and CIFAR-100, and produce samples and reconstructions with high visual fidelity.
Researcher Affiliation | Collaboration | Andrew Brock, Theodore Lim, & J.M. Ritchie, School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh, UK, {ajb5, t.lim, j.m.ritchie}@hw.ac.uk; Nick Weston, Renishaw plc, Research Ave, North Edinburgh, UK, Nick.Weston@renishaw.com
Pseudocode | No | The paper includes architectural diagrams (e.g., Figure 3, Figure 4) but does not provide any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | All of our code is publicly available. [Footnote 2: https://github.com/ajbrock/Neural-Photo-Editor]
Open Datasets | Yes | We qualitatively evaluate the IAN on 64x64 CelebA (Liu et al., 2015), 32x32 SVHN (Netzer et al., 2011), 32x32 CIFAR-10 (Krizhevsky & Hinton, 2009), and 64x64 Imagenet (Russakovsky et al., 2015).
Dataset Splits | Yes | For use in evaluating the IAN, we additionally train 40-layer, k=12 DenseNets on the CelebA attribute classification task with varying amounts of Orthogonal Regularization. A plot of the train and validation error during training is available in Figure 7. [...] We train a set of 40-layer, k=12 DenseNets for 50 epochs, annealing the learning rate at 25 and 37 epochs. [...] we report the test error after training in Table 1. [...] We first evaluate using the procedure of (Radford et al., 2015) by training an L2-SVM on the output of the FC layer of the encoder subnetwork, and report average test error and standard deviation across 100 different SVMs, each trained on 1000 random examples from the training set. (See the SVM evaluation sketch after the table.)
Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments, such as GPU models, CPU specifications, or memory.
Software Dependencies | No | The paper states 'Our models are implemented in Theano (Team, 2016) with Lasagne (Dieleman et al., 2015),' but it does not specify version numbers for these dependencies, which are necessary for reproducibility. (See the pinned-environment example after the table.)
Experiment Setup | Yes | We set λimg to 3 and leave the other terms at 1. [...] As in (Radford et al., 2015), we use Batch Normalization (Ioffe & Szegedy, 2015) and Adam (Kingma & Ba, 2014) in both networks. [...] In our architecture, we replace the hidden layers of the generator with Standard MDC blocks, using F=5 and D=2 [...]. We train a set of 40-layer, k=12 DenseNets for 50 epochs, annealing the learning rate at 25 and 37 epochs. [...] We add varying amounts of Orthogonal Regularization. (See the training-schedule sketch after the table.)
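
The evaluation protocol quoted under Dataset Splits is easiest to read as a short sketch. The code below is an illustrative reconstruction, not the authors' implementation: the feature arrays are hypothetical placeholders for encoder FC-layer outputs, and scikit-learn's LinearSVC with squared hinge loss stands in as the L2-SVM.

    # Illustrative sketch of the quoted protocol: 100 L2-SVMs, each trained on
    # 1000 random training examples of encoder FC features; report mean and
    # standard deviation of test error. Feature extraction is assumed to have
    # been done elsewhere; the arrays passed in are hypothetical placeholders.
    import numpy as np
    from sklearn.svm import LinearSVC

    def svm_eval(train_feats, train_labels, test_feats, test_labels,
                 n_runs=100, n_per_run=1000, seed=0):
        rng = np.random.RandomState(seed)
        errors = []
        for _ in range(n_runs):
            # Draw 1000 random training examples without replacement.
            idx = rng.choice(len(train_feats), size=n_per_run, replace=False)
            clf = LinearSVC(loss="squared_hinge", C=1.0)  # squared hinge = L2-SVM
            clf.fit(train_feats[idx], train_labels[idx])
            errors.append(1.0 - clf.score(test_feats, test_labels))
        return float(np.mean(errors)), float(np.std(errors))

Called as svm_eval(fc_train, y_train, fc_test, y_test), this returns the average test error and its standard deviation across the 100 runs, matching the reporting described in the quote.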
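For the Software Dependencies row, a pinned environment file is exactly the kind of detail the assessment flags as missing. The pins below are assumptions drawn from releases that existed around the paper's publication; the paper itself names only Theano and Lasagne, with no versions.

    # requirements.txt (hypothetical pins; the paper does not state any versions)
    numpy==1.11.3
    Theano==0.8.2
    Lasagne==0.1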
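The Experiment Setup row quotes two concrete schedule details: a weighted loss with λimg set to 3 (other terms at 1), and a 50-epoch run with the learning rate annealed at epochs 25 and 37. The sketch below expresses just those two details; the names of the other loss terms, the base learning rate, and the annealing factor of 0.1 are assumptions made for illustration, not values taken from the paper.

    # Minimal sketch of the quoted setup details; loss-term names, base_lr, and
    # the annealing factor are assumptions, not values stated in the paper.
    def total_loss(loss_img, loss_feature, loss_adv, lambda_img=3.0):
        # "We set lambda_img to 3 and leave the other terms at 1."
        return lambda_img * loss_img + 1.0 * loss_feature + 1.0 * loss_adv

    def learning_rate(epoch, base_lr=2e-4, anneal=0.1):
        # 50-epoch schedule with the learning rate annealed at epochs 25 and 37.
        if epoch >= 37:
            return base_lr * anneal * anneal
        if epoch >= 25:
            return base_lr * anneal
        return base_lr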