Amortised MAP Inference for Image Super-resolution

Authors: Casper Kaae Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, Ferenc Huszár

ICLR 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We initially illustrate the behaviour of the proposed algorithms on data where exact MAP inference is computationally tractable. Here the HR data y = [y1, y2] is drawn from a two-dimensional noisy Swiss-roll distribution and the one-dimensional LR data x is simply the average of the two HR pixels. Next we tested the proposed algorithm in a series of experiments on natural images using 4× downsampling. For the first dataset, we took random crops from HR images containing grass texture; SR of random textures is known to be very hard using MSE or MAE loss functions. Finally, we tested the proposed models on real image data of faces (CelebA) and natural images (ImageNet). (A toy-data sketch of the Swiss-roll setup follows the table.)
Researcher Affiliation | Collaboration | Casper Kaae Sønderby (1,2), Jose Caballero (1), Lucas Theis (1), Wenzhe Shi (1) & Ferenc Huszár (1); casperkaae@gmail.com, {jcaballero,ltheis,wshi,fhuszar}@twitter.com. (1) Twitter, London, UK; (2) University of Copenhagen, Denmark.
Pseudocode | No | The paper describes its methods textually but does not include structured pseudocode or algorithm blocks.
Open Source Code | No | The paper cites 'Open source code' by David Garcia in the references but does not state that the authors provide open-source code for their own methodology, and no link to their code is given.
Open Datasets | Yes | For the CelebA experiments the datasets were split into train, validation and test set using the standard splitting. ... For the ImageNet experiments the 2012 dataset was randomly split into train, validation and test sets with 10^4 samples in the test and validation sets.
Dataset Splits | Yes | For the CelebA experiments the datasets were split into train, validation and test set using the standard splitting. All images were center cropped and resized to 64×64 before down-sampling to 16×16 using A. ... For the ImageNet experiments the 2012 dataset was randomly split into train, validation and test sets with 10^4 samples in the test and validation sets. (A preprocessing sketch follows the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments; it only mentions the software used.
Software Dependencies | No | The paper mentions software such as Theano and Lasagne with citations, but does not provide specific version numbers for these components.
Experiment Setup | Yes | For all image models we used convolutional models using ReLU nonlinearities and batch normalization in all layers except the output. All generators used skip connections similar to (Huang et al., 2016), and a final sigmoid non-linearity was applied to the output of the model, which was either used directly or fed through the affine transformation layers parameterised by A and A+. The discriminators were standard convolutional networks followed by a final sigmoid layer. For the grass texture experiments... The generators used 6 layers of convolutions with 32, 32, 64, 64, 128 and filter maps and skip connections after every second layer. The discriminators had four layers of strided convolutions with 32, 64, 128 and 256 filter maps. (An architecture sketch follows the table.)
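To make the toy experiment quoted in the Research Type row concrete, here is a minimal Python sketch of that setup. The Swiss-roll parameterisation, sample count, and noise level are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def sample_swiss_roll(n, noise=0.1, rng=None):
    """Draw n HR points y = [y1, y2] from a 2-D noisy Swiss-roll."""
    rng = rng or np.random.default_rng(0)
    t = 1.5 * np.pi * (1.0 + 2.0 * rng.random(n))     # position along the roll
    y = np.stack([t * np.cos(t), t * np.sin(t)], axis=1)
    return y + noise * rng.standard_normal((n, 2))    # additive observation noise

y = sample_swiss_roll(1024)         # HR data, shape (n, 2)
x = y.mean(axis=1, keepdims=True)   # LR data: the average of the two HR "pixels"
```

Because each x averages two HR values, many y are consistent with the same x; this many-to-one structure is what makes the toy problem a useful testbed for MAP inference.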
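The CelebA preprocessing quoted in the Dataset Splits row can be sketched as follows, assuming Pillow for image handling. Bicubic resampling is used here as a stand-in for the paper's downsampling operator A (the paper parameterises A explicitly), and `preprocess` is a hypothetical helper name:

```python
from PIL import Image

def preprocess(path, hr_size=64, lr_size=16):
    """Center-crop to a square, resize to the HR size, then downsample to the LR size."""
    img = Image.open(path).convert("RGB")
    w, h = img.size
    s = min(w, h)
    # center crop to the largest square that fits
    img = img.crop(((w - s) // 2, (h - s) // 2, (w + s) // 2, (h + s) // 2))
    hr = img.resize((hr_size, hr_size), Image.BICUBIC)
    lr = hr.resize((lr_size, lr_size), Image.BICUBIC)  # stand-in for applying A
    return hr, lr
```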
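A hedged PyTorch sketch of the grass-texture architecture as described in the Experiment Setup row. Three assumptions are made that the quoted text does not confirm: the sixth generator width (the text omits it; 128 repeats the fifth layer's width as a guess), 3-channel RGB inputs, and 1×1 projection convolutions added so the skip connections type-check across channel changes. The affine transformation layers parameterised by A and A+ are omitted:

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """6 conv layers with batch norm + ReLU, skip connections after every
    second layer, and a final sigmoid (no batch norm on the output)."""
    def __init__(self, in_ch=3, widths=(32, 32, 64, 64, 128, 128)):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                                 nn.BatchNorm2d(cout),
                                 nn.ReLU(inplace=True))
        chans = (in_ch,) + tuple(widths)
        self.blocks = nn.ModuleList(block(chans[i], chans[i + 1])
                                    for i in range(len(widths)))
        # 1x1 convs align channel counts for the skips (my addition)
        self.proj = nn.ModuleList(nn.Conv2d(chans[i], chans[i + 2], 1)
                                  for i in range(0, len(widths), 2))
        self.out = nn.Conv2d(chans[-1], in_ch, 3, padding=1)

    def forward(self, x):
        h = x
        for j in range(0, len(self.blocks), 2):
            inp = h
            h = self.blocks[j + 1](self.blocks[j](h))
            h = h + self.proj[j // 2](inp)   # skip after every second layer
        return torch.sigmoid(self.out(h))

class Discriminator(nn.Module):
    """Four strided conv layers followed by a final sigmoid."""
    def __init__(self, in_ch=3, widths=(32, 64, 128, 256)):
        super().__init__()
        layers, ch = [], in_ch
        for w in widths:
            layers += [nn.Conv2d(ch, w, 4, stride=2, padding=1),
                       nn.BatchNorm2d(w),
                       nn.ReLU(inplace=True)]
            ch = w
        layers += [nn.Conv2d(ch, 1, 4), nn.Flatten(), nn.Sigmoid()]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)
```

For a 64×64 input, the discriminator's four stride-2 convolutions reduce the feature map to 4×4, which the final 4×4 convolution collapses to a single logit before the sigmoid.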