Unsupervised Adversarial Image Reconstruction

Authors: Arthur Pajot, Emmanuel de Bezenac, Patrick Gallinari

ICLR 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our reconstructions on several image datasets with different types of corruptions. The proposed approach yields better results than alternative baselines, and comparable performance with model variants trained with additional supervision.
Researcher Affiliation | Collaboration | (a) Sorbonne Universités, UMR 7606, LIP6, F-75005 Paris, France; (b) Criteo AI Lab, Paris, France
Pseudocode | Yes | Algorithm 1: Training Procedure.
Open Source Code | No | The paper does not provide an explicit statement about releasing its own source code or a link to a repository for the methodology described in the paper. The only link found is for a baseline method (Deep Image Prior).
Open Datasets | Yes | We evaluate our approach using three different image datasets: CelebA, a dataset of celebrities containing approximately 200 000 samples (as in Bora et al. (2018), the images are center-cropped); LSUN Bedrooms, a dataset of bedrooms containing 3 million samples; and Recipe-1M, a dataset of cooked meals containing approximately 600 000 samples.
Dataset Splits | Yes | We withhold 15% of the training set for validation, selected uniformly at random for each dataset. (A minimal split sketch is given below, after this table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments. It only vaguely mentions 'on GPU' in the context of a baseline method.
Software Dependencies | No | The paper mentions several techniques and components like 'batch normalization', 'ReLU activation', 'spectral normalization', 'residual networks', and the 'Adam optimizer', but it does not specify any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | Hyperparameters have been selected on the validation set, based on the mean square error between the reconstructions x̂ and the image x. As in Zhang et al. (2018), we use imbalanced learning rates for the generator and the discriminator (0.0001 and 0.0004, respectively) with the Adam optimizer (Kingma & Ba, 2014), using β1 = 0 and β2 = 0.9. The weights are initialized using orthogonal initialization. We set λ = 2 and exponentially decay the learning rate every 400 iterations, setting the decay factor to 0.995.
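
The 15% validation hold-out quoted in the Dataset Splits row is a plain uniform random split. Below is a minimal sketch, assuming PyTorch (the paper does not name its framework); the random tensor is a placeholder standing in for CelebA, LSUN Bedrooms, or Recipe-1M.

```python
import torch
from torch.utils.data import TensorDataset, random_split

# Placeholder data standing in for CelebA / LSUN Bedrooms / Recipe-1M images.
full_train = TensorDataset(torch.randn(1000, 3, 64, 64))

# Withhold 15% of the training set for validation, chosen uniformly at random.
n_val = int(0.15 * len(full_train))
train_set, val_set = random_split(full_train, [len(full_train) - n_val, n_val])
```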
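
To make the Experiment Setup row concrete, here is a minimal sketch of the optimizer configuration, again assuming PyTorch. The `generator` and `discriminator` modules are hypothetical stand-ins; only the hyperparameters quoted above (imbalanced learning rates of 0.0001 and 0.0004, Adam with β1 = 0 and β2 = 0.9, orthogonal initialization, λ = 2, and exponential decay by 0.995 every 400 iterations) come from the paper.

```python
import torch
from torch import nn, optim

# Hypothetical stand-in networks; the paper's generator and discriminator
# are convolutional (residual) networks, not shown here.
generator = nn.Sequential(nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 128))
discriminator = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))

def orthogonal_init(module):
    """Orthogonal weight initialization, as stated in the setup."""
    if isinstance(module, nn.Linear):
        nn.init.orthogonal_(module.weight)
        if module.bias is not None:
            nn.init.zeros_(module.bias)

generator.apply(orthogonal_init)
discriminator.apply(orthogonal_init)

# Imbalanced (two-time-scale) learning rates with Adam, beta1 = 0, beta2 = 0.9.
opt_g = optim.Adam(generator.parameters(), lr=1e-4, betas=(0.0, 0.9))
opt_d = optim.Adam(discriminator.parameters(), lr=4e-4, betas=(0.0, 0.9))

# Exponential decay by a factor of 0.995, applied every 400 iterations.
sched_g = optim.lr_scheduler.ExponentialLR(opt_g, gamma=0.995)
sched_d = optim.lr_scheduler.ExponentialLR(opt_d, gamma=0.995)

lam = 2.0                # lambda = 2; weights a term of the training objective (see the paper's loss)
num_iterations = 10_000  # illustrative; not a value stated in this row

for iteration in range(1, num_iterations + 1):
    ...  # alternate discriminator / generator updates (Algorithm 1 in the paper)
    if iteration % 400 == 0:
        sched_g.step()
        sched_d.step()
```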