The Perception-Robustness Tradeoff in Deterministic Image Restoration

Authors: Guy Ohayon, Tomer Michaeli, Michael Elad

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate our theory on single image super-resolution algorithms, addressing both noisy and noiseless settings. ... Through experiments on popular deterministic image super-resolution methods (Section 4.2), we demonstrate that a lower statistical distance between p_{X̂,Y} and p_{X,Y} indicates worse average robustness to adversarial attacks.
Researcher Affiliation | Academia | 1 Faculty of Computer Science, Technion, Haifa, Israel; 2 Faculty of Electrical and Computer Engineering, Technion, Haifa, Israel. Correspondence to: Guy Ohayon <ohayonguy@campus.technion.ac.il>.
Pseudocode | Yes | Algorithm 1: Farthest Point Sampling approach to explore the posterior distribution with a deterministic estimator X̂ = f(Y) that attains high joint perceptual quality.
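To make the quoted caption concrete, here is a minimal sketch of what a farthest-point-sampling exploration of the posterior with a deterministic restorer f could look like. The perturbation-based candidate generation, the L∞ radius eps, and the pixel-space distance are illustrative assumptions, not the paper's exact Algorithm 1.

```python
# Hedged sketch: farthest point sampling over outputs of a deterministic
# restorer f, probed with small input perturbations. This is NOT the paper's
# exact Algorithm 1; the candidate-generation scheme and distances are assumed.
import torch

def explore_posterior_fps(f, y, n_samples=8, n_candidates=256, eps=2 / 255):
    """Return n_samples diverse reconstructions of f around the degraded input y.

    f            : deterministic restoration network, x_hat = f(y)
    y            : degraded input tensor of shape (1, C, H, W), values in [0, 1]
    n_candidates : number of randomly perturbed inputs to consider (assumed)
    eps          : L_inf radius of the input perturbations (assumed)
    """
    with torch.no_grad():
        # Candidate outputs obtained from slightly perturbed inputs.
        outputs = torch.cat(
            [f((y + (torch.rand_like(y) * 2 - 1) * eps).clamp(0, 1))
             for _ in range(n_candidates)], dim=0)

        # Farthest point sampling in output (pixel) space.
        flat = outputs.flatten(1)                          # (n_candidates, D)
        selected = [0]                                     # arbitrary starting candidate
        min_dist = torch.cdist(flat, flat[:1]).squeeze(1)  # distance to the selected set
        for _ in range(n_samples - 1):
            idx = int(min_dist.argmax())                   # farthest from current selection
            selected.append(idx)
            new_dist = torch.cdist(flat, flat[idx:idx + 1]).squeeze(1)
            min_dist = torch.minimum(min_dist, new_dist)
        return outputs[selected]
```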
Open Source Code | No | The paper mentions using 'official code provided by the authors' for GFPGAN and RRDB, which refers to code released by other researchers, not the authors' own source code for the methodology described in this paper.
Open Datasets | Yes | We use the DIV2K (Agustsson & Timofte, 2017; Timofte et al., 2017) test set... We take the first 1000 face images from the CelebA-HQ (Karras et al., 2018) data set...
Dataset Splits | No | The paper mentions a 'validation set' for the toy example (Appendix D.1) but does not specify its size or split percentages. For the main quantitative evaluation, it uses the DIV2K test set without providing the explicit train/validation/test split percentages or absolute counts needed for full reproduction from raw data.
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types and speeds, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper mentions software components like the 'Adam optimizer (Kingma & Ba, 2014)' and the 'ot.emd2 function from (Flamary et al., 2021)' but does not provide specific version numbers for any libraries, frameworks, or programming languages used in the experiments.
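The ot.emd2 reference points to the POT (Python Optimal Transport) package. Since no version is pinned in the paper, the snippet below is only a sketch of how an empirical earth mover's distance between two sample sets might be computed with it; the uniform weights and the squared-Euclidean cost matrix are illustrative choices.

```python
# Hedged sketch: empirical earth mover's distance with POT's ot.emd2.
# Sample shapes, weights, and the cost metric are illustrative assumptions;
# the paper does not state which POT version was used.
import numpy as np
import ot  # Python Optimal Transport (Flamary et al., 2021)

def empirical_emd(samples_a, samples_b):
    """samples_a: (n, d) array, samples_b: (m, d) array of i.i.d. samples."""
    n, m = len(samples_a), len(samples_b)
    a = np.full(n, 1.0 / n)             # uniform weights on the first sample set
    b = np.full(m, 1.0 / m)             # uniform weights on the second sample set
    M = ot.dist(samples_a, samples_b)   # pairwise squared-Euclidean cost matrix
    return ot.emd2(a, b, M)             # exact optimal transport cost

# Toy usage with two 2-D Gaussian point clouds:
rng = np.random.default_rng(0)
print(empirical_emd(rng.normal(size=(100, 2)), rng.normal(loc=0.5, size=(120, 2))))
```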
Experiment Setup | Yes | We optimize both of the networks using the Adam optimizer (Kingma & Ba, 2014) with β₁ = 0.5, β₂ = 0.9, a learning rate of 10⁻⁴, a batch size of 128, and for a total of 100,000 training steps for each network (we perform one training step at a time for each network). We multiply the learning rate by 0.5 every 5,000 training steps, starting after 50,000 steps, i.e., we perform multi-step learning rate scheduling. ... We attack each degraded image using a tweaked version of the I-FGSM basic attack with α = 16/255 and T = 100.
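Read literally, the quoted setup maps onto a standard deep-learning configuration; the sketch below is one possible PyTorch rendering under that assumption. The framework, the placeholder network, the exact first learning-rate milestone, and the loss used by the attack are not specified in the quote, and the plain iterative FGSM stands in for the paper's 'tweaked' variant.

```python
# Hedged sketch of the reported optimizer, LR schedule, and attack settings.
# PyTorch, the placeholder model, and the attack loss are assumptions; only
# the numeric hyperparameters come from the quoted setup.
import torch

model = torch.nn.Conv2d(3, 3, 3, padding=1)  # stand-in for the restoration network f
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.5, 0.9))
# Halve the LR every 5,000 steps after the first 50,000 of the 100,000 total steps
# (whether the first decay lands at step 50,000 or 55,000 is an interpretation).
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=list(range(55_000, 100_000, 5_000)), gamma=0.5)

def ifgsm_attack(f, y, loss_fn, alpha=16 / 255, steps=100):
    """Plain I-FGSM ascent on the degraded input y; the paper uses a tweaked variant."""
    y_adv = y.clone().detach().requires_grad_(True)
    for _ in range(steps):
        loss = loss_fn(f(y_adv))
        (grad,) = torch.autograd.grad(loss, y_adv)
        with torch.no_grad():
            y_adv += alpha * grad.sign()   # gradient-sign step on the input
            y_adv.clamp_(0, 1)             # keep the attacked input a valid image
    return y_adv.detach()
```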