Image-Adaptive GAN Based Reconstruction
Authors: Shady Abu Hussein, Tom Tirer, Raja Giryes
AAAI 2020, pp. 3121-3129
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We empirically demonstrate the advantages of our proposed approach for image super-resolution and compressed sensing. In our experiments we use two recently proposed GAN models... Apart from presenting visual results, we compare the performance of the different methods using two quantitative measures. The first one is the widely-used mean squared error (MSE) (sometimes in its PSNR form). The second is a distance between images that focuses on perceptual similarity (PS)... (a minimal PSNR sketch appears after this table) |
| Researcher Affiliation | Academia | School of Electrical Engineering, Tel Aviv University, Tel Aviv, Israel |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper mentions a companion technical report (Abu Hussein, Tirer, and Giryes 2019. Image-adaptive GAN based reconstruction. arXiv preprint arXiv:1906.05284), but does not provide a direct link to open-source code for the described methodology. |
| Open Datasets | Yes | In our experiments we use two recently proposed GAN models... BEGAN (Berthelot, Schumm, and Metz 2017), trained on CelebA dataset (Liu et al. 2015)... The second is PGGAN (Karras et al. 2017), trained on CelebA-HQ dataset (Karras et al. 2017)... |
| Dataset Splits | No | The paper describes the optimization process and iteration counts, but it does not specify explicit training/validation/test dataset splits or mention a validation set used for hyperparameter tuning. |
| Hardware Specification | Yes | For example, for compression ratio of 0.5, using NVIDIA RTX 2080ti GPU we got the following per image run-time: for CelebA: DIP 100s, CSGM 30s, and IAGAN 35s; and for CelebA-HQ: DIP 1400s, CSGM 120s, and IAGAN 140s. |
| Software Dependencies | No | The paper mentions using "ADAM optimizer" but does not provide specific version numbers for any software dependencies, libraries, or frameworks used in the experiments. |
| Experiment Setup | Yes | For CSGM we follow (Bora et al. 2017) and optimize (2) using ADAM optimizer (Kingma and Ba 2014) with learning rate (LR) of 0.1. We use 1600 iterations for BEGAN and 1800 iterations for PGGAN... In the reconstruction based on image-adaptive GANs, which we denote by IAGAN, we initialize z with ẑ, and then optimize (3) jointly for z and θ (the generator parameters). For BEGAN we use LR of 10^-4 for both z and θ in all scenarios, and for PGGAN we use LR of 10^-4 and 10^-3 for z and θ, respectively. For BEGAN, we use 600 iterations for compressed sensing and 500 for super-resolution. For PGGAN we use 500 and 300 iterations for compressed sensing and super-resolution, respectively. (a hedged code sketch of this joint optimization appears after this table) |
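The Research Type row quotes the paper's two quantitative measures. For reference, below is a minimal sketch of the PSNR form of MSE; `max_val` is an assumed dynamic-range parameter (the quoted text does not specify it), and the perceptual-similarity distance is omitted since it requires a pretrained network (e.g., an LPIPS-style metric).

```python
import numpy as np

def psnr(x, y, max_val=1.0):
    """Peak signal-to-noise ratio between two images with values in
    [0, max_val] (max_val is an assumption, not stated in the quote)."""
    mse = np.mean((np.asarray(x, np.float64) - np.asarray(y, np.float64)) ** 2)
    return float("inf") if mse == 0.0 else 10.0 * np.log10(max_val ** 2 / mse)
```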
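The Experiment Setup row describes two stages: CSGM, which optimizes objective (2) over the latent z alone, and IAGAN, which initializes z with the CSGM estimate ẑ and then optimizes objective (3) jointly over z and the generator parameters θ. Below is a minimal PyTorch sketch of the IAGAN stage under the quoted learning rates; `generator`, `degrade`, `y`, and `z_hat` are hypothetical placeholders (the objective is assumed to be a least-squares data-fidelity term through a known degradation operator), not the authors' released code.

```python
import torch

def iagan_reconstruct(generator, degrade, y, z_hat,
                      lr_z=1e-4, lr_theta=1e-4, n_iters=600):
    """Hedged sketch: jointly tune the latent z and the generator weights,
    starting from the CSGM estimate z_hat. Defaults mirror the quoted
    BEGAN compressed-sensing settings (LR 1e-4 for both, 600 iterations)."""
    z = z_hat.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([
        {"params": [z], "lr": lr_z},
        {"params": generator.parameters(), "lr": lr_theta},
    ])
    for _ in range(n_iters):
        opt.zero_grad()
        x_hat = generator(z)                        # candidate image G_theta(z)
        loss = ((degrade(x_hat) - y) ** 2).mean()   # ||A G_theta(z) - y||^2
        loss.backward()
        opt.step()
    return generator(z).detach()
```

The CSGM stage quoted above would correspond to the same loop with the generator parameters frozen and a single LR of 0.1 over z.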