Tuning-free Plug-and-Play Proximal Algorithm for Inverse Imaging Problems
Authors: Kaixuan Wei, Angelica Aviles-Rivero, Jingwei Liang, Ying Fu, Carola-Bibiane Schönlieb, Hua Huang
ICML 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate, through numerical and visual experiments, that the learned policy can customize different parameters for different states, and often more efficient and effective than existing handcrafted criteria. |
| Researcher Affiliation | Academia | (1) School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China; (2) DPMMS, University of Cambridge, Cambridge, United Kingdom; (3) DAMTP, University of Cambridge, Cambridge, United Kingdom. Correspondence to: Ying Fu <fuying@bit.edu.cn>. |
| Pseudocode | No | The paper describes algorithmic steps and equations in paragraph form, but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not include an unambiguous statement that the authors are releasing the code for the described methodology, nor does it provide a direct link to a source-code repository. |
| Open Datasets | Yes | For training the denoising network, we follow the common practice that uses 87,000 overlapping patches (with size 128 × 128) drawn from 400 images from the BSD dataset (Martin et al., 2001). [...] To train the policy network and value network, we use the 17,125 resized images with size 128 × 128 from the PASCAL VOC dataset (Everingham et al., 2014). |
| Dataset Splits | No | The paper describes the datasets used for training and testing, but it does not specify any explicit validation dataset splits (e.g., percentages or sample counts for a validation set). |
| Hardware Specification | No | Table 1 includes a row for 'GPU runtime (ms)', implying the use of GPUs. However, no specific hardware details such as GPU models, CPU models, or memory specifications are provided. |
| Software Dependencies | No | The paper mentions software components and architectures like 'residual U-Net', 'ResNet-18', 'Adam optimizer', 'DnCNN', and 'MemNet'. However, it does not specify any version numbers for these software components or the programming languages/libraries used (e.g., Python 3.x, TensorFlow x.x, PyTorch x.x). |
| Experiment Setup | Yes | To reduce the computation cost, we define the transition function p to involve m iterations of the optimization. At each time step, the agent thus needs to decide the internal parameters for m iterates. We set m = 5 and the max time step N = 6 in our algorithm, leading to 30 iterations of the optimization at most. [...] We set η = 0.05 in our algorithm. [...] For training the denoising network, we follow the common practice that uses 87,000 overlapping patches (with size 128 × 128) drawn from 400 images from the BSD dataset (Martin et al., 2001). [...] The denoising networks are trained with 50 epochs using L1 loss and Adam optimizer (Kingma & Ba, 2014) with batch size 32. The base learning rate is set to 10⁻⁴ and halved at epoch 30, then reduced to 10⁻⁵ at epoch 40. [...] Both networks are trained using Adam optimizer with batch size 48 and 1500 iterations, with a base learning rate of 3 × 10⁻⁴ for the policy network and 10⁻³ for the value network. Then we set these learning rates to 10⁻⁴ and 3 × 10⁻⁴ at iteration 1000. We perform 10 gradient steps at every iteration. (A hedged code sketch of this training schedule follows the table.) |
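
The training schedule quoted in the Experiment Setup row is concrete enough to sketch. The snippet below is a minimal sketch only, assuming PyTorch (the paper names no framework and releases no code); the placeholder network stands in for the paper's residual U-Net denoiser, and the data loader is omitted. It reproduces just the quoted optimizer and learning-rate schedule for the denoising network; the policy/value-network schedule (3 × 10⁻⁴ / 10⁻³, dropped to 10⁻⁴ / 3 × 10⁻⁴ at iteration 1000, 10 gradient steps per iteration) would follow the same pattern.

```python
# Hedged sketch, not the authors' code: framework (PyTorch), module names, and the
# placeholder architecture below are assumptions. Only the quoted hyperparameter
# schedule for the denoising network is reproduced.
import torch
import torch.nn as nn

denoiser = nn.Sequential(                      # placeholder for the residual U-Net denoiser
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1),
)
criterion = nn.L1Loss()                        # trained with L1 loss, as quoted
optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-4)  # Adam, batch size 32


def denoiser_lr(epoch: int) -> float:
    """Base LR 1e-4, halved at epoch 30, reduced to 1e-5 at epoch 40 (quoted schedule)."""
    if epoch < 30:
        return 1e-4
    if epoch < 40:
        return 5e-5
    return 1e-5


for epoch in range(50):                        # 50 training epochs
    for group in optimizer.param_groups:
        group["lr"] = denoiser_lr(epoch)
    # for noisy, clean in loader:              # 128 × 128 BSD patches (loader not shown)
    #     loss = criterion(denoiser(noisy), clean)
    #     optimizer.zero_grad(); loss.backward(); optimizer.step()
```

Such a manual per-epoch schedule is one plausible reading of the quoted text; the authors may equally have used a built-in scheduler such as `torch.optim.lr_scheduler.MultiStepLR`, which is not stated in the paper.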