Optimization for Amortized Inverse Problems
Authors: Tianci Liu, Tong Yang, Quan Zhang, Qi Lei
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we evaluate the performance of the proposed algorithm on three inverse problem tasks, including denoising, noisy compressed sensing, and inpainting. |
| Researcher Affiliation | Academia | 1 Purdue University, United States 2 Peking University, China 3 Michigan State University, United States 4 New York University, United States. |
| Pseudocode | Yes | Algorithm 1 AIPO algorithm |
| Open Source Code | No | The paper does not provide an explicit statement or link for open-source code availability for the described methodology. |
| Open Datasets | Yes | The two models are trained on the CelebA dataset (Liu et al., 2015). |
| Dataset Splits | No | The paper mentions using samples from the CelebA test set but does not provide explicit training, validation, or test splits used within the experimental setup. |
| Hardware Specification | No | The paper does not provide specific hardware details such as CPU or GPU models used for the experiments. |
| Software Dependencies | No | The paper mentions software components such as Adam, Real NVP, and GLOW but does not specify version numbers for them or for any other software dependencies. |
| Experiment Setup | Yes | In all the experiments, we compare the algorithms with a prespecified λ, set to 0.3, 0.5, 1.0, 1.5, and 2.0, respectively. AIPO and the baseline algorithm with the MLE initialization require a solution to (4) on the NCS and inpainting tasks, where 500 iterations of projected gradient descent are run. Table 4 (hyper-parameters used in the amortized optimization of Algorithm 2): step size α = 0.05; iterations K = 40; target rate r = 0.05; min. step size δ⁰min = Λ/20; δmin,1 = 4δ⁰min; δmin,h = δ⁰min/4. |
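The experiment setup above mentions running 500 iterations of projected gradient descent to solve the paper's subproblem (4). The exact objective and constraint set of (4) are not reproduced here, so the following is a minimal sketch under assumed stand-ins: a least-squares data-fit term ‖Ax − y‖² and projection onto a Euclidean ball; the function name, step size default, and constraint are illustrative, not the authors' implementation.

```python
import numpy as np

def projected_gradient_descent(A, y, radius, step=0.05, iters=500):
    """Sketch of projected gradient descent for a constrained least-squares
    problem: minimize ||A x - y||^2 subject to ||x|| <= radius.

    The objective and ball constraint are assumptions standing in for the
    paper's problem (4); 500 iterations matches the setup described above.
    """
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)   # gradient of the least-squares loss
        x = x - step * grad        # unconstrained gradient step
        norm = np.linalg.norm(x)
        if norm > radius:          # project back onto the feasible ball
            x = x * (radius / norm)
    return x
```

With `A` the identity, the iterates contract toward `y` when `y` is feasible, and toward its projection onto the ball otherwise, which is the expected behavior of the method.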