Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Prior Mismatch and Adaptation in PnP-ADMM with a Nonconvex Convergence Analysis
Authors: Shirin Shoushtari, Jiaming Liu, Edward P. Chandler, M. Salman Asif, Ulugbek S. Kamilov
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our first set of numerical results quantifies the impact of the prior distribution mismatch on the performance of PnP-ADMM on the problem of image super-resolution. Our second set of numerical results considers a simple and effective domain adaptation strategy that closes the performance gap due to the use of mismatched denoisers. |
| Researcher Affiliation | Academia | 1Washington University in St. Louis, St. Louis, MO, USA 2University of California, Riverside, CA, USA. |
| Pseudocode | Yes | Algorithm 1 PnP-ADMM |
| Open Source Code | Yes | The code for our numerical evaluation is available at https://github.com/wustl-cig/MMPnPADMM. |
| Open Datasets | Yes | We use DRUNet architecture (Zhang et al., 2021) for all image denoisers. To model prior mismatch, we train denoisers on five image datasets: MetFaces (Karras et al., 2020), AFHQ (Choi et al., 2020), CelebA (Liu et al., 2015), BreCaHAD (Aksac et al., 2019), and RxRx1 (Sypetkowski et al., 2023). |
| Dataset Splits | No | The paper mentions using a 'training dataset' and a 'test set', but does not specify a separate validation split, its size, or how it was used to tune hyperparameters for reproducibility. |
| Hardware Specification | No | No specific hardware details (e.g., GPU model, CPU type, memory size) used for running experiments are mentioned in the paper. |
| Software Dependencies | No | The paper mentions 'DRUNet architecture (Zhang et al., 2021)' and 'Adam optimizer (Kingma & Ba, 2015)' but does not provide specific version numbers for these software components or any other libraries like Python, PyTorch, or CUDA. |
| Experiment Setup | Yes | We use DRUNet architecture (Zhang et al., 2021) for all image denoisers. ... Our training dataset consists of 1000 randomly chosen, resized, or cropped image slices, each measuring 256×256 pixels. ... All denoisers (adapted, matched, and mismatched) were trained using the DRUNet architecture (Zhang et al., 2021) with Mean Squared Error (MSE) loss, employing the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 10^-4. We incorporated a noise level map strength that decreases logarithmically from σ_optim to σ = 0.01 over 15 iterations, where σ_optim is fine-tuned for optimal performance for each test image and prior individually. ... For our PnP-ADMM algorithm, we performed 15 iterations for all denoisers. In all experiments, the algorithm is initialized with z0 = s0 = 0. |
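The quoted setup (15 PnP-ADMM iterations, initialization z0 = s0 = 0, and a denoiser noise-map strength that decays logarithmically from σ_optim to σ = 0.01) can be sketched as below. This is a minimal illustration, not the authors' code: `prox_data` and `denoiser` are hypothetical stand-ins for the paper's data-fidelity proximal operator and trained DRUNet denoiser.

```python
import numpy as np

def pnp_admm(y, prox_data, denoiser, sigma_optim, num_iters=15, sigma_min=0.01):
    """Sketch of a PnP-ADMM loop matching the reported setup.

    prox_data : callable(v, y) -> array, proximal step on the data-fidelity term
    denoiser  : callable(v, sigma) -> array, learned denoiser acting as the prior
    """
    # Noise-map strength decreases logarithmically from sigma_optim to sigma_min
    sigmas = np.logspace(np.log10(sigma_optim), np.log10(sigma_min), num_iters)
    z = np.zeros_like(y)  # paper initializes z0 = 0
    s = np.zeros_like(y)  # and s0 = 0
    x = z
    for sigma in sigmas:
        x = prox_data(z - s, y)     # data-fidelity proximal update
        z = denoiser(x + s, sigma)  # denoiser replaces the prior's proximal map
        s = s + (x - z)             # scaled dual-variable update
    return x

# Toy usage with placeholder operators (not the paper's super-resolution setup):
y = np.ones(4)
x_hat = pnp_admm(
    y,
    prox_data=lambda v, y: 0.5 * (v + y),   # e.g. prox of 0.5*||x - y||^2
    denoiser=lambda v, sigma: v,            # identity "denoiser" placeholder
    sigma_optim=0.5,
)
```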