Test-Time Degradation Adaptation for Open-Set Image Restoration

Authors: Yuanbiao Gou, Haiyu Zhao, Boyun Li, Xinyan Xiao, Xi Peng

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "4. Experiments: In this section, we first introduce the experimental settings, and then show quantitative and qualitative results on multiple degradations. Finally, we perform analysis experiments including ablation studies and result visualizations."
Researcher Affiliation | Collaboration | Yuanbiao Gou¹, Haiyu Zhao¹, Boyun Li¹, Xinyan Xiao², Xi Peng¹ (¹College of Computer Science, Sichuan University, Chengdu, China; ²Baidu Inc., Beijing, China).
Pseudocode | No | The paper contains no pseudocode or algorithm blocks; the methods are described in prose, diagrams, and flowcharts.
Open Source Code | Yes | "The code is available at https://github.com/XLearning-SCU/2024-ICML-TAO."
Open Datasets | Yes | "In experiments, we introduce the HSTS dataset from RESIDE (Li et al., 2018a) for evaluations. ... we introduce the test subset from LOL (Wei et al., 2018b) dataset. ... we introduce Kodak24 dataset which consists of 24 natural clean images, and is commonly used for testing image denoising methods."
Dataset Splits | No | "LOL includes 485 training and 15 test image pairs of low- and normal-light..." Beyond mentioning such test subsets, the paper does not explicitly provide training/validation/test splits, percentages, or split methodology for the other datasets used, such as HSTS or Kodak24.
Hardware Specification | Yes | "all experiments are conducted through PyTorch framework on Ubuntu 20.04 with GeForce RTX 3090 GPUs."
Software Dependencies | No | The paper mentions the "PyTorch framework on Ubuntu 20.04" and the Adam optimizer, but does not provide specific version numbers for PyTorch, Ubuntu, or other software libraries.
Experiment Setup | Yes | "In experiments, we employ an unconditional image diffusion model (Dhariwal & Nichol, 2021) pretrained on ImageNet (Deng et al., 2009) as PDM, and set the timestep as T = 1000, which is further divided into three stages in a heuristic way, i.e., the first stage 999-700, the second stage 700-50, and the third stage 50-0. Both the adapter and discriminators are four-layer convolutional networks, and optimized once at each denoising step through the Adam optimizer with a default learning rate of 1e-3."
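
The setup quoted above maps naturally onto a short sketch. The PyTorch snippet below is a minimal illustration, not the authors' implementation: the layer widths, kernel sizes, toy denoising step, and toy loss are assumptions, while the four-layer convolutional adapter/discriminators, the single Adam update (lr 1e-3) per denoising step, and the T = 1000 schedule split into stages 999-700, 700-50, and 50-0 come from the paper.

```python
# Minimal, self-contained sketch of the reported setup. Layer widths,
# kernel sizes, the stand-in denoising step, and the toy loss are
# assumptions; only the four-layer conv nets, Adam with lr 1e-3, one
# update per denoising step, and the T = 1000 stage split are from
# the paper.
import torch
import torch.nn as nn

def four_layer_conv(in_ch=3, out_ch=3, width=64):
    """Four-layer conv net; widths and kernels are illustrative guesses."""
    return nn.Sequential(
        nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(width, out_ch, 3, padding=1),
    )

adapter = four_layer_conv()                # assumed: maps samples toward the degraded domain
discriminator = four_layer_conv(out_ch=1)  # assumed: patch-level real/fake scores

optimizer = torch.optim.Adam(
    list(adapter.parameters()) + list(discriminator.parameters()),
    lr=1e-3,  # default learning rate reported in the paper
)

T = 1000
stages = [(999, 700), (700, 50), (50, 0)]  # heuristic stage split from the paper

y = torch.rand(1, 3, 64, 64)   # the observed degraded image (dummy data)
x_t = torch.randn_like(y)      # diffusion state at step T

for t in reversed(range(T)):
    # Track which stage t falls into; the paper treats the stages
    # differently, which this sketch omits.
    stage = next(i for i, (hi, lo) in enumerate(stages) if lo <= t <= hi)

    # Stand-in for one reverse step of the pretrained unconditional
    # diffusion model (Dhariwal & Nichol, 2021); a real run would call
    # the PDM's sampler here.
    x_t = 0.99 * x_t + 0.01 * torch.randn_like(x_t)

    # Toy objective: push the discriminator's score of the adapted sample
    # toward its score of the degraded observation. The paper's actual
    # adversarial losses differ; this only demonstrates "optimized once
    # at each denoising step".
    fake_score = discriminator(adapter(x_t))
    real_score = discriminator(y)
    loss = (fake_score - real_score).mean().abs()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In an actual run, the toy denoising line would be replaced by the pretrained PDM's reverse sampler and the toy loss by the paper's adversarial objectives; the sketch only fixes the training-loop shape implied by the quoted setup.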