Parameter Efficient Adaptation for Image Restoration with Heterogeneous Mixture-of-Experts

Authors: Hang Guo, Tao Dai, Yuanchao Bai, Bin Chen, Xudong Ren, Zexuan Zhu, Shu-Tao Xia

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate that our AdaptIR achieves stable performance on single-degradation tasks, and excels in hybrid-degradation tasks, with fine-tuning only 0.6% parameters for 8 hours. We first employ single-degradation restoration tasks to assess the performance stability of different PETL methods, including image SR, color image denoising, image deraining, and low-light enhancement. Subsequently, we introduce hybrid degradation to further evaluate the ability to learn heterogeneous representations. In addition, we compare with recent all-in-one methods in both effectiveness and efficiency to demonstrate the advantages of applying PETL for generalized image restoration. Finally, we conduct ablation studies to reveal the working mechanism of the proposed method as well as different design choices. (A parameter-freezing sketch illustrating the 0.6% trainable-parameter ratio appears after this table.)
Researcher Affiliation | Academia | Hang Guo (1), Tao Dai (2), Yuanchao Bai (3), Bin Chen (3), Xudong Ren (1), Zexuan Zhu (2), Shu-Tao Xia (1,4); 1 Tsinghua University, 2 Shenzhen University, 3 Harbin Institute of Technology, 4 Peng Cheng Laboratory
Pseudocode | No | The paper describes methods with diagrams and mathematical formulations, but no explicit pseudocode or algorithm blocks are provided.
Open Source Code | No | https://github.com/csguoh/AdaptIR (on page 1). However, the NeurIPS checklist states 'No' for open access to code, with the justification: 'We will release our code after review, but we have already provided a detailed explanation of how to implement our algorithm and the specific implementation details in the paper.'
Open Datasets | Yes | For image SR, we choose DIV2K [36] and Flickr2K [37] as the training set, and we evaluate on Set5 [38], Set14 [39], BSDS100 [40], Urban100 [41], and Manga109 [42]. For color image denoising, training sets consist of DIV2K [36], Flickr2K [37], BSD400 [40], and WED [43], and we have two testing sets: CBSD68 [44] and Urban100 [41]. For low-light image enhancement, we utilize the training and testing set of LOLv1 [45]. (A per-task dataset summary appears after this table.)
Dataset Splits | No | The paper describes training and testing sets for various tasks, but does not specify a separate validation set or detailed train/validation/test splits with percentages or sample counts for reproduction.
Hardware Specification | Yes | All experiments are conducted on four NVIDIA 3080Ti GPUs.
Software Dependencies | No | We use AdamW [46] as the optimizer and train for 500 epochs.
Experiment Setup | Yes | We use AdamW [46] as the optimizer and train for 500 epochs. The learning rate is initialized to 1e-4 and decayed by half at {250,400,450,475} epochs. All experiments are conducted on four NVIDIA 3080Ti GPUs.
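The 'Experiment Setup' row above fully specifies the optimizer and learning-rate schedule. As a reading aid, here is a minimal PyTorch sketch of that schedule (AdamW, 500 epochs, initial learning rate 1e-4, halved at epochs 250/400/450/475); the placeholder model and loop structure are assumptions, not the authors' released code.

```python
import torch

# Minimal sketch of the reported schedule: AdamW, 500 epochs, lr 1e-4,
# halved at epochs {250, 400, 450, 475}. The model below is a placeholder
# standing in for the small set of trainable (adapter) parameters.
model = torch.nn.Linear(8, 8)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[250, 400, 450, 475], gamma=0.5
)

for epoch in range(500):
    # ... forward/backward over the restoration training data would go here ...
    optimizer.step()   # placeholder update (no gradients in this sketch)
    scheduler.step()   # lr: 1e-4 -> 5e-5 -> 2.5e-5 -> 1.25e-5 -> 6.25e-6
```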
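The 'Research Type' row quotes a trainable-parameter budget of 0.6%. Below is a toy sketch of how such a ratio is measured in parameter-efficient tuning (PETL): the pre-trained restoration backbone is frozen and only a small added module is trained. The backbone and adapter here are hypothetical stand-ins, not the paper's heterogeneous mixture-of-experts.

```python
import torch.nn as nn

# Toy stand-ins: a frozen "pre-trained" backbone plus a tiny trainable branch.
backbone = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(), nn.Conv2d(64, 3, 3, padding=1)
)
for p in backbone.parameters():
    p.requires_grad = False  # keep pre-trained weights fixed

adapter = nn.Conv2d(3, 3, 1)  # small trainable module (placeholder for the MoE adapter)

model = nn.ModuleDict({"backbone": backbone, "adapter": adapter})
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {100 * trainable / total:.2f}% of parameters")
```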
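Finally, the per-task training and testing sets quoted in the 'Open Datasets' row can be collected into a small configuration dict; the keys below are descriptive labels only, and the deraining datasets are omitted because they are not named in that row.

```python
# Per-task train/test sets as quoted in the "Open Datasets" row.
DATASETS = {
    "image_sr": {
        "train": ["DIV2K", "Flickr2K"],
        "test": ["Set5", "Set14", "BSDS100", "Urban100", "Manga109"],
    },
    "color_denoising": {
        "train": ["DIV2K", "Flickr2K", "BSD400", "WED"],
        "test": ["CBSD68", "Urban100"],
    },
    "low_light_enhancement": {
        "train": ["LOLv1"],
        "test": ["LOLv1"],
    },
}
```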