Fourmer: An Efficient Global Modeling Paradigm for Image Restoration

Authors: Man Zhou, Jie Huang, Chun-Le Guo, Chongyi Li

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our paradigm, Fourmer, achieves competitive performance on common image restoration tasks such as image de-raining, image enhancement, image dehazing, and guided image super-resolution, while requiring fewer computational resources. Our contributions are summarized as follows: (1) We propose a global modeling paradigm for image restoration that balances effectiveness and efficiency in comparison to existing global modeling-based frameworks. (3) Our paradigm, Fourmer, achieves competitive performance on several mainstream image restoration tasks, such as image de-raining, enhancement, dehazing, and guided super-resolution, while requiring fewer computational resources.
Researcher Affiliation | Academia | 1 S-Lab, Nanyang Technological University, Singapore; 2 Department of Automation, University of Science and Technology of China, Hefei, China; 3 School of Computer Science, Nankai University, Tianjin, China.
Pseudocode | No | The paper includes architectural diagrams and mathematical equations, but no structured pseudocode or algorithm blocks.
Open Source Code | Yes | The code for Fourmer is publicly available at https://manman1995.github.io/.
Open Datasets | Yes | Low-light image enhancement: we evaluate our paradigm on two popular low-light image enhancement benchmarks, LOL (Chen Wei, 2018) and Huawei (Hai et al., 2021). The LOL dataset consists of 500 low-/normal-light image pairs, split into 485 for training and 15 for testing. The Huawei dataset contains 2,480 paired images, split into 2,200 for training and 280 for testing.
Dataset Splits | No | The paper details the training and testing splits for its datasets but does not explicitly mention a separate validation split or its size.
Hardware Specification | No | The paper states that the method requires fewer computational resources but does not specify the hardware used for experiments (e.g., GPU models or CPU specifications).
Software Dependencies | No | The paper mentions the use of FFT/IFFT algorithms but does not specify any software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions).
Experiment Setup | Yes | Optimization Flow: In addition to the novel network designs, we also introduce a new loss function for optimizing the network training for better results in both the spatial and frequency domains. The new loss function consists of two parts: a spatial-domain loss and a frequency-domain loss. In the spatial domain, we adopt the L1 loss function, as expressed in Equation (3). ... Finally, the overall loss function is formulated as L = Lspa + λLfre (Equation (5)), where λ is the weight factor and is set to 0.1.
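The quoted loss (L = Lspa + λLfre, with λ = 0.1) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the paper's excerpt above does not state the exact frequency-domain distance, so the L1 distance between complex 2-D FFT spectra used here, along with the function name `fourmer_style_loss`, is an assumption.

```python
import numpy as np

def fourmer_style_loss(pred, target, lam=0.1):
    """Two-part restoration loss: spatial L1 plus a weighted
    frequency-domain term, L = L_spa + lambda * L_fre (lambda = 0.1
    per the paper). The frequency-domain distance below (L1 on the
    complex FFT spectra) is an assumed, illustrative choice."""
    # Spatial-domain L1 loss (Equation (3) in the paper).
    l_spa = np.mean(np.abs(pred - target))
    # Frequency-domain loss: transform both images with the 2-D FFT
    # and compare their complex spectra element-wise.
    pred_f = np.fft.fft2(pred)
    target_f = np.fft.fft2(target)
    l_fre = np.mean(np.abs(pred_f - target_f))
    # Overall loss, Equation (5).
    return l_spa + lam * l_fre
```

In a training loop this would be evaluated on the restored image and its ground-truth counterpart; the loss is zero only when the two images match exactly, since both terms are non-negative.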