Omni-Kernel Network for Image Restoration

Authors: Yuning Cui, Wenqi Ren, Alois Knoll

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments demonstrate that our network achieves state-of-the-art performance on 11 benchmark datasets for three representative image restoration tasks, including image dehazing, image desnowing, and image defocus deblurring."
Researcher Affiliation | Academia | Yuning Cui (Technical University of Munich), Wenqi Ren* (Shenzhen Campus of Sun Yat-sen University), Alois Knoll (Technical University of Munich)
Pseudocode | No | The paper describes the architecture and its components in text and diagrams (Figure 2) but provides no pseudocode or algorithm blocks.
Open Source Code | Yes | "The code is available at https://github.com/c-yn/OKNet."
Open Datasets | Yes | "We conduct dehazing experiments on three kinds of datasets: daytime synthetic dataset (RESIDE (Li et al. 2018)), daytime real-world datasets (Dense-Haze (Ancuti et al. 2019), NH-HAZE (Ancuti, Ancuti, and Timofte 2020), O-Haze (Ancuti et al. 2018b), and I-Haze (Ancuti et al. 2018a)), and nighttime dataset (NHR (Zhang et al. 2020))... verify the effectiveness of the proposed network for single-image defocus deblurring using the widely used DPDD (Abuolaim and Brown 2020) dataset... Snow100K (Liu et al. 2018), SRRS (Chen et al. 2020), and CSD (Chen et al. 2021b)."
Dataset Splits | No | The paper names training and evaluation sets (e.g., "training OKNet-S on the RESIDE-Indoor (Li et al. 2018) dataset for 300 epochs and evaluating on SOTS-Indoor (Li et al. 2018)") but does not specify explicit train/validation/test splits (e.g., percentages or sample counts) for these datasets.
Hardware Specification | No | The paper does not provide hardware details (e.g., GPU/CPU models or memory specifications) used for running the experiments.
Software Dependencies | No | The paper mentions the Adam optimizer and the cosine annealing decay strategy but does not list software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x).
Experiment Setup | Yes | "The models are trained using the Adam optimizer (Kingma and Ba 2014) with β1 = 0.9 and β2 = 0.999. The batch size is set to 8. The learning rate is initially set to 2e-4 and decreased to 1e-6 gradually using the cosine annealing decay strategy (Loshchilov and Hutter 2016). For data augmentation, the cropped patches of size 256×256 are randomly horizontally flipped with a probability of 0.5. FLOPs are measured on 256×256 patch size." (Both points are illustrated in the sketches below the table.)
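
The Experiment Setup row translates directly into a standard training loop. Below is a minimal PyTorch sketch of that recipe; the model, dataset, loss function, and epoch count are placeholders for illustration (the authors' actual implementation is at https://github.com/c-yn/OKNet), not the paper's code.

```python
# Minimal sketch of the reported training recipe: Adam (beta1=0.9, beta2=0.999),
# batch size 8, lr decayed 2e-4 -> 1e-6 by cosine annealing, and 256x256 crops
# randomly horizontally flipped with probability 0.5.
# The model, dataset, loss, and epoch count below are placeholders, not OKNet.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Dataset


class PairedPatches(Dataset):
    """Stand-in for a paired degraded/clean dataset (e.g., RESIDE-Indoor)."""

    def __init__(self, n=8, patch=256):
        self.n, self.patch = n, patch

    def __len__(self):
        return self.n

    def __getitem__(self, idx):
        # Random tensors stand in for a randomly cropped 256x256 image pair.
        degraded = torch.rand(3, self.patch, self.patch)
        clean = torch.rand(3, self.patch, self.patch)
        # Random horizontal flip with probability 0.5, applied to both images.
        if torch.rand(1).item() < 0.5:
            degraded = torch.flip(degraded, dims=[-1])
            clean = torch.flip(clean, dims=[-1])
        return degraded, clean


model = nn.Conv2d(3, 3, 3, padding=1)  # placeholder for OKNet
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4, betas=(0.9, 0.999))
epochs = 300  # per the paper's reported RESIDE-Indoor schedule for OKNet-S
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=epochs, eta_min=1e-6)  # cosine decay 2e-4 -> 1e-6
loader = DataLoader(PairedPatches(), batch_size=8, shuffle=True)
criterion = nn.L1Loss()  # assumption: a common restoration loss; the paper's may differ

for epoch in range(epochs):
    for degraded, clean in loader:
        optimizer.zero_grad()
        loss = criterion(model(degraded), clean)
        loss.backward()
        optimizer.step()
    scheduler.step()  # one cosine step per epoch
```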
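
Likewise, the reported FLOPs measurement at a 256×256 input can be reproduced with an off-the-shelf complexity counter. The sketch below assumes the third-party ptflops package; the paper does not state which tool the authors used, and the model is again a stand-in.

```python
# Sketch: counting model complexity at the paper's 256x256 input size.
# Assumes the third-party ptflops package; the authors' measurement tool is
# not stated in the paper, and the model here is a placeholder, not OKNet.
import torch.nn as nn
from ptflops import get_model_complexity_info

model = nn.Conv2d(3, 3, 3, padding=1)  # placeholder for OKNet
macs, params = get_model_complexity_info(
    model, (3, 256, 256), as_strings=True, print_per_layer_stat=False)
print(f"MACs: {macs}, Params: {params}")  # ptflops reports MACs (~FLOPs / 2)
```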