Cross-Scale Self-Supervised Blind Image Deblurring via Implicit Neural Representation

Authors: Tianjing Zhang, Yuhui Quan, Hui Ji

NeurIPS 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Combined with a progressively coarse-to-fine training scheme, the proposed method significantly outperforms existing self-supervised methods in extensive experiments.
Researcher Affiliation | Academia | (1) Department of Mathematics, National University of Singapore; (2) School of Computer Science and Engineering, South China University of Technology
Pseudocode | Yes | Algorithm 1: Self-supervised progressively coarse-to-fine training for BID
Open Source Code | Yes | The code of the proposed method is available on GitHub: https://github.com/tjzhang-nus/Deblur-INR
Open Datasets | Yes | Synthetic dataset with uniform blurring from Lai et al. [26]: this dataset consists of 100 images categorized into five groups: Manmade, Natural, People, Saturated, and Text [...]; synthetic dataset with modest non-uniform blurring from Köhler et al. [22] [...]; real-world dataset RealBlur [48] [...]; microscopic dataset [45] [...]; Levin et al.'s dataset [28]
Dataset Splits | No | The paper describes testing on various datasets and explicitly mentions a 'test dataset' in some contexts, but it does not specify a training/validation/test split for reproducibility purposes, nor does it describe a validation set for any of the datasets used.
Hardware Specification | Yes | The results are reported in terms of running time, number of parameters, and memory usage when processing a 256 x 256 image with a 31 x 31 blur kernel on an NVIDIA RTX 3090 GPU.
Software Dependencies | No | The paper mentions software components like the "Adam optimizer" and using "SSIM [60]" as a metric, but it does not specify version numbers for these or any other software dependencies, which are necessary for a reproducible description.
Experiment Setup | Yes | The training consists of 5000 iterations across three stages. The first stage operates at the coarsest scale S0 for 500 iterations. The second stage refines training from scale S0 down to scale 0, with 500 iterations per scale. The final stage tunes at scale 0 for the remaining iterations. The NN is trained using the Adam optimizer with a batch size of 1. The initial learning rates for the image and kernel generators are set to 5e-3 and 5e-5, respectively, and are halved every 2000 iterations. The weight λ in the loss function is set to 0.001. (A minimal configuration sketch is given below the table.)
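
For reference, the following is a minimal PyTorch sketch of the training configuration reported above: one Adam optimizer with two parameter groups (5e-3 for the image generator, 5e-5 for the kernel generator), both learning rates halved every 2000 iterations, a weight λ = 0.001 in the loss, and 5000 iterations split across the three coarse-to-fine stages. The image_generator and kernel_generator modules, the scale_at schedule, the coarsest-scale value S0 = 3, and the stand-in loss terms are illustrative assumptions, not the authors' implementation; the actual code is in their repository linked above.

    import torch
    import torch.nn as nn

    # Placeholder generators; the paper uses implicit neural representations,
    # but any nn.Module works for sketching the optimization schedule.
    image_generator = nn.Sequential(nn.Linear(2, 256), nn.ReLU(), nn.Linear(256, 1))
    kernel_generator = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))

    # One Adam optimizer with two parameter groups, matching the reported
    # initial learning rates (5e-3 for the image INR, 5e-5 for the kernel).
    optimizer = torch.optim.Adam([
        {"params": image_generator.parameters(), "lr": 5e-3},
        {"params": kernel_generator.parameters(), "lr": 5e-5},
    ])
    # Both learning rates are halved every 2000 iterations.
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=2000, gamma=0.5)

    lam = 1e-3          # weight λ of the regularization term in the loss
    total_iters = 5000  # 5000 iterations in total, batch size 1

    def scale_at(it, S0=3):
        """Three-stage coarse-to-fine schedule: 500 iterations at the coarsest
        scale S0, then 500 iterations per scale while moving from S0 down to
        scale 0, then tuning at scale 0 for the remaining iterations.
        S0 = 3 is an assumed value for illustration."""
        if it < 500:
            return S0
        return max(S0 - (it - 500) // 500, 0)

    for it in range(total_iters):
        s = scale_at(it)  # in the real method, s selects the working resolution
        optimizer.zero_grad()
        # Hypothetical forward pass and loss; the real method reconstructs the
        # blurry input at scale s from the predicted sharp image and kernel.
        coords = torch.rand(1024, 2)
        latent_image = image_generator(coords)
        kernel = kernel_generator(coords)
        fidelity = latent_image.pow(2).mean()   # stand-in data-fidelity term
        regularizer = kernel.abs().mean()       # stand-in regularization term
        loss = fidelity + lam * regularizer
        loss.backward()
        optimizer.step()
        scheduler.step()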