Deep Non-Blind Deconvolution via Generalized Low-Rank Approximation

Authors: Wenqi Ren, Jiawei Zhang, Lin Ma, Jinshan Pan, Xiaochun Cao, Wangmeng Zuo, Wei Liu, Ming-Hsuan Yang

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on benchmark datasets with noisy and saturated pixels demonstrate that the proposed deconvolution approach relying on generalized low-rank approximation performs favorably against the state-of-the-arts.
Researcher Affiliation | Collaboration | Wenqi Ren (IIE, CAS); Jiawei Zhang (SenseTime Research); Lin Ma (Tencent AI Lab); Jinshan Pan (NJUST); Xiaochun Cao (IIE, CAS); Wangmeng Zuo (HIT); Wei Liu (Tencent AI Lab); Ming-Hsuan Yang (UC Merced, Google Cloud)
Pseudocode | No | The paper describes the algorithm and network architecture in text and diagrams but does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper states 'The implementation code, the trained model, as well as the test data, can be found at our project website.' but does not provide a specific URL or link to that website, so the code cannot be accessed from the paper alone.
Open Datasets | Yes | In order to generate blurred images for training, we use the BSD500 dataset [1] and randomly crop image patches with a size of 256 × 256 pixels as clear images. (A data-preparation sketch follows the table.)
Dataset Splits | No | The paper mentions using BSD500 for training and Flickr/BSD100 for testing, but it does not specify any validation set or a detailed breakdown of training/validation/test splits with percentages or sample counts.
Hardware Specification | Yes | For all the results reported in the paper, we train the network for 200,000 iterations, which takes 30 hours on an Nvidia K80 GPU.
Software Dependencies | No | The paper mentions using the ADAM optimizer and Xavier initialization, but it does not specify any software libraries (e.g., PyTorch, TensorFlow) or their version numbers.
Experiment Setup | Yes | The image patch size is set as 256 × 256 in the proposed network. We use the ADAM [14] optimizer with a batch size of 1 for training with the L2 loss. The initial learning rate is 0.0001 and is decreased by 0.5 for every 5,000 iterations. Note that we fix parameters in the second layer from the estimated M without tuning the parameters. The first three layers are trained using the initialization from separable inversion as described in Section 3.3. We use the Xavier initialization method [7] to set the weights of the last three convolutional kernels. For all the results reported in the paper, we train the network for 200,000 iterations... The default values of β1 and β2 (0.9 and 0.999) are used, and we set the weight decay to 0.00001. (A training-configuration sketch follows the table.)
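
The Open Datasets row quotes only the patch-cropping step of the training-data generation. The following is a minimal sketch of that step under stated assumptions: NumPy/SciPy and imageio for image handling, a local copy of the BSD500 training images at an assumed path, and a placeholder uniform blur kernel with Gaussian noise standing in for the paper's unspecified blur and noise models.

```python
# Minimal sketch (assumptions: NumPy/SciPy, imageio, BSD500 images under
# "BSD500/images/train/", a uniform 13x13 kernel, and Gaussian noise; the
# paper's exact kernels and noise model are not given in the quoted text).
import glob
import numpy as np
from imageio.v2 import imread
from scipy.ndimage import convolve

PATCH = 256  # patch size quoted in the paper

def random_clear_patch(image_paths, rng):
    """Crop a random 256x256 patch from a randomly chosen clear image."""
    img = imread(rng.choice(image_paths)).astype(np.float32) / 255.0
    top = rng.integers(0, img.shape[0] - PATCH + 1)
    left = rng.integers(0, img.shape[1] - PATCH + 1)
    return img[top:top + PATCH, left:left + PATCH]

def synthesize_blurred(clear, kernel, noise_sigma, rng):
    """Blur each color channel with the kernel and add Gaussian noise (assumed degradation)."""
    blurred = np.stack(
        [convolve(clear[..., c], kernel, mode="reflect") for c in range(clear.shape[-1])],
        axis=-1,
    )
    return np.clip(blurred + rng.normal(0.0, noise_sigma, blurred.shape), 0.0, 1.0)

rng = np.random.default_rng(0)
paths = glob.glob("BSD500/images/train/*.jpg")          # assumed local path
kernel = np.full((13, 13), 1.0 / 169.0, np.float32)     # placeholder blur kernel
clear = random_clear_patch(paths, rng)                  # quoted cropping step
blurred = synthesize_blurred(clear, kernel, 0.01, rng)  # assumed degradation step
```

The kernel and noise level above are illustrative constants only; in the paper's setting they would be chosen to match the blur and noise conditions of the test data.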
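
The Experiment Setup row lists the quoted hyperparameters without naming a framework. The sketch below wires those numbers (Adam with learning rate 0.0001, β1 = 0.9, β2 = 0.999, weight decay 0.00001, batch size 1, L2 loss, learning rate halved every 5,000 iterations, 200,000 iterations, a fixed second layer, Xavier initialization for the last three convolutional layers) into a training loop. PyTorch, the six-layer convolutional stand-in, and the random stand-in data are assumptions, and the separable-inversion initialization of the first three layers is not reproduced.

```python
# Minimal sketch (assumptions: PyTorch as the framework, a placeholder six-layer
# convolutional network, random tensors as stand-in data; the hyperparameters
# themselves follow the quoted setup).
import torch
import torch.nn as nn

model = nn.Sequential(                            # placeholder, not the paper's architecture;
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),    # the paper initializes its first three conv
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),   # layers from a separable inversion of the
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),   # kernel (Section 3.3), not reproduced here
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1),
)

convs = [m for m in model.modules() if isinstance(m, nn.Conv2d)]
for p in convs[1].parameters():                   # second layer fixed (from the estimated M)
    p.requires_grad_(False)
for conv in convs[-3:]:                           # Xavier init for the last three conv layers
    nn.init.xavier_uniform_(conv.weight)
    nn.init.zeros_(conv.bias)

criterion = nn.MSELoss()                          # L2 loss
optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad],
    lr=1e-4, betas=(0.9, 0.999), weight_decay=1e-5,
)
# "Decreased by 0.5 for every 5,000 iterations", applied verbatim.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5000, gamma=0.5)

for iteration in range(200_000):                  # 200,000 iterations, batch size 1
    # Stand-in pair; in practice (blurred, clear) would come from the BSD500 pipeline above.
    blurred, clear = torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256)
    optimizer.zero_grad()
    loss = criterion(model(blurred), clear)
    loss.backward()
    optimizer.step()
    scheduler.step()
```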