Deep Wiener Deconvolution: Wiener Meets Deep Learning for Image Deblurring
Authors: Jiangxin Dong, Stefan Roth, Bernt Schiele
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our extensive experimental results show that the proposed deep Wiener deconvolution network facilitates deblurred results with visibly fewer artifacts. Moreover, our approach quantitatively outperforms state-of-the-art non-blind image deblurring methods by a wide margin. (Section 4: Experimental Results) |
| Researcher Affiliation | Academia | Jiangxin Dong (MPI Informatics, jdong@mpi-inf.mpg.de); Stefan Roth (TU Darmstadt, stefan.roth@visinf.tu-darmstadt.de); Bernt Schiele (MPI Informatics, schiele@mpi-inf.mpg.de) |
| Pseudocode | No | No explicit pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | Yes | The PyTorch code and trained models are available at our Project page. |
| Open Datasets | Yes | We collect a training dataset including 400 images from the Berkeley segmentation [24] and 4744 images from the Waterloo Exploration [22] datasets. |
| Dataset Splits | No | The paper describes training and test datasets but does not explicitly provide details on a separate validation split or cross-validation methodology. It only mentions 'Test datasets' for evaluation. |
| Hardware Specification | No | No specific hardware details (like GPU/CPU models, memory, or cloud instance types) used for experiments were provided in the paper. |
| Software Dependencies | No | The paper mentions 'PyTorch code' but does not provide specific version numbers for PyTorch or any other software libraries or dependencies. |
| Experiment Setup | Yes | Balancing effectiveness and efficiency, we use a total of two scales in the multi-scale feature refinement module. We empirically use M = 16 features and set γ_l = 1. For training the network parameters, we adopt the Adam optimizer [14] with default parameters. The batch size is set to 8. The learning rate is initialized as 10^-4, which is halved every 200 epochs. |
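
For reference, a minimal PyTorch sketch of the training configuration quoted above (Adam with default parameters, batch size 8, learning rate 10^-4 halved every 200 epochs). The model/dataset interface, the L1 loss, and the epoch count are assumptions made for illustration; they are not taken from the paper.

```python
import torch
from torch.utils.data import DataLoader

def train(model, train_set, num_epochs=600):
    """Training loop following the hyperparameters quoted above.

    Assumptions (not stated in the quoted setup): `model` maps a
    (blurred image, blur kernel) pair to a restored image, the loss is L1,
    and `num_epochs` is a placeholder value.
    """
    loader = DataLoader(train_set, batch_size=8, shuffle=True)   # batch size 8
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)    # Adam, default parameters, lr = 1e-4
    # Halve the learning rate every 200 epochs.
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=200, gamma=0.5)

    for epoch in range(num_epochs):
        for blurred, kernel, sharp in loader:
            optimizer.zero_grad()
            restored = model(blurred, kernel)
            loss = torch.nn.functional.l1_loss(restored, sharp)
            loss.backward()
            optimizer.step()
        scheduler.step()
```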