Attention-based Pyramid Dilated Lattice Network for Blind Image Denoising

Authors: Mohammad Nikzad, Yongsheng Gao, Jun Zhou

IJCAI 2021

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Our extensive experimental investigation verifies the effectiveness and efficiency of the APDL architecture for image denoising as well as JPEG artifacts suppression tasks." |
| Researcher Affiliation | Academia | "Mohammad Nikzad, Yongsheng Gao and Jun Zhou. Institute for Integrated and Intelligent Systems (IIIS), Griffith University, Australia. m.nikzaddehaji@griffithuni.edu.au, {yongsheng.gao, jun.zhou}@griffith.edu.au" |
| Pseudocode | No | The paper includes architectural diagrams (Figure 1 and Figure 2) and descriptions of components, but it does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement about releasing source code or a link to a code repository. |
| Open Datasets | Yes | "We use 400 images of size 180×180 from Berkeley segmentation dataset (BSD500) [Arbelaez et al., 2010] as the training data for the image restoration tasks." |
| Dataset Splits | No | The paper mentions training and test datasets but does not describe a validation split or how one was used alongside training and testing. |
| Hardware Specification | Yes | "All models are trained using a single NVIDIA Geforce Titan RTX GPU card." |
| Software Dependencies | No | The paper mentions using the Adam optimizer and the cosine annealing technique but does not specify version numbers for any software dependencies or frameworks (e.g., Python, PyTorch, TensorFlow). |
| Experiment Setup | Yes | "We adopt Adam optimizer with default hyper-parameters and 10^-5 as the weight decay for the training of the proposed models. All models are trained for 100 epochs with minibatch size of 32 and initial learning rate of 0.001 which is decayed to 10^-5 by adopting cosine annealing technique." |
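The reported training setup (Adam with default hyper-parameters, 10^-5 weight decay, 100 epochs, initial learning rate 0.001 annealed to 10^-5 with a cosine schedule) can be sketched as follows. This is a minimal illustration, assuming PyTorch; the paper does not name a framework, and the tiny stand-in model and skipped data loop are hypothetical placeholders, not the authors' APDL implementation.

```python
import torch
from torch import nn, optim

# Hypothetical stand-in model; the APDL architecture itself is not released.
model = nn.Conv2d(1, 1, kernel_size=3, padding=1)

EPOCHS = 100  # as reported in the paper

# Adam with default hyper-parameters and weight decay 1e-5, per the paper.
optimizer = optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)

# Cosine annealing from the initial 1e-3 down to 1e-5 over 100 epochs.
scheduler = optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=EPOCHS, eta_min=1e-5
)

for epoch in range(EPOCHS):
    # ... forward/backward passes over minibatches of size 32 would go here ...
    optimizer.step()   # placeholder step so the scheduler update order is valid
    scheduler.step()   # anneal the learning rate once per epoch

# After the full schedule, the learning rate has decayed to eta_min (1e-5).
final_lr = optimizer.param_groups[0]["lr"]
```

Stepping the scheduler once per epoch with `T_max=EPOCHS` drives the learning rate along a half-cosine from 0.001 to exactly `eta_min` at the final epoch, matching the decay described in the quoted setup.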