Joint Sub-bands Learning with Clique Structures for Wavelet Domain Super-Resolution

Authors: Zhisheng Zhong, Tiancheng Shen, Yibo Yang, Zhouchen Lin, Chao Zhang

NeurIPS 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive quantitative and qualitative experiments on benchmark datasets show that our method achieves superior performance over the state-of-the-art methods.
Researcher Affiliation | Academia | Zhisheng Zhong (1), Tiancheng Shen (1,2), Yibo Yang (1,2), Chao Zhang (1), Zhouchen Lin (1,3); 1 Key Laboratory of Machine Perception (MOE), School of EECS, Peking University; 2 Academy for Advanced Interdisciplinary Studies, Peking University; 3 Cooperative Medianet Innovation Center, Shanghai Jiao Tong University; {zszhong, tianchengshen, ibo, c.zhang, zlin}@pku.edu.cn
Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statements or links indicating the availability of open-source code for the described methodology.
Open Datasets | Yes | We trained all networks using images from DIV2K [33] and Flickr [25]. For testing, we used four standard benchmark datasets: Set5 [3], Set14 [41], BSDS100 [2] and Urban100 [12].
Dataset Splits | Yes | Following settings of [25], we used a batch size of 16 with size 32×32 for LR images... We recorded the best performance in terms of PSNR/SSIM [37] on Set5 with magnification factor 2 during 400 epochs.
Hardware Specification | No | The paper states "We conducted all experiments using PyTorch" but does not specify any hardware details such as CPU or GPU models or memory.
Software Dependencies | No | The paper mentions PyTorch but does not provide specific version numbers for it or any other software dependencies.
Experiment Setup | Yes | Following settings of [25], we used a batch size of 16 with size 32×32 for LR images, while the size of HR images changes according to the magnification factor. We randomly augmented the patches by flipping horizontally or vertically and rotating 90°. We chose parametric rectified linear units (PReLUs) as the activation function for our networks. The base learning rate was initialized to 10^-5 for all layers and decreased by a factor of 2 every 200 epochs. The total number of training epochs was set to 500. We used Adam [19] as our optimizer and conducted all experiments using PyTorch.
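
The quoted training details in the Experiment Setup row map onto standard PyTorch components. Below is a minimal sketch of that configuration, not the authors' implementation: the stand-in network, the synthetic patch tensors, and the L1 loss are assumptions introduced only for illustration, since the paper's model code is not released and the excerpt does not name the loss.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in network: the real SRCliqueNet is not released, so a tiny CNN with
# PReLU activations (the activation quoted from the paper) acts as a placeholder.
scale = 2
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.PReLU(),
    nn.Conv2d(64, 3 * scale * scale, 3, padding=1),
    nn.PixelShuffle(scale),
)

# Synthetic LR/HR patch pairs standing in for DIV2K/Flickr crops:
# batch size 16, 32x32 LR patches; HR size scales with the magnification factor.
lr_patches = torch.rand(32, 3, 32, 32)
hr_patches = torch.rand(32, 3, 32 * scale, 32 * scale)
loader = DataLoader(TensorDataset(lr_patches, hr_patches), batch_size=16, shuffle=True)

# Adam with the quoted base learning rate of 10^-5, halved every 200 epochs
# ("decreased by a factor of 2 every 200 epochs" -> StepLR with gamma=0.5).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=200, gamma=0.5)
loss_fn = nn.L1Loss()  # assumption: the quoted excerpt does not name the loss

for epoch in range(500):  # total number of training epochs set to 500
    for lr_batch, hr_batch in loader:
        # Random augmentation: horizontal/vertical flips and a 90-degree rotation.
        if torch.rand(1) < 0.5:
            lr_batch, hr_batch = lr_batch.flip([-1]), hr_batch.flip([-1])
        if torch.rand(1) < 0.5:
            lr_batch, hr_batch = lr_batch.flip([-2]), hr_batch.flip([-2])
        if torch.rand(1) < 0.5:
            lr_batch, hr_batch = lr_batch.rot90(1, [-2, -1]), hr_batch.rot90(1, [-2, -1])

        optimizer.zero_grad()
        loss = loss_fn(model(lr_batch), hr_batch)
        loss.backward()
        optimizer.step()
    scheduler.step()
```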
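
The Dataset Splits row states that the best checkpoint was selected by PSNR/SSIM [37] on Set5 at magnification factor 2. The sketch below shows only the PSNR side of that selection; the [0, 1] value range, the border crop, and the use of full RGB images are assumptions, as the quoted excerpt does not specify the evaluation protocol.

```python
import torch

def psnr(sr: torch.Tensor, hr: torch.Tensor, shave: int = 2, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio between a super-resolved and a ground-truth image.

    Assumptions not stated in the quoted excerpt: images lie in [0, 1], and a
    border of `shave` pixels (often set to the scale factor) is cropped before
    computing the MSE, as is common practice in SR benchmarks.
    """
    sr = sr[..., shave:-shave, shave:-shave]
    hr = hr[..., shave:-shave, shave:-shave]
    mse = torch.mean((sr - hr) ** 2)
    return float(10.0 * torch.log10(max_val ** 2 / mse))

# Example with dummy tensors standing in for a Set5 prediction/ground-truth pair.
sr_img = torch.rand(1, 3, 256, 256)
hr_img = (sr_img + 0.01 * torch.randn_like(sr_img)).clamp(0, 1)
print(f"PSNR: {psnr(sr_img, hr_img):.2f} dB")
```

During training, the quoted protocol amounts to evaluating this metric on Set5 after each epoch and keeping the checkpoint with the highest value observed over the 400 epochs mentioned in the excerpt.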