Cross-Scale Internal Graph Neural Network for Image Super-Resolution
Authors: Shangchen Zhou, Jiawei Zhang, Wangmeng Zuo, Chen Change Loy
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate the effectiveness of IGNN against the state-of-the-art SISR methods including existing non-local networks on standard benchmarks. |
| Researcher Affiliation | Collaboration | Shangchen Zhou1 Jiawei Zhang2 Wangmeng Zuo3 Chen Change Loy1 1Nanyang Technological University 2SenseTime Research 3Harbin Institute of Technology {s200094,ccloy}@ntu.edu.sg zhangjiawei@sensetime.com wmzuo@hit.edu.cn |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | https://github.com/sczhou/IGNN |
| Open Datasets | Yes | Following [23, 12, 45, 43, 5], we use 800 high-quality (2K resolution) images from DIV2K dataset [34] as training set. |
| Dataset Splits | No | The paper does not explicitly provide training/validation/test dataset splits. It mentions training on DIV2K and evaluating on standard benchmarks, but gives no validation split. |
| Hardware Specification | Yes | The IGNN is implemented on the PyTorch framework on an NVIDIA Tesla V100 GPU. |
| Software Dependencies | No | The paper mentions the 'PyTorch framework' but does not specify a version number or other software dependencies with their versions. |
| Experiment Setup | Yes | We set the minibatch size to 4 and train our model using ADAM [18] optimizer with the settings of β1 = 0.9, β2 = 0.999, ε = 10⁻⁸. The initial learning rate is set as 10⁻⁴ and then decreases to half for every 2 × 10⁵ iterations. Training is terminated after 8 × 10⁵ iterations. The network is trained by using ℓ1 norm loss. |
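The reported training setup (Adam with β1 = 0.9, β2 = 0.999, ε = 10⁻⁸, initial learning rate 10⁻⁴ halved every 2 × 10⁵ iterations, ℓ1 loss, minibatch size 4) can be sketched as a minimal PyTorch configuration. The model and data below are placeholders, not the IGNN architecture or DIV2K pipeline, and the loop is truncated for illustration (the paper trains for 8 × 10⁵ iterations):

```python
import torch

# Placeholder model standing in for IGNN (the real architecture is not reproduced here).
net = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)

# Optimizer settings as reported: Adam, lr = 1e-4, betas = (0.9, 0.999), eps = 1e-8.
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4,
                             betas=(0.9, 0.999), eps=1e-8)

# Halve the learning rate every 2e5 iterations; training stops at 8e5 iterations.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=200_000, gamma=0.5)

criterion = torch.nn.L1Loss()  # the paper's l1-norm loss

# Truncated loop with dummy tensors; the paper uses total_iters = 800_000 on DIV2K patches.
total_iters = 3
for it in range(total_iters):
    lr_batch = torch.randn(4, 3, 32, 32)  # minibatch of 4 (dummy low-res inputs)
    hr_batch = torch.randn(4, 3, 32, 32)  # dummy high-res targets
    optimizer.zero_grad()
    loss = criterion(net(lr_batch), hr_batch)
    loss.backward()
    optimizer.step()
    scheduler.step()
```

StepLR matches the "decrease to half every 2 × 10⁵ iterations" schedule only if `scheduler.step()` is called once per iteration, as above, rather than once per epoch.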