ShuffleMixer: An Efficient ConvNet for Image Super-Resolution
Authors: Long Sun, Jinshan Pan, Jinhui Tang
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate that the proposed ShuffleMixer is about 3× smaller than the state-of-the-art efficient SR methods, e.g., CARN, in terms of model parameters and FLOPs while achieving competitive performance. |
| Researcher Affiliation | Academia | Long Sun, Jinshan Pan, Jinhui Tang, Nanjing University of Science and Technology {cs.longsun, jspan, jinhuitang}@njust.edu.cn |
| Pseudocode | No | The paper describes the network architecture with diagrams but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available at https://github.com/sunny2109/ShuffleMixer. |
| Open Datasets | Yes (see the data sketch after the table) | Following existing methods [22, 24, 23], we train our models on the DF2K dataset, a merged dataset with DIV2K [37] and Flickr2K [25], which contains 3450 (800 + 2650) high-quality images. |
| Dataset Splits | Yes | Table 2: Ablation studies of the shuffle mixer layer and the feature mixing block on ×4 DIV2K validation set [37]. |
| Hardware Specification | Yes | All experiments are conducted with the PyTorch framework on an Nvidia Tesla V100 GPU. |
| Software Dependencies | No | The paper mentions 'PyTorch framework' but does not specify its version number or any other software dependencies with version details. |
| Experiment Setup | Yes (see the training sketch after the table) | In each training mini-batch, we randomly crop 64 patches of size 64 × 64 from LR images as the input. The proposed model is trained by minimizing L1 loss and the frequency loss [5] with Adam [19] optimizer for 300,000 total iterations. The learning rate is set to a constant 5 × 10⁻⁴. |
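
For the Open Datasets row, here is a minimal sketch of assembling the DF2K training set, i.e. merging DIV2K's 800 HR images with Flickr2K's 2650, using standard PyTorch utilities. The directory paths, the `HRImageFolder` class, and the flat-folder PNG layout are assumptions about a local setup, not details taken from the paper.

```python
import glob
import os

from PIL import Image
from torch.utils.data import ConcatDataset, Dataset


class HRImageFolder(Dataset):
    """Hypothetical loader for a flat folder of high-resolution PNG images."""

    def __init__(self, root: str):
        self.paths = sorted(glob.glob(os.path.join(root, "*.png")))

    def __len__(self) -> int:
        return len(self.paths)

    def __getitem__(self, idx: int) -> Image.Image:
        return Image.open(self.paths[idx]).convert("RGB")


# DF2K = DIV2K (800 images) + Flickr2K (2650 images) = 3450 images.
# The folder names below are assumptions about where the data was unpacked.
df2k = ConcatDataset([
    HRImageFolder("datasets/DIV2K/DIV2K_train_HR"),
    HRImageFolder("datasets/Flickr2K/Flickr2K_HR"),
])
print(len(df2k))  # expected: 3450 once both datasets are downloaded
```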
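For the Experiment Setup row, here is a minimal PyTorch training-loop sketch under the reported settings: mini-batches of 64 LR crops of size 64 × 64, L1 loss plus a frequency-domain loss, and Adam at a constant learning rate of 5 × 10⁻⁴ for 300,000 iterations. The stand-in convolution, the random tensors, and the `LAMBDA_FREQ` weight are assumptions; the frequency term is written as an L1 distance between 2D FFTs, one common formulation of the loss the paper cites as [5], not necessarily the exact variant used.

```python
import torch
import torch.nn.functional as F


def frequency_loss(sr: torch.Tensor, hr: torch.Tensor) -> torch.Tensor:
    # L1 distance between the 2D FFTs of the output and the target; an
    # assumed formulation of the frequency loss referenced by the paper.
    return (torch.fft.fft2(sr, norm="ortho")
            - torch.fft.fft2(hr, norm="ortho")).abs().mean()


model = torch.nn.Conv2d(3, 3, 3, padding=1)  # placeholder, NOT ShuffleMixer
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)  # constant 5e-4
LAMBDA_FREQ = 0.1  # assumed weight; see the paper/repository for the value

for step in range(300_000):  # 300k total iterations as reported
    # Stand-in batch: 64 random LR patches of size 64x64 with x2 HR targets;
    # the real pipeline crops these from DF2K images instead.
    lr_patch = torch.rand(64, 3, 64, 64)
    hr_patch = torch.rand(64, 3, 128, 128)
    sr = F.interpolate(model(lr_patch), scale_factor=2, mode="bilinear")
    loss = F.l1_loss(sr, hr_patch) + LAMBDA_FREQ * frequency_loss(sr, hr_patch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```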