A Decoder-free Transformer-like Architecture for High-efficiency Single Image Deraining
Authors: Xiao Wu, Ting-Zhu Huang, Liang-Jian Deng, Tian-Jing Zhang
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate the superiority of DFTL compared with competitive Transformer architectures, e.g., ViT, DETR, IPT, Uformer, and Restormer. The code is available at https://github.com/XiaoXiao-Woo/derain. In this section, we demonstrate the advantages of the proposed method via comprehensive experiments on both synthetic and real datasets. |
| Researcher Affiliation | Academia | Xiao Wu, Ting-Zhu Huang, Liang-Jian Deng and Tian-Jing Zhang, University of Electronic Science and Technology of China, Chengdu, 611731; wxwsx1997@gmail.com, tingzhuhuang@126.com, liangjian.deng@uestc.edu.cn, zhangtianjinguestc@163.com |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available at https://github.com/XiaoXiao-Woo/derain. |
| Open Datasets | Yes | We evaluate our model on five synthetic datasets to compare quantitative results: Rain12, Rain200L, Rain200H, DID-Data, and DDN-Data. |
| Dataset Splits | No | The paper discusses evaluating models on various datasets but does not explicitly provide details about specific training, validation, or test dataset splits, such as percentages or sample counts. |
| Hardware Specification | No | The paper mentions 'GPU memory requirements' and training on a 'single GPU' but does not provide specific hardware details such as GPU model, CPU type, or memory capacity used for the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependency details, such as programming language versions or library version numbers, needed to replicate the experiment. |
| Experiment Setup | No | The paper states 'All models are trained in the same framework with default settings as their original codes' and defers 'implementation details' to the supplementary materials, but the main text does not include specific hyperparameter values or training configurations. |