SiamTrans: Zero-Shot Multi-Frame Image Restoration with Pre-trained Siamese Transformers

Authors: Lin Liu, Shanxin Yuan, Jianzhuang Liu, Xin Guo, Youliang Yan, Qi Tian (pp. 1747-1755)

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section, we show the ablation study and comparison with state-of-the-art methods. Our algorithm is implemented on an NVIDIA Tesla V100 GPU in PyTorch."
Researcher Affiliation | Collaboration | Lin Liu1, Shanxin Yuan2*, Jianzhuang Liu2, Xin Guo1, Youliang Yan2, Qi Tian3. 1EEIS Department, University of Science and Technology of China; 2Huawei Noah's Ark Lab; 3Huawei Cloud BU. {ll0825,willing}@mail.ustc.edu.cn, {shanxin.yuan, liu.jianzhuang, yanyouliang, tian.qi1}@huawei.com
Pseudocode | No | The paper describes the method and uses mathematical formulations but does not include any explicitly labeled "Pseudocode" or "Algorithm" blocks or figures.
Open Source Code | No | The paper does not contain any explicit statement about releasing the source code, nor a link to a code repository for the described methodology.
Open Datasets | Yes | "The pre-training task is denoising with the Place365 dataset (Zhou et al. 2017). ... Since there is no existing short-sequence deraining dataset, we build our multi-frame deraining test set through extracting adjacent frames from the NTURain dataset (Chen et al. 2018)... The training set for the compared supervised methods is Rain100L (Yang et al. 2017)..."
Dataset Splits | No | The paper describes training and testing sets for various datasets, often for the compared supervised methods. However, it does not explicitly specify a validation split, or split percentages, used for developing or tuning its own model.
Hardware Specification | Yes | "Our algorithm is implemented on an NVIDIA Tesla V100 GPU in PyTorch."
Software Dependencies | No | The paper states: "Our algorithm is implemented on an NVIDIA Tesla V100 GPU in PyTorch." While PyTorch is mentioned, no version number is provided for it or for any other software dependency.
Experiment Setup | Yes | "In both the pre-training and zero-shot restoration, the batch size is set to 1 and the initial learning rate is 1×10⁻⁵. The algorithm runs for 20 epochs and 20 iterations for the pre-training and the hard patch refinement, respectively. For zero-shot restoration, it takes 200, 500, and 1000 iterations for demoiréing, desnowing, and deraining, respectively. The λ and α in Eqn. 8 and Eqn. 9/Eqn. 10 are empirically set to 5 and 0.9 respectively."
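The reported hyperparameters can be collected into a small configuration sketch for anyone attempting a reproduction. This is purely illustrative: the key names below are our own, and only the values are taken from the paper's quoted setup; the paper itself releases no code.

```python
# Hypothetical reproduction config for SiamTrans (key names are illustrative;
# values come from the paper's "Experiment Setup" description).
config = {
    "batch_size": 1,                      # used in pre-training and zero-shot restoration
    "initial_lr": 1e-5,                   # initial learning rate
    "pretrain_epochs": 20,                # pre-training on Place365 denoising
    "hard_patch_refinement_iters": 20,    # hard patch refinement
    "zero_shot_iters": {                  # per-task zero-shot restoration iterations
        "demoireing": 200,
        "desnowing": 500,
        "deraining": 1000,
    },
    "lambda": 5,                          # λ in Eqn. 8
    "alpha": 0.9,                         # α in Eqn. 9 / Eqn. 10
}

# Example lookup: iterations for the deraining task
deraining_iters = config["zero_shot_iters"]["deraining"]
```

Hardware and framework follow the paper's statement (a single NVIDIA Tesla V100 GPU, PyTorch), though no PyTorch version is given.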