Transcoded Video Restoration by Temporal Spatial Auxiliary Network

Authors: Li Xu, Gang He, Jinjia Zhou, Jie Lei, Weiying Xie, Yunsong Li, Yu-Wing Tai

AAAI 2022, pp. 2875-2883 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experimental results demonstrate that the proposed method is superior to previous techniques; this is shown both quantitatively and qualitatively.
Researcher Affiliation | Collaboration | Li Xu1, Gang He1,2, Jinjia Zhou3, Jie Lei1, Weiying Xie1, Yunsong Li1, Yu-Wing Tai2 (1Xidian University, China; 2Kuaishou Technology, China; 3Hosei University, Japan)
Pseudocode | No | The paper provides architectural diagrams and textual descriptions of its modules but does not include pseudocode or algorithm blocks.
Open Source Code | Yes | The code is available at https://github.com/icecherylXuli/TSAN.
Open Datasets | Yes | To establish a training dataset for video transcoding restoration, we employed 108 sequences from Xiph.org, VQEG, and the Joint Collaborative Team on Video Coding (JCT-VC) (Bossen et al. 2013).
Dataset Splits | Yes | The 108 sequences from Xiph.org, VQEG, and JCT-VC are used for training; all 18 standard test sequences from JCT-VC are adopted for testing.
Hardware Specification | Yes | We implement our TSAN with the PyTorch 1.6.0 framework on an NVIDIA GeForce 2080Ti GPU.
Software Dependencies | Yes | We implement our TSAN with the PyTorch 1.6.0 framework.
Experiment Setup | Yes | The batch size is set to 16 and the learning rate is initialized as 1e-4. The network training stops after 300k iterations. (A configuration sketch follows this table.)
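Taken together, the hardware, software, and setup rows pin down a concrete training configuration. Below is a minimal PyTorch sketch of that configuration only: batch size 16, Adam at an initial learning rate of 1e-4, and a 300k-iteration stopping point. The network and data here are placeholders, not the authors' method: the real TSAN architecture and transcoding pipeline live in the repository linked above, and the L1 loss and Adam optimizer are assumptions not stated in the rows.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# The report cites an NVIDIA GeForce 2080Ti; fall back to CPU if unavailable.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Placeholder network standing in for TSAN (the actual architecture is in
# https://github.com/icecherylXuli/TSAN); illustration only.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(64, 3, 3, padding=1),
).to(device)

# Dummy transcoded/ground-truth frame pairs; the real training set is built
# from the 108 Xiph.org/VQEG/JCT-VC sequences cited above.
pairs = TensorDataset(torch.rand(64, 3, 64, 64), torch.rand(64, 3, 64, 64))
loader = DataLoader(pairs, batch_size=16, shuffle=True)    # batch size 16

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # LR initialized as 1e-4
criterion = nn.L1Loss()  # loss choice is an assumption; the report does not state it

max_iters, step = 300_000, 0  # training stops after 300k iterations (reduce for a smoke test)
while step < max_iters:
    for lq, gt in loader:
        lq, gt = lq.to(device), gt.to(device)
        optimizer.zero_grad()
        loss = criterion(model(lq), gt)
        loss.backward()
        optimizer.step()
        step += 1
        if step >= max_iters:
            break
```

This sketch reproduces only the optimization hyperparameters quoted in the table; a faithful reproduction also requires the released TSAN model, its loss, and the transcoded-video data preparation from the authors' code.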