AverNet: All-in-one Video Restoration for Time-varying Unknown Degradations

Authors: Haiyu Zhao, Lei Tian, Xinyan Xiao, Peng Hu, Yuanbiao Gou, Xi Peng

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments are carried out on two synthesized datasets featuring seven types of degradations with random corruption levels.
Researcher Affiliation | Collaboration | Haiyu Zhao (1), Lei Tian (2), Xinyan Xiao (2), Peng Hu (1), Yuanbiao Gou (1), Xi Peng (1,3). 1: College of Computer Science, Sichuan University, China. 2: Baidu Inc., Beijing, China. 3: State Key Laboratory of Hydraulics and Mountain River Engineering, Sichuan University, China.
Pseudocode | No | The paper describes methods with diagrams and equations but does not include a formal pseudocode or algorithm block.
Open Source Code | Yes | The code is available at https://github.com/XLearning-SCU/2024-NeurIPS-AverNet.
Open Datasets | Yes | We adopt two widely-used video datasets in experiments, i.e., DAVIS [41] and Set8 [42].
Dataset Splits | Yes | We train all models on the DAVIS training set, and test them on DAVIS-test and Set8.
Hardware Specification | Yes | The experiments are conducted in the PyTorch [45] framework with four NVIDIA GeForce RTX 3090 GPUs.
Software Dependencies | No | The paper mentions the 'PyTorch [45] framework' and 'pre-trained SPyNet [43, 44]' but does not provide specific version numbers for these software components.
Experiment Setup | Yes | The number of channels is set to 96, and the embedding length, dimension, and size of prompts are set to 5, 96, and 96×96, respectively. For training, we use the Charbonnier loss [46] and the Adam [47] optimizer with β1 = 0.9 and β2 = 0.999. The initial learning rates of the main and optical flow networks are set to 1e-4 and 2.5e-5, respectively, and are gradually decreased to 1e-7 through a cosine annealing strategy [48]. The number and resolution of input frames are set to 12 and 256×256, respectively. We train the networks with a batch size of 1 for 600K iterations.
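The training recipe quoted above (Charbonnier loss, learning rate cosine-annealed from 1e-4 down to 1e-7 over 600K iterations) can be sketched in plain Python. This is a minimal illustration of the two formulas, not the paper's actual code; the helper names `charbonnier_loss` and `cosine_annealing_lr` are our own.

```python
import math

def charbonnier_loss(pred, target, eps=1e-3):
    # Charbonnier penalty: a smooth, differentiable variant of L1 loss,
    # sqrt(diff^2 + eps^2) averaged over all elements.
    return sum(math.sqrt((p - t) ** 2 + eps ** 2)
               for p, t in zip(pred, target)) / len(pred)

def cosine_annealing_lr(step, total_steps=600_000, lr_init=1e-4, lr_min=1e-7):
    # Learning rate decayed from lr_init to lr_min along a half cosine,
    # matching the schedule described for the main network.
    return lr_min + 0.5 * (lr_init - lr_min) * (
        1.0 + math.cos(math.pi * step / total_steps))

# At step 0 the rate equals the reported initial 1e-4;
# by the final iteration it has decayed to the reported floor of 1e-7.
```

The optical flow network would use the same schedule with `lr_init=2.5e-5`; in practice both would be wrapped in PyTorch's built-in Adam optimizer and `CosineAnnealingLR` scheduler rather than computed by hand.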