Video Face Super-Resolution with Motion-Adaptive Feedback Cell
Authors: Jingwei Xin, Nannan Wang, Jie Li, Xinbo Gao, Zhifeng Li. Pages 12468-12475.
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive evaluations and comparisons validate the strengths of the approach; the experimental results demonstrate that the proposed framework outperforms state-of-the-art methods. |
| Researcher Affiliation | Collaboration | Jingwei Xin, Nannan Wang, Jie Li, Xinbo Gao, Zhifeng Li. State Key Laboratory of Integrated Services Networks, School of Electronic Engineering, Xidian University, Xi'an 710071, China; State Key Laboratory of Integrated Services Networks, School of Telecommunications Engineering, Xidian University, Xi'an 710071, China; Tencent AI Lab, China. |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | No | The paper uses the 'Vox Celeb dataset' but does not provide a specific link, DOI, repository, or formal citation for accessing it. |
| Dataset Splits | Yes | Table 1 (datasets used in facial video super-resolution) reports the Vox Celeb splits: Training — 100 objects, 3884 sequences, 776640 frames; Validation — 5 objects, 10 sequences, 2144 frames; Testing — 18 objects, 697 sequences, 139368 frames. The paper states: "Here we select 3884 video sequences of 100 people for training, 10 video sequences of 5 people for verification and 697 sequences of 18 people for testing." |
| Hardware Specification | Yes | Training a MAFN on Vox Celeb dataset generally takes 10 hours with one Titan X Pascal GPU. |
| Software Dependencies | No | The paper mentions 'pytorch environment' but does not provide specific version numbers for software dependencies. |
| Experiment Setup | Yes | The momentum parameter is set to 0.1, weight decay is set to 2×10⁻⁴, and the initial learning rate is set to 1×10⁻³ and halved every 10 epochs. Batch size is set to 16. |
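The learning-rate schedule in the last row (initial rate 1×10⁻³, halved every 10 epochs) can be sketched as a small helper. This is an illustrative reconstruction, not the authors' code; the function name and signature are assumptions.

```python
def learning_rate(epoch, base_lr=1e-3):
    """Illustrative schedule from the paper's reported setup:
    start at base_lr (1e-3) and halve it every 10 epochs."""
    return base_lr * (0.5 ** (epoch // 10))

# e.g. epochs 0-9 train at 1e-3, epochs 10-19 at 5e-4, and so on.
```

In a PyTorch training loop, the same behavior would typically be obtained with a step scheduler (step size 10, gamma 0.5) attached to the optimizer.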