Towards High-Fidelity Face Self-Occlusion Recovery via Multi-View Residual-Based GAN Inversion
Authors: Jinsong Chen, Hu Han, Shiguang Shan
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate that our approach outperforms the state-of-the-art methods in face self-occlusion recovery under unconstrained scenarios. |
| Researcher Affiliation | Academia | 1 Key Laboratory of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China 2 University of Chinese Academy of Sciences, Beijing 100049, China 3 Pengcheng National Laboratory, Shenzhen 518055, China chenjinsong20@mails.ucas.ac.cn, {hanhu, sgshan}@ict.ac.cn |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement or a link to the open-source code for the described methodology. |
| Open Datasets | Yes | The face image datasets we used for training are CelebA (Liu et al. 2018) and the FFHQ dataset collected by (Karras, Laine, and Aila 2019). |
| Dataset Splits | No | The paper mentions using CelebA and FFHQ datasets for training and MOFA-test for evaluation, but does not provide specific details on train/validation/test splits (e.g., percentages, sample counts, or explicit standard split references). |
| Hardware Specification | Yes | We implement our method with torch (1.7.1) and PyTorch3D (v0.4.0), and run our experiments on NVIDIA 1080Ti GPUs with Intel 2.1GHz CPUs. |
| Software Dependencies | Yes | We implement our method with torch (1.7.1) and PyTorch3D (v0.4.0) |
| Experiment Setup | Yes | We set hyper-parameters λrec = 1.9, λperc = 0.2 following (Deng et al. 2019b), and the other hyper-parameters empirically: λid = 0.8, λadv = 0.1. We set the input image size to 224 × 224 and the number of vertices and triangle faces to 35,709 and 70,897 respectively, the same as (Shang et al. 2020). |
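
The Experiment Setup and Software Dependencies rows above can be summarized as a training configuration. Below is a minimal sketch: the loss weights, input size, and mesh dimensions are taken from the paper's quoted setup, while the `TrainConfig` structure, field names, and the `total_loss` helper are illustrative assumptions rather than the authors' released code (none is available).

```python
# Sketch of the reported training configuration.
# Paper-reported values: lambda_rec=1.9, lambda_perc=0.2 (following Deng et al. 2019b),
# lambda_id=0.8, lambda_adv=0.1 (set empirically), 224x224 inputs, and a mesh with
# 35,709 vertices / 70,897 triangle faces (same as Shang et al. 2020).
# Structure and helper function below are hypothetical, for illustration only.
from dataclasses import dataclass

import torch  # paper reports torch 1.7.1; PyTorch3D v0.4.0 is used for rendering


@dataclass
class TrainConfig:
    # Loss weights
    lambda_rec: float = 1.9
    lambda_perc: float = 0.2
    lambda_id: float = 0.8
    lambda_adv: float = 0.1
    # Input and mesh resolution
    image_size: int = 224
    num_vertices: int = 35_709
    num_faces: int = 70_897


def total_loss(cfg: TrainConfig, losses: dict) -> torch.Tensor:
    """Weighted sum of the individual loss terms.

    `losses` is assumed to hold scalar tensors keyed by term name; the exact
    definition of each term is given in the paper, not here.
    """
    return (cfg.lambda_rec * losses["rec"]
            + cfg.lambda_perc * losses["perc"]
            + cfg.lambda_id * losses["id"]
            + cfg.lambda_adv * losses["adv"])
```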