Geometry-Aware Face Completion and Editing
Authors: Linsen Song, Jie Cao, Lingxiao Song, Yibo Hu, Ran He
AAAI 2019, pp. 2506-2513
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results qualitatively and quantitatively demonstrate that our network is able to generate visually pleasing face completion results and edit face attributes as well. |
| Researcher Affiliation | Academia | National Laboratory of Pattern Recognition, CASIA; Center for Research on Intelligent Perception and Computing, CASIA; Center for Excellence in Brain Science and Intelligence Technology, CAS; University of Chinese Academy of Sciences, Beijing 100190, China |
| Pseudocode | No | The paper describes its algorithms and architectures using figures and text but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any concrete access information (e.g., a specific repository link or an explicit statement of code release) for its methodology. |
| Open Datasets | Yes | Datasets. We evaluate our model under both controlled and in-the-wild settings. To this end, two publicly available datasets are employed in our experiments: Multi-PIE (Gross et al. 2010) and CelebA (Liu et al. 2015b). |
| Dataset Splits | Yes | The standard split for CelebA is employed in our experiments, with 162,770 images for training, 19,867 for validation and 19,962 for testing. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU models, CPU models, or cloud computing instance types. |
| Software Dependencies | No | The paper mentions using 'Adam optimizer (Kingma and Ba 2015)' but does not list specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | We set the learning rate as 0.0002 and deploy the Adam optimizer (Kingma and Ba 2015) for the facial geometry estimator, the generator and the two discriminators. The FCENet is trained for 200 epochs on Multi-PIE and 20 epochs on CelebA. The weights λ1, λ2, λ3, λ4, λ5, λ6 in the overall loss are set as 10, 1, 1, 0.001, 0.01, 0.0001 in practice, respectively. |
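The reported experiment setup can be sketched in plain Python. Only the learning rate, the choice of Adam, and the six loss weights come from the paper; the loss-term names and the `total_loss` helper are illustrative assumptions, since the summary does not pair each weight with a named loss term.

```python
# Hedged sketch of the FCENet training configuration as reported above.
# Values from the paper: lr = 0.0002 (Adam) and weights lambda1..lambda6.
LEARNING_RATE = 2e-4
LOSS_WEIGHTS = {
    "lambda1": 10,
    "lambda2": 1,
    "lambda3": 1,
    "lambda4": 0.001,
    "lambda5": 0.01,
    "lambda6": 0.0001,
}

def total_loss(terms):
    """Combine six scalar loss terms with the reported weights.

    `terms` maps 'lambda1'..'lambda6' to loss values; the pairing of
    weights to specific losses is a hypothetical naming, not from the paper.
    """
    return sum(LOSS_WEIGHTS[k] * terms[k] for k in LOSS_WEIGHTS)

# With every term equal to 1.0, the overall loss is just the weight sum.
print(round(total_loss({k: 1.0 for k in LOSS_WEIGHTS}), 4))  # 12.0111
```

In a framework such as PyTorch, the same weights would typically scale the individual loss tensors before backpropagation, with one Adam optimizer per network (geometry estimator, generator, and the two discriminators), each at the same 0.0002 learning rate.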