3D Face Synthesis Driven by Personality Impression
Authors: Yining Lang, Wei Liang, Yujia Wang, Lap-Fai Yu
AAAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate our approach for synthesizing 3D faces giving desired personality impressions on a variety of 3D face models. Perceptual studies show that the perceived personality impressions of the synthesized faces agree with the target personality impressions specified for synthesizing the faces. We conducted experiments on a Linux machine equipped with an Intel i7-5930K CPU, 32GB of RAM and a Nvidia GTX 1080 graphics card. To demonstrate how the similarity cost and personality impression cost affect the face synthesis results during the optimization, we do an ablation study of optimizing a face model with the personality impression type of hostile in Figure 2. |
| Researcher Affiliation | Academia | 1Beijing Laboratory of Intelligent Information Technology, Beijing Institute of Technology, 2George Mason University langyining@bit.edu.cn, liangwei@bit.edu.cn, wangyujia@bit.edu.cn, craigyu@gmu.edu. |
| Pseudocode | No | The paper describes the proposed methods in detail but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not include any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | To train the CNN classifiers, we collected 10,000 images from CASIA WebFace database (Yi et al. 2014) and annotated them with the corresponding personality impression. We train the Siamese network using the LFW dataset (Huang et al. 2007). |
| Dataset Splits | No | The paper mentions using the CASIA WebFace and LFW datasets for training, but does not provide specific details on train/validation/test splits (e.g., percentages or sample counts) for reproducibility. |
| Hardware Specification | Yes | We conducted experiments on a Linux machine equipped with an Intel i7-5930K CPU, 32GB of RAM and a Nvidia GTX 1080 graphics card. |
| Software Dependencies | No | The paper states that the approach was 'implemented in C++' and mentions 'fine-tuning GoogLeNet', but does not provide specific version numbers for software dependencies or libraries. |
| Experiment Setup | Yes | We use a GPU-based engine and implement asynchronous stochastic gradient descent with 0.9 momentum in the finetuning stage and a fixed learning rate schedule (with learning rate decreased by 4% every 8 epochs). The mini-batch size is 128. The constant ρ is set as 2.0. By default, we empirically set t to 1.0 and decrease it by 0.05 every 10 iterations until it reaches zero. In our experiments, we set α = 0.8 by default. |
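The Open Datasets row notes that a Siamese network trained on LFW supplies the similarity measure used during synthesis. The sketch below shows one plausible way such a network could be set up; the backbone architecture, embedding size, margin, and the use of a contrastive loss are assumptions for illustration, since the paper does not specify these details.

```python
# Hedged sketch of a Siamese similarity measure of the kind described in the paper.
# The backbone, embedding size, and contrastive loss are assumptions, not the
# authors' reported configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseEmbedding(nn.Module):
    """Tiny stand-in backbone mapping a face image to a unit-norm embedding."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)

def contrastive_loss(z1, z2, same_identity, margin: float = 1.0):
    """Pull embeddings of the same identity together, push different ones apart."""
    d = F.pairwise_distance(z1, z2)
    return (same_identity * d.pow(2)
            + (1 - same_identity) * F.relu(margin - d).pow(2)).mean()

# Example: comparing a synthesized face render against the original face model.
model = SiameseEmbedding()
a, b = torch.randn(4, 3, 112, 112), torch.randn(4, 3, 112, 112)
labels = torch.tensor([1.0, 0.0, 1.0, 0.0])  # 1 = same identity, 0 = different
loss = contrastive_loss(model(a), model(b), labels)
```

The embedding distance from such a network is one natural way to realize the similarity cost mentioned in the paper's ablation study, though the exact formulation used by the authors is not quoted in this report.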
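The Experiment Setup row quotes the fine-tuning and optimization schedules numerically. The minimal sketch below reproduces those schedules (learning rate reduced by 4% every 8 epochs; t annealed from 1.0 by 0.05 every 10 iterations until zero; α = 0.8); the base learning rate, the helper names, and the weighted-sum form of the total cost are assumptions not given in the quoted text.

```python
# Minimal sketch of the schedules quoted in the Experiment Setup row.
# base_lr and the weighted-sum combination of costs are assumptions.

def decayed_learning_rate(base_lr: float, epoch: int) -> float:
    """Fixed schedule: learning rate decreased by 4% every 8 epochs."""
    return base_lr * (0.96 ** (epoch // 8))

def annealed_temperature(iteration: int, t0: float = 1.0) -> float:
    """t starts at 1.0 and drops by 0.05 every 10 iterations until it reaches zero."""
    return max(0.0, t0 - 0.05 * (iteration // 10))

def total_cost(similarity_cost: float, personality_cost: float, alpha: float = 0.8) -> float:
    """Assumed weighted combination of the two costs with the paper's alpha = 0.8."""
    return alpha * similarity_cost + (1.0 - alpha) * personality_cost

if __name__ == "__main__":
    for epoch in (0, 8, 16, 24):
        print("epoch", epoch, "lr", decayed_learning_rate(0.01, epoch))
    for it in (0, 10, 100, 200):
        print("iteration", it, "t", annealed_temperature(it))
```

With these constants, t reaches zero after 200 iterations, which matches the quoted annealing description; the mini-batch size of 128 and ρ = 2.0 from the row are orthogonal to this sketch.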