Fast and Robust Face-to-Parameter Translation for Game Character Auto-Creation

Authors: Tianyang Shi, Zhengxia Zou, Yi Yuan, Changjie Fan (pp. 1733-1740)

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Comparison results and ablation analysis on seven public face verification datasets suggest the effectiveness of our method. We test our method on two games with East Asian faces, Justice and Revelation, where the former is a PC game and the latter is a mobile game on Android/iOS devices. We use seven well-known face verification datasets to quantitatively evaluate the effectiveness of our method, including LFW (Huang et al. 2008), CFP-FF (Sengupta et al. 2016), CFP-FP (Sengupta et al. 2016), AgeDB (Moschoglou et al. 2017), CALFW (Zheng, Deng, and Hu 2017), CPLFW (Zheng and Deng 2018), and VGGFace2-FP (Cao et al. 2018).
Researcher Affiliation | Collaboration | Tianyang Shi (1), Zhengxia Zou (2), Yi Yuan (1), Changjie Fan (1); (1) NetEase Fuxi AI Lab, (2) University of Michigan, Ann Arbor. Emails: {shitianyang, yuanyi, fanchangjie}@corp.netease.com, zzhengxi@umich.edu
Pseudocode | No | The paper describes the implementation pipeline in bullet points but does not provide structured pseudocode or an explicitly labeled algorithm block.
Open Source Code | No | The paper does not explicitly state that the source code for the methodology is openly available, nor does it provide a link to a code repository.
Open Datasets | Yes | We use the CelebA dataset (Liu et al. 2015) to train our translator T. We use seven well-known face verification datasets to quantitatively evaluate the effectiveness of our method, including LFW (Huang et al. 2008), CFP-FF (Sengupta et al. 2016), CFP-FP (Sengupta et al. 2016), AgeDB (Moschoglou et al. 2017), CALFW (Zheng, Deng, and Hu 2017), CPLFW (Zheng and Deng 2018), and VGGFace2-FP (Cao et al. 2018). (A hedged CelebA loading sketch follows the table.)
Dataset Splits | No | The paper mentions using the CelebA dataset for training and describes some training steps (e.g., sampling from the full CelebA training set or from a subset of it), but it does not give explicit split details (percentages, sample counts for train/validation/test sets, or a citation for a predefined partition) that would make the partitioning reproducible.
Hardware Specification | Yes | "* Inference time under GTX 1080Ti." (A GPU latency-measurement sketch follows the table.)
Software Dependencies | No | The paper mentions software components such as LightCNN-29v2, ResNet-50, SGD, Adam, and the dlib library, but it does not give version numbers for these dependencies, which would be needed for full reproducibility. (A dlib landmark-detection sketch follows the table.)
Experiment Setup | Yes | We set batch size = 16, momentum = 0.9. The learning rate decay is set to 10% per 50 epochs and the training stops at 500 epochs. We then freeze the above networks, set λ1 = 0.01, λ2 = 1, and λ3 = 1, and use the Adam optimizer (Kingma and Ba 2014) to train T with a learning rate of 10^-4 and a maximum of 20 epochs. We set λ2 = 0 every 4 training steps. (A hedged training-configuration sketch follows the table.)
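
To make the "Open Datasets" entry above concrete, here is a minimal sketch of one common way to obtain the CelebA images used to train the translator T, via torchvision's built-in wrapper and CelebA's official partition file. This is an illustrative assumption, not the authors' pipeline; the crop size, resize target, and data-loading settings are placeholders.

```python
# Minimal sketch (not the authors' pipeline): loading CelebA with torchvision.
# The official list_eval_partition.txt split is selected via the `split` argument.
import torchvision.transforms as T
from torchvision.datasets import CelebA
from torch.utils.data import DataLoader

transform = T.Compose([
    T.CenterCrop(178),   # placeholder crop; the paper does not specify preprocessing
    T.Resize(224),       # placeholder size
    T.ToTensor(),
])

# split can be "train", "valid", "test", or "all" (official CelebA partition)
train_set = CelebA(root="data", split="train", download=True, transform=transform)
loader = DataLoader(train_set, batch_size=16, shuffle=True, num_workers=4)
```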
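The "Hardware Specification" entry only tells us that inference time was measured on a GTX 1080Ti. Below is a hedged sketch of how such a latency figure is typically measured in PyTorch; the warm-up count, iteration count, and the stand-in model mentioned in the usage comment are assumptions, not details from the paper.

```python
import time
import torch

def gpu_latency_ms(model: torch.nn.Module, x: torch.Tensor, iters: int = 100) -> float:
    """Average forward-pass time in milliseconds on the current CUDA device."""
    model = model.cuda().eval()
    x = x.cuda()
    with torch.no_grad():
        for _ in range(10):            # warm-up so one-time setup costs are excluded
            model(x)
        torch.cuda.synchronize()       # wait for queued kernels before starting the clock
        start = time.time()
        for _ in range(iters):
            model(x)
        torch.cuda.synchronize()
    return (time.time() - start) / iters * 1e3

# Hypothetical usage with a stand-in network and input size:
# print(gpu_latency_ms(torchvision.models.resnet50(), torch.randn(1, 3, 224, 224)))
```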
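The "Software Dependencies" entry notes that dlib is named without a version. For context, this sketch shows the standard dlib detector plus 68-point landmark predictor calls that face-alignment preprocessing of this kind usually relies on; the model file and image paths are placeholders, and it is not claimed to match the authors' exact preprocessing.

```python
import dlib

# Standard dlib face detection + 68-point landmark prediction.
# The .dat model file must be downloaded separately from dlib.net (path is a placeholder).
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = dlib.load_rgb_image("input_face.jpg")   # placeholder image path
faces = detector(img, 1)                      # 1 = upsample once to catch smaller faces

for face in faces:
    shape = predictor(img, face)
    landmarks = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
    # landmarks can then drive alignment/cropping before the recognition network
```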
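Finally, the "Experiment Setup" row quotes concrete hyperparameters. The sketch below restates them as a PyTorch training configuration purely as a reading aid: the network stand-ins, loss surrogates, phase-1 base learning rate, and the interpretation of "set λ2 = 0 every 4 training steps" are assumptions, not the authors' code.

```python
# Hedged sketch restating the quoted hyperparameters in PyTorch. Only the numeric
# settings marked "quoted" come from the paper; everything else is a placeholder.
import torch
import torch.nn as nn

N_PARAMS = 310                                   # hypothetical number of facial parameters
imitator = nn.Linear(N_PARAMS, 512)              # stand-in for the pre-trained, later frozen networks
translator = nn.Linear(512, N_PARAMS)            # stand-in for the translator T

# Phase 1 (quoted): batch size 16, momentum 0.9, LR decay of 10% per 50 epochs, 500 epochs.
opt1 = torch.optim.SGD(imitator.parameters(), lr=0.01, momentum=0.9)     # base LR assumed
sched1 = torch.optim.lr_scheduler.StepLR(opt1, step_size=50, gamma=0.9)  # "decay 10%" read as x0.9
# (phase-1 training loop omitted)

# Phase 2 (quoted): freeze the above, train T with Adam, lr 1e-4, for 20 epochs,
# with lambda1 = 0.01, lambda2 = 1, lambda3 = 1, and lambda2 zeroed every 4 training steps.
for p in imitator.parameters():
    p.requires_grad_(False)
opt2 = torch.optim.Adam(translator.parameters(), lr=1e-4)
lam1, lam2, lam3 = 0.01, 1.0, 1.0

for step in range(100):                          # placeholder loop; the paper trains T for 20 epochs
    emb = torch.randn(16, 512)                   # placeholder batch of face embeddings
    params = translator(emb)
    rendered = imitator(params)                  # frozen stand-in still used in the forward pass
    loss1 = rendered.pow(2).mean()               # placeholder surrogates for the paper's loss terms
    loss2 = params.abs().mean()
    loss3 = params.var()
    l2_weight = 0.0 if step % 4 == 0 else lam2   # one reading of "set lambda2 = 0 every 4 steps"
    loss = lam1 * loss1 + l2_weight * loss2 + lam3 * loss3
    opt2.zero_grad()
    loss.backward()
    opt2.step()
```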