Dense Interspecies Face Embedding

Authors: Sejong Yang, Subin Jeon, Seonghyeon Nam, Seon Joo Kim

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "To quantitatively evaluate our method over possible previous methodologies like unsupervised keypoint detection, we perform interspecies facial keypoint transfer on MAFL and AP-10K. Furthermore, the results of other applications like interspecies face image manipulation and dense keypoint transfer are provided."
Researcher Affiliation | Academia | Sejong Yang, Yonsei University (sejong.yang@yonsei.ac.kr); Subin Jeon, Yonsei University (subinjeon@yonsei.ac.kr); Seonghyeon Nam, York University (snam0331@gmail.com); Seon Joo Kim, Yonsei University (seonjookim@yonsei.ac.kr)
Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks.
Open Source Code | Yes | The code is available at https://github.com/kingsj0405/dife.
Open Datasets | Yes | "We use four datasets for evaluation: MAFL [50], AP-10K [45], WFLW [42] and AnimalWeb [20]. We pre-train CSE with DensePose-COCO [23] and DensePose-LVIS [11], which are the datasets for full-body keypoints of humans and animals respectively. StyleGAN2 is pre-trained with FFHQ [19] for humans and AFHQ [9] for animals."
Dataset Splits | Yes | "The AP-10K dataset is composed of 10k images of various animal species which is split into three disjoint subsets, i.e., train, validation, and test sets, with the ratio of 7:1:2 per animal species."
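The 7:1:2 per-species split described above can be sketched as follows. This is a minimal illustration using only the Python standard library, not the AP-10K authors' code; the sample representation, grouping keys, and shuffling seed are assumptions:

```python
import random
from collections import defaultdict

def split_per_species(samples, ratios=(0.7, 0.1, 0.2), seed=0):
    """Split (image_id, species) pairs into disjoint train/val/test sets.

    Each species is partitioned independently with the given ratios,
    mirroring the 7:1:2 per-species protocol described for AP-10K.
    """
    by_species = defaultdict(list)
    for image_id, species in samples:
        by_species[species].append(image_id)

    rng = random.Random(seed)
    train, val, test = [], [], []
    for species, ids in sorted(by_species.items()):
        rng.shuffle(ids)  # shuffle within each species before cutting
        n_train = int(len(ids) * ratios[0])
        n_val = int(len(ids) * ratios[1])
        train += ids[:n_train]
        val += ids[n_train:n_train + n_val]
        test += ids[n_train + n_val:]
    return train, val, test
```

Splitting per species (rather than over the pooled dataset) keeps every species represented in all three subsets, which matters when species counts are imbalanced.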
Hardware Specification | Yes | "All experiments are carried out on one NVIDIA Titan XP."
Software Dependencies | No | The paper mentions software components like the Adam optimizer and StyleGAN2 but does not specify version numbers for these or for other relevant software dependencies such as programming languages or deep learning frameworks.
Experiment Setup | Yes | "We use the Adam optimizer [21] with a learning rate of 10⁻³, a batch size of 12, and a maximum of 1M training steps with early stopping. The lambda values for each loss function are fixed to λ₁ = 10⁻², λ₂ = 100, λ₃ = 10⁻², and λ₄ = 10⁻² in all experiments."
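The reported hyperparameters can be collected into a short configuration sketch. This is not the authors' code: the loss-term names and the weighted-sum combination are illustrative assumptions, showing only how the stated λ weights and Adam settings would plug together.

```python
# Hyperparameters as reported in the paper's experiment setup.
CONFIG = {
    "optimizer": "Adam",
    "learning_rate": 1e-3,
    "batch_size": 12,
    "max_steps": 1_000_000,  # with early stopping
    # Loss weights; the keys are placeholders for the paper's four losses.
    "lambdas": {"lambda1": 1e-2, "lambda2": 100.0,
                "lambda3": 1e-2, "lambda4": 1e-2},
}

def total_loss(loss_terms, lambdas=CONFIG["lambdas"]):
    """Weighted sum of per-term loss values keyed by lambda name.

    The actual loss definitions are in the paper; this only shows the
    fixed weighting applied in all experiments.
    """
    return sum(lambdas[name] * value for name, value in loss_terms.items())
```

With all four raw losses equal to 1, the weighted total is 0.01 + 100 + 0.01 + 0.01 = 100.03, which makes the dominance of the λ₂-weighted term visible at a glance.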