Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Dense Interspecies Face Embedding
Authors: Sejong Yang, Subin Jeon, Seonghyeon Nam, Seon Joo Kim
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To quantitatively evaluate our method over possible previous methodologies like unsupervised keypoint detection, we perform interspecies facial keypoint transfer on MAFL and AP-10K. Furthermore, the results of other applications like interspecies face image manipulation and dense keypoint transfer are provided. |
| Researcher Affiliation | Academia | Sejong Yang (Yonsei University, EMAIL), Subin Jeon (Yonsei University, EMAIL), Seonghyeon Nam (York University, EMAIL), Seon Joo Kim (Yonsei University, EMAIL) |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available at https://github.com/kingsj0405/dife. |
| Open Datasets | Yes | We use four datasets for evaluation: MAFL [50], AP-10K [45], WFLW [42] and AnimalWeb [20]. We pre-train CSE with DensePose-COCO [23] and DensePose-LVIS [11], which are the datasets for full-body keypoints of human and animal respectively. StyleGAN2 is pre-trained with FFHQ [19] for human and AFHQ [9] for animals. |
| Dataset Splits | Yes | The AP-10K dataset is composed of 10k images of various animal species which is split into three disjoint subsets, i.e., train, validation, and test sets, with the ratio of 7:1:2 per animal species. |
| Hardware Specification | Yes | All experiments are carried out on one NVIDIA Titan XP. |
| Software Dependencies | No | The paper mentions software components like the Adam optimizer and StyleGAN2 but does not specify version numbers for these or other relevant software dependencies such as programming languages or deep learning frameworks. |
| Experiment Setup | Yes | We use Adam optimizer [21] with the learning rate of 10^-3, batch size of 12, and max training step of 1M with early stopping. The lambda values for each loss function are fixed to λ1 = 10^-2, λ2 = 100, λ3 = 10^-2, and λ4 = 10^-2 in all experiments. |
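The training configuration quoted in the "Experiment Setup" row can be collected into a single sketch for reproduction attempts. This is a framework-agnostic illustration, not the authors' code; the loss-weight exponents (10^-2) are reconstructed from a garbled PDF extraction ("10 2") and should be verified against the paper, and the `total_loss` helper is a hypothetical weighted sum assumed from the four lambda values:

```python
# Hyperparameters quoted in the paper's experiment setup (NeurIPS 2022).
# NOTE: the negative exponents on the loss weights are an assumption
# recovered from a garbled extraction; check against the venue PDF.
TRAIN_CONFIG = {
    "optimizer": "Adam",        # Adam optimizer [21]
    "learning_rate": 1e-3,
    "batch_size": 12,
    "max_steps": 1_000_000,     # with early stopping
    "loss_weights": {
        "lambda1": 1e-2,
        "lambda2": 100.0,
        "lambda3": 1e-2,
        "lambda4": 1e-2,
    },
}


def total_loss(losses: dict) -> float:
    """Hypothetical weighted objective: L = λ1·L1 + λ2·L2 + λ3·L3 + λ4·L4."""
    w = TRAIN_CONFIG["loss_weights"]
    return sum(w[f"lambda{i}"] * losses[f"L{i}"] for i in range(1, 5))
```

With all four component losses equal to 1.0, the sketch yields 100.03, dominated by the λ2 term, which matches the quoted weighting.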