Geometry-Contrastive Transformer for Generalized 3D Pose Transfer

Authors: Haoyu Chen, Hao Tang, Zitong Yu, Nicu Sebe, Guoying Zhao

AAAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The massive experimental results prove the efficacy of our approach by showing state-of-the-art quantitative performances on the SMPL-NPT, FAUST, and our newly proposed SMG-3D datasets, as well as promising qualitative results on the MG-Cloth and SMAL datasets. It is demonstrated that our method can achieve robust 3D pose transfer and be generalized to challenging meshes from unknown spaces on cross-dataset tasks.
Researcher Affiliation | Academia | 1 CMVS, University of Oulu; 2 Computer Vision Lab, ETH Zurich; 3 DISI, University of Trento
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | The code and dataset are made available: https://github.com/mikecheninoulu/CGT.
Open Datasets | Yes | The SMPL-NPT (Wang et al. 2020) dataset contains 24,000 synthesized body meshes with the SMPL model (Bogo et al. 2016)... The SMG-3D (Chen et al. 2019) dataset contains 8,000 pairs of naturally plausible body meshes... The FAUST (Bogo et al. 2014) dataset consists of 10 different human subjects... The MG-Cloth (Bhatnagar et al. 2019) dataset contains 96 dressed identity meshes... The SMAL (Zuffi et al. 2017) animal dataset... The code and dataset are made available: https://github.com/mikecheninoulu/CGT.
Dataset Splits | No | For SMPL-NPT: 16 different identities and 400 different poses are randomly selected and made into pairs as ground truths for training; for testing, 14 new identities are paired with those 400 poses (seen set) and with 200 new poses (unseen set). For SMG-3D: 35 identities and 180 poses are used as the training set; the remaining 5 identities, with the same 180 poses and another 20 poses, are used for seen and unseen testing. The paper describes training and testing sets, but no explicit validation set or split is provided. (A sketch of this pairing scheme appears after the table.)
Hardware Specification | No | The paper mentions "GPU memory limits" and thanks "CSC-IT Center for Science, Finland, for their computational resources" but does not specify any particular GPU models, CPU models, or other detailed hardware specifications used for the experiments.
Software Dependencies | No | The paper does not provide specific software names along with their version numbers (e.g., Python 3.8, PyTorch 1.9) needed to replicate the experiment.
Experiment Setup | Yes | We adopt four blocks (multi-head attention blocks) as the default in our experiments. The best performance is gained when λ_constra is set to 0.0005, which shows that our CGC loss effectively improves the geometric reconstruction results. Please refer to the supplementary materials for more implementation details. (A sketch of the implied loss weighting follows the table.)
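
The SMPL-NPT split quoted in the "Dataset Splits" row is concrete enough to sketch in code. The snippet below is a minimal illustration of the described pairing scheme, assuming integer identity/pose IDs and random identity selection; the function name and everything beyond the quoted counts are hypothetical and are not taken from the authors' released code at https://github.com/mikecheninoulu/CGT.

```python
import random

# Hypothetical sketch of the SMPL-NPT-style split from the "Dataset
# Splits" row; only the counts (16/14 identities, 400 seen poses,
# 200 unseen poses) come from the paper.
NUM_TRAIN_IDENTITIES = 16
NUM_TEST_IDENTITIES = 14
NUM_SEEN_POSES = 400
NUM_UNSEEN_POSES = 200

def build_splits(seed=0):
    rng = random.Random(seed)
    identities = list(range(NUM_TRAIN_IDENTITIES + NUM_TEST_IDENTITIES))
    rng.shuffle(identities)  # "randomly selected" identities
    train_ids = identities[:NUM_TRAIN_IDENTITIES]
    test_ids = identities[NUM_TRAIN_IDENTITIES:]

    seen_poses = list(range(NUM_SEEN_POSES))
    unseen_poses = list(range(NUM_SEEN_POSES,
                              NUM_SEEN_POSES + NUM_UNSEEN_POSES))

    # Training pairs: (identity, pose) combinations used as ground truths.
    train_pairs = [(i, p) for i in train_ids for p in seen_poses]
    # Seen test set: new identities paired with the 400 training poses.
    test_seen = [(i, p) for i in test_ids for p in seen_poses]
    # Unseen test set: new identities paired with 200 new poses.
    test_unseen = [(i, p) for i in test_ids for p in unseen_poses]
    return train_pairs, test_seen, test_unseen
```

Note that this scheme yields only training and testing pairs, with no held-out validation split, which is exactly the gap the row flags.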
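The λ_constra ablation in the "Experiment Setup" row implies a weighted objective of the form L_total = L_rec + λ_constra · L_CGC. The snippet below is a hedged PyTorch-style sketch of that weighting only: the reconstruction term is assumed to be a pointwise MSE, and the CGC (geometry-contrastive) term is treated as a black box, since neither is defined in this summary.

```python
import torch

LAMBDA_CONSTRA = 0.0005  # best-performing weight reported in the paper

def total_loss(pred_verts, gt_verts, cgc_loss_value):
    """Combine reconstruction and geometry-contrastive terms.

    pred_verts, gt_verts: (N, 3) predicted and ground-truth mesh vertices.
    cgc_loss_value: scalar CGC loss; its definition is in the paper and
    is treated as a black box here.

    Assumption: the reconstruction term is a pointwise MSE; the paper's
    exact reconstruction loss may differ.
    """
    rec_loss = torch.nn.functional.mse_loss(pred_verts, gt_verts)
    return rec_loss + LAMBDA_CONSTRA * cgc_loss_value
```

The small weight (5e-4) keeps the contrastive term from dominating the geometric reconstruction objective.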