3D Pose Transfer with Correspondence Learning and Mesh Refinement
Authors: Chaoyue Song, Jiacheng Wei, Ruibo Li, Fayao Liu, Guosheng Lin
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results show that the proposed architecture can effectively transfer the poses from source to target meshes and produce better results with satisfied visual performance than state-of-the-art methods. |
| Researcher Affiliation | Academia | S-Lab, Nanyang Technological University, Singapore; School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore; School of Computer Science and Engineering, Nanyang Technological University, Singapore; Institute for Infocomm Research, A*STAR, Singapore |
| Pseudocode | No | The paper describes the proposed network architecture and modules in text and through a diagram (Figure 2), but it does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code and data are available at https://github.com/ChaoyueSong/3d-corenet. |
| Open Datasets | Yes | For the human mesh dataset, we use the same dataset generated by SMPL [23] as [37]. This dataset consists of 30 identities with 800 poses. Each mesh has 6890 vertices. For the training data, we randomly choose 4000 pairs (identity and pose meshes) from 16 identities with 400 poses and shuffle them every epoch. The ground truth meshes will be determined according to the identity and pose parameters from the pairs. ... For the animal mesh dataset, we generate animal training and test data using SMAL model [46]. (A hedged pair-sampling sketch follows the table.) |
| Dataset Splits | No | The paper specifies training and testing data splits and sizes, for example, 'randomly choose 4000 pairs ... for training' and 'randomly choose 400 pairs for testing', but it does not explicitly mention a distinct validation set or split for hyperparameter tuning or early stopping. |
| Hardware Specification | Yes | Our model is trained for 200 epochs on one RTX 3090 GPU |
| Software Dependencies | No | We implement our model with Pytorch. The paper mentions the software 'Pytorch' but does not specify a version number for it or any other software libraries or dependencies. |
| Experiment Setup | Yes | λrec in the loss function is set as 2000. We implement our model with Pytorch and use Adam optimizer. ... Our model is trained for 200 epochs on one RTX 3090 GPU, the learning rate is fixed at 1e-4 in the first 100 epochs and decays 1e-6 each epoch after 100 epochs. The batch size is 8. (A training-schedule sketch follows the table.) |
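
For concreteness, here is a minimal, hypothetical sketch of the training-pair construction quoted in the Open Datasets row: 16 identities crossed with 400 poses, from which 4000 (identity mesh, pose mesh) pairs are drawn and reshuffled every epoch. Mesh loading and SMPL ground-truth synthesis are elided, and all names below are ours, not the authors'.

```python
import random

# Hypothetical pair sampling for the human (SMPL) training split described
# above. A mesh is identified by (identity, pose); the ground-truth mesh for
# a pair would be synthesized from the identity's shape and the pose mesh's
# pose parameters, which we elide here.
TRAIN_IDENTITIES = range(16)  # 16 of the 30 identities
TRAIN_POSES = range(400)      # 400 of the 800 poses

def sample_training_pairs(n_pairs=4000, seed=None):
    rng = random.Random(seed)
    pairs = [
        ((rng.choice(TRAIN_IDENTITIES), rng.choice(TRAIN_POSES)),  # identity mesh
         (rng.choice(TRAIN_IDENTITIES), rng.choice(TRAIN_POSES)))  # pose mesh
        for _ in range(n_pairs)
    ]
    rng.shuffle(pairs)  # the paper reshuffles the pairs every epoch
    return pairs
```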
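
The Experiment Setup row pins down enough hyperparameters to express the optimizer and learning-rate schedule in code. The sketch below uses a placeholder model and elides the loss and data pipeline; only the quoted values (Adam, lr fixed at 1e-4 for 100 epochs then reduced by 1e-6 per epoch, 200 epochs, batch size 8, λrec = 2000) come from the paper, and the exact epoch at which the first decay step lands is our guess.

```python
import torch

# Placeholder model standing in for the correspondence + refinement network.
model = torch.nn.Linear(3, 3)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
lambda_rec = 2000  # weight of the reconstruction term in the loss

def lr_at_epoch(epoch):
    # Fixed at 1e-4 for the first 100 epochs, then decays by 1e-6 per epoch.
    if epoch < 100:
        return 1e-4
    return 1e-4 - 1e-6 * (epoch - 100)

for epoch in range(200):
    for group in optimizer.param_groups:
        group["lr"] = lr_at_epoch(epoch)
    # ... iterate over batches of 8 (identity, pose) mesh pairs here ...
```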