ContactGen: Contact-Guided Interactive 3D Human Generation for Partners
Authors: Dongjun Gu, Jaehyeok Shim, Jaehoon Jang, Changwoo Kang, Kyungdon Joo
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate ContactGen on the CHI3D dataset, where our method generates physically plausible and diverse poses compared to comparison methods. |
| Researcher Affiliation | Academia | Dongjun Gu, Jaehyeok Shim, Jaehoon Jang, Changwoo Kang, Kyungdon Joo* Artificial Intelligence Graduate School, UNIST {djku1020, jh.shim, erick1997, kangchangwoo, kyungdon}@unist.ac.kr |
| Pseudocode | Yes | Algorithm 1: Training; Algorithm 2: Guided Sampling |
| Open Source Code | Yes | Source code is available at https://dongjunku.github.io/contactgen. |
| Open Datasets | Yes | We basically use CHI3D (Fieraru et al. 2020), which is a 3D motion capture dataset of 8 close human interaction scenarios |
| Dataset Splits | No | The paper mentions using the CHI3D dataset and performing pre-processing for training but does not provide specific details on train/validation/test splits (e.g., percentages or sample counts). |
| Hardware Specification | No | The paper mentions support from an 'HPC Support Project' but does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for experiments. |
| Software Dependencies | No | The paper mentions using SMPL-X representation, Adam optimizer, DDM, and classifier-free guidance, but it does not specify version numbers for any software libraries or frameworks used. |
| Experiment Setup | Yes | For the training procedure, noise is sampled from the linear noise scheduler initialized with β₀ = 5e-6, β_T = 5e-3, and diffusion step T = 1000. |
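The experiment-setup row quotes a linear noise schedule with β₀ = 5e-6, β_T = 5e-3, and T = 1000 diffusion steps. A minimal sketch of such a schedule and the standard DDPM-style forward (noising) process is shown below; this is a generic illustration of a linear schedule, not the authors' implementation, and the toy pose vector is a placeholder.

```python
import numpy as np

# Linear beta schedule with the values quoted from the paper:
# beta_0 = 5e-6, beta_T = 5e-3, T = 1000 diffusion steps.
T = 1000
betas = np.linspace(5e-6, 5e-3, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)  # \bar{alpha}_t = prod_{s<=t} (1 - beta_s)

def q_sample(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) * x0, (1 - abar_t) * I)."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * noise

rng = np.random.default_rng(0)
x0 = np.zeros(3)           # toy "pose parameter" vector (placeholder)
xT = q_sample(x0, T - 1, rng)  # heavily noised sample at the final step
```

With this schedule the signal coefficient sqrt(ᾱ_t) decays monotonically, so later timesteps carry progressively more noise.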