Unified Human-Scene Interaction via Prompted Chain-of-Contacts

Authors: Zeqi Xiao, Tai Wang, Jingbo Wang, Jinkun Cao, Wenwei Zhang, Bo Dai, Dahua Lin, Jiangmiao Pang

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Comprehensive experiments demonstrate the effectiveness of our framework in versatile task execution and generalizability to real scanned scenes."
Researcher Affiliation | Collaboration | ¹Shanghai AI Laboratory, ²S-Lab, NTU, ³CMU
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Project page at this URL.
Open Datasets | Yes | "To this end, we create a novel dataset named ScenePlan. It encompasses thousands of interaction plans based on scenarios constructed from the PartNet (Mo et al., 2019) and ScanNet (Dai et al., 2017) datasets."
Dataset Splits | No | The paper mentions a "training set" and an "evaluation set" but gives no sizes or percentages for a separate validation split, nor does it cite a standard validation-splitting methodology.
Hardware Specification | No | The paper does not specify the hardware (such as exact GPU or CPU models) used to run its experiments.
Software Dependencies | No | The paper does not list the ancillary software (such as library or solver names with version numbers) needed to replicate the experiments.
Experiment Setup | Yes | "In our implementation, we feed 10 adjacent frames together into the discriminator to assess the style." "We used the SAMP dataset (Hassan et al., 2021a) and CIRCLE (Araújo et al., 2023) as our motion dataset. SAMP includes 100 minutes of MoCap clips... We use all clips in SAMP and pick 20 representative clips in CIRCLE for training." "This experiment involved a total of 70 objects (30 for sitting, 30 for lying down, and 10 for reaching) with 4096 trials per task and random variations in orientation and object placement during evaluation."
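The quoted setup feeds 10 adjacent frames jointly to a style discriminator, in the spirit of adversarial motion priors. Below is a minimal sketch of how such a discriminator might score stacked frame windows; the frame dimension, layer widths, and the StyleDiscriminator name are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a windowed style discriminator (assumed AMP-style setup;
# dimensions and names are hypothetical, not the authors' implementation).
import torch
import torch.nn as nn

NUM_FRAMES = 10   # paper: 10 adjacent frames fed together to the discriminator
FRAME_DIM = 64    # hypothetical per-frame pose/feature size

class StyleDiscriminator(nn.Module):
    """Scores whether a short motion window matches the reference motion style."""
    def __init__(self, frame_dim: int = FRAME_DIM, num_frames: int = NUM_FRAMES):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(frame_dim * num_frames, 1024),
            nn.ReLU(),
            nn.Linear(1024, 512),
            nn.ReLU(),
            nn.Linear(512, 1),  # single style logit per window
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, num_frames, frame_dim) -> flatten the temporal window
        return self.net(frames.flatten(start_dim=1))

# Usage: slide a 10-frame window over a motion clip and score each window.
clip = torch.randn(1, 300, FRAME_DIM)            # hypothetical 300-frame clip
windows = clip.unfold(1, NUM_FRAMES, 1)          # (1, 291, FRAME_DIM, NUM_FRAMES)
windows = windows.permute(0, 1, 3, 2).reshape(-1, NUM_FRAMES, FRAME_DIM)
scores = StyleDiscriminator()(windows)           # one style score per window
```

Concatenating several adjacent frames, as described in the quote, lets the discriminator judge temporal style (velocity, rhythm) rather than individual poses in isolation.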