FAVOR: Full-Body AR-Driven Virtual Object Rearrangement Guided by Instruction Text
Authors: Kailin Li, Lixin Yang, Zenan Lin, Jian Xu, Xinyu Zhan, Yifei Zhao, Pengxiang Zhu, Wenxiong Kang, Kejian Wu, Cewu Lu
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results, both qualitative and quantitative, suggest that this dataset and pipeline deliver high-quality motion sequences. |
| Researcher Affiliation | Collaboration | Shanghai Jiao Tong University, XREAL, South China University of Technology |
| Pseudocode | No | The paper describes its methods and algorithms in text and diagrams (e.g., Figure 3 workflow, descriptions of KNET and INET), but it does not include any explicitly labeled "Pseudocode" or "Algorithm" blocks. |
| Open Source Code | Yes | Our dataset, code, and appendix are available at https://kailinli.github.io/FAVOR. |
| Open Datasets | Yes | Our dataset, code, and appendix are available at https://kailinli.github.io/FAVOR. |
| Dataset Splits | Yes | The dataset is split into training, validation, and test sets in an 8:1:1 ratio, based on motion sequences. A sequence-level split sketch follows the table. |
| Hardware Specification | Yes | The infrared motion capture system includes 12 temporally synchronized OptiTrack Prime 13W infrared cameras used for tracking reflective markers (Fig. 2, I). We utilize the XREAL X AR glasses for scene rendering. |
| Software Dependencies | No | The paper mentions several software components and models, such as GPT-4, Owl-ViT, SMPL-X, VPoser, and HuMoR, but it does not provide version numbers for any of these, nor for the programming languages or libraries used. A version-recording sketch follows the table. |
| Experiment Setup | No | The paper states: 'Training details are in the Appx.', indicating that specific experimental setup details such as hyperparameters are not provided in the main text. |
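
As a concrete illustration of the dataset-split row above, here is a minimal Python sketch of a sequence-level 8:1:1 split. The directory layout (one folder per motion sequence) and the fixed shuffle seed are assumptions for illustration, not details taken from the paper.

```python
"""Hedged sketch: 8:1:1 train/val/test split at the motion-sequence level.

Assumptions (not from the paper): sequences live one-per-directory under
`root`, and a fixed seed gives a reproducible shuffle.
"""
import random
from pathlib import Path


def split_sequences(root: str, seed: int = 0) -> dict[str, list[str]]:
    # Collect one entry per motion sequence so the split happens at the
    # sequence level, never at the frame level.
    seqs = sorted(p.name for p in Path(root).iterdir() if p.is_dir())
    random.Random(seed).shuffle(seqs)

    n_train = int(0.8 * len(seqs))
    n_val = int(0.1 * len(seqs))
    return {
        "train": seqs[:n_train],
        "val": seqs[n_train:n_train + n_val],
        "test": seqs[n_train + n_val:],  # remainder (~10%) goes to test
    }


if __name__ == "__main__":
    for name, members in split_sequences("FAVOR/sequences").items():
        print(f"{name}: {len(members)} sequences")
```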
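The software-dependency row notes that no version numbers are reported. One hedged way to close that gap when re-running the released code is to snapshot installed versions at runtime. The package names below are guesses at the PyPI distributions behind the components the row mentions (e.g. `smplx` for SMPL-X, `transformers` for Owl-ViT); they are not names confirmed by the paper.

```python
"""Hedged sketch: record versions of likely dependencies for a repro log.

The candidate list is an assumption; extend it with whatever the released
FAVOR code actually imports.
"""
from importlib.metadata import PackageNotFoundError, version

CANDIDATE_PACKAGES = [
    "torch",         # common deep-learning backbone
    "smplx",         # SMPL-X body model layer
    "transformers",  # Hugging Face release that ships Owl-ViT
]

for pkg in CANDIDATE_PACKAGES:
    try:
        print(f"{pkg}=={version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```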