Social Motion Prediction with Cognitive Hierarchies
Authors: Wentao Zhu, Jason Qin, Yuke Lou, Hang Ye, Xiaoxuan Ma, Hai Ci, Yizhou Wang
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct comprehensive experiments to validate the effectiveness of our proposed dataset and approach. |
| Researcher Affiliation | Academia | 1 Center on Frontiers of Computing Studies, School of Computer Science, Peking University 2 Institute for Artificial Intelligence, Peking University |
| Pseudocode | No | The paper describes the model architecture and training objectives mathematically and textually, but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | No | The paper states: 'We plan to release the processed 3D human motion sequences under a license agreement that permits their use for non-commercial scientific research purposes.' This refers to the dataset, not the source code for the methodology or experiments. No other explicit statement or link for the code is provided. |
| Open Datasets | Yes | We present Wusi, the first large-scale multi-human 3D motion dataset featuring intense and strategic interactions. We conduct experiments with the following baseline methods including a naive baseline and multiple state-of-the-art approaches: ... Figure 4: Comparison of baseline methods on CMU-Mocap [1] and our dataset. The baseline methods are trained and tested on two datasets separately. [1] CMU Graphics Lab Motion Capture Database. http://mocap.cs.cmu.edu/. |
| Dataset Splits | No | The paper mentions training and testing on datasets but does not explicitly specify train/validation/test splits, percentages, or sample counts for reproducibility. |
| Hardware Specification | Yes | We implement the proposed framework with PyTorch [47] using a Linux machine with 1 NVIDIA V100 GPU. |
| Software Dependencies | No | The paper mentions using 'PyTorch [47]' and the 'Adam [33] optimizer' but does not specify version numbers for these or any other software dependencies required for reproducibility. |
| Experiment Setup | Yes | We implement the presented framework to train and test on the proposed Wusi dataset. We employ Transformer encoders [62] for both the local and global state encoders, as well as Transformer decoders for the policy networks. Each Transformer consists of 3 layers with 8 attention heads. We share parameters for policy networks φ(1), …, φ(K). We set the strategic reasoning depth K = 3 unless otherwise stated. For all the methods, we provide 1s motion history and predict future 1s motion... We set λ = 0.002, batch size 32, learning rate 0.0001, and train for 60 epochs using the Adam [33] optimizer. |
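
To make the "Experiment Setup" row concrete, below is a minimal PyTorch sketch of a training configuration consistent with the quoted details (3-layer, 8-head Transformers; shared policy networks applied over reasoning depth K = 3; λ = 0.002; batch size 32; learning rate 0.0001; 60 epochs; Adam). Everything else, including the feature dimension, sequence lengths, module wiring, the role of the λ-weighted auxiliary term, and the dummy data, is an assumption for illustration and not the authors' released implementation.

```python
# Hedged sketch of the training setup quoted in the "Experiment Setup" row.
# Only the hyperparameters cited from the paper (3 layers, 8 heads, K = 3,
# lambda = 0.002, batch size 32, lr = 1e-4, 60 epochs, Adam) are from the source;
# module names, feature dimension, sequence length, and losses are assumed.
import torch
import torch.nn as nn

D_MODEL = 128          # assumed feature dimension (not stated in the excerpt)
N_LAYERS, N_HEADS = 3, 8
K = 3                  # strategic reasoning depth
LAMBDA = 0.002         # weight of an auxiliary loss term (its exact role is assumed)

def make_encoder():
    layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=N_HEADS, batch_first=True)
    return nn.TransformerEncoder(layer, num_layers=N_LAYERS)

def make_decoder():
    layer = nn.TransformerDecoderLayer(d_model=D_MODEL, nhead=N_HEADS, batch_first=True)
    return nn.TransformerDecoder(layer, num_layers=N_LAYERS)

class CognitiveHierarchySketch(nn.Module):
    """Illustrative stand-in for the described architecture, not the authors' code."""
    def __init__(self):
        super().__init__()
        self.local_encoder = make_encoder()    # per-person motion history
        self.global_encoder = make_encoder()   # scene-level context
        self.policy = make_decoder()           # one decoder shared across levels 1..K
        self.head = nn.Linear(D_MODEL, D_MODEL)

    def forward(self, local_tokens, global_tokens):
        memory = self.global_encoder(global_tokens)
        h = self.local_encoder(local_tokens)
        for _ in range(K):                     # reuse the shared policy network K times
            h = self.policy(h, memory)
        return self.head(h)

model = CognitiveHierarchySketch()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Dummy tensors standing in for 1 s of motion history / future (shapes assumed).
history = torch.randn(32, 25, D_MODEL)
context = torch.randn(32, 25, D_MODEL)
target = torch.randn(32, 25, D_MODEL)

for epoch in range(60):
    pred = model(history, context)
    main_loss = nn.functional.mse_loss(pred, target)
    aux_loss = pred.pow(2).mean()              # placeholder for the lambda-weighted term
    loss = main_loss + LAMBDA * aux_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

As the "Dataset Splits" and "Software Dependencies" rows note, reproducing the reported numbers would additionally require the exact data splits and pinned library versions, which the paper does not specify.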