EgoChoir: Capturing 3D Human-Object Interaction Regions from Egocentric Views
Authors: Yuhang Yang, Wei Zhai, Chengfeng Wang, Chengjun Yu, Yang Cao, Zheng-Jun Zha
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on them demonstrate the effectiveness and superiority of EgoChoir. |
| Researcher Affiliation | Academia | 1 University of Science and Technology of China 2 Institute of Artificial Intelligence, Hefei Comprehensive National Science Center |
| Pseudocode | No | The paper describes the methods in text and uses figures but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | https://yyvhang.github.io/EgoChoir (from the first page); Code and demo are in supplementary materials. (from NeurIPS checklist question 13 justification) |
| Open Datasets | Yes | we collect video clips with egocentric interactions from Ego-Exo4D [27] and GIMO [113] |
| Dataset Splits | No | Among them, 1216 video clips are used for training, and 354 are used for testing. The paper explicitly states the training and testing splits but does not mention a separate validation split. |
| Hardware Specification | Yes | All training processes are on 2 NVIDIA A40 GPUs (20 GPU hours). |
| Software Dependencies | No | EgoChoir is implemented by PyTorch and trained with the Adam optimizer. The paper names PyTorch and the Adam optimizer but does not specify version numbers, which are required for reproducible software dependencies. |
| Experiment Setup | Yes | The training epoch is set to 100, the training batch size is set to 8, and the initial learning rate is 1e-4 with cosine annealing. |
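
The reported recipe (Adam optimizer, initial learning rate 1e-4 with cosine annealing, 100 epochs, batch size 8) maps directly onto standard PyTorch components. Below is a minimal sketch of that configuration, assuming the cosine schedule is stepped once per epoch; the model, dataset, and loss are placeholders, not the authors' released code.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder model and data: the actual EgoChoir architecture and the
# 1216 training clips are not reproduced here.
model = nn.Linear(512, 3)
data = TensorDataset(torch.randn(64, 512), torch.randn(64, 3))

EPOCHS = 100    # "The training epoch is set to 100"
BATCH_SIZE = 8  # "the training batch size is set to 8"
INIT_LR = 1e-4  # "the initial learning rate is 1e-4 with cosine annealing"

loader = DataLoader(data, batch_size=BATCH_SIZE, shuffle=True)
optimizer = torch.optim.Adam(model.parameters(), lr=INIT_LR)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=EPOCHS)
criterion = nn.MSELoss()  # placeholder objective

for epoch in range(EPOCHS):
    for inputs, targets in loader:
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)
        loss.backward()
        optimizer.step()
    scheduler.step()  # cosine-anneal the learning rate once per epoch
```

With `T_max=EPOCHS`, the schedule decays the learning rate from 1e-4 toward zero over the full 100-epoch run, matching the stated cosine annealing.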