Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Egocentric Object-Interaction Anticipation with Retentive and Predictive Learning
Authors: Guo Chen, Yifei Huang, Yin-dong Zheng, Yicheng Liu, Jiahao Wang, Tong Lu
IJCAI 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the effectiveness of our framework using the Ego4D short-term object interaction anticipation benchmark, covering both STAv1 and STAv2. Extensive experiments demonstrate that our framework outperforms existing methods, while ablation studies highlight the effectiveness of each design inside our retentive and predictive learning framework. |
| Researcher Affiliation | Collaboration | ¹Nanjing University ²The University of Tokyo ³Kuaishou Technology |
| Pseudocode | No | The paper describes the methodology using textual descriptions and architectural diagrams (Figures 1, 2, 3, 4, 5) but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statements about releasing source code or provide a link to a code repository. |
| Open Datasets | Yes | We evaluate our framework on the STAv1 and STAv2 benchmarks from the Ego4D dataset. These benchmarks comprise 120 hours of annotated clips, including 27,801/98,276 training, 17,217/47,395 validation, and 19,780/19,780 test samples, spanning 87/128 noun and 74/81 verb classes. |
| Dataset Splits | Yes | These benchmarks comprise 120 hours of annotated clips, including 27,801/98,276 training, 17,217/47,395 validation, and 19,780/19,780 test samples, spanning 87/128 noun and 74/81 verb classes. |
| Hardware Specification | Yes | Table 7 presents relevant parameters and inference speeds on RTX 4090 GPU. |
| Software Dependencies | No | The paper mentions using 'Faster R-CNN pretrained on Ego4D' and 'ViT-B serves as the backbone for video feature extraction' but does not specify version numbers for these or other software components/libraries used. |
| Experiment Setup | Yes | During training, only prediction boxes with IoU > 0.5 with the ground truth are utilized. Loss coefficients are balanced as λ1 = λ2 = 10 and τ = 3. ... We sample 8 frames with a stride of 8, resulting in l = 64. |
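The IoU > 0.5 filtering step quoted above can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the function names `iou` and `filter_predictions` and the box format `(x1, y1, x2, y2)` are assumptions for the example.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle (may be empty).
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def filter_predictions(pred_boxes, gt_box, threshold=0.5):
    """Keep only prediction boxes whose IoU with the ground truth exceeds the threshold,
    mirroring the training-time filtering described in the experiment setup."""
    return [b for b in pred_boxes if iou(b, gt_box) > threshold]
```

For example, against a ground-truth box `(0, 0, 10, 10)`, a prediction at `(5, 5, 15, 15)` has IoU 25/175 ≈ 0.14 and would be discarded during training, while an exact match (IoU = 1.0) is kept.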