Symbiotic Attention with Privileged Information for Egocentric Action Recognition
Authors: Xiaohan Wang, Yu Wu, Linchao Zhu, Yi Yang
AAAI 2020, pp. 12249–12256 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate the effectiveness of our SAP quantitatively and qualitatively. Notably, it achieves the state-of-the-art on two large-scale egocentric video datasets. ... We evaluate our method on two large-scale egocentric datasets: EPIC-Kitchens (Damen et al. 2018) and EGTEA (Li, Liu, and Rehg 2018). |
| Researcher Affiliation | Collaboration | Xiaohan Wang,1,2 Yu Wu,1,2 Linchao Zhu,1 Yi Yang1 (1ReLER, University of Technology Sydney; 2Baidu Research). {xiaohan.wang-3, yu.wu-3}@student.uts.edu.au, {linchao.zhu, yi.yang}@uts.edu.au |
| Pseudocode | No | The paper describes the method and its components in text and mathematical equations, but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain an unambiguous statement about releasing source code for the described methodology, nor does it provide a direct link to a code repository. |
| Open Datasets | Yes | We evaluate our method on two large-scale egocentric datasets: EPIC-Kitchens (Damen et al. 2018) and EGTEA (Li, Liu, and Rehg 2018). |
| Dataset Splits | Yes | We split the original training set to new training and validation set following (Baradel et al. 2018). Our model outperforms the state-of-the-art methods by a large margin on all three evaluation splits, i.e., the validation set, the test seen (S1) set and the test unseen (S2) set. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, memory, or other computational resources used for running the experiments. |
| Software Dependencies | No | The paper mentions using Caffe2, PaddlePaddle, and PyTorch, but does not specify exact version numbers for these software frameworks or any other software dependencies. |
| Experiment Setup | Yes | The overall learning rate is initialized to 0.003 and then changed to 0.0003 in the last 10 epochs. The batch size is 32. ... The learning rate is initialized to 0.0001 and then reduced to 0.00001 in the last 20 epochs. The rest of the training details are the same as the backbone details. |
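
The quoted setup amounts to a simple step schedule: train at a base learning rate, then drop it by 10x for the final epochs. Below is a minimal PyTorch sketch of that schedule (the paper lists PyTorch among its frameworks). The optimizer choice (SGD with momentum), the total epoch count, and the placeholder model and data are assumptions for illustration, not details reported in the paper.

```python
# Minimal sketch of the quoted training schedule in PyTorch.
# Assumptions (not stated in the paper excerpt): SGD optimizer,
# 50 total epochs, and a placeholder model/dataset.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

TOTAL_EPOCHS = 50               # assumed; the paper only specifies "the last 10 epochs"
BATCH_SIZE = 32                 # stated in the paper
BASE_LR, FINAL_LR = 3e-3, 3e-4  # stated: 0.003, then 0.0003 for the last 10 epochs

model = nn.Linear(2048, 125)    # placeholder standing in for the actual SAP model
optimizer = torch.optim.SGD(model.parameters(), lr=BASE_LR, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# Dummy features/labels standing in for egocentric video data; shapes are illustrative.
loader = DataLoader(
    TensorDataset(torch.randn(256, 2048), torch.randint(0, 125, (256,))),
    batch_size=BATCH_SIZE, shuffle=True,
)

for epoch in range(TOTAL_EPOCHS):
    # Step schedule: switch to the lower learning rate for the last 10 epochs.
    lr = FINAL_LR if epoch >= TOTAL_EPOCHS - 10 else BASE_LR
    for group in optimizer.param_groups:
        group["lr"] = lr
    for features, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(features), labels)
        loss.backward()
        optimizer.step()
```

The second schedule quoted above (0.0001 reduced to 0.00001 in the last 20 epochs) follows the same step pattern with different constants, so the same loop applies with `BASE_LR`, `FINAL_LR`, and the drop point swapped accordingly.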