Temporal Coherent Test Time Optimization for Robust Video Classification
Authors: Chenyu Yi, Siyuan Yang, Yufei Wang, Haoliang Li, Yap-peng Tan, Alex Kot
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the corruption robustness of TeCo on Mini Kinetics-C and Mini SSV2-C. Mean performance over the corruption types is the main robustness metric (a minimal sketch of this metric follows the table). We also compare TeCo with other baseline methods across architectures. |
| Researcher Affiliation | Academia | (1) School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore; (2) Interdisciplinary Graduate Programme, Nanyang Technological University, Singapore; (3) Department of Electrical Engineering, City University of Hong Kong, China |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper states 'We re-implement them on the video classification frameworks based on the public available codes.' in Appendix A.3, referring to baseline methods. However, it provides no concrete access to, or explicit statement of availability for, the code of the TeCo method itself. |
| Open Datasets | Yes | Kinetics (Carreira & Zisserman, 2017) and Something-Something-V2 (SSV2) (Goyal et al., 2017) are two of the most popular large-scale datasets in the video classification community. We use their variants Mini Kinetics-C and Mini SSV2-C (Yi et al., 2021) to evaluate the robustness of models against corruptions. |
| Dataset Splits | No | The paper mentions the Mini Kinetics-C and Mini SSV2-C datasets and their respective test-set sizes, and describes uniform sampling in the pretraining stage. However, it does not provide specific details on the training, validation, and test splits (e.g., percentages, sample counts per split, or predefined validation splits). |
| Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for running the experiments. It lacks any mention of the computing environment's specifications. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., library or solver names with version numbers like Python 3.8, PyTorch 1.9) needed to replicate the experiment. |
| Experiment Setup | Yes | We train the models with an initial learning rate of 0.01 and a cosine annealing learning rate schedule. For the optimization, we use SGD with momentum. On Mini Kinetics-C, we use a batch size of 32 and a learning rate of 0.001, while for Mini SSV2-C, we use the same batch size but a learning rate of 0.00001. In test-time optimization, we use the pre-trained model for network weight initialization and update the model parameters for one epoch (a configuration sketch follows the table). |
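
As referenced in the Research Type row, the paper's robustness score is the mean performance across the corruption types of Mini Kinetics-C and Mini SSV2-C. The sketch below is a minimal, hypothetical illustration of that averaging; the function name, dictionary layout, and example numbers are ours, not the authors'.

```python
# Minimal sketch (not the authors' code): average accuracy over corruption types,
# the robustness measure used on Mini Kinetics-C / Mini SSV2-C.
from statistics import mean


def mean_corruption_accuracy(per_corruption_acc: dict) -> float:
    """per_corruption_acc maps a corruption name (e.g. 'gaussian_noise')
    to the model's accuracy on that corrupted test set."""
    return mean(per_corruption_acc.values())


# Hypothetical numbers, for illustration only.
accs = {"gaussian_noise": 0.61, "motion_blur": 0.58, "fog": 0.64}
print(f"mean corruption accuracy: {mean_corruption_accuracy(accs):.3f}")
```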
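
The Experiment Setup row lists concrete optimizer settings; the sketch below arranges them as a PyTorch configuration under stated assumptions. The momentum value, the model, the data loader, and the `tta_loss` placeholder (standing in for TeCo's self-supervised objective) are assumptions, and treating the per-dataset learning rates (0.001 / 0.00001) and batch size 32 as the test-time settings is our reading of the quoted text; only the learning rates, the cosine schedule, SGD with momentum, and the one-epoch test-time update are taken from the reported setup.

```python
# Sketch of the reported optimization settings in PyTorch; placeholders are marked.
from torch import nn, optim


def make_pretrain_optimizer(model: nn.Module, num_epochs: int):
    # Source training: SGD with momentum, initial LR 0.01, cosine annealing schedule.
    opt = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)  # momentum value assumed
    sched = optim.lr_scheduler.CosineAnnealingLR(opt, T_max=num_epochs)
    return opt, sched


def test_time_optimize(model: nn.Module, corrupted_loader, tta_loss, dataset="mini_kinetics_c"):
    """One epoch of test-time updates, starting from the pre-trained weights.
    Per-dataset learning rates and batch size 32 as reported (their assignment to
    the test-time stage is our reading of the setup description)."""
    lr = 1e-3 if dataset == "mini_kinetics_c" else 1e-5  # Mini SSV2-C uses 0.00001
    opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)  # momentum value assumed
    model.train()
    for clips in corrupted_loader:  # DataLoader built with batch_size=32
        opt.zero_grad()
        loss = tta_loss(model, clips)  # placeholder for TeCo's self-supervised objective
        loss.backward()
        opt.step()
    return model
```

The small test-time learning rates, one to three orders of magnitude below the pretraining rate, suggest the adaptation is meant to stay close to the source model rather than retrain it.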