Hypercorrelation Evolution for Video Class-Incremental Learning

Authors: Sen Liang, Kai Zhu, Wei Zhai, Zhiheng Liu, Yang Cao

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results demonstrate the effectiveness of the proposed method on three standard video class-incremental learning benchmarks, outperforming state-of-the-art methods."
Researcher Affiliation | Academia | "¹University of Science and Technology of China, ²Institute of Artificial Intelligence, Hefei Comprehensive National Science Center. {liangsen@mail., zkzy@mail., wzhai056@mail., lzh990528@mail., forrest@}ustc.edu.cn"
Pseudocode | No | The paper describes its methods in prose and diagrams (Figure 2, Figure 3) but includes no explicit pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at: https://github.com/Lsen991031/HCE
Open Datasets | Yes | "We conduct a comprehensive evaluation of the proposed HCE on three widely-used action recognition datasets: UCF101 (Soomro, Zamir, and Shah 2012), HMDB51 (Kuehne et al. 2011), and Something-Something V2 (Goyal et al. 2017)."
Dataset Splits | No | "To evaluate the performance of our VCIL method, we adopt different training strategies for each dataset. Specifically, for UCF101, we first train the model on 51 classes and then divide the remaining 50 classes into 5, 10, and 25 tasks, respectively. For HMDB51, we train the base model using videos from 26 classes and then separate the remaining 25 classes into 5 or 25 groups. For Something-Something V2, we first train on 84 classes in the initial stage and then generate groups of 10 and 5 classes."
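The incremental split protocol quoted above (e.g., UCF101: 51 base classes, then the remaining 50 classes divided into 5, 10, or 25 tasks) can be sketched as follows. This is an illustrative assumption of how such splits are typically generated, not the authors' code; the function name and shuffling choice are hypothetical.

```python
# Hypothetical sketch of a class-incremental split protocol:
# a base stage of num_base classes, then the remaining classes
# divided into num_tasks equally sized incremental tasks.

def make_incremental_splits(num_classes, num_base, num_tasks):
    """Return (base_classes, incremental_tasks) as lists of class IDs."""
    classes = list(range(num_classes))       # class IDs 0..num_classes-1
    base = classes[:num_base]                # classes seen in the base stage
    rest = classes[num_base:]                # classes arriving incrementally
    per_task = len(rest) // num_tasks        # equal task sizes assumed
    tasks = [rest[i * per_task:(i + 1) * per_task] for i in range(num_tasks)]
    return base, tasks

# UCF101-style setting: 101 classes, 51 base, remaining 50 in 10 tasks of 5.
base, tasks = make_incremental_splits(101, 51, 10)
print(len(base), len(tasks), len(tasks[0]))  # → 51 10 5
```

In practice the class order is often shuffled with a fixed seed before splitting so that results can be averaged over several class orderings; the deterministic ordering here is only for clarity.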
Hardware Specification | No | The paper does not specify the hardware (e.g., GPU model, CPU type, memory) used to run the experiments.
Software Dependencies | No | The paper does not list software dependencies or their version numbers (e.g., Python, PyTorch, CUDA versions).
Experiment Setup | No | The paper describes the overall framework, loss functions, and dataset evaluation protocols, but does not give concrete setup details such as hyperparameter values (e.g., learning rate, batch size, number of epochs) or optimizer settings.