Learning to Prompt Knowledge Transfer for Open-World Continual Learning

Authors: Yujie Li, Xin Yang, Hao Wang, Xiangkun Wang, Tianrui Li

AAAI 2024

Reproducibility assessment (variable, result, and supporting LLM response):
Research Type: Experimental. "Experimental results using two real-world datasets demonstrate that the proposed Pro-KT outperforms the state-of-the-art counterparts in both the detection of unknowns and the classification of knowns markedly."
Researcher Affiliation: Academia. Yujie Li (1,3), Xin Yang (1,2)*, Hao Wang (4), Xiangkun Wang (1,2), Tianrui Li (5). 1: Complex Laboratory of New Finance and Economics, Southwestern University of Finance and Economics; 2: School of Computing and Artificial Intelligence, Southwestern University of Finance and Economics; 3: School of Management Science and Engineering, Southwestern University of Finance and Economics; 4: School of Computer Science and Engineering, Nanyang Technological University; 5: School of Computing and Artificial Intelligence, Southwest Jiaotong University.
Pseudocode: No. The paper describes its method using text and mathematical equations, but does not include any structured pseudocode or algorithm blocks.
Open Source Code: Yes. Code released at https://github.com/YujieLi42/Pro-KT.
Open Datasets: Yes. "We experiment on two commonly-used and publicly-available datasets, namely Split CIFAR100 (Krizhevsky, Hinton et al. 2009) and 5-datasets (Ebrahimi et al. 2020)."
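For context, the standard Split CIFAR-100 protocol partitions the 100 classes into disjoint tasks. Below is a minimal sketch assuming the common 10-tasks-of-10-classes setting; the paper's exact task count is not confirmed here.

```python
# Minimal sketch of the standard Split CIFAR-100 protocol: the 100 classes
# are partitioned into disjoint tasks (10 tasks of 10 classes is a common
# choice; the paper's exact task count is an assumption for this sketch).
import numpy as np
from torchvision.datasets import CIFAR100

def split_cifar100(root="./data", num_tasks=10, train=True):
    dataset = CIFAR100(root=root, train=train, download=True)
    targets = np.array(dataset.targets)
    classes_per_task = 100 // num_tasks
    tasks = []
    for t in range(num_tasks):
        task_classes = range(t * classes_per_task, (t + 1) * classes_per_task)
        idx = np.where(np.isin(targets, list(task_classes)))[0]
        tasks.append(idx)  # indices of the samples belonging to task t
    return dataset, tasks

dataset, tasks = split_cifar100()
print([len(idx) for idx in tasks])  # 500 train images per class -> 5000 per task
```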
Dataset Splits: No. The paper does not explicitly mention or provide details for a validation split (e.g., percentages, counts, or reference to a standard validation split); it primarily discusses training and test sets.
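To illustrate the kind of detail that is missing, a class-balanced validation split is typically carved out of the training set. In this sketch the 90/10 ratio and the fixed seed are assumptions, not values reported in the paper.

```python
# Illustrative only: carve a class-balanced validation split out of a
# training set given its label array. The 90/10 ratio and the seed are
# assumptions for the sketch, not settings reported in the paper.
import numpy as np

def make_val_split(targets, val_fraction=0.1, seed=0):
    rng = np.random.default_rng(seed)
    targets = np.asarray(targets)
    train_idx, val_idx = [], []
    for c in np.unique(targets):
        idx = rng.permutation(np.where(targets == c)[0])
        n_val = int(len(idx) * val_fraction)
        val_idx.extend(idx[:n_val])
        train_idx.extend(idx[n_val:])
    return np.array(train_idx), np.array(val_idx)
```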
Hardware Specification: No. The paper mentions using ResNet32 and ViT as backbones, which are model architectures, not hardware specifications. No specific CPU, GPU, or other hardware details are provided for the experimental setup.
Software Dependencies: No. The paper does not provide specific version numbers for any ancillary software, libraries, or programming languages used in the experiments.
Experiment Setup: No. The paper discusses model-specific parameters (M, Lp, K) in its sensitivity analysis, but it does not provide general experimental setup details such as learning rate, batch size, optimizer type, number of epochs, or other system-level training configurations used for the main results.
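For context on the parameters named above: in prompt-pool continual learners of the L2P family, M usually denotes the prompt pool size, Lp the length of each prompt, and K the number of prompts selected per input. The following is a minimal sketch of that selection step under this interpretation; the class and tensor shapes are illustrative, not Pro-KT's actual implementation.

```python
# Hedged sketch of prompt-pool selection as in L2P-style methods:
# M prompts of length Lp live in a pool; for each input, the K prompts
# whose keys best match the query feature are prepended to the tokens.
import torch
import torch.nn.functional as F

class PromptPool(torch.nn.Module):
    def __init__(self, M=10, Lp=5, K=4, dim=768):
        super().__init__()
        self.keys = torch.nn.Parameter(torch.randn(M, dim))         # one key per prompt
        self.prompts = torch.nn.Parameter(torch.randn(M, Lp, dim))  # M prompts, each Lp tokens
        self.K = K

    def forward(self, query):  # query: (B, dim) image feature
        sim = F.cosine_similarity(query.unsqueeze(1), self.keys.unsqueeze(0), dim=-1)  # (B, M)
        topk = sim.topk(self.K, dim=1).indices  # (B, K) indices of best-matching prompts
        selected = self.prompts[topk]           # (B, K, Lp, dim)
        return selected.flatten(1, 2)           # (B, K*Lp, dim) tokens to prepend

pool = PromptPool()
tokens = pool(torch.randn(2, 768))
print(tokens.shape)  # torch.Size([2, 20, 768])
```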