Adaptive Discovering and Merging for Incremental Novel Class Discovery

Authors: Guangyao Chen, Peixi Peng, Yangru Huang, Mengyue Geng, Yonghong Tian

AAAI 2024

Reproducibility assessment. Each entry gives the variable, the assessed result, and the LLM response supporting it:
Research Type: Experimental. 'Extensive experiments on several datasets show that ADM significantly outperforms existing class-incremental Novel Class Discovery (class-iNCD) approaches.'
Researcher Affiliation: Academia. (1) National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, China; (2) School of Electronic and Computer Engineering, Shenzhen Graduate School, Peking University, China; (3) Peng Cheng Laboratory, China.
Pseudocode: Yes. 'Overall, Algorithm 1 summarizes the details of incremental category discovery with gated linear merging.'
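
Algorithm 1 itself is not reproduced in this report. For intuition only, the sketch below shows one plausible form a gated linear merge of a frozen base-class head and a novel-class head could take; the class name, the per-unit sigmoid gate, and every other detail are assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class GatedLinearMerge(nn.Module):
    """Hypothetical sketch of a gated linear merge between a frozen
    base-class head and a novel-class head. Illustrative only; this is
    not the paper's Algorithm 1."""

    def __init__(self, base_head: nn.Linear, novel_head: nn.Linear):
        super().__init__()
        assert base_head.in_features == novel_head.in_features
        self.base_head = base_head    # classifier for old classes (kept frozen)
        self.novel_head = novel_head  # classifier trained on discovered classes
        for p in self.base_head.parameters():
            p.requires_grad_(False)
        # One learnable gate per novel output unit, initialised neutrally.
        self.gate = nn.Parameter(torch.zeros(novel_head.out_features))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        old_logits = self.base_head(features)
        # A sigmoid gate scales each novel logit before the heads are merged.
        new_logits = self.novel_head(features) * torch.sigmoid(self.gate)
        return torch.cat([old_logits, new_logits], dim=-1)

# Usage: 5 old classes + 5 discovered classes over 512-d features.
head = GatedLinearMerge(nn.Linear(512, 5), nn.Linear(512, 5))
logits = head(torch.randn(4, 512))  # shape (4, 10)
```
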
Open Source Code: Yes. 'The source code is included in the supplementary materials.'
Open Datasets: Yes. 'We employ three datasets to conduct experiments for class-iNCD: CIFAR-10 (Krizhevsky and Hinton 2009), CIFAR-100 (Krizhevsky and Hinton 2009) and Tiny-ImageNet (Le and Yang 2015).'
Dataset Splits: No. The paper states that 'Each dataset is split into the old and new classes following (Han et al. 2020; Zhong et al. 2021a; Liu and Tuytelaars 2022; Roy et al. 2022). For CIFAR10, 5 base classes and 5 novel classes are randomly sampled. 20 novel classes are sampled for CIFAR100 and Tiny ImageNet and the rest are base classes.' and 'The CIFAR-10 and CIFAR-100 datasets contain 50,000 and 10,000 32×32 color images for training and testing.' However, it does not provide explicit train/validation/test split percentages, sample counts, or the random seeds that would make the class partitioning reproducible; the kind of seed-fixed split that is missing is sketched below.
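
For reference, such a seed-fixed base/novel partition could be recorded as compactly as this minimal sketch; the helper name, the NumPy-based sampling, and the seed value are all hypothetical, not details from the paper.

```python
import numpy as np

def split_classes(num_classes: int, num_novel: int, seed: int = 0):
    """Sample novel classes with a fixed seed so the base/novel
    partition is reproducible. Illustrative only."""
    rng = np.random.default_rng(seed)
    classes = rng.permutation(num_classes)
    novel = np.sort(classes[:num_novel])
    base = np.sort(classes[num_novel:])
    return base.tolist(), novel.tolist()

# CIFAR-10: 5 base / 5 novel; CIFAR-100 and Tiny-ImageNet: 20 novel classes.
base_c10, novel_c10 = split_classes(10, 5, seed=0)
base_c100, novel_c100 = split_classes(100, 20, seed=0)
base_tin, novel_tin = split_classes(200, 20, seed=0)
```
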
Hardware Specification: No. The paper states 'Computing support was provided by Pengcheng Cloudbrain.' but gives no concrete hardware details such as GPU models, CPU types, or memory capacity used for the experiments.
Software Dependencies: No. The paper mentions using 'ResNet-18 (He et al. 2016) as the backbone' and adopting 'most of the implementation following (Roy et al. 2022)', but it specifies no software dependencies with version numbers (e.g., Python, PyTorch, or CUDA versions).
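
For completeness, the kind of version record that would close this gap is sketched below, assuming a PyTorch stack (a reasonable guess given the ResNet-18 backbone, but still an assumption; the paper reports none of these).

```python
# Hypothetical version record; none of these values appear in the paper.
import sys
import torch
import torchvision

print(f"python      {sys.version.split()[0]}")
print(f"torch       {torch.__version__}")
print(f"torchvision {torchvision.__version__}")
print(f"cuda        {torch.version.cuda}")  # None for a CPU-only build
```
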
Experiment Setup: No. The paper states 'We used ResNet-18 (He et al. 2016) as the backbone in all the experiments. We have adopted most of the implementation following (Roy et al. 2022).' However, it does not report concrete hyperparameters such as the learning rate, batch size, number of epochs, or optimizer settings in the main text.
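
To make the omission concrete, a fully disclosed setup might look like the placeholder below. Apart from the ResNet-18 backbone, every value is hypothetical and should not be read as the authors' configuration.

```python
# Placeholder experiment configuration. Only "backbone" comes from the
# paper; every other value is an assumption inserted for illustration.
config = {
    "backbone": "resnet18",   # stated in the paper
    "optimizer": "sgd",       # assumption
    "lr": 0.1,                # assumption
    "momentum": 0.9,          # assumption
    "weight_decay": 5e-4,     # assumption
    "batch_size": 128,        # assumption
    "epochs": 200,            # assumption
    "seed": 0,                # assumption
}
```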