Dynamic against Dynamic: An Open-Set Self-Learning Framework

Authors: Haifeng Yang, Chuanxing Geng, Pong C. Yuen, Songcan Chen

IJCAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our method establishes new performance milestones in almost all standard and cross-dataset benchmarks. ... Extensive experiments verify the effectiveness of our OSSL, establishing new performance milestones in almost all standard and cross-dataset benchmark datasets. ... Table 1: Evaluation on open-set detection (AUROC) under the standard-dataset setting. ... Table 2: Macro-F1 score (%) of different methods under the cross-dataset setting (MNIST as the ID data).
Researcher Affiliation | Academia | Haifeng Yang (1,2), Chuanxing Geng (1,2,3), Pong C. Yuen (3), and Songcan Chen (1,2). 1: Nanjing University of Aeronautics and Astronautics; 2: MIIT Key Laboratory of Pattern Analysis and Machine Intelligence; 3: Hong Kong Baptist University.
Pseudocode | Yes | Algorithm 1: Training Procedure of OSSL Framework
Open Source Code | Yes | https://github.com/ChuanxingGeng/OSSL
Open Datasets | Yes | Datasets: We here follow the protocol defined in [Neal et al., 2018], and provide six standard OSR benchmarks: MNIST, SVHN, CIFAR10... For MNIST [Lake et al., 2015], SVHN [Netzer et al., 2011], and CIFAR10 [Krizhevsky, 2009]... TinyImageNet: TinyImageNet, a derived subset of the larger ImageNet [Russakovsky et al., 2014] dataset...
Dataset Splits | No | The paper defines training and test sets but does not specify a validation set or explicit percentages/counts for train/validation/test splits. The partitioning of the test set into three parts is for the self-learning process, not a standard validation split.
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) are mentioned for running the experiments; only software dependencies and general experimental setup details are provided.
Software Dependencies | No | The paper mentions using the 'Stochastic Gradient Descent (SGD) technique as the optimizer' and the 'network architecture in [Vaze et al., 2022] as the backbone' but does not provide specific version numbers for any software libraries or environments.
Experiment Setup | Yes | For the threshold parameters used in the partition of the test set, we set µ = 0.3, γ = 0.03 for TinyImageNet, µ = 0.5, γ = 0.03 for CIFAR10, CIFAR+10, and CIFAR+50, and µ = 0.8, γ = 0.02 for MNIST and SVHN. In addition, the number of samples from T_rs in each batch is set to 16 (the batch size is 256), while the hyper-parameter λ in L_Mar is set to 2 for all benchmark datasets. ... Considering that the feature extractor F(·) is an already well-trained network, we here set its learning rate to 10^-4 for TinyImageNet and 10^-5 for the other datasets. Furthermore, the learning rates of all parts except F(·) are uniformly set to 0.01.
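For reference, a minimal sketch (Python; the dictionary layout and key names are illustrative assumptions, not taken from the released OSSL code) that collects the hyper-parameters reported above in one place:

```python
# Hypothetical configuration sketch summarizing the hyper-parameters quoted
# in the Experiment Setup row; structure and names are illustrative only and
# do not mirror the official OSSL repository.

# Per-dataset thresholds (mu, gamma) used to partition the test set, plus the
# learning rate for the already well-trained feature extractor F(.).
DATASET_CONFIGS = {
    "TinyImageNet": {"mu": 0.3, "gamma": 0.03, "lr_feature_extractor": 1e-4},
    "CIFAR10":      {"mu": 0.5, "gamma": 0.03, "lr_feature_extractor": 1e-5},
    "CIFAR+10":     {"mu": 0.5, "gamma": 0.03, "lr_feature_extractor": 1e-5},
    "CIFAR+50":     {"mu": 0.5, "gamma": 0.03, "lr_feature_extractor": 1e-5},
    "MNIST":        {"mu": 0.8, "gamma": 0.02, "lr_feature_extractor": 1e-5},
    "SVHN":         {"mu": 0.8, "gamma": 0.02, "lr_feature_extractor": 1e-5},
}

# Settings reported as shared across all benchmark datasets.
SHARED_CONFIG = {
    "batch_size": 256,         # total samples per batch
    "samples_from_T_rs": 16,   # samples drawn from T_rs in each batch
    "lambda_margin": 2.0,      # weight lambda of the margin loss L_Mar
    "lr_other_modules": 0.01,  # learning rate for all parts except F(.)
    "optimizer": "SGD",        # optimizer named in the paper
}

if __name__ == "__main__":
    for name, cfg in DATASET_CONFIGS.items():
        print(name, {**SHARED_CONFIG, **cfg})
```

Note that a reproduction attempt would additionally require the backbone architecture of [Vaze et al., 2022] noted under Software Dependencies, which the paper references but does not version.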