Class-Specific Semantic Generation and Reconstruction Learning for Open Set Recognition

Authors: Haoyang Liu, Yaojin Lin, Peipei Li, Jun Hu, Xuegang Hu

IJCAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experimental results show that the proposed method outperforms existing advanced methods and achieves new state-of-the-art performance." The paper also contains Section 5 (Experiment) with subsections such as 5.2 Experiments for Unknown Detection, 5.3 Experiments for Open Set Recognition, 5.4 Experiments for Out-of-Distribution Detection, and 5.5 Ablation Study.
Researcher Affiliation | Academia | 1 Key Laboratory of Knowledge Engineering with Big Data (Hefei University of Technology), Ministry of Education; 2 School of Computer Science and Information Engineering, Hefei University of Technology; 3 School of Computer Science, Minnan Normal University; 4 Key Laboratory of Data Science and Intelligence Application, Minnan Normal University; 5 School of Computing, National University of Singapore; 6 Anhui Province Key Laboratory of Industry Safety and Emergency Technology
Pseudocode | Yes | "Algorithm 1 The training procedure of CSGRL"
Open Source Code | Yes | "The code is available at https://github.com/Ashowman98/CSGRL."
Open Datasets | Yes | "We construct experiments on three image datasets, including Cifar10 [Krizhevsky, 2009], SVHN [Netzer et al., 2011], and Tinyimagenet [Le and Yang, 2015]."
Dataset Splits | Yes | "For Cifar10 and SVHN, 6 classes are randomly sampled as known classes and the remaining 4 classes are set as unknown classes. For Tinyimagenet with 200 classes, the ratio of known to unknown classes is 20:180. To avoid chance, the splits of known and unknown classes are randomized five times, and then their average results are reported. In this experiment, we directly use the same data split with [Huang et al., 2023]." Also: "Cifar10 is used as the training set, and samples from other datasets are collected into the test set." and "...models are trained on Cifar10 as the in-distribution dataset. Then, they are tested on Cifar100 or SVHN as the near OOD dataset and far OOD dataset, respectively." (A sketch of this split protocol is given after the table.)
Hardware Specification | No | The paper does not specify the hardware used for experiments, such as GPU/CPU models or other compute resources.
Software Dependencies | No | The paper mentions a "backbone network" and a "stochastic gradient descent optimizer" but does not list specific software dependencies with version numbers.
Experiment Setup | Yes | "We train the network for 250 epochs with a mini-batch size of 128, including 200 epochs for Step-I and 50 epochs for Step-II in Algorithm 1. The learning rate was initially set to 0.4." and "momentum = 0.9." (A sketch of this training configuration is given after the table.)
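To make the Dataset Splits row concrete, below is a minimal sketch of how a known/unknown class split of the kind described there could be constructed for Cifar10 (6 known classes, 4 unknown, redrawn five times and averaged). This is illustrative only: the helper names sample_known_unknown and filter_by_classes are hypothetical, and the paper states it reuses the exact splits of [Huang et al., 2023] rather than drawing new ones.

```python
import random

import numpy as np
from torchvision import datasets, transforms

def sample_known_unknown(num_classes=10, num_known=6, seed=0):
    """Randomly pick num_known classes as 'known'; the rest are 'unknown'."""
    rng = random.Random(seed)
    known = sorted(rng.sample(range(num_classes), num_known))
    unknown = sorted(set(range(num_classes)) - set(known))
    return known, unknown

def filter_by_classes(dataset, keep_classes):
    """Return indices of samples whose label belongs to keep_classes."""
    targets = np.asarray(dataset.targets)
    return np.where(np.isin(targets, list(keep_classes)))[0]

if __name__ == "__main__":
    transform = transforms.ToTensor()
    train_set = datasets.CIFAR10(root="./data", train=True, download=True, transform=transform)
    test_set = datasets.CIFAR10(root="./data", train=False, download=True, transform=transform)

    for split_id in range(5):  # five randomized splits, results averaged over them
        known, unknown = sample_known_unknown(seed=split_id)
        train_idx = filter_by_classes(train_set, known)          # train only on known classes
        test_known_idx = filter_by_classes(test_set, known)      # closed-set test samples
        test_unknown_idx = filter_by_classes(test_set, unknown)  # open-set test samples
        print(split_id, known, len(train_idx), len(test_known_idx), len(test_unknown_idx))
```

The index arrays can be passed to torch.utils.data.Subset to build the actual training and evaluation loaders.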
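Similarly, the Experiment Setup row can be read as the following optimizer and schedule configuration. This is a sketch assuming a standard PyTorch training loop; the backbone and the CSGRL-specific Step-I/Step-II objectives are placeholders, not the authors' implementation (the real code is at the GitHub link above).

```python
import torch
from torch import nn

# Placeholder backbone/classifier; the actual CSGRL architecture and losses
# are not reproduced here.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 6))
criterion = nn.CrossEntropyLoss()

# Settings quoted in the table: SGD optimizer, initial learning rate 0.4,
# momentum 0.9, mini-batch size 128, 250 epochs split into 200 (Step-I) + 50 (Step-II).
optimizer = torch.optim.SGD(model.parameters(), lr=0.4, momentum=0.9)
BATCH_SIZE = 128
EPOCHS_STEP_I, EPOCHS_STEP_II = 200, 50

def run_step(loader, num_epochs):
    """Generic epoch loop; Step-II would add the paper's class-specific
    semantic generation and reconstruction terms to the loss."""
    model.train()
    for _ in range(num_epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()

# Usage (train_subset should contain only known-class samples, as in the split sketch above):
# train_loader = torch.utils.data.DataLoader(train_subset, batch_size=BATCH_SIZE, shuffle=True)
# run_step(train_loader, EPOCHS_STEP_I)   # Step-I: 200 epochs
# run_step(train_loader, EPOCHS_STEP_II)  # Step-II: 50 epochs
```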