All Beings Are Equal in Open Set Recognition

Authors: Chaohua Li, Enhao Zhang, Chuanxing Geng, Songcan Chen

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results indicate DCTAU sets a new state-of-the-art. [...] Extensive experiments on various benchmarks show ours outperforms the state-of-the-art approaches.
Researcher Affiliation | Academia | 1) College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics; 2) MIIT Key Laboratory of Pattern Analysis and Machine Intelligence; {chaohuali, zhangeh, gengchuanxing, s.chen}@nuaa.edu.cn
Pseudocode | No | The paper describes procedures and mathematical formulations but does not include any explicitly labeled pseudocode blocks or algorithms.
Open Source Code | Yes | All technical appendices: https://github.com/SuperL7/DCTAU
Open Datasets | Yes | MNIST (Lake, Salakhutdinov, and Tenenbaum 2015), SVHN (Netzer et al. 2011) and CIFAR10 (Krizhevsky, Hinton et al. 2009) all consist of 10 classes [...] 10/50 classes sampled from CIFAR100 (Krizhevsky, Hinton et al. 2009) as unknown. TinyImageNet is a subset derived from ImageNet (Russakovsky et al. 2015) [...] Omniglot (Lake, Salakhutdinov, and Tenenbaum 2015) [...] LSUN (Yu et al. 2015).
Dataset Splits | Yes | Following the protocol defined in (Neal et al. 2018) and the dataset splits with (Chen et al. 2021; Xu, Shen, and Zhao 2023), a summary of 6 benchmark datasets is provided: MNIST, SVHN, CIFAR10. MNIST [...] of which 6 classes are randomly selected as known classes and the other 4 classes as unknown. [...] The number of OOD data is 10,000, equal to the test data of MNIST. (This split protocol is illustrated in the first code sketch below the table.)
Hardware Specification | No | The paper does not specify the hardware used to run the experiments, such as specific GPU or CPU models.
Software Dependencies | No | The paper mentions general software components like 'feature encoder backbone' and 'MLP' but does not provide specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x).
Experiment Setup | Yes | the training epochs of these two steps are 600 and 20 respectively. [...] For Targeted Mixup weight λ, we vary it from 0.1 to 0.9 and 0.5 achieves the best performance. [...] In the contrastive learning step, the feature encoder backbone is the same with (Neal et al. 2018), and an MLP with two fully connected layers is employed as the projection network. In the classifier training step, the network is also an MLP with a 128-node fully connected layer.
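
The split protocol quoted in the Dataset Splits row is simple to illustrate. The sketch below is a hypothetical helper, not the authors' code: for a 10-class dataset such as MNIST it draws 6 known classes at random and treats the remaining 4 as unknown; the concrete class indices used in the paper follow the published splits of Chen et al. 2021 and Xu, Shen, and Zhao 2023, which are not reproduced here.

    import random

    # Hypothetical sketch (not the authors' code) of the open-set split
    # protocol: for a 10-class dataset such as MNIST, 6 classes are
    # randomly chosen as known and the remaining 4 are treated as
    # unknown at test time.

    def make_osr_split(num_classes=10, num_known=6, seed=0):
        """Return (known_classes, unknown_classes) as sorted lists of class ids."""
        rng = random.Random(seed)
        known = sorted(rng.sample(range(num_classes), num_known))
        unknown = sorted(set(range(num_classes)) - set(known))
        return known, unknown

    known, unknown = make_osr_split()
    print("known:", known)      # 6 of the 10 class ids
    print("unknown:", unknown)  # the remaining 4 class ids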
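
To make the quoted experiment setup concrete, here is a minimal PyTorch-style sketch. It is a sketch under stated assumptions, not the released DCTAU implementation: only the two-layer projection MLP, the 128-node classifier layer, the 600/20 training epochs, and the Targeted Mixup weight λ = 0.5 come from the quote; everything else (dimensions, activations, output size, how mixup partners are chosen) is filled in as an assumption.

    import torch
    import torch.nn as nn

    # Hypothetical sketch of the two-step setup, not the code released at
    # https://github.com/SuperL7/DCTAU. Hidden widths, the feature
    # dimension, activations, the number of output logits, and the exact
    # form of Targeted Mixup are assumptions.

    FEAT_DIM = 512            # assumption: output size of the Neal et al. 2018 backbone
    CONTRASTIVE_EPOCHS = 600  # from the paper
    CLASSIFIER_EPOCHS = 20    # from the paper
    MIXUP_LAMBDA = 0.5        # best-performing weight reported in the paper

    class ProjectionHead(nn.Module):
        """Contrastive step: an MLP with two fully connected layers."""
        def __init__(self, in_dim=FEAT_DIM, hidden_dim=512, out_dim=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_dim, hidden_dim),
                nn.ReLU(inplace=True),
                nn.Linear(hidden_dim, out_dim),
            )

        def forward(self, x):
            return self.net(x)

    class ClassifierHead(nn.Module):
        """Classifier-training step: an MLP with a 128-node fully connected layer."""
        def __init__(self, in_dim=FEAT_DIM, num_outputs=7):
            # num_outputs is an assumption: 6 known classes plus one extra logit.
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_dim, 128),
                nn.ReLU(inplace=True),
                nn.Linear(128, num_outputs),
            )

        def forward(self, x):
            return self.net(x)

    def targeted_mixup(x, x_target, lam=MIXUP_LAMBDA):
        """Convex combination of two feature batches with weight lambda.
        How the paper selects the "targeted" mixing partner is not modeled here."""
        return lam * x + (1.0 - lam) * x_target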