Contrastive Open Set Recognition
Authors: Baile Xu, Furao Shen, Jian Zhao
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate our method on multiple benchmark datasets and testing scenarios, achieving experimental results that verify the effectiveness of the proposed method. |
| Researcher Affiliation | Academia | State Key Laboratory for Novel Software Technology, Nanjing University; Department of Computer Science and Technology, Nanjing University; School of Artificial Intelligence, Nanjing University; School of Electronic Science and Engineering, Nanjing University |
| Pseudocode | No | No structured pseudocode or algorithm blocks (e.g., a figure or section explicitly labeled 'Pseudocode' or 'Algorithm') were found. |
| Open Source Code | Yes | An implementation of our method can be found at https://github.com/NJURINC/Con%20OSR. |
| Open Datasets | Yes | MNIST / SVHN / CIFAR-10: These datasets are classification datasets with 10 classes, of which 6 classes are selected as known data, leaving the remaining classes to simulate the open space in OSR scenarios. CIFAR+10 / CIFAR+50: 4 classes from CIFAR-10 are selected as known classes, and 10/50 classes selected from CIFAR-100 are used as unknown. TinyImageNet: TinyImageNet consists of 200 classes. We select 20 classes as known classes and use the remaining 180 classes as unknown. |
| Dataset Splits | Yes | On each benchmark dataset, we conduct the experiment over five trials using the same data split as (Chen et al. 2021), and report the mean results. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, processor types, memory amounts, or detailed computer specifications) used for running the experiments were provided. |
| Software Dependencies | No | No specific software dependencies with version numbers (e.g., library or solver names with version numbers like Python 3.8, CPLEX 12.4) were provided. |
| Experiment Setup | Yes | The projection network in the contrastive learning step is an MLP with two fully connected layers, both consisting of 128 nodes. The classification network is also an MLP with a 128-node fully connected layer. The magnitude of the transformation is controlled by a global hyper-parameter M. We set λ = 5 by default, which represents the desired false negative rate on the training set; however, the actual false negative rate on the test data is usually higher than λ. We set the value of λ by grid search over the range [0, 15]. |
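The Experiment Setup row describes two small MLP heads on top of a shared feature extractor: a two-layer 128-node projection head for contrastive learning, and a classification head with one 128-node hidden layer. The sketch below is a minimal pure-Python illustration of those shapes; the backbone feature dimension (512) and the classification output size (6, matching the 6-known-class benchmark split) are assumptions for illustration, not values the setup description states for the heads.

```python
# Minimal sketch of the two MLP heads described in the experiment setup.
# Assumptions (not from the paper): backbone feature dim = 512,
# classification output = 6 known classes.
import random

random.seed(0)

def make_layer(n_in, n_out):
    """Return a randomly initialized (weights, bias) pair for a linear layer."""
    w = [[random.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_out)]
    b = [0.0] * n_out
    return w, b

def forward(x, layers):
    """Apply linear layers with ReLU between them (no activation after the last)."""
    for i, (w, b) in enumerate(layers):
        x = [sum(wi * xi for wi, xi in zip(row, x)) + bj
             for row, bj in zip(w, b)]
        if i < len(layers) - 1:
            x = [max(0.0, v) for v in x]
    return x

FEAT_DIM = 512   # assumed backbone output size
NUM_KNOWN = 6    # 6 known classes per the benchmark protocol

# Projection head: two fully connected layers, 128 nodes each.
projection_head = [make_layer(FEAT_DIM, 128), make_layer(128, 128)]
# Classification head: one 128-node hidden layer, then class scores.
classification_head = [make_layer(FEAT_DIM, 128), make_layer(128, NUM_KNOWN)]

feat = [random.random() for _ in range(FEAT_DIM)]
z = forward(feat, projection_head)           # 128-d embedding for contrastive loss
logits = forward(feat, classification_head)  # scores over the 6 known classes
```

The two heads share the backbone feature `feat`; only their output dimensions differ, which is what the setup description pins down.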