Learning Bounds for Open-Set Learning

Authors: Zhen Fang, Jie Lu, Anjin Liu, Feng Liu, Guangquan Zhang

ICML 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments verify the efficacy of AOSR. Experiments on real datasets indicate that AOSR achieves competitive performance when compared with baselines.
Researcher Affiliation | Academia | AAII, University of Technology Sydney.
Pseudocode | Yes | Section 4, A Principle Guided OSL Algorithm: Step 1 (Feature Encoding), Step 2 (Initialize the Auxiliary Domain), Step 3 (Construct the Auxiliary Domain), Step 4 (Softmax C+1), Step 5 (Open-set Learning). (A pipeline sketch follows after the table.)
Open Source Code | Yes | The code is available at github.com/Anjin-Liu/Openset_Learning_AOSR.
Open Datasets | Yes | Double-moon dataset, MNIST (LeCun & Cortes, 2010), Omniglot (Ager, 2008), CIFAR-10 (Krizhevsky & Hinton, 2009), SVHN (Netzer et al., 2011), CIFAR-100 (Krizhevsky & Hinton, 2009).
Dataset Splits | Yes | Following the setup in Yoshihashi et al. (2019), we use MNIST (LeCun & Cortes, 2010) as the training samples and use Omniglot (Ager, 2008), MNIST-Noise, and Noise (Liu et al., 2021) datasets as unknown classes. We implement the double-moon dataset with varying size n. We also generate n test samples. (A double-moon data sketch follows after the table.)
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models or memory amounts used for running experiments.
Software Dependencies | No | The paper does not list specific versions of software dependencies or libraries used for the experiments.
Experiment Setup | Yes | AOSR has several hyper-parameters: β, t, µ and m. For all tasks, we set m = 3n and t = 10% by default. µ is a dynamic parameter depending on β: µ = nβ / (n_u + 0.0001), where n_u is the number of training samples actually predicted as unknown (the 0.0001 avoids division by zero). For example, if β = 0.05, n = 1000, and 10 training samples are predicted as unknown, then µ ≈ 5. (A worked example follows after the table.)
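
The five steps listed under Pseudocode can be pictured with a short sketch. Everything below is an illustrative reconstruction, not the authors' implementation: the identity encoder, the uniform bounding-box sampler standing in for the auxiliary domain, and the plain cross-entropy MLP are placeholder assumptions, and the paper's actual auxiliary-domain construction and training objective differ.

```python
# Hypothetical, highly simplified sketch of the five AOSR steps listed above.
import numpy as np
from sklearn.neural_network import MLPClassifier


def aosr_sketch(x_train, y_train, num_classes, seed=0):
    rng = np.random.default_rng(seed)

    # Step 1 (Feature Encoding): identity encoder as a stand-in for a learned one.
    z_train = np.asarray(x_train, dtype=float)
    n, d = z_train.shape

    # Steps 2-3 (Initialize / Construct the Auxiliary Domain): draw m = 3n points
    # uniformly over the bounding box of the encoded training data (placeholder).
    lo, hi = z_train.min(axis=0), z_train.max(axis=0)
    z_aux = rng.uniform(lo, hi, size=(3 * n, d))

    # Step 4 (Softmax C+1): auxiliary points take the extra "unknown" label C.
    z_all = np.vstack([z_train, z_aux])
    y_all = np.concatenate([np.asarray(y_train), np.full(3 * n, num_classes)])

    # Step 5 (Open-set Learning): fit a (C+1)-way softmax classifier;
    # predicting class `num_classes` at test time means "unknown".
    clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=seed)
    clf.fit(z_all, y_all)
    return clf
```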
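
For the synthetic double-moon experiment mentioned under Dataset Splits, a training set of size n and a test set of the same size can be generated with scikit-learn. The noise level and random seeds below are illustrative choices, not settings taken from the paper.

```python
# Illustrative double-moon data generation (noise and seeds are assumptions).
from sklearn.datasets import make_moons

n = 1000  # the paper varies this sample size
X_train, y_train = make_moons(n_samples=n, noise=0.1, random_state=0)
X_test, y_test = make_moons(n_samples=n, noise=0.1, random_state=1)
```

A toy set like this can be handed to a (C+1)-way open-set learner, e.g. the aosr_sketch helper from the pipeline sketch with num_classes = 2.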
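
The dynamic weight µ from the Experiment Setup row reduces to one line of arithmetic. The function name below is hypothetical, but the formula and the worked numbers follow the description above.

```python
# Worked example of the dynamic weight µ = n·β / (n_u + 0.0001), where n_u is
# the number of training samples currently predicted as unknown and the 0.0001
# guards against division by zero.
def dynamic_mu(n, beta, n_unknown):
    return n * beta / (n_unknown + 1e-4)


# With β = 0.05, n = 1000 training samples, and 10 of them predicted as
# unknown, µ comes out to approximately 5.
print(dynamic_mu(1000, 0.05, 10))  # ≈ 4.99995
```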