Not All Out-of-Distribution Data Are Harmful to Open-Set Active Learning

Authors: Yang Yang, Yuxuan Zhang, Xin Song, Yi Xu

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on various open-set AL scenarios demonstrate the effectiveness of the proposed PAL, compared with the state-of-the-art methods.
Researcher Affiliation | Collaboration | Yang Yang¹, Yuxuan Zhang¹, Xin Song², Yi Xu³ (¹Nanjing University of Science and Technology; ²Baidu Talent Intelligence Center, Baidu Inc.; ³Dalian University of Technology). Emails: {yyang, xuan_zhang}@njust.edu.cn, songxin06@baidu.com, yxu@dlut.edu.cn
Pseudocode | No | The paper includes a framework diagram (Figure 2) but no explicitly labeled 'Pseudocode' or 'Algorithm' block.
Open Source Code | Yes | The code is available at https://github.com/njustkmg/PAL.
Open Datasets | Yes | We evaluate the efficiency of PAL on several image classification benchmarks, i.e., CIFAR-10, CIFAR-100 [13], and Tiny-ImageNet [36] datasets, following standard open-set AL methods [12, 22].
Dataset Splits | Yes | CIFAR-10 consists of 50,000 training images and 10,000 test images; CIFAR-100 has 100 classes with 50,000 training images and 10,000 test images; Tiny-ImageNet consists of 100,000 training images and 10,000 validation images. For all AL methods, following [22], 1%, 10%, and 10% of the examples are randomly sampled as the initial labeled set on CIFAR-10, CIFAR-100, and Tiny-ImageNet, respectively (see the sampling sketch after the table).
Hardware Specification | Yes | All experiments are implemented on a single NVIDIA V100 GPU.
Software Dependencies | No | The paper mentions a WideResNet backbone and the SGD optimizer, but does not provide version numbers for the programming language or deep learning framework.
Experiment Setup | Yes | In each AL round, we train the model for 100 epochs, using the SGD optimizer with a momentum of 0.9. The learning rate is initialized to 0.01 with a mini-batch size of 128, and the weight decay is set to 5×10⁻⁴ (see the training sketch below).
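
The Dataset Splits row above describes how the initial labeled pools are drawn. A minimal sketch of that initialization follows, assuming PyTorch/torchvision (the paper does not name its framework) and a hypothetical initial_split helper with an arbitrary seed; the authoritative sampling logic lives in the released code at https://github.com/njustkmg/PAL and may differ:

```python
import numpy as np
from torchvision import datasets

# Fraction of the training set used as the initial labeled pool,
# per the splits reported above (CIFAR-10: 1%, CIFAR-100 and
# Tiny-ImageNet: 10%).
INIT_FRACTIONS = {"cifar10": 0.01, "cifar100": 0.10, "tiny-imagenet": 0.10}

def initial_split(num_train: int, fraction: float, seed: int = 0):
    """Randomly partition training indices into an initial labeled set
    and an unlabeled pool. The seed is our assumption; the paper does
    not report one."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_train)
    n_labeled = int(fraction * num_train)
    return perm[:n_labeled], perm[n_labeled:]

# Example: CIFAR-10 has 50,000 training images, so 1% -> 500 labeled.
train_set = datasets.CIFAR10(root="./data", train=True, download=True)
labeled_idx, unlabeled_idx = initial_split(len(train_set), INIT_FRACTIONS["cifar10"])
print(len(labeled_idx), len(unlabeled_idx))  # 500 49500
```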
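
The Experiment Setup row maps directly onto a standard supervised training loop. Below is a minimal sketch, again assuming PyTorch; the cross-entropy loss, the data-loader details, and the absence of a learning-rate schedule are assumptions, since the paper reports only the hyperparameters quoted above (100 epochs per AL round, SGD with momentum 0.9, initial learning rate 0.01, batch size 128, weight decay 5×10⁻⁴):

```python
import torch
from torch import nn
from torch.utils.data import DataLoader

def train_one_round(model: nn.Module, labeled_set, device="cuda", epochs=100):
    """One AL round of supervised training with the hyperparameters
    reported in the paper; loss and loader details are assumptions."""
    loader = DataLoader(labeled_set, batch_size=128, shuffle=True, num_workers=4)
    optimizer = torch.optim.SGD(
        model.parameters(),
        lr=0.01,            # initial learning rate (schedule unspecified in the paper)
        momentum=0.9,
        weight_decay=5e-4,  # the 5x10^-4 reported above
    )
    criterion = nn.CrossEntropyLoss()
    model.to(device).train()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model
```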