Meta-Query-Net: Resolving Purity-Informativeness Dilemma in Open-set Active Learning

Authors: Dongmin Park, Yooju Shin, Jihwan Bang, Youngjun Lee, Hwanjun Song, Jae-Gil Lee

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on multiple open-set active learning scenarios demonstrate that the proposed MQ-Net achieves a 20.14% improvement in accuracy compared with the state-of-the-art methods.
Researcher Affiliation | Collaboration | Dongmin Park1, Yooju Shin1, Jihwan Bang2,3, Youngjun Lee1, Hwanjun Song2, Jae-Gil Lee1; 1 KAIST, 2 NAVER AI Lab, 3 NAVER CLOVA
Pseudocode | Yes | The pseudocode of MQ-Net can be found in Appendix B.
Open Source Code | Yes | The code is available at https://github.com/kaist-dmlab/MQNet.
Open Datasets | Yes | We perform the active learning task on three benchmark datasets: CIFAR10 [41], CIFAR100 [41], and ImageNet [42].
Dataset Splits | Yes | Without assuming a hard-to-obtain clean validation set, we propose to use a self-validation set, which is instantaneously generated in every AL round. (See the self-validation sketch below the table.)
Hardware Specification | Yes | All methods are implemented with PyTorch 1.8.0 and executed on a single NVIDIA Tesla V100 GPU.
Software Dependencies | Yes | All methods are implemented with PyTorch 1.8.0 and executed on a single NVIDIA Tesla V100 GPU.
Experiment Setup | Yes | The total number r of rounds is set to 10. Following the prior open-set AL setup [13, 16], we set the labeling cost c_IN = 1 for IN examples and c_OOD = 1 for OOD examples. For the class-split setup, the labeling budget b per round is set to 500 for CIFAR10/100 and 1,000 for ImageNet. Regarding the open-set noise ratio τ, we configure four levels from light to heavy noise in {10%, 20%, 40%, 60%}. For the architecture of MQ-Net, we use a 2-layer MLP with a hidden dimension of 64 and the Sigmoid activation function. (See the architecture and configuration sketches below the table.)
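The Dataset Splits row describes a self-validation set that is regenerated at every AL round rather than held out in advance. Below is a minimal Python sketch of that idea, assuming (as the paper's purity-informativeness framing suggests, though the quoted sentence does not spell it out) that the examples queried in the current round, together with their IN/OOD annotations, serve as that round's validation data; the function name and pairing format are illustrative, not the authors' interface.

```python
def make_self_validation_set(queried_batch, in_ood_labels):
    """Sketch of a per-round self-validation set.

    The examples queried in the current AL round come back from the
    labeling step with IN/OOD annotations; reusing them as validation
    data avoids the need for a separate, hard-to-obtain clean split.
    """
    # Pair each queried example with its oracle-provided IN/OOD label.
    return list(zip(queried_batch, in_ood_labels))
```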
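The Experiment Setup row specifies MQ-Net's architecture as a 2-layer MLP with a hidden dimension of 64 and Sigmoid activation. A minimal PyTorch sketch follows; the 2-dimensional input (one purity score and one informativeness score per unlabeled example) and the exact placement of the activation are assumptions drawn from the paper's framing, not details confirmed by the quoted text.

```python
import torch
import torch.nn as nn

class MQNetSketch(nn.Module):
    """Sketch of the MQ-Net query scorer: a 2-layer MLP with hidden
    dimension 64 and Sigmoid activation, as quoted in the table.
    The 2-dim input (purity score, informativeness score) is an
    assumption, not a verified detail of the authors' code."""

    def __init__(self, in_dim: int = 2, hidden_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.Sigmoid(),               # activation placement is assumed
            nn.Linear(hidden_dim, 1),   # one meta-query score per example
        )

    def forward(self, scores: torch.Tensor) -> torch.Tensor:
        # scores: (batch, 2) -> meta-query score: (batch, 1)
        return self.net(scores)
```

Given a batch of shape (n, 2), this yields one meta-query score per unlabeled example; in each round, the b highest-scoring examples would then be queried.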
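The remaining hyperparameters in the Experiment Setup row can be collected into a single configuration, shown here as a hypothetical Python dict whose keys are illustrative rather than taken from the authors' repository; the values mirror the quoted setup.

```python
# Hypothetical configuration mirroring the quoted experiment setup;
# key names are illustrative, not from the authors' code.
AL_SETUP = {
    "num_rounds": 10,             # total number r of AL rounds
    "cost_in": 1,                 # labeling cost c_IN for IN examples
    "cost_ood": 1,                # labeling cost c_OOD for OOD examples
    "budget_per_round": {         # labeling budget b, class-split setup
        "cifar10": 500,
        "cifar100": 500,
        "imagenet": 1000,
    },
    "noise_ratios": [0.10, 0.20, 0.40, 0.60],  # open-set noise ratio tau
}
```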