Entropic Open-Set Active Learning

Authors: Bardia Safaei, Vibashan VS, Celso M. de Melo, Vishal M. Patel

AAAI 2024

Reproducibility Variable | Result | Evidence (LLM Response)
Research Type | Experimental | "Through extensive experiments, we show that the proposed method outperforms existing state-of-the-art methods on CIFAR-10, CIFAR-100, and Tiny ImageNet datasets."
Researcher Affiliation | Collaboration | Bardia Safaei (1), Vibashan VS (1), Celso M. de Melo (2), Vishal M. Patel (1); (1) Johns Hopkins University, Baltimore, MD, USA; (2) DEVCOM Army Research Laboratory, Adelphi, MD, USA
Pseudocode | Yes | Algorithm 1: Our Proposed Algorithm for Open-set AL
Open Source Code | Yes | Code is available at https://github.com/bardisafa/EOAL.
Open Datasets | Yes | "We perform extensive experiments on the CIFAR-10, CIFAR-100 (Krizhevsky, Hinton et al. 2009), and Tiny ImageNet (Yao and Miller 2015) datasets to demonstrate the effectiveness of our approach."
Dataset Splits | No | "For CIFAR-10, CIFAR-100, and Tiny ImageNet, we initialize the labeled dataset by randomly sampling 1%, 8%, and 8% of the samples from known classes, respectively." (The paper specifies the initial labeled data and the test data, but does not explicitly define a separate validation split for hyperparameter tuning or early stopping.)
Hardware Specification | Yes | "We utilize PyTorch (Paszke et al. 2019) to implement our method and an NVIDIA A5000 GPU to run each experiment."
Software Dependencies | No | "We utilize PyTorch (Paszke et al. 2019) to implement our method and an NVIDIA A5000 GPU to run each experiment." (PyTorch is mentioned, but without a specific version number, which is required for reproducibility; no other software dependencies with version numbers are listed.)
Experiment Setup | Yes | "In each AL cycle, we train models for 300 epochs via SGD optimizer (Ruder 2016) with an initial learning rate of 0.01, a momentum of 0.9, and a weight decay of 0.005. The learning rate is decayed by 0.5 every 60 epochs. The batch size is set to 128 for all experiments. We generally set the values of both β and λ to 0.1."
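The Dataset Splits row describes how the initial labeled pool is seeded: a fixed fraction of samples is drawn at random from the known classes, while everything not drawn (including all unknown-class samples) remains in the unlabeled pool, as is standard in open-set active learning. A minimal sketch of that seeding step, assuming integer class labels; the function name and fixed seed are illustrative, not from the paper:

```python
import random

def seed_labeled_pool(labels, known_classes, ratio, seed=0):
    """Draw `ratio` of the known-class samples as the initial labeled set.

    Everything not drawn (known- or unknown-class) stays unlabeled,
    so the unlabeled pool mixes knowns and unknowns (open-set setting).
    """
    rng = random.Random(seed)
    known_idx = [i for i, y in enumerate(labels) if y in known_classes]
    n_init = round(ratio * len(known_idx))
    labeled = sorted(rng.sample(known_idx, n_init))
    unlabeled = sorted(set(range(len(labels))) - set(labeled))
    return labeled, unlabeled
```

Under the paper's setup, `ratio` would be 0.01 for CIFAR-10 and 0.08 for CIFAR-100 and Tiny ImageNet.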
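The training recipe in the Experiment Setup row (SGD with lr 0.01, halved every 60 epochs, 300 epochs, batch size 128) implies a standard step-decay schedule. A small pure-Python sketch of the learning rate as a function of epoch; in PyTorch this corresponds to `torch.optim.SGD` combined with `torch.optim.lr_scheduler.StepLR(step_size=60, gamma=0.5)`:

```python
# Hyperparameters quoted from the paper's experiment setup.
EPOCHS = 300
BATCH_SIZE = 128
SGD_CONFIG = {"lr": 0.01, "momentum": 0.9, "weight_decay": 0.005}

def lr_at_epoch(epoch, base_lr=0.01, gamma=0.5, step=60):
    """Step decay: multiply the learning rate by `gamma` every `step` epochs."""
    return base_lr * gamma ** (epoch // step)
```

Over the 300-epoch run this yields five learning-rate plateaus, ending at 0.01 × 0.5⁴ for epochs 240-299.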