Active Learning Guided by Efficient Surrogate Learners

Authors: Yunpyo An, Suyeong Park, Kwang In Kim

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on four benchmark datasets demonstrate that this approach yields significant enhancements, either rivaling or aligning with the performance of state-of-the-art techniques.
Researcher Affiliation | Academia | Yunpyo An* (UNIST), Suyeong Park* (UNIST), Kwang In Kim (POSTECH); {anyunpyo,suyeong}@unist.ac.kr, kimkin@postech.ac.kr
Pseudocode | Yes | Algorithm 1: Active learning guided by GP proxies.
Open Source Code | No | The paper does not provide an explicit statement or link to its open-source code. It cites a supplementary document but does not state that code is released there.
Open Datasets | Yes | We evaluated the performance of our algorithm using four benchmark datasets: Tiny ImageNet (Le and Yang 2015), CIFAR100 (Krizhevsky 2009), Fashion MNIST (Xiao et al. 2017), and Caltech256 (Griffin et al. 2007).
Dataset Splits | No | The paper mentions training data and budget but does not explicitly provide details on train/validation/test splits, such as percentages or sample counts for each split. It mentions that 600 images were initially selected and labeled for training the baseline learner, and then the AL algorithms augmented the labeled set up to 11,000.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments. It only mentions general network architectures used as baseline learners.
Software Dependencies | No | The paper mentions that learners were trained using 'stochastic gradient descent' but does not specify any software names with version numbers (e.g., Python, PyTorch, TensorFlow versions or specific libraries/solvers). It refers to ResNet and VGG models, but not software dependencies with versions.
Experiment Setup | Yes | The number of basis points K for U and V (Eq. 3) is fixed at 500... The input kernel parameter σx is set to 0.5 times the average distance between data instances in X, while the output kernel parameter σf is determined as the number of classes per dataset. The noise level σ² (Eq. 2) is kept small at 10⁻¹⁰. ... Our learners were trained using stochastic gradient descent with an initial learning rate of 0.01. The learning rate was reduced to 10% of its value every 10 epochs. The mini-batch size and the total number of epochs were fixed at 30 and 100, respectively.
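
The Experiment Setup row above lists concrete hyperparameter values. The snippet below is a minimal sketch, assuming a PyTorch environment, of how those reported values could be wired together; the ResNet-18 backbone, the dataset handling, and the helper names (input_kernel_bandwidth, train_baseline_learner) are illustrative assumptions rather than the authors' released code, and the GP proxy model itself (Eqs. 2 and 3) is not reproduced here.

```python
# Minimal sketch of the reported experiment configuration (not the authors' code).
# Values marked "from paper" come from the table above; everything else
# (architecture, dataset handling) is an assumption for illustration only.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import models

NUM_CLASSES = 100          # e.g. CIFAR100; set per dataset
K_BASIS = 500              # from paper: basis points K for U and V (Eq. 3)
SIGMA_F = NUM_CLASSES      # from paper: output kernel parameter = number of classes
NOISE_VAR = 1e-10          # from paper: GP noise level sigma^2 (Eq. 2)
INITIAL_LABELED = 600      # from paper: initially selected and labeled images
FINAL_BUDGET = 11_000      # from paper: labeled-set size after AL augmentation
# The GP-proxy constants above are listed for reference; the surrogate itself
# and the acquisition rule of Algorithm 1 are not implemented in this sketch.


def input_kernel_bandwidth(features: torch.Tensor) -> float:
    """From paper: sigma_x = 0.5 * average pairwise distance between instances."""
    dists = torch.cdist(features, features)
    return 0.5 * dists[dists > 0].mean().item()


def train_baseline_learner(train_set) -> nn.Module:
    """From paper: SGD, initial lr 0.01, lr reduced to 10% every 10 epochs,
    mini-batch size 30, 100 epochs. The ResNet-18 backbone is an assumption."""
    model = models.resnet18(num_classes=NUM_CLASSES)
    loader = DataLoader(train_set, batch_size=30, shuffle=True)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)
    criterion = nn.CrossEntropyLoss()
    for _ in range(100):
        for x, y in loader:
            optimizer.zero_grad()
            criterion(model(x), y).backward()
            optimizer.step()
        scheduler.step()
    return model
```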