Neural Active Learning Beyond Bandits
Authors: Yikun Ban, Ishika Agarwal, Ziwei Wu, Yada Zhu, Kommy Weldemariam, Hanghang Tong, Jingrui He
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We use extensive experiments to evaluate the proposed algorithms, which consistently outperform state-of-the-art baselines. |
| Researcher Affiliation | Collaboration | 1University of Illinois Urbana-Champaign, 2IBM Research |
| Pseudocode | Yes | Algorithm 1 NEURONAL-S and Algorithm 2 NEURONAL-P are included in the paper. |
| Open Source Code | No | The paper does not provide any explicit statement about releasing code or a link to a code repository for its methodology. |
| Open Datasets | Yes | We evaluate NEURONAL for both stream-based and pool-based settings on the following six public classification datasets: Adult, Covertype (CT), Magic Telescope (MT), Shuttle [24], Fashion [61], and Letter [18]. |
| Dataset Splits | Yes | The default label budget is 30% of T. We perform hyperparameter tuning on the training set. |
| Hardware Specification | No | The paper describes neural network models and training, but it does not specify any hardware details like GPU models, CPU types, or memory used for the experiments. |
| Software Dependencies | No | The paper mentions using neural networks and activation functions (e.g., ReLU) but does not specify the versions of any software libraries, frameworks, or programming languages used (e.g., Python, PyTorch, TensorFlow). |
| Experiment Setup | Yes | For all NN models, we use the same width m = 100 and depth L = 2. We perform hyperparameter tuning on the training set. Each method has several hyperparameters: the learning rate, number of epochs, batch size, label budget percentage, and threshold (if applicable). During hyperparameter tuning for all methods, we perform a grid search over the values {0.0001, 0.0005, 0.001} for the learning rate, {10, 20, 30, 40, 50, 60, 70, 80, 90} for the number of epochs, {32, 64, 128, 256} for the batch size, {0.1, 0.3, 0.5, 0.7, 0.9} for the label budget percentage and {1, 2, 3, 4, 5, 6, 7, 8, 9} for the threshold (exploration) parameter. |
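
The experiment-setup row reports a fixed network size (width m = 100, depth L = 2) and an explicit grid of hyperparameter values. Since the paper does not release code, the following is only a minimal sketch of that configuration, assuming a two-layer fully connected ReLU network (one hidden layer of m units) and a plain grid enumeration; the class and function names here are illustrative, not the authors'.

```python
# Sketch of the reported experiment configuration (m = 100, L = 2, grid values).
# Names are illustrative; the paper does not provide reference code.
import itertools
import torch
import torch.nn as nn


class TwoLayerNet(nn.Module):
    """Width-100, depth-2 ReLU network: one hidden layer of m units."""

    def __init__(self, input_dim: int, num_classes: int, m: int = 100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, m),
            nn.ReLU(),
            nn.Linear(m, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


# Grid-search values quoted from the paper's tuning protocol.
grid = {
    "learning_rate": [0.0001, 0.0005, 0.001],
    "num_epochs": [10, 20, 30, 40, 50, 60, 70, 80, 90],
    "batch_size": [32, 64, 128, 256],
    "label_budget_pct": [0.1, 0.3, 0.5, 0.7, 0.9],  # reported default: 0.3
    "threshold": [1, 2, 3, 4, 5, 6, 7, 8, 9],       # exploration parameter, if applicable
}

# Enumerate every configuration; evaluating one is method-specific and omitted.
for values in itertools.product(*grid.values()):
    config = dict(zip(grid.keys(), values))
    # run_active_learning(config)  # hypothetical driver, not from the paper
```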
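
The open-datasets row names six public classification datasets but no retrieval source. As a rough sketch, all six appear to be obtainable from OpenML via scikit-learn; the OpenML dataset names below are assumptions and may need an explicit `version=` pin if several active versions exist.

```python
# Sketch of fetching the six public datasets via OpenML with scikit-learn.
# The OpenML names are assumptions; the paper does not state a download source.
from sklearn.datasets import fetch_openml

DATASETS = {
    "Adult": "adult",
    "Covertype (CT)": "covertype",
    "Magic Telescope (MT)": "MagicTelescope",
    "Shuttle": "shuttle",
    "Fashion": "Fashion-MNIST",
    "Letter": "letter",
}


def load_dataset(openml_name: str):
    """Download a dataset by OpenML name and return (features, labels)."""
    # Pin version= explicitly if OpenML reports multiple active versions.
    data = fetch_openml(openml_name, as_frame=False)
    return data.data, data.target


# Example usage:
# X, y = load_dataset(DATASETS["Letter"])
```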