Improved Algorithms for Neural Active Learning
Authors: Yikun Ban, Yuheng Zhang, Hanghang Tong, Arindam Banerjee, Jingrui He
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In the end, we use extensive experiments to evaluate the proposed algorithm and SOTA baselines, to show the improved empirical performance. |
| Researcher Affiliation | Academia | University of Illinois Urbana-Champaign {yikunb2, yuhengz2, htong, arindamb, jingrui}@illinois.edu |
| Pseudocode | Yes | Algorithm 1: I-NeurAL |
| Open Source Code | Yes | Codes are available: https://github.com/matouk98/I-NeurAL |
| Open Datasets | Yes | We report the experimental results on the following six data sets: Phishing, IJCNN [38], Letter [18], Fashion [49], MNIST [32] and CIFAR-10 [31]. |
| Dataset Splits | No | The paper describes a streaming setting in which instances are drawn dynamically from the dataset; it does not specify fixed training, validation, or test splits as percentages or counts. It states: 'In each round, one instance is randomly drawn from the data set'. |
| Hardware Specification | No | The paper states 'we only report the main results here and leave the implementation details and parameter sensitivity in the Appendix 10', but the provided text does not contain the appendix or any specific hardware details such as GPU models, CPU types, or cloud providers used for the experiments. |
| Software Dependencies | No | The paper states 'we only report the main results here and leave the implementation details and parameter sensitivity in the Appendix 10', but the provided text does not contain the appendix or any specific software dependencies with version numbers. |
| Experiment Setup | No | The paper states 'we only report the main results here and leave the implementation details and parameter sensitivity in the Appendix 10'. The main text refers to parameters of Algorithm 1 such as 'γ (exploration parameter), b (batch size), δ (confidence level)' but does not give their specific values, nor other setup details such as learning rates or number of epochs. |