Bayesian Batch Active Learning as Sparse Subset Approximation
Authors: Robert Pinsler, Jonathan Gordon, Eric Nalisnick, José Miguel Hernández-Lobato
NeurIPS 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the benefits of our approach on several large-scale regression and classification tasks. (See also Section 7, Experiments and results.) |
| Researcher Affiliation | Academia | Robert Pinsler, Department of Engineering, University of Cambridge, rp586@cam.ac.uk; Jonathan Gordon, Department of Engineering, University of Cambridge, jg801@cam.ac.uk; Eric Nalisnick, Department of Engineering, University of Cambridge, etn22@cam.ac.uk; José Miguel Hernández-Lobato, Department of Engineering, University of Cambridge, jmh233@cam.ac.uk |
| Pseudocode | Yes | The complete AL procedure, Active Bayesian Core Sets with Frank-Wolfe optimization (ACS-FW), is outlined in Appendix A (see Algorithm A.1). |
| Open Source Code | Yes | Source code is available at https://github.com/rpinsler/active-bayesian-coresets. |
| Open Datasets | Yes | We evaluate the performance of ACS-FW on several UCI regression datasets, and on the classification datasets cifar10, SVHN and Fashion MNIST. These are widely recognized public datasets. |
| Dataset Splits | No | The paper describes 'randomized 80/20% train-test splits' for regression tasks and 'holdout test set' with the remainder for training for classification tasks, but does not explicitly mention a separate validation split or provide details on how a validation set was used for hyperparameter tuning or early stopping. |
| Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for running its experiments. |
| Software Dependencies | No | The paper mentions 'Adam [36]' as an optimizer and 'PyTorch' but does not provide specific version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | The model is re-trained for 1000 epochs after every AL iteration using Adam [36] (regression); trained from scratch at every AL iteration for 250 epochs using Adam [36] (classification). Further details, including architectures and learning rates, are in Appendix C. |
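
The Pseudocode row above points to Algorithm A.1, the ACS-FW procedure, which frames batch construction as a sparse approximation of the pooled objective solved with Frank-Wolfe iterations. The following is a minimal NumPy sketch of that selection step, assuming the pairwise inner products between per-point objectives have already been computed as a matrix `K` (the paper derives closed-form and random-feature approximations for these); the function name `acs_fw_select` and this interface are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def acs_fw_select(K, batch_size):
    """Hedged sketch of Frank-Wolfe sparse subset selection.

    K : (N, N) symmetric matrix of assumed inner products <L_n, L_m>
        between per-point objectives (precomputed elsewhere).
    Returns the indices of pool points with non-zero weight after
    `batch_size` Frank-Wolfe iterations (the selected batch may be
    smaller, since an iteration can revisit an already chosen point).
    """
    N = K.shape[0]
    sigma = np.sqrt(np.diag(K))          # ||L_n||
    sigma_sum = sigma.sum()
    w = np.zeros(N)                      # sparse weights over the pool

    for _ in range(batch_size):
        # Alignment of each candidate with the current residual:
        # <L - L(w), L_n> / ||L_n||
        scores = (K.sum(axis=0) - K @ w) / sigma
        f = int(np.argmax(scores))

        # Vertex of the scaled simplex {w >= 0, sum_n w_n sigma_n = sum_n sigma_n}
        vertex = np.zeros(N)
        vertex[f] = sigma_sum / sigma[f]

        # Exact line search between the current weights and the vertex
        d = vertex - w
        dKd = d @ K @ d
        if dKd <= 0:
            break
        gamma = (d @ (K.sum(axis=0) - K @ w)) / dKd
        gamma = np.clip(gamma, 0.0, 1.0)
        w = (1 - gamma) * w + gamma * vertex

    return np.nonzero(w)[0]
```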
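
The Experiment Setup row quotes a protocol in which the model is re-trained with Adam after every acquisition round. Below is a hedged PyTorch-style sketch of such an outer active-learning loop; `model_fn`, `select_batch`, `model.loss`, and the fixed epoch count are placeholders for illustration and do not reproduce the exact architectures or learning rates reported in Appendix C.

```python
import torch

def run_al_loop(model_fn, pool_x, pool_y, init_idx, select_batch,
                n_iterations, epochs=1000, lr=1e-3):
    """Hypothetical outer AL loop: re-train from scratch with Adam for a
    fixed number of epochs after every acquisition, then select the next
    batch from the unlabelled pool (e.g. with ACS-FW)."""
    labelled = list(init_idx)
    model = None
    for _ in range(n_iterations):
        model = model_fn()                       # re-initialise the model
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        x, y = pool_x[labelled], pool_y[labelled]
        for _ in range(epochs):
            opt.zero_grad()
            loss = model.loss(x, y)              # assumed model-specific loss
            loss.backward()
            opt.step()

        # Acquire the next batch of pool indices with the current model
        unlabelled = [i for i in range(len(pool_x)) if i not in set(labelled)]
        labelled += select_batch(model, pool_x, unlabelled)
    return model
```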