Improved Algorithms for Agnostic Pool-based Active Classification
Authors: Julian Katz-Samuels, Jifan Zhang, Lalit Jain, Kevin Jamieson
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, we demonstrate that our algorithm is superior to state of the art agnostic active learning algorithms on image classification datasets. |
| Researcher Affiliation | Academia | 1University of Wisconsin, Madison, WI 2Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle, WA. |
| Pseudocode | Yes | Algorithm 1 ACED (Active Classification using Experimental Design). Algorithm 2 Fixed Budget ACED. |
| Open Source Code | Yes | Code can be found at https://github.com/jifanz/ACED. |
| Open Datasets | Yes | MNIST 0-4 vs 5-9 (LeCun et al., 1998). SVHN 2 vs 7 (Netzer et al., 2011). CIFAR Bird vs Plane (Krizhevsky, 2009). Fashion MNIST T-shirt vs Pants (Xiao et al., 2017). |
| Dataset Splits | No | The paper describes active learning from a pool of unlabeled examples with periodic model retraining, but it does not specify explicit train/validation/test splits as percentages or counts for any dataset. |
| Hardware Specification | No | Computational resources from Amazon Web Services were generously gifted as part of an Amazon Research Award. No specific hardware (e.g., GPU/CPU models, memory) is mentioned. |
| Software Dependencies | No | We used the logistic regression implementation in Scikit-learn (Pedregosa et al., 2011). No version numbers for Scikit-learn or Vowpal Wabbit are provided. |
| Experiment Setup | Yes | Detailed hyperparameters considered for the baselines are included in Appendix M of the paper. |