Towards Controlled Data Augmentations for Active Learning
Authors: Jianan Yang, Haobo Wang, Sai Wu, Gang Chen, Junbo Zhao
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through extensive empirical experiments, we bring the performance of active learning methods to a new level: an absolute performance boost of 16.99% on CIFAR-10 and 12.25% on SVHN with 1,000 annotated samples. |
| Researcher Affiliation | Academia | Department of Computer Science, Zhejiang University, Hangzhou, China. |
| Pseudocode | Yes | We summarize the pseudo-code of our CAMPAL in Algorithm 1 in Appendix B. |
| Open Source Code | Yes | Codes are available at https://github.com/jnzju/CAMPAL. |
| Open Datasets | Yes | We conduct experiments on four benchmark datasets: Fashion MNIST, SVHN, CIFAR-10, and CIFAR-100. |
| Dataset Splits | No | The paper describes the active learning cycle, including initial dataset sizes and acquisition cycles, but does not specify explicit training/validation/test dataset splits needed to reproduce the experiment in a traditional supervised learning context. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware used to run its experiments, such as GPU models, CPU models, or memory specifications. |
| Software Dependencies | No | The paper mentions using an "SGD optimizer" and "ResNet-18 as the architecture" but does not specify other key software components with version numbers (e.g., Python, PyTorch, or TensorFlow versions, or specific library versions). |
| Experiment Setup | Yes | We adopt ResNet-18 as the architecture and train the model for 300 epochs with an SGD optimizer of learning rate 0.01, momentum 0.9, and weight decay 5e-4. Specifically, all the experiments run 1,048,576 training iterations with a batch size of 64, a ResNet-18 model, and an SGD optimizer with learning rate 0.03, momentum 0.9, and weight decay 0.0005. Parameters that differ across the compared algorithms are shown in Table 7. |
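
For reference, the quoted hyperparameters map onto a short PyTorch training sketch like the one below. This is a minimal illustration under stated assumptions, not the authors' released code (see the CAMPAL repository linked above): the data loader, device handling, and absence of a learning-rate schedule are assumptions, and the hyperparameter values follow the first quoted setting.

```python
# Minimal sketch of the quoted training configuration, assuming PyTorch/torchvision.
# Follows the first quoted setting: ResNet-18, 300 epochs, SGD with lr=0.01,
# momentum=0.9, weight decay 5e-4. The loader and lack of an LR schedule are
# assumptions; the paper excerpt does not specify them.
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(num_classes=10)  # CIFAR-10 / SVHN both have 10 classes
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.01,           # quoted learning rate (0.03 in the second quoted setting)
    momentum=0.9,      # quoted momentum
    weight_decay=5e-4, # quoted weight decay
)

def train(model, loader, epochs=300, device="cuda"):
    """Plain supervised training loop over the currently labeled pool."""
    model.to(device).train()
    for _ in range(epochs):
        for inputs, targets in loader:  # batch size 64 per the second quoted setting
            inputs, targets = inputs.to(device), targets.to(device)
            optimizer.zero_grad()
            loss = criterion(model(inputs), targets)
            loss.backward()
            optimizer.step()
```

A reproduction attempt would wrap a loop like this inside the active-learning cycle (train, score the unlabeled pool, acquire, repeat); the exact acquisition step is given in the paper's Algorithm 1.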