Active Learning in Bayesian Neural Networks with Balanced Entropy Learning Principle
Authors: Jae Oh Woo
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we demonstrate that our balanced entropy learning principle with BalEntAcq consistently outperforms well-known linearly scalable active learning methods, including a recently proposed PowerBALD, a simple but diversified version of BALD, by showing experimental results obtained from MNIST, CIFAR-100, SVHN, and TinyImageNet datasets. |
| Researcher Affiliation | Collaboration | Jae Oh Woo Samsung SDS Research America San Jose, CA 95134 jaeoh.woo@aya.yale.edu |
| Pseudocode | Yes | Algorithm 1: BalEntAcq active learning algorithm |
| Open Source Code | Yes | Code is available: https://github.com/jaeohwoo/BalancedEntropy |
| Open Datasets | Yes | obtained from MNIST (LeCun & Cortes, 2010), CIFAR-100 (Krizhevsky et al., 2012), SVHN (Netzer et al., 2011), and TinyImageNet (Le & Yang, 2015) datasets |
| Dataset Splits | No | The paper specifies 'Train Size' and 'Test size' in tables (e.g., Table 1, Table 2) but does not explicitly mention or detail a separate validation set split or how validation was used in the experimental setup. |
| Hardware Specification | Yes | We used a single NVIDIA A100 GPU for each experiment |
| Software Dependencies | No | The paper mentions software components implicitly through common frameworks (e.g., Adam optimizer, ResNet architectures) but does not provide specific version numbers for any libraries or software dependencies like Python, PyTorch, or CUDA. |
| Experiment Setup | Yes | Table 2 shows a summary of the dataset, configurations, and hyperparameters used in our experiments. For each experiment, we repeat 3 times to generate the full active learning accuracy curve. |
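For context on the kind of acquisition loop the paper's Algorithm 1 describes, the sketch below shows a generic pool-based active-learning step using BALD-style mutual information, the baseline the paper compares BalEntAcq against. This is a minimal illustration assuming MC-dropout-style stochastic forward passes, not the paper's BalEntAcq acquisition itself; the function names are hypothetical.

```python
import numpy as np

def bald_scores(probs):
    """BALD-style mutual information between predictions and model posterior.

    probs: array of shape (n_mc, n_pool, n_classes) -- softmax outputs from
    n_mc stochastic forward passes (e.g. MC-dropout) over the unlabeled pool.
    Returns one acquisition score per pool point.
    """
    eps = 1e-12
    mean_p = probs.mean(axis=0)                                   # (n_pool, n_classes)
    entropy_of_mean = -(mean_p * np.log(mean_p + eps)).sum(-1)    # H[E_w p(y|x,w)]
    mean_entropy = -(probs * np.log(probs + eps)).sum(-1).mean(0) # E_w H[p(y|x,w)]
    return entropy_of_mean - mean_entropy                         # mutual information

def select_batch(probs, k):
    """Pick the k pool indices with the highest acquisition score."""
    return np.argsort(-bald_scores(probs))[:k]
```

Points where the stochastic passes disagree (high epistemic uncertainty) get high scores and are labeled next; BalEntAcq replaces this score with one derived from the balanced entropy learning principle while keeping the same linearly scalable loop.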