LADA: Look-Ahead Data Acquisition via Augmentation for Deep Active Learning

Authors: Yoon-Yeong Kim, Kyungwoo Song, JoonHo Jang, Il-Chul Moon

NeurIPS 2021

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Sections 4 (Experiments) and 4.2 (Quantitative Performance Evaluations); "Table 1 shows the average test accuracy of five replications." |
| Researcher Affiliation | Collaboration | Yoon-Yeong Kim (KAIST), Kyungwoo Song (University of Seoul), JoonHo Jang (KAIST), Il-Chul Moon (KAIST; Summary.AI) |
| Pseudocode | Yes | Algorithm 1: LADA with Max Entropy and Manifold Mixup (an illustrative sketch follows the table). |
| Open Source Code | No | The paper does not provide any explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | "We conduct experiments on four benchmark datasets: Fashion MNIST (Fashion) [28], SVHN [29], CIFAR-10, and CIFAR-100 [30]." (A loading sketch follows the table.) |
| Dataset Splits | No | The paper gives initial labeled-set sizes and acquisition iterations and mentions "five-fold repeated trials", but it does not specify how the training, validation, and test sets were partitioned (no percentages, sample counts, or validation methodology). |
| Hardware Specification | No | The paper does not state the hardware used for its experiments (e.g., GPU/CPU models or memory capacity). |
| Software Dependencies | No | The paper mentions ResNet-18 and the Adam optimizer but gives no version numbers for the software dependencies (e.g., Python, PyTorch) needed for reproduction. |
| Experiment Setup | Yes | "We utilize Adam optimizer [32] with a learning rate of 1e-03." The policy generator network πφ is a much smaller network; Appendix A provides details of the experimental settings. (A setup sketch follows the table.) |
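The pseudocode row points at Algorithm 1 (LADA with Max Entropy and Manifold Mixup). Since no official code is released, the following is a minimal, hypothetical sketch of the look-ahead idea only: score each unlabeled candidate by the predictive entropy of both the raw point and a mixup-augmented virtual point. The fixed coefficient `lam`, the random pairing with labeled samples, and the use of input-space mixup (instead of the paper's learned policy πφ and Manifold Mixup on hidden representations) are all simplifying assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def predictive_entropy(logits):
    """Entropy of the softmax predictive distribution, per sample."""
    log_p = F.log_softmax(logits, dim=1)
    return -(log_p.exp() * log_p).sum(dim=1)

@torch.no_grad()
def look_ahead_scores(model, x_pool, x_labeled, lam=0.5):
    """Illustrative look-ahead acquisition score.

    `lam` is a fixed stand-in for the mixing coefficient that the
    paper's policy generator pi_phi would produce.
    """
    model.eval()
    # Max Entropy part: informativeness of the candidates themselves.
    ent_raw = predictive_entropy(model(x_pool))
    # Mix each candidate with a randomly drawn labeled sample
    # (input-space mixup standing in for Manifold Mixup).
    idx = torch.randint(len(x_labeled), (len(x_pool),))
    x_mix = lam * x_pool + (1 - lam) * x_labeled[idx]
    ent_aug = predictive_entropy(model(x_mix))
    # Look ahead: also count the informativeness of the virtual
    # sample that augmentation would contribute after acquisition.
    return ent_raw + ent_aug
```

A query step would then take the top-k pool indices by this score, e.g. `scores.topk(k).indices`.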
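All four benchmarks cited in the open-datasets row are distributed with torchvision, so the data side of a reproduction is straightforward. A minimal loading sketch; the root directory and the bare ToTensor transform are assumptions, since the paper's preprocessing is not restated here:

```python
from torchvision import datasets, transforms

# Minimal transform; the paper's actual preprocessing is not specified here.
to_tensor = transforms.ToTensor()

fashion  = datasets.FashionMNIST("data", train=True, download=True, transform=to_tensor)
svhn     = datasets.SVHN("data", split="train", download=True, transform=to_tensor)
cifar10  = datasets.CIFAR10("data", train=True, download=True, transform=to_tensor)
cifar100 = datasets.CIFAR100("data", train=True, download=True, transform=to_tensor)
```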
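Finally, the experiment-setup row pins down only the optimizer and learning rate. A sketch of that fragment with the ResNet-18 backbone the paper mentions; the class count (10, as for CIFAR-10, SVHN, and Fashion) and all remaining hyperparameters are assumptions, since the paper defers them to its Appendix A:

```python
import torch
from torchvision.models import resnet18

# ResNet-18 classifier; num_classes=10 is an assumption (CIFAR-100 would need 100).
model = resnet18(num_classes=10)

# The paper states: Adam optimizer [32] with a learning rate of 1e-03.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```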