Querying Easily Flip-flopped Samples for Deep Active Learning

Authors: Seong Jin Cho, Gwangsu Kim, Junghyun Lee, Jinwoo Shin, Chang D. Yoo

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | This section presents the empirical evaluation of the proposed LDM-based active learning algorithm. We compare its performance against various uncertainty-based active learning algorithms on diverse datasets: 1) three OpenML (OML) datasets ...; 2) six benchmark image datasets ... We employ various deep learning architectures: MLP, S-CNN, K-CNN (Chollet et al., 2015), Wide-ResNet (WRN-16-8; Zagoruyko & Komodakis (2016)), and ResNet-18 (He et al., 2016). All results represent the average performance over 5 repetitions (3 for ImageNet). Detailed descriptions of the experimental settings are provided in Appendix D. (An illustrative sketch of this evaluation protocol follows the table.)
Researcher Affiliation | Academia | Seong Jin Cho (1,2), Gwangsu Kim (3), Junghyun Lee (4), Jinwoo Shin (4), Chang D. Yoo (1); affiliations: 1 = Department of Electrical Engineering, KAIST; 2 = Korea Institute of Oriental Medicine; 3 = Department of Statistics, Jeonbuk National University; 4 = Kim Jaechul Graduate School of AI, KAIST
Pseudocode | Yes | Algorithm 1: Empirical Evaluation of LDM (a generic illustrative sketch follows the table).
Open Source Code | Yes | The source code is available on the authors' GitHub repository: https://github.com/ipcng00/LDM-S
Open Datasets | Yes | D.1 DATASETS: OpenML #6 (Frey & Slate, 1991) is a letter image recognition dataset... MNIST (LeCun et al., 1998) is a handwritten digit dataset... CIFAR10 and CIFAR100 (Krizhevsky, 2009) are tiny image datasets... SVHN (Netzer et al., 2011) is a real-world digit dataset... Tiny ImageNet (Le & Yang, 2015) is a subset of the ILSVRC (Russakovsky et al., 2015) dataset... FOOD101 (Bossard et al., 2014) is a fine-grained food image dataset... ImageNet (Russakovsky et al., 2015) is an image dataset...
Dataset Splits | Yes | Table 3: Settings for data and acquisition size. ... e.g., MNIST (S-CNN): 55,000 / 5,000 / 10,000 (an illustrative split sketch follows the table).
Hardware Specification | No | The paper does not specify the hardware (e.g., GPU models, CPU types, or cloud compute instances) used for running the experiments.
Software Dependencies | No | The paper mentions optimizers such as Adam, RMSProp, and Nesterov, and uses Keras (Chollet et al., 2015), but it does not specify version numbers for these or other software dependencies.
Experiment Setup | Yes | Table 4: Settings for training. This table specifies 'Epochs', 'Batch size', 'Optimizer', 'Learning Rate', and 'Learning Rate Schedule decay' for various datasets and models. (An illustrative configuration sketch follows the table.)
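
The Research Type row above summarizes a standard pool-based active learning evaluation: train on a small labeled set, score the unlabeled pool with an acquisition function, query a batch, retrain, and average test accuracy over repeated runs. The sketch below illustrates only that protocol; every function name, pool size, and the toy scoring/evaluation stubs are assumptions for illustration, not the authors' code.

import numpy as np

# Illustrative pool-based active learning protocol (not the paper's implementation).
def train_and_evaluate(labeled_idx, rng):
    """Stand-in for training a model on the labeled set and returning test accuracy."""
    return 0.5 + 0.5 * (len(labeled_idx) / 10_000) * rng.uniform(0.9, 1.0)

def acquisition_scores(unlabeled_idx, rng):
    """Stand-in for an uncertainty score (e.g., an LDM-style score) per unlabeled sample."""
    return rng.uniform(size=len(unlabeled_idx))

def run_active_learning(pool_size=10_000, init_size=100, query_size=100,
                        n_rounds=10, n_repeats=5, seed=0):
    curves = []
    for rep in range(n_repeats):
        rng = np.random.default_rng(seed + rep)
        all_idx = np.arange(pool_size)
        labeled = list(rng.choice(all_idx, size=init_size, replace=False))
        curve = []
        for _ in range(n_rounds):
            unlabeled = np.setdiff1d(all_idx, labeled)
            scores = acquisition_scores(unlabeled, rng)
            queried = unlabeled[np.argsort(scores)[-query_size:]]  # top-scoring batch
            labeled.extend(queried.tolist())
            curve.append(train_and_evaluate(labeled, rng))
        curves.append(curve)
    return np.mean(curves, axis=0)  # learning curve averaged over repetitions

if __name__ == "__main__":
    print(run_active_learning())

In a real run, train_and_evaluate would fit one of the listed architectures (MLP, S-CNN, K-CNN, WRN-16-8, ResNet-18) on the queried labels, and acquisition_scores would be the query strategy under comparison.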
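
The Pseudocode row refers to the paper's Algorithm 1 (Empirical Evaluation of LDM), which is not reproduced here. As a rough, generic illustration of scoring "easily flip-flopped" samples, the sketch below perturbs a toy linear classifier's weights with Gaussian noise at several scales and records how often each sample's predicted label flips. The noise scales, the linear model, and all function names are assumptions for this example and should not be read as the paper's procedure.

import numpy as np

def predict(W, X):
    """Predicted class indices of a toy linear classifier with weight matrix W."""
    return np.argmax(X @ W, axis=1)

def disagreement_scores(W, X, noise_scales=(0.01, 0.03, 0.1, 0.3), n_draws=50, seed=0):
    """Fraction of weight perturbations under which each sample's prediction flips."""
    rng = np.random.default_rng(seed)
    base_pred = predict(W, X)
    flips = np.zeros(len(X))
    for sigma in noise_scales:
        for _ in range(n_draws):
            W_perturbed = W + sigma * rng.standard_normal(W.shape)
            flips += (predict(W_perturbed, X) != base_pred)
    # Samples whose labels flip even under small perturbations are "easily
    # flip-flopped", i.e. close to the decision boundary, and score highest.
    return flips / (len(noise_scales) * n_draws)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    W = rng.standard_normal((20, 5))    # toy trained weights: 20 features, 5 classes
    X = rng.standard_normal((100, 20))  # toy unlabeled pool
    print(disagreement_scores(W, X)[:10])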
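
To make the Dataset Splits row concrete, here is one way the quoted MNIST figures (55,000 / 5,000 / 10,000) could be obtained with the Keras datasets API that the paper uses. Carving the 5,000-example validation set from the end of the standard 60,000-example training set is an assumption; the paper's exact split procedure may differ.

from tensorflow.keras.datasets import mnist

# Split the standard MNIST training set into 55,000 train / 5,000 validation
# examples; the 10,000-example test set is kept as provided.
(x_train_full, y_train_full), (x_test, y_test) = mnist.load_data()
x_train, y_train = x_train_full[:55_000], y_train_full[:55_000]
x_val, y_val = x_train_full[55_000:], y_train_full[55_000:]
print(len(x_train), len(x_val), len(x_test))  # 55000 5000 10000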
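
Finally, the Experiment Setup row lists the fields of Table 4, which map naturally onto a per-(dataset, model) training configuration. The sketch below illustrates that structure only: every numeric value is a placeholder rather than a setting reported in the paper, and make_optimizer is a hypothetical helper built on the standard Keras optimizers the paper mentions (Adam, RMSProp, Nesterov SGD).

# Hypothetical structure mirroring Table 4's columns ('Epochs', 'Batch size',
# 'Optimizer', 'Learning Rate', 'Learning Rate Schedule decay').
# All values below are placeholders, NOT the settings reported in the paper.
TRAIN_SETTINGS = {
    ("MNIST", "S-CNN"): {
        "epochs": 50,           # placeholder
        "batch_size": 128,      # placeholder
        "optimizer": "adam",
        "learning_rate": 1e-3,  # placeholder
        "lr_decay_schedule": None,
    },
    # ... one entry per (dataset, model) pair in Table 4
}

def make_optimizer(cfg):
    """Hypothetical helper: build a Keras optimizer from one settings entry."""
    from tensorflow.keras import optimizers
    if cfg["optimizer"] == "adam":
        return optimizers.Adam(learning_rate=cfg["learning_rate"])
    if cfg["optimizer"] == "rmsprop":
        return optimizers.RMSprop(learning_rate=cfg["learning_rate"])
    return optimizers.SGD(learning_rate=cfg["learning_rate"], momentum=0.9, nesterov=True)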