Algorithm Selection for Deep Active Learning with Imbalanced Datasets

Authors: Jifan Zhang, Shuai Shao, Saurabh Verma, Robert Nowak

NeurIPS 2023

Reproducibility

Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments in multi-class and multi-label applications demonstrate TAILOR's effectiveness in achieving accuracy comparable to or better than that of the best of the candidate algorithms.
Researcher Affiliation | Collaboration | Jifan Zhang (University of Wisconsin–Madison, Madison, WI 53715, jifan@cs.wisc.edu); Shuai Shao (Meta Inc., Menlo Park, CA 94025, sshao@meta.com); Saurabh Verma (Meta Inc., Menlo Park, CA 94025, saurabh08@meta.com); Robert Nowak (University of Wisconsin–Madison, Madison, WI 53715, rdnowak@wisc.edu)
Pseudocode | Yes | Algorithm 1: General Meta Active Learning Framework for Baram et al. [2004], Hsu and Lin [2015], Pang et al. [2018]
Open Source Code | Yes | Our implementation of TAILOR is open-sourced at https://github.com/jifanz/TAILOR.
Open Datasets | Yes | Our experiments span ten datasets with class imbalance as shown in Table 1. For multi-label experiments, we experiment on four datasets: CelebA, COCO, VOC, and Stanford Cars. For multi-class classification, ImageNet, Kuzushiji-49, and Caltech256 are naturally imbalanced datasets, while CIFAR-10 with 2 classes, CIFAR-100 with 10 classes, and SVHN with 2 classes are derived from the original datasets following Zhang et al. [2022].
Dataset Splits | No | The paper does not provide specific details on how the datasets were split into training, validation, and test sets. It mentions "All experiments are measured based on active annotation performance over the pool [Zhang et al., 2022]" but does not specify validation splits.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU specifications, or memory amounts used for running the experiments.
Software Dependencies | No | The paper mentions the "ResNet-18 architecture [He et al., 2016]" and the "Adam optimizer [Kingma and Ba, 2014]" but does not provide specific version numbers for any software dependencies.
Experiment Setup | Yes | We conduct experiments on varying batch sizes anywhere from B = 500 to B = 10000. To mirror a limited training budget [...], we allow 10 or 20 batches in total for each dataset [...]. We set the discounting factor γ to be 0.9 across all experiments. All of our experiments are conducted using the ResNet-18 architecture [He et al., 2016] pretrained on ImageNet. We use the Adam optimizer [Kingma and Ba, 2014] with a learning rate of 1e-4 and weight decay of 5e-5.