Agreement-Discrepancy-Selection: Active Learning with Progressive Distribution Alignment

Authors: Mengying Fu, Tianning Yuan, Fang Wan, Songcen Xu, Qixiang Ye

AAAI 2021

Reproducibility Variable | Result | LLM Response
--- | --- | ---
Research Type | Experimental | Experiments on image classification and object detection tasks demonstrate that ADS is task-agnostic while significantly outperforming previous methods when the labeled sets are small.
Researcher Affiliation | Collaboration | Mengying Fu¹, Tianning Yuan¹, Fang Wan¹, Songcen Xu² and Qixiang Ye¹. ¹University of Chinese Academy of Sciences, Beijing, China; ²Noah's Ark Lab, Huawei Technologies, Shenzhen, China
Pseudocode | Yes | Algorithm 1: ADS Training Procedure
Open Source Code | Yes | Code is enclosed in the supplementary material.
Open Datasets | Yes | Dataset. The commonly used CIFAR-10 and CIFAR-100 datasets are used in the image classification task, following the experimental settings (Yoo and Kweon 2019; Sinha, Ebrahimi, and Darrell 2019; Zhang et al. 2020). ... The experiments are conducted on PASCAL VOC 2007 and 2012 (Everingham et al. 2010).
Dataset Splits | Yes | CIFAR-10 consists of 60000 images of 32×32×3 pixels. The training and test sets contain 50000 and 10000 images, respectively.
Hardware Specification | No | The paper does not provide specific details about the hardware used for the experiments, such as GPU or CPU models or memory specifications; it only refers to 'training the model' in general terms.
Software Dependencies | No | The paper mentions using ResNet-18 and VGG-16 as backbone networks, but it does not specify any software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions).
Experiment Setup | Yes | For each learning iteration, we train the model for 200 epochs with a mini-batch size of 128 and an initial learning rate of 0.1. After 160 epochs, the learning rate decreases to 0.01. The momentum and the weight decay are set to 0.9 and 0.0005, respectively.
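
The reported CIFAR-10 split is easy to sanity-check. Below is a minimal sketch, assuming torchvision (an assumption on our part; per the Software Dependencies row, the paper names no software stack), that loads the standard 50000/10000 training/test split:

```python
# Illustrative sketch, not from the paper: load CIFAR-10 and verify the
# 50000/10000 split the report quotes. torchvision is an assumed dependency.
from torchvision import datasets, transforms

transform = transforms.ToTensor()  # 32x32x3 images -> float tensors

train_set = datasets.CIFAR10(root="./data", train=True,
                             download=True, transform=transform)
test_set = datasets.CIFAR10(root="./data", train=False,
                            download=True, transform=transform)

assert len(train_set) == 50000  # training split reported in the paper
assert len(test_set) == 10000   # test split reported in the paper
```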
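
Likewise, the quoted experiment setup maps onto a standard SGD schedule. The sketch below (continuing from the loading sketch above) is an illustration rather than the authors' code: PyTorch, the MultiStepLR scheduler, and the torchvision ResNet-18 are assumptions; only the hyperparameter values (200 epochs, batch size 128, learning rate 0.1 dropping to 0.01 after 160 epochs, momentum 0.9, weight decay 0.0005) come from the paper.

```python
# Illustrative sketch of the reported optimization schedule; framework and
# scheduler choice are assumptions, hyperparameter values are from the paper.
import torch
from torch import nn, optim
from torch.utils.data import DataLoader
from torchvision.models import resnet18

model = resnet18(num_classes=10)  # ResNet-18 backbone for classification
loader = DataLoader(train_set, batch_size=128, shuffle=True)  # mini-batch 128

optimizer = optim.SGD(model.parameters(), lr=0.1,       # initial lr 0.1
                      momentum=0.9, weight_decay=5e-4)  # momentum / weight decay
# Multiply lr by 0.1 after epoch 160: 0.1 -> 0.01, matching the paper.
scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[160], gamma=0.1)
criterion = nn.CrossEntropyLoss()

for epoch in range(200):  # 200 epochs per learning iteration
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```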