Asynchronous Active Learning with Distributed Label Querying

Authors: Sheng-Jun Huang, Chen-Chen Zong, Kun-Peng Ning, Hai-Bo Ye

IJCAI 2021

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Both theoretical analysis and an experimental study validate the effectiveness of the proposed approach. |
| Researcher Affiliation | Academia | College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics; Collaborative Innovation Center of Novel Software Technology and Industrialization; MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, Nanjing, 211106, China. {huangsj, chencz, ningkp, yhb}@nuaa.edu.cn |
| Pseudocode | Yes | "The complete pseudo code of AAL execution process can be found in the supplementary file." |
| Open Source Code | No | The paper states that the pseudocode of the AAL execution process is in the supplementary file, but it provides no repository link or other concrete access to open-source code for the methodology. |
| Open Datasets | Yes | The experiments are performed on three datasets: CIFAR-10, CIFAR-100, and mini-ImageNet. |
| Dataset Splits | No | The paper specifies "We randomly sample 50,000 images as the training set and the rest 10,000 images as test set" for mini-ImageNet, but does not explicitly mention a validation split. |
| Hardware Specification | No | The paper does not report the hardware used for its experiments (e.g., GPU/CPU models, memory); it only mentions that VGG-16 was used as the classification model. |
| Software Dependencies | No | The paper mentions that "VGG-16 is employed as the classification model" but does not list software dependencies with version numbers (e.g., libraries, frameworks, or programming languages) used for the implementation. |
| Experiment Setup | Yes | "For each dataset, we examine the results with the query batch size varies from {1000, 2500, 5000}. We perform the experiments with 1, 2 and 4 servers for all of the four active learning strategies, respectively." |