Multi-Classifier Adversarial Optimization for Active Learning
Authors: Lin Geng, Ningzhong Liu, Jie Qin
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate the superiority of our approach over state-of-the-art AL methods in terms of image classification and object detection. ... We evaluate MAOAL against various state-of-the-art AL methods with respect to two computer vision tasks, i.e., image classification and object detection, on four benchmark datasets. |
| Researcher Affiliation | Academia | Lin Geng, Ningzhong Liu, and Jie Qin* School of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China {lingeng, ningzhongliu, jie.qin}@nuaa.edu.cn |
| Pseudocode | Yes | Algorithm 1: The training process of multi-classifier adversarial optimization for active learning (MAOAL) Input: Labeled pool (XL, YL), Unlabeled pool XU. Parameter: Network parameters θG, classifiers parameters θC, θC1 and θC2. |
| Open Source Code | No | The paper does not provide an explicit statement or link for open-source code release. |
| Open Datasets | Yes | For image classification, we evaluate our method on three classical datasets, including CIFAR-10, CIFAR-100 (Krizhevsky 2009), and Caltech-101 (Li Fei-Fei, Fergus, and Perona 2006). Pascal VOC (Everingham et al. 2010) contains 20 object categories, consisting of the VOC 2007 trainval set, the VOC 2012 trainval set, and the VOC 2007 test set. |
| Dataset Splits | Yes | Both CIFAR-10 and CIFAR-100 contain 60,000 images of 32x32x3 pixels, with 50,000 images for training and 10,000 for testing. For all classification datasets, we randomly select 10% samples from the entire dataset to initialize the labeled pool, and the rest is considered the unlabeled pool. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments (e.g., GPU model, CPU type). |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies. |
| Experiment Setup | Yes | For each learning iteration, we train the model for 200 epochs using the Stochastic Gradient Descent (SGD) optimizer with a learning rate of 0.1, a momentum of 0.9, a weight decay of 0.0005, and a batch size of 128. After 80% of the training epochs, the learning rate is decreased to 0.01. ... We learn the model set for 300 epochs with the mini-batch size of 32. The learning rate for the first 240 epochs is 0.001 and decreased to 0.0001 for the last 60 epochs. |
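The learning-rate schedules quoted above share one pattern: a constant base rate that is cut by a factor of 10 after 80% of the training epochs (epoch 160 of 200 for classification; epoch 240 of 300 for detection). A minimal sketch of that step schedule in plain Python follows; the function names and signature are illustrative, not from the paper:

```python
def step_lr(epoch: int, total_epochs: int, base_lr: float,
            decay_factor: float = 0.1, milestone_frac: float = 0.8) -> float:
    """Step learning-rate schedule: base_lr until milestone_frac of
    training has elapsed, then base_lr * decay_factor afterwards."""
    if epoch < milestone_frac * total_epochs:
        return base_lr
    return base_lr * decay_factor

# Classification setting reported in the paper: 0.1 -> 0.01 at epoch 160/200.
def classification_lr(epoch: int) -> float:
    return step_lr(epoch, total_epochs=200, base_lr=0.1)

# Detection setting reported in the paper: 0.001 -> 0.0001 at epoch 240/300.
def detection_lr(epoch: int) -> float:
    return step_lr(epoch, total_epochs=300, base_lr=0.001)
```

In a PyTorch-style training loop, the same effect would typically be obtained with an SGD optimizer (momentum 0.9, weight decay 0.0005) and a multi-step scheduler with a single milestone at 80% of the epochs.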