Adversarial Learning from Crowds

Authors: Pengpeng Chen, Hailong Sun, Yongqiang Yang, Zhijun Chen

AAAI 2022, pp. 5304-5312

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our empirical results on three real-world datasets show that the proposed approach can substantially improve the performance of the trained classifier even with the existence of adversarial examples. On average, A-LFC has 10.05% and 11.34% higher test robustness than the state-of-the-art in the white-box and black-box attack settings, respectively.
Researcher Affiliation | Academia | Pengpeng Chen (1,3), Hailong Sun* (2,3), Yongqiang Yang (1,3), Zhijun Chen (1,3); 1 SKLSDE Lab, School of Computer Science and Engineering, Beihang University, Beijing, China; 2 SKLSDE Lab, School of Software, Beihang University, Beijing, China; 3 Beijing Advanced Innovation Center for Big Data and Brain Computing, Beihang University, Beijing, China; {chenpp, sunhl, yangyongqiang, zhijunchen}@buaa.edu.cn
Pseudocode | Yes | Algorithm 1: A-LFC
Open Source Code | Yes | Our code is available at https://github.com/yongqiangyang/ALFC.
Open Datasets | Yes | We use three publicly available, widely used benchmark datasets with real annotations from AMT. Music Genre Classification dataset (MGC): the MGC dataset (Rodrigues, Pereira, and Ribeiro 2013) [...] LabelMe: LabelMe is an image classification dataset (Rodrigues and Pereira 2018) [...] Sentiment Polarity Classification (Sentiment): the Sentiment dataset (Rodrigues, Pereira, and Ribeiro 2013)
Dataset Splits | No | The paper describes training parameters but does not explicitly provide training/validation/test dataset splits (percentages or counts) or reference a predefined split that includes a validation set. It mentions test sets but not validation sets.
Hardware Specification | Yes | All experiments were performed 50 times on NVIDIA Tesla V100 GPUs and we report the average result.
Software Dependencies | No | The paper mentions using specific networks like VGG and general concepts like backpropagation and Adam optimization but does not provide specific version numbers for software libraries (e.g., Python, PyTorch, TensorFlow, CUDA) that would enable reproducible environment setup.
Experiment Setup | Yes | For each dataset, we use ReLU activations, 50% dropout, and Adam stochastic optimization. The learning rate is 0.001, the batch size is 64, and the number of epochs is 200. The worker parameters were initialized with the result of MV. The training attack is 10-step PGD with random start and a fixed step size; the perturbation bound ε of the training attack is 8/255 and the parameter α is set to 0.5.
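
Because the experiment setup above is reported only in prose, the following is a minimal sketch of a 10-step PGD adversarial-training loop using the stated hyperparameters (ε = 8/255, random start, Adam with learning rate 0.001, 200 epochs; batch size 64 would be set in the data loader). It is not the authors' A-LFC implementation: PyTorch is assumed, the crowd-label aggregation and worker model are omitted (the labels below are a placeholder), the step size of ε/4 is an illustrative assumption because the exact value is not legible in the extracted text, and reading α = 0.5 as a clean/adversarial loss weight is likewise an assumption.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, step_size=(8/255)/4, steps=10):
    # Craft adversarial examples with multi-step PGD and a random start inside the eps-ball.
    # Inputs are assumed to be images scaled to [0, 1].
    model.eval()  # disable dropout while generating the attack
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + step_size * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adversarial_train(model, loader, epochs=200, lr=1e-3, alpha=0.5):
    # Adam with lr 0.001 and 200 epochs, as reported; alpha mixes clean and adversarial
    # losses (an assumed interpretation of the paper's alpha parameter).
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:  # y: aggregated crowd labels (e.g., MV-initialized); placeholder here
            x_adv = pgd_attack(model, x, y)
            model.train()
            loss = (alpha * F.cross_entropy(model(x), y)
                    + (1 - alpha) * F.cross_entropy(model(x_adv), y))
            opt.zero_grad()
            loss.backward()
            opt.step()

In this sketch the adversarial examples are regenerated from scratch for every mini-batch, which matches standard PGD adversarial training; the A-LFC-specific handling of noisy crowd labels would replace the plain cross-entropy terms.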