Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
AUTE: Peer-Alignment and Self-Unlearning Boost Adversarial Robustness for Training Ensemble Models
Authors: Lifeng Huang, Tian Su, Chengying Gao, Ning Liu, Qiong Huang
AAAI 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments across various datasets and networks illustrate that AUTE achieves superior performance compared to baselines. For instance, a 5-member AUTE with ResNet-20 networks outperforms the state-of-the-art method by 2.1% and 3.2% in classifying clean and adversarial data, respectively. Additionally, AUTE easily extends to the non-adversarial training paradigm, surpassing current standard ensemble learning methods by a large margin. |
| Researcher Affiliation | Academia | Lifeng Huang¹, Tian Su², Chengying Gao², Ning Liu², Qiong Huang¹ — ¹College of Mathematics and Informatics, South China Agricultural University; ²School of Computer Science and Engineering, Sun Yat-sen University. EMAIL, EMAIL |
| Pseudocode | Yes | Detailed pseudo-code for training an AUTE ensemble is shown in Appendix C. |
| Open Source Code | Yes | Code: https://github.com/mesunhlf/AUTE |
| Open Datasets | Yes | Dataset. We mainly evaluate the ensemble models using the CIFAR-10 dataset. To illustrate the generalizability of the proposed method, we show that AUTE consistently achieves superior performance across varying dataset scales: specifically, on smaller datasets like MNIST as well as complex datasets such as CIFAR-100 and Tiny-ImageNet. |
| Dataset Splits | No | The paper mentions using well-known datasets such as CIFAR-10, MNIST, CIFAR-100, and Tiny-ImageNet. However, it does not explicitly provide specific details about how these datasets were split into training, validation, or test sets (e.g., percentages, sample counts, or specific files used for splits) within the main text. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory amounts used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., library names like PyTorch, TensorFlow, or specific solvers with their versions). |
| Experiment Setup | Yes | We mainly evaluate the ensemble models using the CIFAR-10 dataset. ... We utilize the lightweight ResNet-20 architecture to develop robust ensemble models. Furthermore, we extend the experiments to include deeper and wider DNNs, such as VGGNet-16, ResNet-18, and WideResNet-34. To demonstrate the scalability of AUTE, we build ensemble models with 3, 5, and 8 members, respectively. ... We train the AUTE with different weights β (Eq. (6)). ... The threshold M determines unlearning behaviors of ensembles (Eq. (5)). |
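The setup row above describes ensembles of 3, 5, or 8 member networks whose outputs are combined at inference time. As context for how such an ensemble produces a prediction, the following is a minimal sketch of softmax-averaged ensemble inference; the member models, `softmax`, and `ensemble_predict` are hypothetical stand-ins, and the paper's actual AUTE training objective (peer-alignment and self-unlearning) is not reproduced here.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_predict(members, x):
    # Average each member's class probabilities, then take the argmax.
    # This is generic ensemble averaging, not the AUTE-specific objective.
    probs = np.mean([softmax(m(x)) for m in members], axis=0)
    return probs.argmax(axis=-1)

# Toy stand-ins for 5 trained members: each maps 4-dim inputs to 10 class logits.
rng = np.random.default_rng(0)
members = [lambda x, W=rng.normal(size=(4, 10)): x @ W for _ in range(5)]
x = rng.normal(size=(2, 4))   # a batch of 2 inputs
preds = ensemble_predict(members, x)
```

In practice the members would be trained networks (e.g. ResNet-20 on CIFAR-10), and averaging could equally be done over logits rather than probabilities; the paper's Eqs. (5)-(6) govern how the members are trained, not how they are combined.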