TRS: Transferability Reduced Ensemble via Promoting Gradient Diversity and Model Smoothness

Authors: Zhuolin Yang, Linyi Li, Xiaojun Xu, Shiliang Zuo, Qian Chen, Pan Zhou, Benjamin Rubinstein, Ce Zhang, Bo Li

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conduct extensive experiments on TRS and compare with 6 state-of-the-art ensemble baselines against 8 whitebox attacks on different datasets, demonstrating that the proposed TRS outperforms all baselines significantly."
Researcher Affiliation | Collaboration | 1. University of Illinois Urbana-Champaign; 2. Tencent Inc.; 3. University of Melbourne; 4. Huazhong University of Science and Technology; 5. ETH Zurich
Pseudocode | Yes | "We present one-epoch training pseudo code in Algorithm 1 of Appendix F." (An illustrative sketch of such a training loss is given after this table.)
Open Source Code | Yes | "The code is publicly available at https://github.com/AI-secure/Transferability-Reduced-Smooth-Ensemble"
Open Datasets | Yes | "We conduct our experiments on widely-used image datasets including hand-written dataset MNIST [29]; and colourful image datasets CIFAR-10 and CIFAR-100 [26]." (A minimal loading sketch follows below.)
Dataset Splits | No | The paper mentions training and testing but does not explicitly provide validation-split details (e.g., percentages or sample counts) in the main text.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU model, CPU, memory) used to run the experiments.
Software Dependencies | No | The paper does not list software dependencies with version numbers (e.g., PyTorch 1.9, TensorFlow 2.x).
Experiment Setup | Yes | "The detailed hyper-parameter setting and training criterion are discussed in Appendix F."
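
To make the Pseudocode row concrete, here is a minimal sketch of a TRS-style regularized ensemble loss combining the two ideas named in the paper's title: promoting gradient diversity across ensemble members and enforcing model smoothness. This is an assumption-laden illustration, not the authors' Algorithm 1: the function name `trs_loss`, the hyper-parameters `lambda_sim` and `mu_smooth`, and the exact form of both penalty terms are ours; consult Appendix F and the released code for the actual procedure.

```python
# Hedged sketch of a TRS-style regularized ensemble loss (illustrative only;
# see Algorithm 1 in Appendix F of the paper for the authors' exact method).
import torch
import torch.nn.functional as F

def trs_loss(models, x, y, lambda_sim=1.0, mu_smooth=0.1):
    """Cross-entropy + gradient-diversity + smoothness penalties.

    `lambda_sim` and `mu_smooth` are illustrative hyper-parameters,
    not the paper's reported values.
    """
    x = x.clone().detach().requires_grad_(True)
    ce_total, grads = 0.0, []
    for model in models:
        loss = F.cross_entropy(model(x), y)
        ce_total = ce_total + loss
        # Input gradient of each member's loss, kept in the graph so the
        # regularizers below remain differentiable w.r.t. model parameters.
        g = torch.autograd.grad(loss, x, create_graph=True)[0]
        grads.append(g.flatten(start_dim=1))

    # Gradient diversity: penalize pairwise cosine similarity of the
    # members' input gradients (low similarity reduces transferability).
    sim = x.new_zeros(())
    for i in range(len(grads)):
        for j in range(i + 1, len(grads)):
            sim = sim + F.cosine_similarity(grads[i], grads[j], dim=1).mean()

    # Model smoothness: penalize input-gradient magnitude, a simple proxy
    # for the local-smoothness term the paper enforces around each input.
    smooth = sum(g.norm(dim=1).mean() for g in grads)

    return ce_total + lambda_sim * sim + mu_smooth * smooth
```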
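
For the Open Datasets row, all three datasets are obtainable through torchvision. The snippet below uses only stock torchvision calls; the paper's own preprocessing and augmentation pipeline may differ.

```python
# Standard torchvision loaders for the datasets used in the paper
# (stock API calls; the paper's preprocessing may differ).
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()
mnist = datasets.MNIST("data", train=True, download=True, transform=to_tensor)
cifar10 = datasets.CIFAR10("data", train=True, download=True, transform=to_tensor)
cifar100 = datasets.CIFAR100("data", train=True, download=True, transform=to_tensor)
```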