Learning to Confuse: Generating Training Time Adversarial Data with Auto-Encoder

Authors: Ji Feng, Qi-Zhi Cai, Zhi-Hua Zhou

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on various datasets including CIFAR-10 and a reduced version of ImageNet confirmed the effectiveness of the proposed method, and empirical results showed that such bounded perturbations have good transferability across different types of victim classifiers. To validate the effectiveness of our method, we used the classical MNIST [18], CIFAR-10 [17] for multi-classification and a subset of ImageNet [7] for 2-class classification. The classification results are summarized in Table 1. Each experiment is repeated 10 times.
Researcher Affiliation | Collaboration | Ji Feng (1,2), Qi-Zhi Cai (2), Zhi-Hua Zhou (1); (1) National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China; (2) Sinovation Ventures AI Institute; {fengj, zhouzh}@lamda.nju.edu.cn, caiqizhi@chuangxin.com
Pseudocode | Yes | Algorithm 1: Deep Confuse; Algorithm 2: Mem-Efficient Deep Confuse (a hedged sketch of the alternating update appears below the table)
Open Source Code | No | The paper does not contain any statement about releasing source code for the methodology, nor a link to a code repository.
Open Datasets | Yes | To validate the effectiveness of our method, we used the classical MNIST [18], CIFAR-10 [17] for multi-classification and a subset of ImageNet [7] for 2-class classification. (A dataset-loading sketch follows the table.)
Dataset Splits | No | The paper does not explicitly mention a validation set or describe how the data was split for validation purposes; it discusses only training data and test data.
Hardware Specification | No | The paper does not provide any specific details about the hardware used for the experiments, such as GPU models, CPU types, or memory specifications.
Software Dependencies | No | The paper mentions 'Adam [15]' as an optimizer and 'ReLU' as the activation, but does not specify any software libraries (e.g., PyTorch, TensorFlow) or their version numbers; therefore, it does not provide a reproducible description of ancillary software.
Experiment Setup | Yes | Cross-entropy is used as the loss function; the learning rate and batch size for the classifiers fθ are set to 0.01 and 64 for MNIST and CIFAR-10, and to 0.1 and 32 for ImageNet. The number of trials T is set to 500 for both cases. The learning rate for the noise generator gξ is set to 10^-4 via Adam [15]. (An illustrative optimizer setup follows the table.)
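
For the pseudocode row: the paper's Algorithm 1 (Deep Confuse) alternates between training a victim classifier on generator-perturbed data and updating an auto-encoder noise generator gξ so that the resulting classifier fails on clean data. The sketch below is only a plausible single-step reading of that alternation, not the authors' code; the PyTorch framework, the toy generator architecture, the eps bound, and the helper name generator_step are assumptions made for illustration.

```python
# A minimal, illustrative sketch of the alternating idea behind Deep Confuse,
# NOT the authors' code: the generator architecture, the eps bound, the use of
# PyTorch, and the single-step unroll are all assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call


class NoiseGenerator(nn.Module):
    """Toy auto-encoder g_xi that returns an eps-bounded perturbed image."""

    def __init__(self, channels=1, eps=0.3):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return torch.clamp(x + self.eps * self.net(x), 0.0, 1.0)


def generator_step(clf, generator, gen_opt, x_train, y_train, x_clean, y_clean,
                   inner_lr=0.01):
    """Simulate one SGD step of the classifier on perturbed data, then update
    the generator to raise that updated classifier's loss on clean data."""
    params = dict(clf.named_parameters())

    # Inner (simulated) classifier step; keep the graph so gradients can flow
    # back through the classifier update into the generator parameters.
    inner_loss = F.cross_entropy(
        functional_call(clf, params, (generator(x_train),)), y_train)
    grads = torch.autograd.grad(inner_loss, tuple(params.values()),
                                create_graph=True)
    updated = {name: p - inner_lr * g
               for (name, p), g in zip(params.items(), grads)}

    # Outer objective: maximize the updated classifier's clean-data loss
    # (by minimizing its negative), i.e. "learning to confuse" the training step.
    adv_loss = -F.cross_entropy(
        functional_call(clf, updated, (x_clean,)), y_clean)
    gen_opt.zero_grad()
    adv_loss.backward()
    gen_opt.step()
    return -adv_loss.item()  # clean-data loss achieved by the updated classifier
```

In the paper's procedure the victim classifier is also actually retrained on the perturbed data between generator updates and the alternation is repeated over many trials; Algorithm 1 differentiates through the full training trajectory and Algorithm 2 trades memory for an approximation, whereas the sketch above unrolls only a single step.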
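
For the open-datasets row: MNIST and CIFAR-10 are publicly downloadable. A minimal loading sketch is given below; torchvision and the bare ToTensor() preprocessing are assumptions, since the paper does not state its framework or preprocessing pipeline.

```python
# Illustrative loading of the public datasets named in the paper.
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()
mnist_train = datasets.MNIST("data/", train=True, download=True, transform=to_tensor)
mnist_test = datasets.MNIST("data/", train=False, download=True, transform=to_tensor)
cifar_train = datasets.CIFAR10("data/", train=True, download=True, transform=to_tensor)
cifar_test = datasets.CIFAR10("data/", train=False, download=True, transform=to_tensor)
# The 2-class ImageNet subset used in the paper is not released; reproducing it
# would require selecting two classes from a local ImageNet copy.
```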
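
For the experiment-setup row: the reported hyperparameters translate directly into optimizer settings. The sketch below mirrors them; the PyTorch framework and the use of plain SGD for the classifier are assumptions, since the paper states only the learning rates, batch sizes, T = 500, and the choice of Adam for gξ.

```python
import torch

NUM_TRIALS = 500  # T, the number of trials reported for both settings


def make_training_config(classifier, noise_generator, dataset="mnist"):
    """Return optimizers, loss, and batch size matching the reported settings."""
    # Reported: lr 0.01 / batch 64 for MNIST and CIFAR-10; lr 0.1 / batch 32
    # for the ImageNet subset. The classifier optimizer itself is not named,
    # so SGD here is an assumption.
    clf_lr, batch_size = (0.1, 32) if dataset == "imagenet-2class" else (0.01, 64)
    clf_opt = torch.optim.SGD(classifier.parameters(), lr=clf_lr)
    # Noise generator g_xi: Adam with learning rate 1e-4, as reported.
    gen_opt = torch.optim.Adam(noise_generator.parameters(), lr=1e-4)
    loss_fn = torch.nn.CrossEntropyLoss()  # cross-entropy, as reported
    return clf_opt, gen_opt, loss_fn, batch_size
```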