Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Towards Understanding Dual BN In Hybrid Adversarial Training

Authors: Chenshuang Zhang, Chaoning Zhang, Kang Zhang, Axi Niu, Junmo Kim, In So Kweon

TMLR 2024

Reproducibility Variable Result LLM Response
Research Type Experimental In this work, we perform experiments on CIFAR10 (Krizhevsky et al., 2009; Andriushchenko & Flammarion, 2020; Zhang et al., 2022) with ResNet18 (Andriushchenko & Flammarion, 2020; Targ et al., 2016; Wu et al., 2019; Li et al., 2016; Zhang et al., 2022). Specifically, we train the model for 110 epochs. The learning rate is set to 0.1 and decays by a factor of 0.1 at epochs 100 and 105. We adopt an SGD optimizer with weight decay 5 × 10−4. For generating adversarial examples during training, we use an ℓ∞ PGD attack with 10 iterations and step size α = 2/255. For the perturbation constraint, ϵ is set to 8/255 (Pang et al., 2020) or 16/255 (Xie & Yuille, 2020). Following Pang et al. (2020), we evaluate the model robustness under PGD-10 attack (PGD attack with 10 steps) and AutoAttack (AA) (Croce & Hein, 2020).
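The learning-rate schedule quoted above (base rate 0.1, decayed by a factor of 0.1 at epochs 100 and 105 over a 110-epoch run) can be sketched as a simple step function. The helper name and milestone handling below are illustrative, not code from the paper:

```python
def lr_at_epoch(epoch, base_lr=0.1, milestones=(100, 105), gamma=0.1):
    """Step-decay schedule: multiply base_lr by gamma at each passed milestone.

    Mirrors the quoted setup (lr 0.1, decay 0.1 at epochs 100 and 105);
    this helper is a sketch, not the authors' code.
    """
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr

# Over the 110-epoch run described above:
# epochs 0-99 -> 0.1, epochs 100-104 -> 0.01, epochs 105-109 -> 0.001
```

In PyTorch this corresponds to `torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[100, 105], gamma=0.1)` wrapped around the SGD optimizer with weight decay 5e-4.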
Researcher Affiliation Academia Chenshuang Zhang (KAIST), Chaoning Zhang (Kyung Hee University), Kang Zhang (KAIST), Axi Niu (Northwestern Polytechnical University), Junmo Kim (KAIST), In So Kweon (KAIST)
Pseudocode No The paper describes methods and procedures in narrative text and does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code No The paper does not contain an explicit statement about releasing code, a link to a code repository, or a mention of code being available in supplementary materials.
Open Datasets Yes In this work, we perform experiments on CIFAR10 (Krizhevsky et al., 2009; Andriushchenko & Flammarion, 2020; Zhang et al., 2022) with ResNet18
Dataset Splits Yes In this work, we perform experiments on CIFAR10 (Krizhevsky et al., 2009; Andriushchenko & Flammarion, 2020; Zhang et al., 2022) with ResNet18 (Andriushchenko & Flammarion, 2020; Targ et al., 2016; Wu et al., 2019; Li et al., 2016; Zhang et al., 2022). Specifically, we train the model for 110 epochs.
Hardware Specification No The paper does not provide specific details about the hardware (e.g., GPU model, CPU type) used for running the experiments.
Software Dependencies No The paper mentions general tools and techniques like 'SGD optimizer' but does not specify any software libraries or frameworks with version numbers (e.g., Python version, PyTorch version).
Experiment Setup Yes Specifically, we train the model for 110 epochs. The learning rate is set to 0.1 and decays by a factor of 0.1 at epochs 100 and 105. We adopt an SGD optimizer with weight decay 5 × 10−4. For generating adversarial examples during training, we use an ℓ∞ PGD attack with 10 iterations and step size α = 2/255. For the perturbation constraint, ϵ is set to 8/255 (Pang et al., 2020) or 16/255 (Xie & Yuille, 2020). Following Pang et al. (2020), we evaluate the model robustness under PGD-10 attack (PGD attack with 10 steps) and AutoAttack (AA) (Croce & Hein, 2020).