Enhance the Visual Representation via Discrete Adversarial Training

Authors: Xiaofeng Mao, Yuefeng Chen, Ranjie Duan, Yao Zhu, Gege Qi, Shaokai Ye, Xiaodan Li, Rong Zhang, Hui Xue'

NeurIPS 2022

Reproducibility Variable Result LLM Response
Research Type Experimental We experiment with Discrete Adversarial Training (DAT) on multiple tasks including image classification, object detection, and self-supervised learning.
Researcher Affiliation Collaboration Alibaba Group, Zhejiang University, EPFL {mxf164419,yuefeng.chenyf,ranjie.drj}@alibaba-inc.com
Pseudocode Yes Algorithm 1: Pseudo code of DAT
Open Source Code Yes The code will be available at https://github.com/alibaba/easyrobust.
Open Datasets Yes We adopt ImageNet-1K for both training and in-distribution testing.
Dataset Splits Yes We study this effect by sampling 1000 mini-batches in the ImageNet validation set
Hardware Specification No The paper does not provide specific details on the hardware used for experiments, such as GPU models, CPU types, or memory.
Software Dependencies No We implement DAT with vanilla training recipes using the "robustness" library.
Experiment Setup Yes We set = 0.1 by default in DAT.
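The DAT recipe referenced above (Algorithm 1 in the paper) trains a model on adversarial examples constructed in a discrete symbolic space rather than raw pixel space. A heavily simplified, self-contained sketch of that idea follows; it is not the authors' implementation. The random codebook stands in for the paper's image tokenizer, the FGSM-style sign step is an assumed stand-in for the paper's attack, and interpreting the 0.1 default above as the perturbation budget `eps` is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy codebook standing in for the image tokenizer that DAT uses to
# map images into discrete "visual words" (hypothetical stand-in).
K, d = 16, 4
codebook = rng.normal(size=(K, d))

def quantize(z):
    """Snap each continuous embedding to its nearest codebook entry."""
    idx = np.argmin(((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1), axis=1)
    return codebook[idx]

def loss_and_grad(w, x, y):
    """Binary logistic loss of linear classifier w, plus gradient w.r.t. w."""
    p = 1.0 / (1.0 + np.exp(-(x @ w)))
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    return loss, x.T @ (p - y) / len(y)

def input_grad(w, x, y):
    """Per-sample gradient of the loss w.r.t. the input embeddings."""
    p = 1.0 / (1.0 + np.exp(-(x @ w)))
    return np.outer(p - y, w)

# Toy data: two Gaussian blobs standing in for two image classes.
n = 64
x = np.concatenate([rng.normal(-1, 1, size=(n, d)), rng.normal(1, 1, size=(n, d))])
y = np.concatenate([np.zeros(n), np.ones(n)])

w = np.zeros(d)
eps, lr = 0.1, 0.5  # eps = 0.1 mirrors the default quoted above (assumed meaning)
for step in range(200):
    z_q = quantize(x)                             # discrete representation
    g = input_grad(w, z_q, y)                     # direction that increases loss
    z_adv = quantize(z_q + eps * np.sign(g))      # re-quantize: discrete adversarial input
    loss, grad_w = loss_and_grad(w, z_adv, y)     # train on the adversarial input
    w -= lr * grad_w

print(round(loss, 3))
```

The key structural point the sketch preserves is that the perturbation is applied in embedding space and then re-quantized, so the model only ever trains on inputs that decode from valid discrete codes.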