Revisiting Visual Model Robustness: A Frequency Long-Tailed Distribution View
Authors: Zhiyu Lin, Yifei Gao, Yunfan Yang, Jitao Sang
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Under the frequency long-tailed scenario, experimental results on common datasets and various network architectures consistently indicate that standard-trained models exhibit high sensitivity to high-frequency components (HFC). Extensive experimental results demonstrate that our method achieves a substantially better robustness-accuracy trade-off when combined with existing defense methods, while also indicating the potential of encouraging HFC learning to improve model performance. (A hedged frequency-decomposition sketch follows the table.) |
| Researcher Affiliation | Collaboration | Zhiyu Lin¹, Yifei Gao¹, Yunfan Yang¹, Jitao Sang¹˒² — ¹Beijing Jiaotong University, China; ²Peng Cheng Lab, Shenzhen 518066, China. {zyllin, yf-gao, yfyang, jtsang}@bjtu.edu.cn |
| Pseudocode | No | The paper describes its methodology using equations and textual explanations, but it does not include a formally structured pseudocode block or algorithm. |
| Open Source Code | No | The paper does not provide any explicit statement about releasing its source code or a link to a code repository. |
| Open Datasets | Yes | We conduct experiments on ResNet18 naturally trained on the CIFAR10 dataset. We analyze the frequency sensitivity across the CIFAR10 [25], Tiny-ImageNet [26], and ImageNet [12] datasets with resolutions of 32², 64², and 224² pixels, respectively. |
| Dataset Splits | No | The paper explicitly mentions using the CIFAR10, Tiny-ImageNet, and ImageNet datasets for training and testing, and refers to CIFAR10-C and CIFAR100-C for corruption robustness. However, it does not state specific percentages or counts for training, validation, and test splits (e.g., an 80/10/10 split), nor does it cite predefined splits with specific details about the validation set. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments, such as GPU models, CPU types, or memory amounts. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies, libraries, or frameworks used in the experiments (e.g., 'PyTorch 1.9', 'Python 3.8'). |
| Experiment Setup | Yes | We use a set of ResNet-18 [19] models with standard training. We perform the PGD attack [30] with 20 steps and ℓ∞ constraint ϵ = 1/255, 2/255, 4/255, 8/255 to evaluate adversarial robustness. The ℓ∞ attack perturbation was bounded to ϵ = 8/255 with a step size of 2/255 in both the training and test processes of CIFAR-10 and CIFAR-100. For Restricted-ImageNet, ϵ = 4/255 with a step size of 1/255. Detailed settings are available in Appendix F.2.2. (A generic PGD sketch follows the table.) |
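For readers who want to probe the HFC sensitivity described in the Research Type response, below is a minimal sketch of the usual Fourier-domain low-/high-frequency decomposition used in this line of work. It assumes a centered circular low-pass mask with a free cutoff `radius`; the function name `split_frequency` and the masking choice are illustrative, not taken from the paper.

```python
import torch

def split_frequency(images: torch.Tensor, radius: int):
    """Split a batch of images (B, C, H, W) into low- and high-frequency
    components via a centered circular mask in the 2-D Fourier domain.
    `radius` is a free cutoff parameter, not a value from the paper."""
    _, _, H, W = images.shape
    # Centered 2-D FFT of each channel.
    freq = torch.fft.fftshift(torch.fft.fft2(images), dim=(-2, -1))
    # Circular low-pass mask: 1 inside the cutoff radius, 0 outside.
    ys = torch.arange(H, device=images.device).view(-1, 1) - H // 2
    xs = torch.arange(W, device=images.device).view(1, -1) - W // 2
    low = ((ys**2 + xs**2) <= radius**2).to(freq.dtype)
    # Keep only the low (resp. high) frequencies, invert the transform;
    # the imaginary residue is numerical noise, so take the real part.
    lfc = torch.fft.ifft2(torch.fft.ifftshift(freq * low, dim=(-2, -1))).real
    hfc = torch.fft.ifft2(torch.fft.ifftshift(freq * (1 - low), dim=(-2, -1))).real
    return lfc, hfc
```

Feeding `hfc` (or `lfc`) to a naturally trained model and measuring the accuracy change is one common way to quantify the HFC sensitivity the response refers to.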
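The adversarial-robustness evaluation quoted in the Experiment Setup row uses standard ℓ∞ PGD. Below is a generic PyTorch sketch with the quoted CIFAR hyperparameters (20 steps, ϵ = 8/255, step size 2/255); `pgd_attack` is an illustrative implementation, not the authors' code.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=20):
    """Generic ℓ∞ PGD: random start in the eps-ball, then `steps`
    signed-gradient ascent steps of size `alpha`, projecting back into
    the ball after each step. Defaults mirror the CIFAR settings quoted
    in the table; inputs are assumed to lie in [0, 1]."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                    # ascent step
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project to eps-ball
            x_adv = x_adv.clamp(0, 1)                              # keep valid pixels
    return x_adv.detach()
```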