Revealing the Dark Secrets of Extremely Large Kernel ConvNets on Robustness
Authors: Honghao Chen, Yurong Zhang, Xiaokun Feng, Xiangxiang Chu, Kaiqi Huang
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this paper, we first conduct a comprehensive evaluation of large kernel ConvNets' robustness and their differences from typical small kernel counterparts and ViTs on six diverse robustness benchmark datasets. Then, to analyze the underlying factors behind their strong robustness, we design experiments from both quantitative and qualitative perspectives to reveal large kernel ConvNets' intriguing properties that are completely different from typical ConvNets. |
| Researcher Affiliation | Collaboration | 1. CRISE, Institute of Automation, Chinese Academy of Sciences; 2. School of Artificial Intelligence, University of Chinese Academy of Sciences; 3. Shanghai Jiao Tong University; 4. Meituan. Work done during the first two authors' internship at Meituan. |
| Pseudocode | No | The paper describes various experimental procedures and analyses but does not include any formal pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code available at: https://github.com/Lauch1ng/LKRobust |
| Open Datasets | Yes | Note that all the reported variants were initially pre-trained on ImageNet-21K and then fine-tuned on ImageNet-1K. |
| Dataset Splits | Yes | Specifically, we feed a batch of ImageNet validation images into different networks; the batch size is set to 64 and we use regular validation augmentation (only center crop and normalization). |
| Hardware Specification | No | The paper mentions 'computation constraints' in the limitation section but does not specify any particular hardware components such as GPU models, CPU types, or memory specifications used for the experiments. |
| Software Dependencies | No | The paper does not explicitly list any software dependencies (e.g., programming languages, libraries, or frameworks) with specific version numbers. |
| Experiment Setup | Yes | Specifically, we feed a batch of ImageNet validation images into different networks; the batch size is set to 64 and we use regular validation augmentation (only center crop and normalization). ... Specifically, we train ConvNeXt-Tiny with different kernel sizes in a 120 epoch schedule on ImageNet-1K... |
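The evaluation protocol quoted above (batch of 64 validation images, center crop plus normalization only) can be sketched as below. This is a minimal NumPy illustration, not the authors' code: the crop size (224) and the ImageNet channel statistics are the standard defaults, which the paper does not state explicitly, so treat them as assumptions.

```python
import numpy as np

# Standard ImageNet channel statistics -- an assumption; the paper only
# says "center crop and normalization" without giving exact values.
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406])
IMAGENET_STD = np.array([0.229, 0.224, 0.225])


def center_crop(img: np.ndarray, size: int = 224) -> np.ndarray:
    """Crop a size x size patch from the center of an HxWxC image."""
    h, w = img.shape[:2]
    top = (h - size) // 2
    left = (w - size) // 2
    return img[top:top + size, left:left + size]


def validation_transform(img: np.ndarray, crop: int = 224) -> np.ndarray:
    """Center crop, then per-channel normalize (pixel values in [0, 1])."""
    patch = center_crop(img, crop)
    return (patch - IMAGENET_MEAN) / IMAGENET_STD


# A batch of 64 images, matching the batch size quoted in the paper.
batch = np.stack(
    [validation_transform(np.random.rand(256, 256, 3)) for _ in range(64)]
)
```

In practice the authors most likely used a framework transform pipeline (e.g. torchvision's `CenterCrop` and `Normalize`); the sketch only mirrors the two operations the paper names.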