Why Robust Generalization in Deep Learning is Difficult: Perspective of Expressive Power

Authors: Binghui Li, Jikai Jin, Han Zhong, John Hopcroft, Liwei Wang

NeurIPS 2022

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | "In this paper, we provide a theoretical understanding of this puzzling phenomenon from the perspective of expressive power for deep neural networks. ... Overall, our theoretical analysis suggests that the hardness of achieving robust generalization may stem from the expressive power of practical models." |
| Researcher Affiliation | Academia | (1) School of EECS, Peking University; (2) School of Mathematical Sciences, Peking University; (3) Center for Data Science, Peking University; (4) Cornell University; (5) National Key Laboratory of General Artificial Intelligence, School of Intelligence Science and Technology, Peking University; (6) Peng Cheng Laboratory; (7) Pazhou Laboratory (Huangpu). Emails: {libinghui,jkjin}@pku.edu.cn, hanzhong@stu.pku.edu.cn, jeh17@cornell.edu, wanglw@cis.pku.edu.cn |
| Pseudocode | No | The paper does not include any pseudocode or clearly labeled algorithm blocks; it is a theoretical paper focused on mathematical proofs and bounds. |
| Open Source Code | No | The paper's checklist answers "If you ran experiments... Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?" with [N/A], and no other mention of open-source code appears in the paper. |
| Open Datasets | No | The paper is theoretical and conducts no empirical studies. It mentions datasets such as CIFAR-10 and MNIST when discussing related work's empirical observations, but does not use, or provide access information for, any public dataset in its own research. |
| Dataset Splits | No | The paper is theoretical and involves no empirical validation. It defines theoretical quantities such as "robust training error" and "robust test error" but specifies no data splits for experiments. |
| Hardware Specification | No | The paper's checklist marks the "If you ran experiments..." items [N/A], confirming that no experiments were conducted; accordingly, no hardware specifications are provided. |
| Software Dependencies | No | The paper's checklist marks the "If you ran experiments..." items [N/A], confirming that no experiments were conducted; therefore, no software dependencies with version numbers are listed. |
| Experiment Setup | No | The paper's checklist marks the "If you ran experiments..." items [N/A], confirming that no experiments were conducted; therefore, no experimental setup details such as hyperparameters or training settings are provided. |
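For context on the "robust training error" and "robust test error" quantities referenced in the table, the conventional definitions from the adversarial-robustness literature are sketched below. The notation (distribution $\mathcal{D}$, perturbation radius $\epsilon$, norm $\|\cdot\|_p$) is standard usage and is not quoted from the paper itself.

```latex
% Robust test error of a classifier f: the probability that some
% perturbation x' within an \ell_p ball of radius \epsilon around x
% changes the prediction away from the true label y.
\[
\mathcal{E}_{\mathrm{rob}}(f)
\;=\;
\Pr_{(x,y)\sim\mathcal{D}}
\Bigl[\,\exists\, x' \ \text{with}\ \|x' - x\|_p \le \epsilon
\ \text{such that}\ f(x') \neq y \,\Bigr]
\]
% The robust training error is the same quantity with \mathcal{D}
% replaced by the empirical distribution over the training set; the
% robust generalization gap is the difference between the two.
```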