Discovering and Explaining the Representation Bottleneck of DNNs
Authors: Huiqi Deng, Qihan Ren, Hao Zhang, Quanshi Zhang
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | This paper explores the bottleneck of feature representations of deep neural networks (DNNs), from the perspective of the complexity of interactions between input variables encoded in DNNs. To this end, we focus on the multi-order interaction between input variables, where the order represents the complexity of interactions. We discover that a DNN is more likely to encode both too simple and too complex interactions, but usually fails to learn interactions of intermediate complexity. Such a phenomenon is widely shared by different DNNs for different tasks. This phenomenon indicates a cognition gap between DNNs and humans, and we call it a representation bottleneck. We theoretically prove the underlying reason for the representation bottleneck. Furthermore, we propose losses to encourage/penalize the learning of interactions of specific complexities, and analyze the representation capacities of interactions of different complexities. The code is available at https://github.com/Nebularaid2000/bottleneck. |
| Researcher Affiliation | Academia | Huiqi Deng, Qihan Ren, Hao Zhang, Quanshi Zhang, Shanghai Jiao Tong University, {denghq7,renqihan,1603023-zh,zqs1022}@sjtu.edu.cn |
| Pseudocode | No | No pseudocode or algorithm blocks are explicitly presented in the paper. |
| Open Source Code | Yes | The code is available at https://github.com/Nebularaid2000/bottleneck. |
| Open Datasets | Yes | In order to measure J(m), we conducted experiments on three image datasets including the ImageNet dataset (Russakovsky et al., 2015), the Tiny-ImageNet dataset (Le & Yang, 2015) and the CIFAR-10 dataset (Krizhevsky et al., 2009). ... In addition, we conducted experiments on two tabular datasets, including the UCI census income dataset (census) and the UCI TV news channel commercial detection dataset (commercial) (Dua et al., 2017). (A sketch of the multi-order interaction estimator behind J(m) follows this table.) |
| Dataset Splits | No | In order to measure J(m), we conducted experiments on three image datasets including the ImageNet dataset (Russakovsky et al., 2015), the Tiny-ImageNet dataset (Le & Yang, 2015) and the CIFAR-10 dataset (Krizhevsky et al., 2009). We mainly analyzed several DNNs trained on these datasets for image classification, including AlexNet (Krizhevsky et al., 2012), VGG-16 (Simonyan & Zisserman, 2014) and ResNet-18/20/50/56 (He et al., 2016). |
| Hardware Specification | Yes | We trained these models with a mini-batch size of 128 on a single NVIDIA GeForce RTX 3090 GPU and used 4 subprocesses in data loading. |
| Software Dependencies | No | No specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions) are mentioned in the paper. |
| Experiment Setup | Yes | We propose two simple-yet-efficient losses in the training process. The two losses encourage and penalize interactions of specific orders, respectively. ... Loss = Loss_classification + λ1 L^+(r1, r2) + λ2 L^-(r1, r2) ... Specifically, the second DNN was trained to penalize interactions of the [0.7n, n]-th orders by minimizing the L^-(r1, r2) loss with λ1 = 0, λ2 = 1, r1 = 0.7, r2 = 1.0. ... We set the attack strength ϵ = 0.6 with 100 steps for the census dataset, and set ϵ = 0.2 with 50 steps for the commercial dataset. The step size was uniformly set to 0.01 for all attacks. ... We trained these models with a mini-batch size of 128 on a single NVIDIA GeForce RTX 3090 GPU and used 4 subprocesses in data loading. |
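
The J(m) metric quoted above is built on the paper's multi-order interaction I^(m)(i, j), where the order m is the size of the context S of other variables. Below is a minimal Monte Carlo sketch of that estimator, assuming a set function `f` that returns the model's scalar output on a sample whose variables outside the kept set are masked to a baseline; the function names and sampling budget are illustrative, not the authors' implementation.

```python
import random

def delta_f(f, S, i, j):
    """Delta f(i, j, S) = f(S ∪ {i,j}) - f(S ∪ {i}) - f(S ∪ {j}) + f(S):
    the marginal effect of letting variables i and j collaborate
    under a fixed context S."""
    S = frozenset(S)
    return f(S | {i, j}) - f(S | {i}) - f(S | {j}) + f(S)

def interaction_order_m(f, n, i, j, m, num_samples=100, seed=0):
    """Monte Carlo estimate of the m-th order interaction
    I^(m)(i, j) = E_{S ⊆ N \ {i,j}, |S| = m}[Delta f(i, j, S)].
    Requires m <= n - 2 so that a context of size m exists."""
    rng = random.Random(seed)
    rest = [k for k in range(n) if k != i and k != j]
    samples = [delta_f(f, rng.sample(rest, m), i, j)
               for _ in range(num_samples)]
    return sum(samples) / num_samples
```

Roughly, J(m) then averages the magnitude |I^(m)(i, j)| over variable pairs and samples and normalizes it across orders, giving the relative strength of interactions at each complexity level.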
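For the Experiment Setup row, the excerpt gives the composite objective but not the bodies of L^+ and L^-. The sketch below assumes both are built from the same Monte Carlo estimate of interaction strength restricted to orders in [r1·n, r2·n], with L^+ rewarding (subtracting) and L^- penalizing (adding) that strength; `masked_output`, the flat-variable masking scheme, and the sampling budget are assumptions for illustration, not the authors' code.

```python
import random
import torch

def masked_output(model, x, baseline, keep):
    """Model output with every variable outside `keep` replaced by a
    baseline value. `x` and `baseline` are flat tensors of n variables;
    `model` maps a (1, n) batch to one scalar score per sample (e.g.
    the ground-truth logit)."""
    mask = torch.zeros_like(x)
    if keep:
        mask[list(keep)] = 1.0
    return model((mask * x + (1.0 - mask) * baseline).unsqueeze(0)).squeeze()

def band_interaction_strength(model, x, baseline, n, r1, r2,
                              num_samples=8, seed=0):
    """Monte Carlo average of |Delta f(i, j, S)| over random pairs (i, j)
    and contexts S whose order m falls in [r1*n, r2*n]."""
    rng = random.Random(seed)
    f = lambda keep: masked_output(model, x, baseline, keep)
    total = 0.0
    for _ in range(num_samples):
        i, j = rng.sample(range(n), 2)
        rest = [k for k in range(n) if k != i and k != j]
        m = min(rng.randint(int(r1 * n), int(r2 * n)), len(rest))
        S = set(rng.sample(rest, m))
        total = total + (f(S | {i, j}) - f(S | {i})
                         - f(S | {j}) + f(S)).abs()
    return total / num_samples

# Overall objective; with lambda1 = 0, lambda2 = 1, r1 = 0.7, r2 = 1.0
# this penalizes interactions of the [0.7n, n]-th orders, as reported:
# loss = loss_classification \
#        - lambda1 * band_interaction_strength(model, x, base, n, r1, r2) \
#        + lambda2 * band_interaction_strength(model, x, base, n, r1, r2)
```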
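The attack settings in the same row (attack strength ϵ, step count, uniform step size 0.01) are consistent with an L∞ projected-gradient attack, though the excerpt does not name the method; the following PGD-style sketch is therefore an assumption, not the authors' exact evaluation code.

```python
import torch
import torch.nn.functional as F

def linf_pgd(model, x, y, eps=0.6, steps=100, step_size=0.01):
    """L-infinity PGD-style attack sketch. The defaults mirror the
    reported census settings (eps=0.2, steps=50 for commercial);
    `x` is a batch of tabular features, `y` the integer labels."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + step_size * grad.sign()   # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project to eps-ball
    return x_adv.detach()
```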