Quantification and Analysis of Layer-wise and Pixel-wise Information Discarding
Authors: Haotian Ma, Hao Zhang, Fan Zhou, Yinqing Zhang, Quanshi Zhang
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments have shown the effectiveness of our metrics in analyzing classic DNNs and explaining existing deep-learning techniques. |
| Researcher Affiliation | Academia | 1 Shanghai Jiao Tong University, Shanghai, China 2 Southern University of Science and Technology, Shenzhen, China. |
| Pseudocode | No | The paper describes mathematical formulations and procedures (e.g., Equation (3) and (4)), but does not include any clearly labeled 'Pseudocode' or 'Algorithm' block or figure. |
| Open Source Code | Yes | The code is available at https://github.com/haotianSustc/deepinfo. |
| Open Datasets | Yes | CUB200-2011 dataset (Wah et al., 2011), CIFAR-10 dataset (Krizhevsky, 2009), ISBI cell tracking challenge (WWW, 2012), ImageNet dataset (Russakovsky et al., 2015). |
| Dataset Splits | No | No explicit training/validation/test splits (e.g., percentages, sample counts) are provided for the datasets used in the experiments. The paper states 'we used object images cropped by object bounding boxes for both training and testing' but lacks specific split details. |
| Hardware Specification | No | No specific hardware details (such as GPU models, CPU types, or cloud instance specifications) used for running the experiments are mentioned in the paper. |
| Software Dependencies | No | No specific software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9) are mentioned in the paper. |
| Experiment Setup | Yes | In order to learn the parameter σ, we used the learning rate 1 × 10⁻⁴, and learned σ for 100 epochs. ... In the following experiments, we set β = 1.5 × 10⁻⁴. |
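The Experiment Setup row reports only the hyperparameters (learning rate 1 × 10⁻⁴, 100 epochs, weight β). A minimal sketch of what such a setup could look like is given below, under loud assumptions: a toy linear "layer" stands in for the DNN, numpy replaces whatever framework the authors used, and the objective (maximize the entropy term Σᵢ log σᵢ while penalizing feature distortion, i.e. minimize −Σᵢ log σᵢ + β·‖f(x+ε) − f(x)‖²) is a generic reading of the paper's Equation (3)/(4)-style formulation, not the authors' actual code.

```python
import numpy as np

# Hedged sketch: learn per-pixel noise scales sigma by gradient descent on
#   Loss = -sum(log sigma) + beta * ||f(x + eps) - f(x)||^2
# where f is a TOY linear layer f(x) = W @ x (an assumption, not the
# paper's network), and eps_i ~ N(0, sigma_i^2) is a sampled perturbation.

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))           # toy layer weights (assumption)
x = rng.normal(size=16)                # one toy input sample
f_x = W @ x                            # clean feature

lr, beta, epochs = 1e-4, 1.5e-4, 100   # hyperparameters reported in the table
log_sigma = np.zeros(16)               # optimize log(sigma) to keep sigma > 0

for _ in range(epochs):
    sigma = np.exp(log_sigma)
    eps = rng.normal(size=16) * sigma  # single-sample perturbation draw
    delta = W @ (x + eps) - f_x        # feature distortion (here W @ eps)
    # d(beta * ||delta||^2) / d(log_sigma_i) = 2*beta*(W.T @ delta)_i * eps_i
    grad_distort = 2.0 * beta * (W.T @ delta) * eps
    # d(-sum(log sigma)) / d(log_sigma_i) = -1
    grad = -1.0 + grad_distort
    log_sigma -= lr * grad             # gradient descent step on the loss

sigma = np.exp(log_sigma)              # learned per-pixel noise scales
```

Larger learned σᵢ would indicate pixels whose information the layer discards more readily; this is the qualitative behavior the metric is built on, not a reproduction of the authors' exact procedure.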