PICNN: A Pathway towards Interpretable Convolutional Neural Networks
Authors: Wengang Guo, Jiayi Yang, Huilin Yin, Qijun Chen, Wei Ye
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the effectiveness of our method on ten widely used network architectures (including nine CNNs and a ViT) and five benchmark datasets. Experimental results have demonstrated that our method PICNN (the combination of standard CNNs with our proposed pathway) exhibits greater interpretability than standard CNNs while achieving higher or comparable discrimination power. |
| Researcher Affiliation | Academia | College of Electronic and Information Engineering, Tongji University, Shanghai, China {guowg, 2111125, yinhuilin, qjchen, yew}@tongji.edu.cn |
| Pseudocode | No | The paper describes its algorithms and processes in text and mathematical equations but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is based on the PyTorch (Paszke et al. 2019) toolbox and publicly available on GitHub: https://github.com/spdj2271/PICNN |
| Open Datasets | Yes | We use three benchmark classification datasets in Table 1, including CIFAR-10 (Krizhevsky, Hinton et al. 2009), STL-10 (Coates, Ng, and Lee 2011), and PASCAL VOC Part (Chen et al. 2014). To further evaluate the efficacy and effectiveness of PICNN on large datasets with more classes, we use CIFAR-100 (Krizhevsky, Hinton et al. 2009) and Tiny ImageNet (Deng et al. 2009). |
| Dataset Splits | Yes | CIFAR-10 consists of 50,000 training images and 10,000 test images in 10 classes. STL-10 contains 5,000 training images and 8,000 test images in 10 classes. Following (Liang et al. 2020), we select six animal classes from PASCAL VOC Part with a 70%/30% training/test split. Like CIFAR-10, CIFAR-100 also consists of 50,000 training images and 10,000 test images evenly distributed into 100 classes. Tiny ImageNet is a scaled-down version of the original ImageNet involving 200 classes with a 10:1 ratio of training images to test images. We use the official training/test data split, except for PASCAL VOC Part. |
| Hardware Specification | Yes | The experiments are carried out on a server with a Xeon(R) Platinum 8352V CPU and one Nvidia RTX 4090 GPU. |
| Software Dependencies | No | The paper mentions 'Our code is based on the PyTorch (Paszke et al. 2019) toolbox' but does not specify a version number for PyTorch or list any other software dependencies with specific versions. |
| Experiment Setup | Yes | Other default settings include: a batch size of 128; the Adam optimizer with an initial learning rate of 0.001; pretrained weights from ImageNet (Deng et al. 2009); and a total of 200 training epochs. The regularization parameter λ is set to 2 and the effect of the λ values is discussed later. |
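The reported hyperparameters can be collected into a single configuration for reproduction. The minimal Python sketch below shows how the regularization weight λ = 2 would combine a pathway term with the classification loss; the function name `total_loss` and the use of plain floats are illustrative assumptions, not the authors' actual code (which defines the exact form of both loss terms).

```python
# Default training settings reported in the paper's experiment setup.
CONFIG = {
    "batch_size": 128,
    "optimizer": "Adam",
    "learning_rate": 1e-3,
    "pretrained_weights": "ImageNet",
    "epochs": 200,
    "lambda": 2.0,  # regularization weight for the pathway term
}


def total_loss(classification_loss: float, pathway_loss: float,
               lam: float = CONFIG["lambda"]) -> float:
    """Combine the two terms as L = L_cls + lambda * L_pathway.

    Both terms are plain floats here purely for illustration; in the
    paper they are computed from network outputs.
    """
    return classification_loss + lam * pathway_loss


print(total_loss(1.0, 0.5))  # 1.0 + 2.0 * 0.5 = 2.0
```

Sweeping `CONFIG["lambda"]` reproduces the paper's discussion of how the λ value trades off discrimination power against interpretability.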