KDExplainer: A Task-oriented Attention Model for Explaining Knowledge Distillation

Authors: Mengqi Xue, Jie Song, Xinchao Wang, Ying Chen, Xingen Wang, Mingli Song

IJCAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results demonstrate that with a negligible additional cost, student models equipped with VAM consistently outperform their non-VAM counterparts across different benchmarks. Furthermore, when combined with other KD methods, VAM remains competent in promoting results, even though it is only motivated by vanilla KD. The section "5 Experiments" details the empirical studies conducted.
Researcher Affiliation | Academia | Mengqi Xue¹, Jie Song¹, Xinchao Wang², Ying Chen¹, Xingen Wang¹, Mingli Song¹; ¹Zhejiang University, ²National University of Singapore; {mqxue, sjie, lynesychen, newroot, brooksong}@zju.edu.cn, xinchao@nus.edu.sg
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | The code is available at https://github.com/zju-vipa/KDExplainer.
Open Datasets | Yes | Experiments are conducted on CIFAR-10 and CIFAR-100 [Krizhevsky et al., 2009]; Tiny-ImageNet is also included in Section 5.2. These are widely recognized public datasets.
Dataset Splits | Yes | Experiments are conducted on CIFAR-10 and CIFAR-100 [Krizhevsky et al., 2009]. While specific split percentages are not explicitly stated, CIFAR-10 and CIFAR-100 are standard benchmarks with predefined, well-known training and test splits, which is sufficient for reproducibility (see the dataset-loading sketch after the table).
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU models, CPU types, or memory specifications.
Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies or libraries used in the experiments.
Experiment Setup | Yes | For all models, ResNet50 is used as their teacher model for KD. The initial learning rate is 0.1, decayed every 30 epochs, and training ceases at 300 epochs. During the training phase of the student model, the learning rate is initially 0.05 (0.01 for the attention module), is decayed at epochs 150, 180, and 210 by a factor of 0.1, and training ceases at 240 epochs. The temperature hyper-parameter is set to 1 for all attention modules and 4 for the KD loss. The trade-off factor α is set to 0.9. The number of channels in each virtual block is 8 for VGG8, WRN-16-2, and WRN-40-1; 4 for resnet20; 16 for ResNet18; and 10 for ShuffleNetV1. (A hedged sketch of this training configuration follows the table.)
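
To make the Experiment Setup row concrete, the following is a minimal PyTorch sketch of the reported student-side configuration: a vanilla KD loss with temperature 4 and trade-off factor α = 0.9, and separate learning rates of 0.05 (student) and 0.01 (attention module) decayed by 0.1 at epochs 150, 180, and 210. The exact loss form, the placeholder modules, and the momentum value are assumptions for illustration and are not taken from the authors' released code.

```python
# Minimal sketch of the student-side setup reported above, assuming the standard
# Hinton-style KD objective. Placeholder modules, momentum, and the exact loss
# weighting are illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

ALPHA = 0.9   # trade-off factor between the KD term and cross-entropy (as reported)
T_KD = 4.0    # temperature for the KD loss (as reported)

def kd_loss(student_logits, teacher_logits, labels, alpha=ALPHA, T=T_KD):
    """Vanilla KD: temperature-scaled KL to the teacher plus hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Placeholder sub-modules standing in for the student backbone and its attention
# module, so the two learning-rate groups can be shown explicitly.
backbone = nn.Linear(3 * 32 * 32, 100)
attention_module = nn.Linear(100, 100)

optimizer = torch.optim.SGD(
    [
        {"params": backbone.parameters()},                     # uses the default lr of 0.05
        {"params": attention_module.parameters(), "lr": 0.01},  # attention module lr
    ],
    lr=0.05,
    momentum=0.9,  # assumed; momentum and weight decay are not reported in the row above
)

# Decay both learning rates by a factor of 0.1 at epochs 150, 180, and 210;
# training stops at epoch 240.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[150, 180, 210], gamma=0.1
)
```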
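
For the Open Datasets and Dataset Splits rows, the predefined CIFAR splits can be loaded directly from torchvision as sketched below; the data root and the bare ToTensor transform are placeholders rather than the paper's preprocessing pipeline.

```python
# Short sketch of obtaining the predefined CIFAR splits via torchvision.
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()

# CIFAR-10 and CIFAR-100 ship with fixed 50,000-image training and
# 10,000-image test splits, selected via the `train` flag.
cifar100_train = datasets.CIFAR100(root="./data", train=True, download=True, transform=to_tensor)
cifar100_test = datasets.CIFAR100(root="./data", train=False, download=True, transform=to_tensor)
cifar10_train = datasets.CIFAR10(root="./data", train=True, download=True, transform=to_tensor)
cifar10_test = datasets.CIFAR10(root="./data", train=False, download=True, transform=to_tensor)
```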