Safe Distillation Box

Authors: Jingwen Ye, Yining Mao, Jie Song, Xinchao Wang, Cheng Jin, Mingli Song

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments across various datasets and architectures demonstrate that, with SDB, the performance of an unauthorized KD drops significantly while that of an authorized one gets enhanced, demonstrating the effectiveness of SDB.
Researcher Affiliation | Collaboration | Jingwen Ye (1,2), Yining Mao (1), Jie Song (1), Xinchao Wang (2), Cheng Jin (3), Mingli Song (1,4); 1 Zhejiang University, 2 National University of Singapore, 3 Fudan University, 4 Alibaba-Zhejiang University Joint Research Institute of Frontier Technologies
Pseudocode | No | The paper describes the SDB framework and its strategies (key embedding, knowledge disturbance, knowledge preservation) but does not provide structured pseudocode or an algorithm block.
Open Source Code | No | The paper does not contain any explicit statement about releasing source code or a link to a code repository.
Open Datasets | Yes | Two public datasets are employed in the experiments: the CIFAR10 and CIFAR100 datasets.
Dataset Splits | No | The paper mentions using the CIFAR10 and CIFAR100 datasets for its experiments but does not explicitly provide train/validation/test split percentages or sample counts in the main text.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | We used the PyTorch framework for the implementation.
Experiment Setup | Yes | For optimizing the SDB models, we used stochastic gradient descent with momentum of 0.9 and a learning rate of 0.1 for 200 epochs. For applying distillation, we set T = 4 for the CIFAR10 dataset and T = 20 for the CIFAR100 dataset. In the random key generation, we set λ = 0.5. In the knowledge disturbance, we set Tdis = 4 for the CIFAR10 dataset and Tdis = 20 for the CIFAR100 dataset.
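For context on reproducing the quoted setup, below is a minimal sketch of how the stated hyperparameters (SGD with momentum 0.9, learning rate 0.1, 200 epochs, and temperature T = 4 for CIFAR10 or T = 20 for CIFAR100) map onto a standard temperature-scaled distillation loop in PyTorch. It does not reconstruct the SDB-specific components (key embedding, knowledge disturbance, knowledge preservation), for which the paper gives no pseudocode; the model definitions, the data loader, and the equal weighting of the cross-entropy and distillation terms are assumptions.

```python
# Sketch of the reported optimization setup (SGD, momentum 0.9, lr 0.1, 200 epochs)
# combined with a standard temperature-scaled distillation loss. SDB-specific terms
# (key embedding, knowledge disturbance, knowledge preservation) are NOT shown;
# student/teacher models and the train_loader are assumed to be defined elsewhere.
import torch
import torch.nn.functional as F


def kd_loss(student_logits, teacher_logits, temperature):
    """Standard temperature-scaled KD loss (Hinton-style)."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2


def train_student(student, teacher, train_loader, temperature=4.0, epochs=200, device="cuda"):
    # Optimizer settings quoted from the paper: SGD, momentum 0.9, lr 0.1, 200 epochs.
    optimizer = torch.optim.SGD(student.parameters(), lr=0.1, momentum=0.9)
    teacher.eval()
    student.train()
    for _ in range(epochs):
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            with torch.no_grad():
                teacher_logits = teacher(images)
            student_logits = student(images)
            # Cross-entropy on ground truth plus distillation term
            # (the equal weighting of the two terms is an assumption).
            loss = F.cross_entropy(student_logits, labels) + kd_loss(
                student_logits, teacher_logits, temperature
            )
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```

The temperature-squared factor in `kd_loss` follows the usual convention of keeping gradient magnitudes comparable across temperatures; whether SDB applies the same scaling is not stated in the paper.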