Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Contrastive Model Inversion for Data-Free Knowledge Distillation
Authors: Gongfan Fang, Jie Song, Xinchao Wang, Chengchao Shen, Xingen Wang, Mingli Song
IJCAI 2021 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments of pre-trained models on CIFAR-10, CIFAR-100, and Tiny-ImageNet demonstrate that CMI not only generates more visually plausible instances than the state of the arts, but also achieves significantly superior performance when the generated data are used for knowledge distillation. |
| Researcher Affiliation | Collaboration | Gongfan Fang (1,3), Jie Song (1), Xinchao Wang (2), Chengchao Shen (1), Xingen Wang (1), Mingli Song (1,3). Affiliations: 1: Zhejiang University; 2: National University of Singapore; 3: Alibaba-Zhejiang University Joint Research Institute of Frontier Technologies |
| Pseudocode | Yes | The proposed algorithm is summarized in Alg. 1. Algorithm 1: Contrastive Model Inversion. |
| Open Source Code | Yes | Code is available at https://github.com/zju-vipa/DataFree. |
| Open Datasets | Yes | Three standard classification datasets, CIFAR-10, CIFAR-100 [Krizhevsky et al., 2009], and Tiny-ImageNet [Le and Yang, 2015], are utilized to benchmark existing data-free KD approaches. |
| Dataset Splits | Yes | CIFAR-10 and CIFAR-100 both contain 50,000 samples for training and 10,000 samples for validation, at a resolution of 32×32. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | The paper does not specify software dependencies with version numbers (e.g., Python, PyTorch, or other libraries with their exact versions). |
| Experiment Setup | No | The paper states, "Details about datasets and training can be found in supplementary materials." This indicates that specific experimental setup details, such as hyperparameters or training configurations, are not provided within the main text. |