Frequency Domain-Based Dataset Distillation
Authors: Donghyeok Shin, Seungjae Shin, Il-chul Moon
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through the selection of frequency dimensions based on the explained variance, FreD demonstrates both theoretical and empirical evidence of its ability to operate efficiently within a limited budget, while better preserving the information of the original dataset compared to conventional parameterization methods. Furthermore, based on the orthogonal compatibility of FreD with existing methods, we confirm that FreD consistently improves the performances of existing distillation methods over the evaluation scenarios with different benchmark datasets. ... We evaluate the efficacy of FreD on various benchmark datasets, i.e. SVHN [13], CIFAR-10, CIFAR-100 [8] and ImageNet-Subset [6, 3, 4]. |
| Researcher Affiliation | Collaboration | Donghyeok Shin, KAIST (tlsehdgur0@kaist.ac.kr); Seungjae Shin, KAIST (tmdwo0910@kaist.ac.kr); Il-Chul Moon, KAIST and Summary.AI (icmoon@kaist.ac.kr) |
| Pseudocode | Yes | Algorithm 1 FreD: Frequency domain-based Dataset Distillation |
| Open Source Code | Yes | We release the code at https://github.com/sdh0818/FreD. |
| Open Datasets | Yes | We evaluate the efficacy of FreD on various benchmark datasets, i.e. SVHN [13], CIFAR-10, CIFAR-100 [8] and ImageNet-Subset [6, 3, 4]. |
| Dataset Splits | No | The paper mentions training and evaluation but does not explicitly detail training/validation/test splits (e.g., specific percentages, sample counts, or explicit mention of a validation set beyond general deep learning context). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., 'Python 3.8, PyTorch 1.9, and CUDA 11.1'). |
| Experiment Setup | Yes | We evaluate each method by training 5 randomly initialized networks from scratch on optimized S. Please refer to Appendix C for a detailed explanation of datasets and experiment settings. |
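The core idea the report extracts from the paper, selecting frequency dimensions by explained variance so a limited parameter budget retains as much dataset information as possible, can be illustrated with a minimal sketch. This is not the authors' released implementation (see their repository for that); it is a hypothetical DCT-based example in which `fred_style_compress`, its arguments, and the toy data are all illustrative assumptions.

```python
# Hypothetical sketch of explained-variance frequency selection,
# NOT the authors' FreD implementation.
import numpy as np
from scipy.fft import dctn, idctn

def fred_style_compress(images, k):
    """Keep only the k frequency coefficients with the highest
    variance across the dataset; zero out the rest and invert."""
    # images: (N, H, W) float array
    freq = dctn(images, axes=(1, 2), norm="ortho")  # per-image 2D DCT
    var = freq.var(axis=0)                          # variance of each frequency bin
    keep = np.argsort(var.ravel())[-k:]             # top-k "explained variance" bins
    mask = np.zeros(var.size, dtype=bool)
    mask[keep] = True
    mask = mask.reshape(var.shape)
    recon = idctn(freq * mask, axes=(1, 2), norm="ortho")
    return recon, mask

# Toy usage: 16 random 8x8 "images", keep 16 of the 64 frequency bins.
rng = np.random.default_rng(0)
imgs = rng.standard_normal((16, 8, 8))
recon, mask = fred_style_compress(imgs, k=16)
print(mask.sum())  # number of retained frequency bins
```

Under this scheme each synthetic image needs only `k` stored coefficients plus one shared binary mask, which is the kind of budget saving the quoted passage attributes to frequency-domain parameterization.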