Dataset Condensation with Gradient Matching

Authors: Bo Zhao, Konda Reddy Mopuri, Hakan Bilen

ICLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We rigorously evaluate its performance in several computer vision benchmarks and demonstrate that it significantly outperforms the state-of-the-art methods.
Researcher Affiliation | Academia | Bo Zhao, Konda Reddy Mopuri, Hakan Bilen, School of Informatics, The University of Edinburgh ({bo.zhao, kmopuri, hbilen}@ed.ac.uk)
Pseudocode | Yes | Algorithm 1: Dataset condensation with gradient matching. Input: Training set T
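For orientation, the core of Algorithm 1 is a step that matches the gradients a small synthetic batch induces in the network against those induced by a real batch, then updates the synthetic pixels to reduce the mismatch. Below is a minimal, illustrative PyTorch sketch of one such update; the function name gradient_match_step and the flattened cosine distance are our simplifications of the paper's layer-wise distance, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def gradient_match_step(net, x_real, y_real, x_syn, y_syn, opt_syn):
    """One illustrative gradient-matching update of the synthetic images."""
    # Target gradients from a real batch (detached: they are fixed targets).
    loss_real = F.cross_entropy(net(x_real), y_real)
    g_real = [g.detach() for g in torch.autograd.grad(loss_real, net.parameters())]

    # Gradients from the synthetic batch, kept differentiable w.r.t. the pixels.
    loss_syn = F.cross_entropy(net(x_syn), y_syn)
    g_syn = torch.autograd.grad(loss_syn, net.parameters(), create_graph=True)

    # Layer-wise distance between the two gradient sets; the paper uses a
    # groupwise cosine-based distance, so this flattened form is a simplification.
    dist = sum(1 - F.cosine_similarity(gs.flatten(), gr.flatten(), dim=0)
               for gs, gr in zip(g_syn, g_real))

    opt_syn.zero_grad()
    dist.backward()   # flows back to x_syn thanks to create_graph=True
    opt_syn.step()    # SGD step on the synthetic pixels (learning rate eta_S)
    return float(dist)
```

In use, x_syn would be a leaf tensor created with requires_grad=True and registered in opt_syn = torch.optim.SGD([x_syn], lr=0.1), so that only the synthetic images are updated by this step.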
Open Source Code | Yes | The implementation is available at https://github.com/VICO-UoE/DatasetCondensation.
Open Datasets | Yes | We evaluate classification performance with the condensed images on four standard benchmark datasets: digit recognition on MNIST (LeCun et al., 1998), SVHN (Netzer et al., 2011) and object classification on Fashion MNIST (Xiao et al., 2017), CIFAR10 (Krizhevsky et al., 2009).
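All four benchmarks are publicly available. The paper does not name a data-loading library, but as one example (our assumption) they can be fetched with torchvision:

```python
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()
root = "data"  # local cache directory, illustrative

mnist   = datasets.MNIST(root, train=True, download=True, transform=to_tensor)
fashion = datasets.FashionMNIST(root, train=True, download=True, transform=to_tensor)
svhn    = datasets.SVHN(root, split="train", download=True, transform=to_tensor)
cifar10 = datasets.CIFAR10(root, train=True, download=True, transform=to_tensor)
```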
Dataset Splits | Yes | In all experiments, we use the standard train/test splits of the datasets; the train/test statistics are shown in Table T5. We randomly sample 5,000 images from the 50,000 training images in CIFAR10 as the validation set.
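The CIFAR10 validation split described above amounts to a random 5,000/45,000 partition of the training set. A sketch, reusing the cifar10 object from the loading example above (the fixed seed is our assumption; the paper only says the sample is random):

```python
import torch
from torch.utils.data import Subset

g = torch.Generator().manual_seed(0)              # seed is an assumption
perm = torch.randperm(len(cifar10), generator=g)  # cifar10 from the sketch above
val_set   = Subset(cifar10, perm[:5000].tolist())
train_set = Subset(cifar10, perm[5000:].tolist())
```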
Hardware Specification | Yes | We compare the training time and memory cost required by DD and our method with one NVIDIA GTX1080-Ti GPU.
Software Dependencies | No | The paper mentions using Stochastic Gradient Descent (SGD) as an optimizer but does not specify any software libraries or frameworks with version numbers (e.g., PyTorch 1.9, TensorFlow 2.x).
Experiment Setup | Yes | In all experiments, we set K = 1000, η_S = 0.1, η_θ = 0.01, ς_S = 1 and employ Stochastic Gradient Descent (SGD) as the optimizer.
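To make the quoted hyperparameters concrete, the sketch below wires them into an outer loop in the spirit of Algorithm 1, reusing gradient_match_step and mnist from the earlier sketches. The tiny linear stand-in network and the one-batch-per-iteration structure are our simplifications; the paper trains ConvNets and interleaves network updates.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

K, eta_S, eta_theta, varsigma_S = 1000, 0.1, 0.01, 1  # values quoted above

# Ten synthetic images per MNIST class; shapes and counts are illustrative.
x_syn = torch.randn(100, 1, 28, 28, requires_grad=True)
y_syn = torch.arange(10).repeat_interleave(10)
opt_syn = torch.optim.SGD([x_syn], lr=eta_S)

real_loader = DataLoader(mnist, batch_size=256, shuffle=True)  # mnist from the loading sketch

for k in range(K):
    # Fresh network initialization each outer iteration; a linear probe is a
    # stand-in here to keep the sketch short (the paper uses ConvNets).
    net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    opt_net = torch.optim.SGD(net.parameters(), lr=eta_theta)

    x_real, y_real = next(iter(real_loader))
    for _ in range(varsigma_S):  # ς_S = 1 update of the synthetic set per outer step
        gradient_match_step(net, x_real, y_real, x_syn, y_syn, opt_syn)
    # Algorithm 1 then trains net on the updated synthetic set with opt_net
    # before the next outer iteration (omitted in this sketch).
```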