GSDD: Generative Space Dataset Distillation for Image Super-resolution

Authors: Haiyu Zhang, Shaolin Su, Yu Zhu, Jinqiu Sun, Yanning Zhang

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results demonstrate that when trained with our distilled data, GSDD can achieve comparable performance to the state-of-the-art (SOTA) SISR algorithms, while a nearly 8× increase in training efficiency and a saving of almost 93.2% data storage space can be realized.
Researcher Affiliation | Academia | School of Computer Science, Northwestern Polytechnical University; School of Astronautics, Northwestern Polytechnical University
Pseudocode | No | The paper describes the proposed method using flow diagrams (Figure 2) and mathematical formulations, but it does not include a distinct pseudocode block or an algorithm labeled as such.
Open Source Code | No | The paper does not provide any explicit statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | To satisfy the requirement, we select the SR dataset OST (Wang et al. 2018b) for our main experiments, in which the images all have their labeled classes. It consists of data totalling over 10,324 images in 7 categories (sky, water, building, grass, plant, animal, and mountain)... We further employ a real-world dataset DPED (Ignatov et al. 2017) to verify the generalization capability of GSDD under realistic degradation.
Dataset Splits | No | The paper states 'we use Outdoor Scene Training for training and Outdoor Scene Test300 for testing.' It explicitly mentions training and testing sets but does not specify a separate validation set or how one was derived.
Hardware Specification | No | The paper describes the training configuration (ResNet18 feature extractor, Adam optimizer) and training parameters, but it does not specify any hardware details such as GPU models, CPU types, or memory capacity used to run the experiments.
Software Dependencies | No | The paper mentions 'ResNet18 (He et al. 2016) for feature extraction' and the 'Adam optimizer (Kingma and Ba 2014)' but does not provide version numbers for these or any other software libraries or frameworks used.
Experiment Setup | Yes | We use Adam optimizer (Kingma and Ba 2014) with β1 = 0.9, β2 = 0.999, and learning rate η = 0.001 for training. The number of training iterations for optimizing the latent vectors is 500K and the regularization coefficient is set to λ = 0.1. (A hedged sketch of this setup follows the table.)
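
Since the paper releases no code, the following is a minimal PyTorch sketch of the reported configuration only: Adam with β1 = 0.9, β2 = 0.999, and η = 0.001 optimizing latent vectors for 500K iterations with regularization weight λ = 0.1, and a frozen ResNet18 as the feature extractor. The generator, latent shapes, loss structure, the dummy "real" batch, and the use of ImageNet weights are all assumptions for illustration, not the paper's actual GSDD objective.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

device = "cuda" if torch.cuda.is_available() else "cpu"

# ResNet18 as a frozen feature extractor (classification head removed), as the
# paper describes; loading ImageNet weights here is an assumption.
backbone = resnet18(weights="IMAGENET1K_V1")
feature_extractor = nn.Sequential(*list(backbone.children())[:-1]).to(device).eval()
for p in feature_extractor.parameters():
    p.requires_grad_(False)

def features(x):
    # Pooled ResNet18 features, flattened to shape (N, 512).
    return feature_extractor(x).flatten(1)

# Learnable latent vectors for the distilled set. The count, dimensionality,
# and the toy linear "generator" below are placeholders, not the paper's GAN.
num_distilled, latent_dim = 16, 128
latents = torch.randn(num_distilled, latent_dim, device=device, requires_grad=True)
generator = nn.Linear(latent_dim, 3 * 64 * 64).to(device)  # stand-in generator

# Optimizer settings quoted in the paper.
optimizer = torch.optim.Adam([latents], lr=1e-3, betas=(0.9, 0.999))
lam = 0.1            # regularization coefficient lambda = 0.1
num_iters = 500_000  # reported iteration count (reduce for a quick test)

for step in range(num_iters):
    real = torch.rand(num_distilled, 3, 64, 64, device=device)  # dummy real batch
    synth = generator(latents).view(-1, 3, 64, 64).sigmoid()
    # Feature-matching term plus a lambda-weighted latent regularizer; this
    # only mirrors the shape of the objective, not the actual GSDD loss.
    loss = F.mse_loss(features(synth), features(real)) + lam * latents.pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

In the actual method, the latents would drive a pretrained generator to synthesize the distilled SR training data; the dummy generator and random batch above only exercise the quoted optimizer loop.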