Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Distilling Dataset into Neural Field
Authors: Donghyeok Shin, HeeSun Bae, Gyuwon Sim, Wanmo Kang, Il-chul Moon
ICLR 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through extensive experiments, we demonstrate that DDiF achieves superior performance on several benchmark datasets, extending beyond the image domain to include video, audio, and 3D voxel. We release the code at https://github.com/aailab-kaist/DDiF. |
| Researcher Affiliation | Academia | 1Korea Advanced Institute of Science and Technology (KAIST), 2summary.ai |
| Pseudocode | Yes | Algorithms 1 and 2 specify a training procedure and decoding process of DDiF, respectively. |
| Open Source Code | Yes | We release the code at https://github.com/aailab-kaist/DDiF. |
| Open Datasets | Yes | ImageNet-Subset (Howard, 2019; Cazenavette et al., 2022), CIFAR-10 (Krizhevsky et al., 2009), CIFAR-100 (Krizhevsky et al., 2009), MiniUCF (Wang et al., 2024; Khurram, 2012), Mini Speech Commands (Kim et al., 2022; Warden, 2018), ModelNet (Wu et al., 2015), ShapeNet (Chang et al., 2015) |
| Dataset Splits | Yes | Each class contains 5,000 images for training and 1,000 images for testing. ... Each class is split into 500 for training and 100 for testing. ... The dataset consists of 8 classes, and each class has 875/125 data for training/testing, respectively. |
| Hardware Specification | Yes | We use a mixture of RTX 3090, L40S, and Tesla A100 to run our experiments. |
| Software Dependencies | No | The paper mentions software components like 'Adam optimizer' and 'Kornia' but does not specify their version numbers. For example, 'We utilize Adam optimizer (Kingma & Ba, 2017)' and 'We adopt ZCA whitening ... with the Kornia (Riba et al., 2020) implementation'. |
| Experiment Setup | Yes | We fix the iteration number and learning rate for warm-up initialization of synthetic neural field as 5,000 and 0.0005. ... We run 15,000 iterations for TM and 20,000 iterations for DM. ... We provide the detailed hyperparameters in Table 9. |
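As a rough illustration of how the binary variables above could feed into an overall score, the sketch below tallies the Yes/No results from this table. The equal-weight Yes-count rule is an assumption for illustration only; the actual scoring methodology is described in [1].

```python
# Hypothetical aggregation of binary reproducibility variables into a score.
# The variable names mirror the table above; the equal-weight Yes-count
# scoring rule is an assumption, not the methodology of [1].

variables = {
    "Pseudocode": "Yes",
    "Open Source Code": "Yes",
    "Open Datasets": "Yes",
    "Dataset Splits": "Yes",
    "Hardware Specification": "Yes",
    "Software Dependencies": "No",
    "Experiment Setup": "Yes",
}

def reproducibility_score(vars_: dict) -> float:
    """Fraction of binary variables answered 'Yes' (equal weighting)."""
    return sum(v == "Yes" for v in vars_.values()) / len(vars_)

print(f"{reproducibility_score(variables):.2f}")  # 6 of 7 variables -> 0.86
```

Under this toy rule, the one "No" result (unversioned software dependencies) is what keeps the paper short of a perfect score.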