Deep Support Vectors
Authors: JunHoo Lee, Hyunho Lee, Kyomin Hwang, Nojun Kwak
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate the effectiveness of DSVs using common datasets (ImageNet, CIFAR10 and CIFAR100) on the general architectures (ResNet and ConvNet), proving their practical applicability. (See Fig. 1) |
| Researcher Affiliation | Academia | Junhoo Lee, Hyunho Lee, Kyomin Hwang, Nojun Kwak; Seoul National University; {mrjunoo, hhlee822, kyomin98, nojunk}@snu.ac.kr |
| Pseudocode | Yes | Alg. 1 presents our algorithm for generating Deep Support Vectors (DSVs). Initialized either from noise $x_i^s \sim \mathcal{N}(0, I)$ or from a real sample, it iterates to obtain the primal variables $X^S$ and the dual variables $\Lambda^S$. (A hedged code sketch of this loop appears after the table.) |
| Open Source Code | Yes | Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material? Answer: [Yes] Justification: We submitted code in supplementary. |
| Open Datasets | Yes | We validate the effectiveness of DSVs using common datasets (ImageNet, CIFAR10 and CIFAR100) on the general architectures (ResNet and ConvNet), proving their practical applicability. (See Fig. 1) |
| Dataset Splits | Yes | The experimental setup includes clear specifications on data splits, selected hyperparameters, and optimizers for each experimental task. Additional settings, such as augmentation parameters and model architecture details, are included, enabling a comprehensive understanding of the experimental environment. |
| Hardware Specification | Yes | The paper specifies the use of GPUs for all major experiments and provides approximate training times. Resources are sufficiently detailed to allow for replication, indicating required compute types and time estimates for reproducibility. |
| Software Dependencies | No | The paper mentions 'pytorch [22]' and 'Adam optimizer [9]' but does not specify exact version numbers for these software dependencies, which would be needed to ensure reproducibility. |
| Experiment Setup | Yes | To synthesize DSVs on ImageNet, we used translation, crop, cutout, flip, and noise for augmentation, with hyperparameters set to 0.125, 0.2, 0.15, 0.5, and 0.01, respectively. In Eq. (9), we set α to 2e-5, β to 40, and γ to 1e-6. ... For retraining models with synthesized images, we used a learning rate of 1e-4, while the other parameters were set to the default values of the Adam optimizer [9]. (A retraining sketch using these values appears after the table.) |
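
The Pseudocode row only quotes the paper's description of Alg. 1, so the sketch below illustrates what such a primal-dual loop could look like in PyTorch. This is a minimal sketch, not the paper's implementation: `generate_dsvs`, the dual-weighted loss, and all default values are hypothetical stand-ins for the objective in the paper's Eq. (9), which is not reproduced here.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch of the Alg. 1 loop described above. The real
# objective (the paper's Eq. (9), derived from KKT conditions) is replaced
# here by a simple dual-weighted classification loss.
def generate_dsvs(model, num_dsvs, num_classes, steps=1000, lr=1e-3,
                  image_shape=(3, 32, 32)):
    # Primal variables X^S: candidate DSV images, initialized from N(0, I)
    # (the paper also allows initialization from real samples).
    xs = torch.randn(num_dsvs, *image_shape, requires_grad=True)
    # Dual variables Lambda^S: one multiplier per candidate DSV.
    lam = torch.rand(num_dsvs, requires_grad=True)
    ys = torch.randint(num_classes, (num_dsvs,))  # fixed target labels

    opt = torch.optim.Adam([xs, lam], lr=lr)
    model.eval()  # the trained model is fixed; only xs and lam are updated
    for _ in range(steps):
        opt.zero_grad()
        per_sample = F.cross_entropy(model(xs), ys, reduction="none")
        # Dual-weighted classification term as a stand-in for the
        # KKT-derived stationarity objective of the paper.
        loss = (lam.softmax(dim=0) * per_sample).sum()
        loss.backward()
        opt.step()
    return xs.detach(), lam.detach()
```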
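Similarly, the Experiment Setup row quotes concrete hyperparameters that a reader could wire up roughly as follows. Only the augmentation strengths, the learning rate of 1e-4, and the use of Adam defaults come from the quote; everything else (`retrain`, the loss, the loop structure) is an illustrative assumption.

```python
import torch

# Augmentation strengths quoted from the paper's ImageNet setup.
AUG_STRENGTH = {
    "translation": 0.125,
    "crop": 0.2,
    "cutout": 0.15,
    "flip": 0.5,
    "noise": 0.01,
}

# Retraining sketch matching the quoted setup: Adam with lr=1e-4 and every
# other optimizer argument left at its PyTorch default.
def retrain(model, dsv_images, dsv_labels, epochs=10):
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(dsv_images), dsv_labels)
        loss.backward()
        opt.step()
    return model
```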