Post-training Iterative Hierarchical Data Augmentation for Deep Networks
Authors: Adil Khan, Khadija Fraz
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 4 experiments; "This section presents the results of the empirical validation of IHDA on three image classification benchmarks: CIFAR-10, CIFAR-100 [1], and ImageNet [32]." |
| Researcher Affiliation | Academia | Adil Khan Khadija Fraz Institute of Data Science and Artificial Intelligence Innopolis University Universitetskaya St, 1, Innopolis, Russia, 420500 a.khan@innopolis.ru, k.fraz@innopolis.university |
| Pseudocode | Yes | Algorithm 1: The algorithm to compute potential of a point p hl (X), Algorithm 2: The IHDA Algorithm |
| Open Source Code | No | The paper does not include an unambiguous statement about releasing source code for the described methodology, nor does it provide a direct link to a code repository. |
| Open Datasets | Yes | CIFAR-10, CIFAR-100 [1], and ImageNet [32]; Sussex-Huawei Locomotion-Transportation (SHL) challenge dataset [33] |
| Dataset Splits | Yes | For the CIFAR datasets, the validation set had 5000 images, which were taken from the training set. For ImageNet, we used its reduced subset, which was created by randomly choosing 150 classes and 50,000 samples. From this reduced subset, we held out 5000 images for the validation set to tune the hyperparameters. |
| Hardware Specification | No | The paper does not specify any hardware details such as GPU models, CPU types, or cloud computing resources used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies or library versions, only mentioning general techniques like backpropagation and models like VAEs. |
| Experiment Setup | Yes | In all experiments, the spread of the RBF, γ, in Algorithm 1 was set to 0.05. For the other hyper-parameters (including p and w), we held out a part of the training dataset as the validation set to find their optimum values. The hyper-parameters p and w are, for each experiment, selected from the interval [0, 1] with a step size of 0.05, based on performance on the validation set. |
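The hyper-parameter selection described in the Experiment Setup row can be sketched as a plain grid search: enumerate p and w over [0, 1] in steps of 0.05 and keep the pair with the best validation score. This is a minimal illustration, not the paper's code; the `evaluate` callback (which would train/augment with the candidate values and return validation accuracy) is a hypothetical stand-in.

```python
import itertools

def grid_search_pw(evaluate, step=0.05):
    """Select (p, w) from [0, 1] with the given step size,
    keeping the pair that maximizes the validation score.

    `evaluate(p, w)` is a hypothetical callback standing in for
    "train/augment with these values, return validation accuracy".
    """
    # 21 candidate values: 0.00, 0.05, ..., 1.00
    candidates = [round(i * step, 2) for i in range(int(1 / step) + 1)]
    # Exhaustively score every (p, w) pair and keep the best one.
    return max(itertools.product(candidates, candidates),
               key=lambda pw: evaluate(*pw))

# Toy stand-in objective that peaks at p=0.30, w=0.65 (illustrative only).
best_p, best_w = grid_search_pw(
    lambda p, w: -((p - 0.3) ** 2 + (w - 0.65) ** 2))
```

With a 0.05 step this evaluates 21 × 21 = 441 candidate pairs per experiment, which is tractable when each evaluation reuses an already-trained model, as in a post-training method like IHDA.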