Deep invariant networks with differentiable augmentation layers
Authors: Cédric Rommel, Thomas Moreau, Alexandre Gramfort
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We provide empirical evidence showing that our approach is easier and faster to train than modern automatic data augmentation techniques based on bilevel optimization, while achieving comparable results. |
| Researcher Affiliation | Academia | Cédric Rommel, Thomas Moreau & Alexandre Gramfort; Université Paris-Saclay, Inria, CEA, Palaiseau, 91120, France; {firstname.lastname}@inria.fr |
| Pseudocode | No | The paper describes the architecture and method but does not include a formal pseudocode or algorithm block. |
| Open Source Code | Yes | The accompanying code can be found at https://github.com/cedricrommel/augnet. |
| Open Datasets | Yes | In this experiment we showcase AugNet on a standard image recognition task using the CIFAR10 dataset [27]. |
| Dataset Splits | Yes | All models considered in this experiment are trained for 300 epochs over 5 different seeds on a random 80% fraction of the official CIFAR10 training set. The remaining 20% is used as a validation set for early-stopping and choosing hyperparameters, and the official test set is used for reporting performance metrics. |
| Hardware Specification | Yes | It was also granted access to the HPC resources of IDRIS under the allocation 2021-AD011012284R1 and 2022-AD011011172R2 made by GENCI. |
| Software Dependencies | No | Our work is based on code from [19] and [12], as well as open-source libraries like MNE-PYTHON [35] and BRAINDECODE [36]. |
| Experiment Setup | Yes | All models considered in this experiment are trained for 300 epochs over 5 different seeds on a random 80% fraction of the official CIFAR10 training set. The reader is referred to Section A.3 for further experimental details, and to Section B.2 for a sensitivity analysis on hyperparameters C and λ. |
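
The data protocol quoted in the table (a random 80% / 20% split of the official CIFAR10 training set, repeated over 5 seeds, with the official test set held out for reporting) can be sketched as follows. This is not the authors' code (see https://github.com/cedricrommel/augnet for that); it is only an illustrative reconstruction of the described setup using standard torchvision utilities, with hypothetical names and default batch size.

```python
# Illustrative sketch of the quoted CIFAR10 protocol: 80/20 train/validation
# split over 5 seeds; the validation set drives early stopping and
# hyperparameter choice, the official test set is used only for reporting.
import torch
from torch.utils.data import random_split, DataLoader
from torchvision import datasets, transforms


def make_cifar10_loaders(seed: int, data_dir: str = "./data", batch_size: int = 128):
    transform = transforms.ToTensor()
    full_train = datasets.CIFAR10(data_dir, train=True, download=True, transform=transform)
    test_set = datasets.CIFAR10(data_dir, train=False, download=True, transform=transform)

    # Random 80% train / 20% validation split, reproducible via the seed
    n_train = int(0.8 * len(full_train))
    n_val = len(full_train) - n_train
    generator = torch.Generator().manual_seed(seed)
    train_set, val_set = random_split(full_train, [n_train, n_val], generator=generator)

    return (
        DataLoader(train_set, batch_size=batch_size, shuffle=True),
        DataLoader(val_set, batch_size=batch_size),   # early stopping / hyperparameter selection
        DataLoader(test_set, batch_size=batch_size),  # reporting performance metrics only
    )


# One run per seed, as in the quoted setup (5 seeds, 300 epochs each)
for seed in range(5):
    train_loader, val_loader, test_loader = make_cifar10_loaders(seed)
    # ... train the model for 300 epochs with early stopping on val_loader ...
```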