Learning Instance-Specific Augmentations by Capturing Local Invariances
Authors: Ning Miao, Tom Rainforth, Emile Mathieu, Yann Dubois, Yee Whye Teh, Adam Foster, Hyunjik Kim
ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We empirically demonstrate that InstaAug learns meaningful input-dependent augmentations for a wide range of transformation classes, which in turn provides better performance on both supervised and self-supervised tasks. |
| Researcher Affiliation | Collaboration | 1Dept. of Statistics, University of Oxford; 2Stanford University; 3Microsoft Research; 4DeepMind, UK. |
| Pseudocode | Yes | Algorithm 1: Location-related parameterization |
| Open Source Code | Yes | Accompanying code is provided at https://github.com/NingMiao/InstaAug. |
| Open Datasets | Yes | We first evaluate the performance of jointly training InstaAug and the classifier on Tiny-ImageNet (TinyIN, 64×64)... We exploit this on the larger ImageNet dataset (224×224) (Deng et al., 2009)... We benchmark on the texture classification dataset RawFooT (Bianco et al., 2017). |
| Dataset Splits | Yes | A scheduler is used to decrease the learning rate by a factor of 0.9 once validation accuracy doesn't increase for 10 epochs. ... To further investigate the effect of each augmentation method, we additionally split the 46 test sets into two equally-sized groups. |
| Hardware Specification | Yes | On a single 1080Ti, each iteration of training InstaAug on TinyIN takes 0.25s |
| Software Dependencies | No | The paper mentions using SGD and Adam optimizers and refers to the 'MixMo' codebase and a codebase from Ermolov et al. (2021), which implies a software framework such as PyTorch. However, it does not specify exact version numbers for any software libraries or dependencies. |
| Experiment Setup | Yes | For the classifier, the initial learning rate is set to 0.2 (with momentum 0.9 and weight decay 1e-4). ... The learning rate of the augmentation module ϕ is fixed at 1e-5. Batch size is set to 100 and we pre-train InstaAug for 10 epochs without augmentation. We train the model until convergence and the maximum epoch is set to 150. |
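
The quoted experiment setup and scheduler description map onto a standard PyTorch optimizer/scheduler configuration. The following is a minimal sketch under that assumption; the placeholder models, the Adam choice for the augmentation module, and the training-loop skeleton are illustrative, not the authors' implementation.

```python
import torch

# Placeholder models: a classifier and an augmentation module phi (not the paper's architectures).
classifier = torch.nn.Linear(64 * 64 * 3, 200)  # TinyIN has 200 classes
aug_module = torch.nn.Linear(64 * 64 * 3, 6)    # stands in for the InstaAug module

# Classifier optimizer: SGD with initial lr 0.2, momentum 0.9, weight decay 1e-4 (as quoted).
clf_opt = torch.optim.SGD(classifier.parameters(), lr=0.2,
                          momentum=0.9, weight_decay=1e-4)

# Augmentation module: fixed learning rate 1e-5 (Adam is an assumption here).
aug_opt = torch.optim.Adam(aug_module.parameters(), lr=1e-5)

# Scheduler: multiply the classifier lr by 0.9 once validation accuracy
# has not improved for 10 epochs (as quoted).
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    clf_opt, mode="max", factor=0.9, patience=10)

for epoch in range(150):        # maximum of 150 epochs (as quoted)
    val_accuracy = 0.0          # placeholder: evaluate on the validation split here
    scheduler.step(val_accuracy)
```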