Improving Transformation Invariance in Contrastive Representation Learning

Authors: Adam Foster, Rattana Pukdee, Tom Rainforth

ICLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our approaches first on CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009), using transformations appropriate to natural images and evaluating on a downstream classification task. To validate that our ideas transfer to other settings, and to use our gradient regularizer within a fully differentiable generative process, we further introduce a new synthetic dataset called Spirograph.
Researcher Affiliation | Academia | Adam Foster, Rattana Pukdee & Tom Rainforth, Department of Statistics, University of Oxford, {adam.foster,rainforth}@stats.ox.ac.uk
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | For an open source implementation of our methods, see https://github.com/ae-foster/invclr.
Open Datasets | Yes | We evaluate our approaches first on CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009)... To validate that our ideas transfer to other settings, and to use our gradient regularizer within a fully differentiable generative process, we further introduce a new synthetic dataset called Spirograph. A standalone implementation of this dataset can be found at https://github.com/rattaoup/spirograph. (See the dataset-loading sketch below the table.)
Dataset Splits | No | The paper states: "The final dataset consists of 100k training and 20k test images of size 32 × 32." and "We train these linear models with L-BFGS... on the training set and evaluate performance on the test set." No specific validation split is mentioned for reproduction. (See the linear-evaluation sketch below the table.)
Hardware Specification | Yes | Our experiments were implemented in PyTorch (Paszke et al., 2019) and ran on 8 Nvidia GeForce GTX 1080Ti GPUs.
Software Dependencies | Yes | Our experiments were implemented in PyTorch (Paszke et al., 2019)...
Experiment Setup | Yes | Table 4 (hyperparameters for CIFAR-10, CIFAR-100 and Spirograph): training batch size 512; training epochs 1000; optimizer LARS; scheduler cosine annealing; learning rate 3e-3; momentum 0.9; temperature τ 0.5. Table 5 (hyperparameters for gradient penalty calculation): L 100; λ 0.1/0.01; clip value 1/1000. (See the configuration sketch below the table.)
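
For context on the Open Datasets row, here is a minimal sketch of loading CIFAR-10 and CIFAR-100 through torchvision. The augmentation pipeline shown is an illustrative placeholder, not the authors' exact transformation set, which is defined in https://github.com/ae-foster/invclr.

```python
# Minimal sketch: loading the public CIFAR-10/CIFAR-100 datasets with torchvision.
# The transforms below are illustrative stand-ins for "transformations appropriate
# to natural images", not the authors' exact pipeline.
import torchvision
import torchvision.transforms as T

transform = T.Compose([
    T.RandomResizedCrop(32),            # CIFAR images are 32x32
    T.RandomHorizontalFlip(),
    T.ColorJitter(0.4, 0.4, 0.4, 0.1),
    T.ToTensor(),
])

cifar10_train = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=transform)
cifar100_train = torchvision.datasets.CIFAR100(
    root="./data", train=True, download=True, transform=transform)
```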
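The Dataset Splits row quotes a linear evaluation trained with L-BFGS on the training set and evaluated on the test set. Below is a minimal sketch of that protocol, using scikit-learn's LBFGS solver as a stand-in for whatever L-BFGS implementation the authors used; `encoder`, `train_loader` and `test_loader` are assumed to exist and are not taken from the paper.

```python
# Minimal sketch of a linear evaluation protocol: freeze the encoder, extract
# features, fit a linear classifier with L-BFGS on the training split, and report
# accuracy on the test split. Names `encoder`, `train_loader`, `test_loader` are
# hypothetical placeholders.
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression

@torch.no_grad()
def extract_features(encoder, loader, device="cuda"):
    encoder.eval()
    feats, labels = [], []
    for x, y in loader:
        feats.append(encoder(x.to(device)).cpu().numpy())
        labels.append(y.numpy())
    return np.concatenate(feats), np.concatenate(labels)

train_x, train_y = extract_features(encoder, train_loader)
test_x, test_y = extract_features(encoder, test_loader)

clf = LogisticRegression(solver="lbfgs", max_iter=1000)
clf.fit(train_x, train_y)
print("test accuracy:", clf.score(test_x, test_y))
```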
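The Experiment Setup hyperparameters from Tables 4 and 5 can be collected into a configuration sketch like the one below. This is an assumption-laden illustration, not the authors' configuration file; in particular, LARS is not part of core PyTorch and requires a separate implementation.

```python
# Hyperparameters from Tables 4 and 5, gathered into one illustrative config dict.
config = {
    "batch_size": 512,
    "epochs": 1000,
    "optimizer": "LARS",          # not in torch.optim; needs an external implementation
    "scheduler": "cosine_annealing",
    "learning_rate": 3e-3,
    "momentum": 0.9,
    "temperature": 0.5,           # τ in the contrastive loss
    # Gradient-penalty settings (Table 5)
    "gp_L": 100,
    "gp_lambda": 0.1,             # 0.01 is also reported, depending on the setting
    "gp_clip_value": 1,           # 1000 is also reported, depending on the setting
}

# Cosine annealing over the full run can be expressed with core PyTorch, e.g.:
# scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=config["epochs"])
```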