RényiCL: Contrastive Representation Learning with Skew Rényi Divergence

Authors: Kyungmin Lee, Jinwoo Shin

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Through experiments on ImageNet, we show that Rényi contrastive learning with stronger augmentations outperforms other self-supervised methods without extra regularization or computational overhead. Moreover, we also validate our method on other domains such as graph and tabular data, showing empirical gain over other contrastive methods.
Researcher Affiliation | Academia | Kyungmin Lee and Jinwoo Shin, Korea Advanced Institute of Science and Technology (KAIST)
Pseudocode | Yes | The pseudo-code for the RMLCPC objective is in Appendix Algorithm 1. (An illustrative sketch of the objective is given below the table.)
Open Source Code | Yes | The implementation and pre-trained models are available at https://github.com/kyungmnlee/Renyi_CL.
Open Datasets | Yes | Through experiments on ImageNet, we show that Rényi contrastive learning with stronger augmentations outperforms other self-supervised methods without extra regularization or computational overhead. Moreover, we also validate our method on other domains such as graph and tabular data, showing empirical gain over other contrastive methods. (Also other mentions, such as "For ImageNet [18] experiments," "CIFAR-10/100 [51] and ImageNet-100 [64]," "Forest Cover Type (CovType) and Higgs Boson (Higgs) [66] datasets from the UCI repository [67]," and "graph TUDataset [70].")
Dataset Splits | Yes | Linear evaluation. We follow the linear evaluation protocol, where we report the Top-1 ImageNet validation accuracy (%) of a linear classifier trained on top of frozen features. ... Semi-supervised learning. We evaluate the usefulness of the learned features in a semi-supervised setting with 1% and 10% subsets of the ImageNet dataset [3, 43]. (Also "For CIFAR-10 and CIFAR-100, we follow the settings in [3]." A sketch of the linear evaluation protocol is given below the table.)
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types and speeds, memory amounts, or detailed machine specifications) for its experiments, offering only general statements about training.
Software Dependencies | No | The paper mentions general components such as a ResNet-50 encoder and an MLP projection head, but does not provide version numbers for any software dependencies or libraries.
Experiment Setup | Yes | For ImageNet [18] experiments, we use ResNet-50 [49] for the encoder g. We use an MLP projection head with a momentum encoder [4], which is updated by EMA, and we use a predictor, following the practice of [47, 6]. The critic f is then implemented as the cosine similarity between the outputs of the momentum encoder and of the base encoder with predictor, divided by a temperature of 0.5. We maximize the RMLCPC objective with γ = 2.0. ... For tabular experiments ... We use γ = 1.1 for CovType and γ = 1.2 for Higgs.
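
To make the quoted setup concrete, the following PyTorch sketch implements the critic as described (cosine similarity between the momentum encoder's outputs and the base encoder's predictor outputs, divided by a temperature of 0.5), together with an illustrative γ-weighted contrastive loss. The loss is a stand-in assumption, not the paper's RMLCPC objective, whose authoritative pseudo-code is Appendix Algorithm 1; the function names and the γ-scaled log-sum-exp over negatives are ours.

    import math
    import torch
    import torch.nn.functional as F

    def critic(pred, target, temperature=0.5):
        # Cosine similarity between the base branch's predictor outputs and
        # the momentum encoder's outputs, scaled by the temperature from the
        # ImageNet setup quoted above.
        pred = F.normalize(pred, dim=1)
        target = F.normalize(target, dim=1)
        return pred @ target.t() / temperature  # (N, N) logits

    def gamma_contrastive_loss(pred, target, gamma=2.0):
        # Illustrative gamma-weighted contrastive loss (NOT the verbatim
        # RMLCPC of Appendix Algorithm 1): positives sit on the diagonal,
        # negatives are aggregated by a gamma-scaled log-sum-exp, which
        # emphasizes hard negatives more strongly as gamma grows.
        logits = critic(pred, target)                       # (N, N)
        n = logits.size(0)
        pos = logits.diagonal()                             # f(x, y+) terms
        mask = ~torch.eye(n, dtype=torch.bool, device=logits.device)
        neg = logits[mask].view(n, n - 1)                   # f(x, y-) terms
        # (1 / gamma) * log of the mean of exp(gamma * f) over negatives.
        neg_term = (torch.logsumexp(gamma * neg, dim=1) - math.log(n - 1)) / gamma
        return (neg_term - pos).mean()

As γ approaches 0, the negative term reduces to a plain average of the negative logits, while larger values (the paper uses γ = 2.0 on ImageNet and γ = 1.1/1.2 on the tabular datasets) concentrate the loss on the hardest negatives.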
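
The momentum encoder in the setup is updated by an exponential moving average (EMA) of the base encoder's weights. A generic sketch follows; the momentum value 0.996 is a common default in the momentum-encoder literature, not a number reported in this excerpt.

    import torch

    def ema_update(momentum_encoder, encoder, m=0.996):
        # In-place EMA of parameters: theta_m <- m * theta_m + (1 - m) * theta.
        # The momentum encoder receives no gradients; it changes only here.
        with torch.no_grad():
            for p_m, p in zip(momentum_encoder.parameters(), encoder.parameters()):
                p_m.mul_(m).add_(p, alpha=1.0 - m)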
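
Finally, the linear evaluation protocol quoted under Dataset Splits amounts to training a linear classifier on frozen features and reporting Top-1 validation accuracy. The sketch below is hypothetical: the signature, optimizer settings, and epoch count are illustrative choices, not the paper's exact recipe.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def linear_eval(encoder, train_loader, val_loader, feat_dim,
                    num_classes=1000, epochs=90, lr=0.1, device="cuda"):
        # Freeze the pretrained encoder; only the linear head is optimized.
        encoder.eval()
        for p in encoder.parameters():
            p.requires_grad = False
        head = nn.Linear(feat_dim, num_classes).to(device)
        opt = torch.optim.SGD(head.parameters(), lr=lr, momentum=0.9)
        for _ in range(epochs):
            for x, y in train_loader:
                with torch.no_grad():
                    feats = encoder(x.to(device))
                loss = F.cross_entropy(head(feats), y.to(device))
                opt.zero_grad()
                loss.backward()
                opt.step()
        # Report Top-1 accuracy of the linear head on the validation set.
        correct, total = 0, 0
        with torch.no_grad():
            for x, y in val_loader:
                top1 = head(encoder(x.to(device))).argmax(dim=1).cpu()
                correct += (top1 == y).sum().item()
                total += y.numel()
        return 100.0 * correct / total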