MetAug: Contrastive Learning via Meta Feature Augmentation

Authors: Jiangmeng Li, Wenwen Qiang, Changwen Zheng, Bing Su, Hui Xiong

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirically, our method achieves state-of-the-art results on several benchmark datasets. ... We benchmark our MetAug on five established datasets: Tiny ImageNet (Krizhevsky et al., 2009), STL-10 (Coates et al., 2011), CIFAR10 (Krizhevsky et al., 2009), CIFAR100 (Krizhevsky et al., 2009), and ImageNet (Jia et al., 2009). The compared benchmark methods include: BiGAN (Donahue et al., 2016), NAT (Bojanowski & Joulin, 2017), DIM (Hjelm et al., 2018), SplitBrain (Zhang et al., 2017), CPC (Hénaff et al., 2019), SwAV (Caron et al., 2020), SimCLR (Chen et al., 2020), CMC (Tian et al., 2019), MoCo (He et al., 2020), SimSiam (Chen & He, 2020), InfoMin Aug. (Tian et al., 2020), BYOL (Grill et al., 2020), Barlow Twins (Zbontar et al., 2021), DACL (Verma et al., 2021), LooC (Xiao et al., 2021), Debiased (Chuang et al., 2020), Hard (Robinson et al., 2020), and NNCLR (Dwibedi et al., 2021).
Researcher Affiliation | Academia | 1Science & Technology on Integrated Information System Laboratory, Institute of Software, Chinese Academy of Sciences, Beijing, China; 2University of Chinese Academy of Sciences, Beijing, China; 3Southern Marine Science and Engineering Guangdong Laboratory (Guangzhou), Guangdong, China; 4Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China; 5Beijing Key Laboratory of Big Data Management and Analysis Methods, Beijing, China; 6Thrust of Artificial Intelligence, The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China; 7Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong SAR, China.
Pseudocode | Yes | Here, we provide a pseudo-code for MetAug training described in the style of PyTorch, which is without the inclusion of the detailed matrix processing or helper utility functions & codes that are irrelevant to the algorithm:
Open Source Code | No | The paper does not provide an explicit statement or a link to open-source code for the MetAug methodology described.
Open Datasets | Yes | We benchmark our MetAug on five established datasets: Tiny ImageNet (Krizhevsky et al., 2009), STL-10 (Coates et al., 2011), CIFAR10 (Krizhevsky et al., 2009), CIFAR100 (Krizhevsky et al., 2009), and ImageNet (Jia et al., 2009).
Dataset Splits | No | The paper mentions 'we conduct comparisons of using different hyperparameters on the validation set of corresponding benchmark datasets' but does not specify the size, percentage, or method for creating this split.
Hardware Specification | No | The paper does not provide specific details regarding the hardware (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions 'PyTorch' in the context of pseudocode style, but does not provide specific version numbers for PyTorch or any other software dependencies required for replication.
Experiment Setup | Yes | To efficiently perform CL within a restricted amount of the inputs in training, we uniformly set the batch size as 64... The learning rates and weight decay rates are uniform over comparisons. ... The hyperparameter β is proposed as a temperature coefficient in OUCL. γ is a specific parameter to replace the hyperparameters in OUCL such that the number of hyperparameters can be reduced. δ balances the impact of OUCL that uses augmented features and OUCL that does not use these features. ... we fix γ = 0.40 and study on the impacts of other hyperparameters. ... we first fixed α = 10^13, and then we selected β from the range of {2^1, 2^2, 2^3, 2^4, 2^5, 2^6, 2^7, 2^8} and δ from the range of {10^1, 10^2, 10^3, 10^4, 10^5, 10^6, 10^7, 10^8}.
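The pseudocode row above refers to the paper's own PyTorch-style listing, which is not reproduced here. As a rough illustration of the InfoNCE-style contrastive objective that this family of methods (SimCLR, MoCo, and related losses) builds on, and not the authors' actual OUCL implementation, here is a minimal NumPy sketch; the function name and temperature default are assumptions:

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """Hypothetical InfoNCE-style contrastive loss sketch (not the paper's OUCL).

    z1, z2: arrays of shape (batch, dim). Row i of z1 and row i of z2 form a
    positive pair; every other row in z2 serves as a negative for row i of z1.
    """
    # L2-normalize so the dot product becomes cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    # Pairwise similarity logits, scaled by the temperature
    logits = z1 @ z2.T / temperature
    # Row-wise log-softmax; positives sit on the diagonal
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

A lower temperature sharpens the softmax, so perfectly aligned positive pairs drive the loss toward zero while mismatched pairs keep it near log(batch_size).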
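The hyperparameter search described in the experiment-setup row can be enumerated mechanically. A short Python sketch of that grid follows; the variable names mirror the paper's symbols, but the enumeration itself is an assumption, since the paper reports only the search ranges, not search code:

```python
from itertools import product

# Search space as reported: gamma fixed at 0.40, alpha fixed beforehand,
# beta over powers of two, delta over powers of ten.
gamma = 0.40
betas = [2 ** k for k in range(1, 9)]     # 2, 4, 8, ..., 256
deltas = [10 ** k for k in range((1), 9)]  # 10, 100, ..., 10**8

# Each (beta, delta) pair is one candidate configuration to validate
grid = list(product(betas, deltas))
```

With eight values per axis, this yields 64 candidate configurations to score on the validation split.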