MetaMask: Revisiting Dimensional Confounder for Self-Supervised Learning
Authors: Jiangmeng Li, Wenwen Qiang, Yanan Zhang, Wenyi Mo, Changwen Zheng, Bing Su, Hui Xiong
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, our method achieves state-of-the-art performance on various benchmarks. |
| Researcher Affiliation | Academia | Jiangmeng Li, Wenwen Qiang, Yanan Zhang (University of Chinese Academy of Sciences; Institute of Software, Chinese Academy of Sciences; Southern Marine Science and Engineering Guangdong Laboratory (Guangzhou)), {jiangmeng2019, wenwen2018, yanan2018}@iscas.ac.cn. Wenyi Mo (Gaoling School of Artificial Intelligence, Renmin University of China), 2022101010@ruc.edu.cn. Changwen Zheng (Institute of Software, Chinese Academy of Sciences; Southern Marine Science and Engineering Guangdong Laboratory (Guangzhou)), changwen@iscas.ac.cn. Bing Su (Gaoling School of Artificial Intelligence, Renmin University of China; Beijing Key Laboratory of Big Data Management and Analysis Methods), subingats@gmail.com. Hui Xiong (Thrust of Artificial Intelligence, The Hong Kong University of Science and Technology (Guangzhou); Guangzhou HKUST Fok Ying Tung Research Institute), xionghui@ust.hk |
| Pseudocode | Yes | Algorithm 1: MetaMask |
| Open Source Code | Yes | The implementation is available at https://github.com/lionellee9089/MetaMask |
| Open Datasets | Yes | Model IN-200 [46] STL-10 [47] CIFAR-10 [46] CIFAR-100 [46] |
| Dataset Splits | Yes | Did you specify all the training details (e.g., data splits, hyper-parameters, how they were chosen)? [Yes] See Section 6, Appendix A.4, and the supplementary files. |
| Hardware Specification | Yes | Note that we adopt the official code of Barlow Twins and train on 8 NVIDIA Tesla V100 GPUs. |
| Software Dependencies | No | The paper mentions using "official code of Barlow Twins" but does not specify software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | For the comparisons demonstrated in Table 1, we uniformly set the batch size as 64, and we adopt a network with the 5 convolutional layers in AlexNet [48] as conv and a network with 2 additional fully connected layers as fc. [...] For the experiments in Table 2, the batch size is set to 512, and ResNet-18 [51] is used as the backbone encoder. We adopt the data augmentation and other experimental settings following [2]. |