Self-Supervised Set Representation Learning for Unsupervised Meta-Learning
Authors: Dong Bok Lee, Seanie Lee, Kenji Kawaguchi, Yunji Kim, Jihwan Bang, Jung-Woo Ha, Sung Ju Hwang
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We also empirically validate its effectiveness on various benchmark datasets, showing that SetSimCLR largely outperforms both UML and instance-level self-supervised learning baselines. |
| Researcher Affiliation | Collaboration | Dong Bok Lee1 Seanie Lee1 Kenji Kawaguchi2 Yunji Kim3 Jihwan Bang3 Jung-Woo Ha3 Sung Ju Hwang1 KAIST1, National University of Singapore2, NAVER3 {markhi, lsnfamily02, sjhwang82}@kaist.ac.kr {yunji.kim, jihwan.bang, jungwoo.ha}@navercorp.com kenji@comp.nus.edu.sg |
| Pseudocode | Yes | We provide the pseudo-code for SetSimCLR described in Section 3.3. Algorithm 1 Meta-Training for SetSimCLR. Algorithm 2 Meta-Test for SetSimCLR. |
| Open Source Code | Yes | In Supplementary File, we further provide the code for reproducing the main experimental results in Table 1 and Figure 2. |
| Open Datasets | Yes | Dataset We use the MiniImageNet dataset introduced by Ravi & Larochelle (2017)... TinyImageNet (Le & Yang, 2015), CIFAR100 (Krizhevsky et al., 2009), Aircraft (Maji et al., 2013), Stanford Cars (Krause et al., 2013) and CUB (Wah et al., 2011) datasets. |
| Dataset Splits | Yes | We use 64 classes for unsupervised meta-training, 16 classes for meta-validation, and the remaining 20 classes for meta-test. |
| Hardware Specification | No | The paper mentions running augmentations 'on GPU' but does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for experiments. |
| Software Dependencies | No | The paper mentions software packages such as scikit-learn, the Kornia framework, and the Hugging Face transformers library, but does not specify their version numbers, which are required for reproducibility. |
| Experiment Setup | Yes | We optimize the base encoder, set encoder and head network for 400 epochs using the Adam optimizer (Kingma & Ba, 2015) with default settings (i.e., β1 = 0.9 and β2 = 0.999). We use a constant learning rate of 0.001. For our method SetSimCLR, we apply the augmentations (which are defined in Appendix J) 8 times to the mini-batch of 64 images (i.e., M = 64, V = 8), resulting in 4 elements in each set, while performing the same augmentation twice on the mini-batch of 256 images (i.e., M = 256, V = 2) for the other baselines. |
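
The Experiment Setup row implies a compute-matched comparison: SetSimCLR uses a smaller mini-batch with more augmented views, while the baselines use a larger mini-batch with two views, so both process the same number of augmented images per step. A minimal sketch of that arithmetic (function and variable names are illustrative, not from the paper's code):

```python
# Compute-matched view/batch configuration implied by the setup quote.
# M = mini-batch size (images), V = augmentations applied per image.

def total_augmented_views(batch_size: int, num_views: int) -> int:
    """Number of augmented images processed per training step."""
    return batch_size * num_views

# SetSimCLR: M = 64, V = 8
setsimclr_views = total_augmented_views(64, 8)
# Baselines (e.g., SimCLR): M = 256, V = 2
baseline_views = total_augmented_views(256, 2)

# Both settings process 512 augmented images per step,
# keeping the per-step compute comparable across methods.
assert setsimclr_views == baseline_views == 512
```

This matching is why the differing (M, V) choices between the method and the baselines do not by themselves confound the comparison in Table 1.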